OpenAI has introduced two new reasoning AI models, o3 and o4-mini. The company touts o3 as the most sophisticated reasoning model available today, while o4-mini is a smaller model noted for its strong performance and speed. Both were unveiled shortly after the release of GPT-4.1.
Enhanced visual processing is a key feature of both models. o3 was first announced at the end of last year and has been continuously refined since. Both o3 and o4-mini can now ‘think in images’, incorporating visual data directly into their reasoning. Users can present the models with photos, sketches, and diagrams for analysis, and the models can integrate actions such as rotating or zooming an image into their reasoning process.
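For developers, the same image-plus-text prompting pattern is available through OpenAI's API. The snippet below is a minimal sketch using the official openai Python SDK; it assumes the model ID "o4-mini" is enabled for the account, and the prompt text and image URL are placeholders.

from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the reasoning model to analyze a diagram supplied as an image URL.
response = client.chat.completions.create(
    model="o4-mini",  # assumption: this model ID is available to the account
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Explain what this circuit diagram does."},
                {"type": "image_url", "image_url": {"url": "https://example.com/circuit.png"}},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)

The same request structure accepts sketches, screenshots, or architectural drawings in place of the example diagram.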
OpenAI’s reasoning models are designed to use all of ChatGPT’s tools, including web browsing and image generation. Starting today, o3, o4-mini, and o4-mini-high are available to ChatGPT Plus, Pro, and Team users, while o1, o3-mini, and o3-mini-high will gradually be phased out from those plans.
With these models, OpenAI is advancing multimodal AI systems that can handle visual data adeptly. Beyond simply responding to prompts, the models can draw insights from images: for instance, they can offer deeper interpretations of a design diagram, a mathematical table, or an architectural drawing. This significantly strengthens their analytical and problem-solving capabilities.