This article introduces GPT-5.2 in Microsoft Foundry, a new enterprise AI model designed for complex problem-solving and agentic execution. It offers advanced reasoning, context handling, and robust governance, setting a new standard for reliable and secure AI development in professional settings.
The age of AI small talk is over. Enterprise applications demand more than clever chat. They require a reliable, reasoning partner capable of solving the most ambiguous, high-stakes problems, including planning multi-agent workflows and delivering auditable code.
Azure is the foundation for solving these challenges. Today, we're announcing the general availability of OpenAI's GPT-5.2 in Microsoft Foundry: a new frontier model series purpose-built for the needs of enterprise developers and technical leaders, and setting a new standard for enterprise AI.
The GPT-5.2 series introduces deeper logical chains, richer context handling, and agentic execution that produces shippable artifacts: design docs, runnable code, unit tests, and deployment scripts can all be generated with fewer iterations. Built on a new architecture and trained on the proven GPT-5.1 dataset, with further improvements to safety and integrations, GPT-5.2 delivers substantial gains in performance, efficiency, and reasoning depth over prior generations.
Today, we're shipping GPT-5.2 and GPT-5.2-Chat. Each is greatly improved over its predecessor, and together they cover the full range of everyday professional work.
GPT-5.2: The most advanced reasoning model, solving harder problems more effectively and with more polish. In information work, for example, strong reasoning is now paired with better communication skills and improved formatting for spreadsheets and slideshow creation.
GPT-5.2-Chat: A powerful yet efficient workhorse for everyday work and learning, with clear improvements in info-seeking questions, how-tos and walk-throughs, technical writing, and translation. It's also more effective at supporting studying and skill-building, and at offering clearer job and career guidance.
For long-term success in complex professional tasks, teams need structured outputs, reliable tool use, and enterprise guardrails. GPT-5.2 is optimized for these agent scenarios within Foundry's enterprise-grade platform, offering a consistent developer experience across reasoning, chat, and coding.
GPT-5.2’s deep reasoning capabilities, expanded context handling, and agentic patterns make it the smart choice for building AI agents that can tackle long-running, complex tasks across industries, including financial services, healthcare, manufacturing, and customer support.
The results? Agents that maintain reliability through complex workflows and agent services, while producing structured, auditable outputs that scale confidently in Microsoft Foundry.
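The "structured, auditable outputs" pattern above can be sketched as a chat-completions request constrained to a JSON schema, so downstream tools can validate what the agent produces. This is a minimal illustration assuming an OpenAI-compatible endpoint; the deployment name, system prompt, and schema are hypothetical, not part of the announcement.

```python
# Sketch: build a chat-completions payload that forces a GPT-5.2 deployment
# to emit JSON matching a schema. Deployment name and schema are assumptions.

def build_structured_request(deployment: str, task: str) -> dict:
    """Return a request body whose response_format pins the model's output
    to a JSON schema, keeping agent results machine-parseable and auditable."""
    return {
        "model": deployment,  # e.g. your GPT-5.2 deployment name
        "messages": [
            {"role": "system", "content": "You are a planning agent."},
            {"role": "user", "content": task},
        ],
        # JSON-schema structured output: the model must return an object
        # with exactly these fields, which a validator can check.
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "plan",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "steps": {"type": "array", "items": {"type": "string"}},
                        "risk_level": {"type": "string"},
                    },
                    "required": ["steps", "risk_level"],
                    "additionalProperties": False,
                },
            },
        },
    }

payload = build_structured_request("gpt-5.2", "Plan a database migration.")
```

The payload can then be sent with any OpenAI-compatible client; constraining outputs at the schema level, rather than by prompt wording alone, is what makes long-running agent workflows auditable.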
Pricing (USD $ per million tokens):

| Model | Deployment | Input | Cached Input | Output |
| --- | --- | --- | --- | --- |
| GPT-5.2 | Standard Global | $1.75 | $0.175 | $14.00 |
| GPT-5.2 | Standard Data Zones (US) | $1.925 | $0.193 | $15.40 |
| GPT-5.2-Chat | Standard Global | $1.75 | $0.175 | $14.00 |
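As a quick sanity check on the table above, per-request cost is just each token count times its per-million-token rate. The token counts below are illustrative, using the Standard Global rates for GPT-5.2.

```python
# Estimate a GPT-5.2 request's cost from the pricing table (Standard Global).
# Rates are USD per 1M tokens; the token counts are illustrative assumptions.

PRICES = {
    "input": 1.75,         # fresh input tokens
    "cached_input": 0.175, # cached input tokens
    "output": 14.00,       # output tokens
}

def estimate_cost(input_tokens: int, cached_tokens: int, output_tokens: int) -> float:
    """Cost in USD: each token count times its per-million-token rate."""
    return (
        input_tokens * PRICES["input"]
        + cached_tokens * PRICES["cached_input"]
        + output_tokens * PRICES["output"]
    ) / 1_000_000

# 50k fresh input tokens, 200k cached, 10k output:
cost = estimate_cost(50_000, 200_000, 10_000)  # 0.2625 USD
```

Note how heavily output tokens dominate the bill at an 8x premium over input, which rewards prompt caching and concise structured outputs in agent loops.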
The post Introducing GPT-5.2 in Microsoft Foundry: The new standard for enterprise AI appeared first on Microsoft Azure Blog.