Microsoft Foundry provides a comprehensive, secure, and modular platform for developers to build, deploy, and manage AI agents and applications at scale, integrating advanced models and developer tools. This accelerates the shift from prescriptive logic to intelligent, adaptive systems.
One year ago, at Microsoft Ignite, we set out to redefine enterprise intelligence with Foundry. Our conviction was clear: software would evolve beyond rigid workflows, becoming systems that reason, adapt, and act with purpose. We envisioned developers moving from prescriptive logic to shaping intelligent behavior.
Today, that transformation is accelerating across industries and organizations of every size. The shift is tangible: agents are no longer just assistants; they are dynamic collaborators, seamlessly integrated into the tools we use every day. For builders, agents are reshaping software, and we are delivering a platform that empowers every developer and every business to embrace this moment with confidence and control.
Microsoft Foundry helps builders everywhere turn vision into reality with a modular, interoperable, and secure agent stack. From code to cloud, today's announcements demonstrate our focus on empowering developers with a powerful, simple, and trusted path to production AI apps and agents. Here is the TL;DR:
It all starts with developers, and GitHub is the world's largest developer community, now serving over 180 million developers. AI-powered tools and agents in GitHub are helping developers move faster, build increasingly innovative apps, and modernize legacy systems more efficiently. More than 500 million pull requests were merged using AI coding agents this year, and with Agent HQ, coding agents like Codex, Claude Code, and Jules will soon be available directly in GitHub and Visual Studio Code so developers can go from idea to implementation faster. GitHub Copilot, the world's most popular AI pair programmer, now serves over 26 million users, helping organizations like Pantone, Ahold Delhaize USA, and Commerzbank streamline processes and save time.
Over the last year, developers have moved from experimentation to production. They need tools that let them design, test, monitor, and improve intelligent systems with the same confidence they have in traditional software. That’s why we built a new generation of AI-powered tools: GitHub Agent HQ for unified agent management, Custom Agents to encode domain expertise, and “bring your own models” to empower teams to adapt and innovate. With Copilot Metrics, teams evolve with data, not guesswork.
We’re committed to giving every developer the tools to design, test, and improve intelligent systems, so they can turn ideas into impact, faster than ever.
Enterprises need a consistent foundation to build intelligence at scale. With Microsoft Foundry, we’re unifying models, tools, and knowledge into one open system, empowering organizations to run high-performing agent fleets and intelligent workflows across their business.
Today, teams can choose from over 11,000 frontier models in Foundry, including optimized solutions for scale and specialized models for scientific and industrial breakthroughs. I’m proud to announce RoseTTAFold 3, a next-generation biomolecular structure prediction model developed with the Institute for Protein Design and Microsoft’s AI for Good Lab. Models like these enable researchers and enterprises to tackle the world’s hardest problems with state-of-the-art technology.
Innovation thrives on adaptability and choice. With more than 11,000 models, Microsoft Foundry offers the broadest model selection on any cloud. Foundry Models empowers developers to benchmark, compare, and dynamically route models to optimize performance for every task.
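The benchmarking-and-routing idea can be sketched in a few lines. This is an illustrative sketch only, with hypothetical model names and scores; it is not the Foundry Models API, just the underlying pattern of picking the best-scoring model per task category.

```python
# Hypothetical benchmark scores per task category (illustrative, not real
# Foundry data): higher is better.
BENCHMARKS = {
    "summarize": {"model-a": 0.91, "model-b": 0.84},
    "code":      {"model-a": 0.78, "model-b": 0.93},
}

def route(task: str) -> str:
    """Route a request to the best-scoring model for its task category."""
    scores = BENCHMARKS[task]
    return max(scores, key=scores.get)

print(route("summarize"))  # best summarization model
print(route("code"))       # best coding model
```

A production router would refresh these scores continuously and weigh cost and latency alongside quality, but the dispatch logic stays this simple.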
The more context an agent has, the more grounded, productive, and reliable it’s likely to be. Foundry IQ, now available in public preview, reimagines retrieval-augmented generation (RAG) as a dynamic reasoning process rather than a one-time lookup. Powered by Azure AI Search, it centralizes RAG workflows into a single grounding API, simplifying orchestration and improving response quality while respecting user permissions and data classifications.
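The "retrieval as reasoning" loop described above can be sketched as follows. This is a hypothetical illustration of the pattern, not Foundry IQ's actual API: `search` and `generate` are stand-in callables, and the toy implementations exist only to exercise the loop.

```python
# Hypothetical sketch: retrieval as an iterative reasoning loop rather
# than a one-shot lookup. The model can request another round of
# retrieval (a follow-up query) until it has enough grounding context.
def answer(question, search, generate, max_rounds=3):
    context, query, draft = [], question, ""
    for _ in range(max_rounds):
        context += search(query)                  # retrieve grounding docs
        draft, follow_up = generate(question, context)
        if follow_up is None:                     # model is satisfied
            return draft
        query = follow_up                         # refine and retrieve again
    return draft

# Toy stand-ins to exercise the loop:
def toy_search(q):
    return [f"doc:{q}"]

def toy_generate(question, context):
    # Ask for one refinement, then finish once two docs are in context.
    if len(context) < 2:
        return "partial draft", "refined query"
    return "grounded answer", None

print(answer("What changed?", toy_search, toy_generate))
```

The one-shot RAG pattern is this loop with `max_rounds=1`; the dynamic version lets the model decide when its context is sufficient.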
Foundry already powers more than 3 billion search queries per day. By combining Foundry IQ with Microsoft Fabric IQ and Work IQ from Microsoft 365 Copilot, Microsoft provides an unparalleled context layer for agents, helping them connect users with the right information at the right time to make informed decisions.
To be force multipliers, agents need access to the same tools and knowledge as the people they support. Foundry Agent Service empowers developers to create sophisticated single and multi-agent systems, connecting models, knowledge, and tools into a single, observable runtime.
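The idea of connecting models, knowledge, and tools into a single observable runtime can be illustrated with a minimal sketch. The names below are hypothetical, not Foundry Agent Service's API; the point is the pattern of registered tools plus a trace that makes every step observable.

```python
# Minimal illustration (assumed names, not a real SDK): an agent that
# dispatches registered tools and records every step for observability.
class Agent:
    def __init__(self, name):
        self.name = name
        self.tools = {}
        self.trace = []          # observable log of every action taken

    def register(self, tool_name, fn):
        """Attach a callable tool the agent is allowed to use."""
        self.tools[tool_name] = fn

    def act(self, tool_name, *args):
        """Invoke a tool, recording the call before executing it."""
        self.trace.append((self.name, tool_name, args))
        return self.tools[tool_name](*args)

# Usage: a hypothetical invoice-lookup tool.
agent = Agent("billing-helper")
agent.register("lookup_invoice", lambda cid: {"customer": cid, "due": 42.0})
result = agent.act("lookup_invoice", "C-1001")
print(result)
print(agent.trace)
```

Multi-agent systems extend this by letting one agent's `act` call delegate to another agent, with each runtime's trace feeding a shared observability layer.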
The right tools can transform agents from simple responders into intelligent problem-solvers. With Foundry Tools, now in public preview, developers can give agents secure, real-time access to business systems, business logic, and multimodal capabilities that deliver real business value.
Scaling intelligence requires trust. As organizations rely on agents and AI-powered systems for more of their workflows, teams need clearer visibility, stronger guardrails, and faster ways to identify and address risk. This year we’re expanding security and governance with two key announcements: Foundry Control Plane, now in public preview in Microsoft Foundry, and a new integration between Microsoft Defender for Cloud and GitHub Advanced Security, also in public preview. Together they give developers and security teams a more connected way to monitor behavior, guide access, and keep AI systems safe across the full lifecycle.
Foundry Control Plane brings identity, controls, observability, and security together in one place so teams can build, operate, and govern agents with confidence.
Defender for Cloud + GitHub Advanced Security integration
Developers and security teams often work in separate tools and lack shared signals to prioritize risks. The new Defender for Cloud and GitHub Advanced Security integration closes this gap. Developers receive AI-suggested fixes directly inside GitHub, while security teams track progress in Defender for Cloud in real time. This gives both sides a faster, more connected way to identify issues, remediate them, and keep AI systems secure throughout the app lifecycle.
Six months ago, we launched Foundry Local on Windows and Mac. In that short time, it’s reached 560 million devices, making it one of the fastest-growing runtimes in enterprise history. Leading organizations like NimbleEdge, Morgan Stanley, Dell, and Pieces are already using Local to bring intelligence directly into the environments where work happens, from financial services to healthcare and edge computing.
Today, we’re taking the next step. Foundry Local is now in private preview on Android, the world’s most widely used mobile platform. This means agents can run natively on billions of phones, unlocking real-time inference, privacy-aware computation, and resilience, even where connectivity is unpredictable.
We’re also announcing a new partnership with PhonePe, one of India’s fastest-growing platforms. Together, we’ll bring agentic experiences into everyday consumer applications, showing how Local can transform not just enterprise workflows, but daily life at massive scale.
We see customers building net new AI applications and integrating AI into existing applications. Both require a modern foundation. Managed Instance on Azure App Service, available in public preview, lets organizations move their .NET web applications to the cloud with just a few configuration changes, saving the time and effort of rewriting code. The result is faster migrations with lower overhead, and access to cloud-native scalability, built-in security and AI capabilities in Microsoft Foundry.
We hope you join us at Microsoft Ignite 2025, in-person or virtually, to see these new capabilities in action and learn how they can support your biggest ambitions for your business.