• Open

    Microsoft 365 Copilot APIs: Unlocking enterprise knowledge for AI with the Retrieval API — Now in Public Preview
    Read how the Retrieval API gives developers a secure, compliant and scalable way to integrate enterprise content into their AI workflows. The post Microsoft 365 Copilot APIs: Unlocking enterprise knowledge for AI with the Retrieval API — Now in Public Preview appeared first on Microsoft 365 Developer Blog.  ( 26 min )

  • Open

    Model Mondays S2:E2 - Understanding Model Context Protocol (MCP)
    This week in Model Mondays, we focus on the Model Context Protocol (MCP) — and learn how to securely connect AI models to real-world tools and services using MCP, Azure AI Foundry, and industry-standard authorization. Read on for my recap   About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here’s how it works: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start. Want to follow along? Register Here - to watc…  ( 31 min )
    Deploy Machine Learning Models the Smart Way with Azure Blob & Web App
    💡 Why This Approach? Traditional deployments often include models inside the app, leading to: Large container sizes Long build times Slow cold starts Painful updates when models change With Azure Blob Storage, you can offload the model and only fetch it at runtime — reducing size, improving flexibility, and enabling easier updates. What You will Need An ML model (model.pkl, model.pt, etc.) An Azure Blob Storage account A Python web app (FastAPI, Flask, or Streamlit) Azure Web App (App Service for Python) Azure Python SDK: azure-storage-blob Step 1: Save and Upload Your Model to Blob Storage First, save your trained model locally: # PyTorch example import torch torch.save(model.state_dict(), "model.pt") Then, upload it to Azure Blob Storage: from azure.storage.blob import BlobServiceC…  ( 26 min )
  • Open

    Using Azure API Management as a proxy for Application Insights Telemetry
    Introduction Organizations enforcing Entra ID authentication on their Application Insights resources often face a sudden problem: browser-based telemetry stops flowing. This happens when local authentication is disabled — a necessary step to enforce strict identity controls — but sending data from browser environments comes with inherent security challenges, and the Application Insights JavaScript SDK is no exception. As a result, telemetry from web clients is silently dropped, leaving gaps in monitoring and frustrated teams wondering how to re-enable secure telemetry ingestion. This article provides a solution: using Azure API Management (APIM) as a secure proxy that authenticates telemetry using a managed identity before forwarding it to Application Insights. This pattern restores observ…  ( 34 min )
  • Open

    Anthropic Claude Sonnet 4 and Opus 4 Now Available in GitHub Copilot for JetBrains and Eclipse
    Anthropic Claude Sonnet 4 and Claude Opus 4 are now available in GitHub Copilot Chat for JetBrains IDEs and Eclipse. Model Availability✨ Claude Sonnet 4: Available for users on Pro, Pro+, Business, and Enterprise plans. Claude Opus 4: Available for users on Pro+ and Enterprise plans. For full details, please see this documentation. How to Get Started🚀 JetBrains IDEs: Click the GitHub Copilot icon –> Open GitHub Copilot Chat-> […] The post Anthropic Claude Sonnet 4 and Opus 4 Now Available in GitHub Copilot for JetBrains and Eclipse appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Announcing Shortcut Transformations: from files to Delta tables. Always in sync, no pipelines required.
    Shortcut transformations is a new capability in Microsoft Fabric that simplifies the process of converting raw files, starting with .CSV files, into Delta tables. This feature eliminates the need for traditional ETL pipelines, enabling users to transform data directly on top of files with minimal setup. Why use Shortcut transformations Shortcut transformations help users: What … Continue reading “Announcing Shortcut Transformations: from files to Delta tables. Always in sync, no pipelines required.”  ( 7 min )
  • Open

    Distributed Databases: Adaptive Optimization with Graph Neural Networks and Causal Inference
    Introduction and Motivation The explosive growth of data-driven applications has pushed distributed database systems to their limits, especially as organizations demand real-time consistency, high availability, and efficient resource utilization across global infrastructures. The CAP theorem—asserting that a distributed system can guarantee at most two out of consistency, availability, and partition tolerance—forces architects to make challenging trade-offs. Traditional distributed databases rely on static policies and heuristics, which cannot adapt to the dynamic nature of modern workloads and evolving data relationships. Recent advances in Graph Neural Networks (GNNs) offer a new paradigm for modeling and optimizing distributed systems. Unlike conventional machine learning, GNNs naturall…  ( 30 min )
  • Open

    Voice Conversion in Azure AI Speech
    We are delighted to announce the availability of the Voice Conversion (VC) feature in Azure AI Speech service, which is currently in preview. What is voice Conversion Voice Conversion (or voice changer, speech to speech conversion) is the process of transforming the voice characteristics of a given audio to a target voice speaker, and after Voice Conversion, the resulting audio reserves source audio’s linguistic content and prosody while the voice timbre sounds like the target speaker. Below is a diagram of Voice Conversion.   The purpose of Voice Conversion There are 3 reasons users need Voice Conversion functionality: Voice Conversion can replicate your content using a different voice identity while maintaining the original prosody and emotion. For instance, in education, teachers can …  ( 30 min )

  • Open

    Microsoft Intune data-driven management | Device Query & Copilot
    Proactively manage and secure all your devices — whether they run Windows, macOS, iOS, or Android. With cross-platform analytics, multi-device queries, and in-depth troubleshooting tools, you can pinpoint problems fast and take targeted action at scale.  Even without deep technical expertise, you can generate complex queries, identify vulnerabilities, and deploy remediations — all in a few clicks. With built-in Copilot support, daily tasks like policy validation, device comparison, and risk analysis become faster, smarter, and easier to act on.  Jeremy Chapman, Director of Microsoft 365, shares how to stay ahead of potential issues and keep endpoints running smoothly.  Spot and fix performance issues before users contact support.  Use Advanced Analytics in Microsoft Intune. Check it out.…  ( 42 min )
  • Open

    On-premises data gateway June 2025 release
    The June 2025 release of the on-premises data gateway is (version 3000.274). What’s New Fabric pipeline – Azure database for PostgreSQL connector version 2.0 is now generally available. This new version is enhanced to support TLS 1.3, new table action – upsert, as well as the script activity in data pipeline.  Fabric pipeline – Enhanced … Continue reading “On-premises data gateway June 2025 release”  ( 6 min )
    Announcing new features for Microsoft Fabric Extension in VS Code
    The Microsoft Fabric Extension for VS Code introduces two new features that enhance the management of Fabric items directly within the editor. Users can now perform CRUD operations on Fabric items and switch between multiple tenants easily. These updates aim to improve workflow efficiency and are based on customer feedback, inviting further suggestions for enhancement.  ( 6 min )
  • Open

    Dev Proxy v0.29 with refactored architecture, MCP server, and exposed LM prompts
    Introducing Dev Proxy v0.29, with a major architectural overhaul, control over language model prompts, and improved diagnostics. The post Dev Proxy v0.29 with refactored architecture, MCP server, and exposed LM prompts appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Quest 9: I want to use a ready-made template
    Building robust, scalable AI apps is tough, especially when you want to move fast, follow best practices, and avoid being bogged down by endless setup and configuration. In this quest, you’ll discover how to accelerate your journey from prototype to production by leveraging ready-made templates and modern cloud tools. Say goodbye to decision fatigue and hello to streamlined, industry-approved workflows you can make your own. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🚀 What You’ll Build A fully functional AI application deployed on Azure, customized to solve a real problem that matters to you.  A codebase powered by a producti…  ( 23 min )
  • Open

    Quest 9: I want to use a ready-made template
    Building robust, scalable AI apps is tough, especially when you want to move fast, follow best practices, and avoid being bogged down by endless setup and configuration. In this quest, you’ll discover how to accelerate your journey from prototype to production by leveraging ready-made templates and modern cloud tools. Say goodbye to decision fatigue and hello to streamlined, industry-approved workflows you can make your own. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🚀 What You’ll Build A fully functional AI application deployed on Azure, customized to solve a real problem that matters to you.  A codebase powered by a producti…  ( 23 min )
  • Open

    Removing Azure Resource Manager reliance on Azure DevOps sign-ins
    Azure DevOps will no longer depend on the Azure Resource Manager (ARM) resource (https://management.azure.com) when you sign in or refresh Microsoft Entra access tokens. Previously, Azure DevOps required the ARM audience during sign-in and token refresh flows. This requirement meant administrators had to allow all Azure DevOps users to bypass ARM-based Conditional Access policies (CAPs) […] The post Removing Azure Resource Manager reliance on Azure DevOps sign-ins appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Customer Managed Keys in OneLake: Strengthening Data Protection and Control
    One of the highly requested features in Microsoft Fabric is now available: the ability to encrypt data in OneLake using your own keys. As organizations face growing data volumes and tighter regulatory expectations, Customer-Managed Keys (CMK) offer a powerful way to enforce enterprise-grade security and ensure strict ownership of encryption keys and access. With Microsoft’s … Continue reading “Customer Managed Keys in OneLake: Strengthening Data Protection and Control”  ( 7 min )
    New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses
    We’re excited to introduce Data Source Instructions, a powerful new feature in the Fabric Data Agent that helps you get more precise, relevant answers from your structured data. What are Data Source instructions? When you use the Data Agent to ask questions in natural language, the agent must determine which data source to use and … Continue reading “New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses”  ( 6 min )
    Fabric June 2025 Feature Summary
    Welcome to the June 2025 update. The June 2025 Fabric update introduces several key enhancements across multiple areas. Power BI celebrates its 10th anniversary with a range of community events, contests, expert-led sessions, and special certification exam discounts. In Data Engineering, Fabric Notebooks now support integration with variable libraries in preview, empowering users to manage … Continue reading “Fabric June 2025 Feature Summary”  ( 17 min )
  • Open

    Create Stunning AI Videos with Sora on Azure AI Foundry!
    Special credit to Rory Preddy for creating the GitHub resource that enable us to learn more about Azure Sora. Reach him out on LinkedIn to say thanks. Introduction Artificial Intelligence (AI) is revolutionizing content creation, and video generation is at the forefront of this transformation. OpenAI's Sora, a groundbreaking text-to-video model, allows creators to generate high-quality videos from simple text prompts. When paired with the powerful infrastructure of Azure AI Foundry, you can harness Sora's capabilities with scalability and efficiency, whether on a local machine or a remote setup.   In this blog post, I’ll walk you through the process of generating AI videos using Sora on Azure AI Foundry. We’ll cover the setup for both local and remote environments. Requirements: Azure AI …  ( 26 min )
  • Open

    Semantic Kernel Python Gets a Major Vector Store Upgrade
    We’re excited to announce a significant update to Semantic Kernel Python’s vector store implementation. Version 1.34 brings a complete overhaul that makes working with vector data simpler, more intuitive, and more powerful. This update consolidates the API, improves developer experience, and adds new capabilities that streamline AI development workflows. What Makes This Release Special? The […] The post Semantic Kernel Python Gets a Major Vector Store Upgrade appeared first on Semantic Kernel.  ( 25 min )

  • Open

    Introducing Microsoft Purview Alert Triage Agents for Data Loss Prevention & Insider Risk Management
    Surface the highest-risk alerts across your environment, no matter their default severity, and take action. Customize how your agents reason, teach them what matters to your organization, and continuously refine to reduce time-to-resolution.    Talhah Mir, Microsoft Purview Principal GPM, shows how to triage, investigate, and contain potential data risks before they escalate.  Focus on the most high-risk alerts in your queue.  Save time by letting Alert Triage Agents for DLP and IRM surface what matters. Watch how it works. Stay in control.  Tailor triage priorities with your own rules to focus on what really matters. See how to customize your alert triage agent using Security Copilot. View alert triage agent efficiency stats.  Know what your agent is doing and how well it’s performing…  ( 28 min )
  • Open

    OneLake security – updates and news
    It’s been almost 3 months since we announced OneLake security at FabCon 2025 in Las Vegas, and while the interest has not slowed down, we’ve also been working behind the scenes to improve the feature and address your feedback. In this blog post, we’ll go through some of the latest updates on OneLake security including … Continue reading “OneLake security – updates and news”  ( 6 min )
  • Open

    🎉 Now in Public Preview: Create Dev Boxes on Behalf of Your Developers
    We’re excited to announce that one of our most-requested features is officially in Public Preview: You can now create Dev Boxes on behalf of your developers. Waiting around to get started is a thing of the past.  Whether you’re onboarding a new hire, running a hackathon, or setting up for a customer demo, this feature makes it […] The post 🎉 Now in Public Preview: Create Dev Boxes on Behalf of Your Developers  appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Deploying MCP Server Using Azure Container Apps
    Authors: Joel Borellis, Mohamad Al Jazaery, Hwasu Kim, Kira Soderstrom  Github Link  This repository is a great starting point. It demonstrates deploying an MCP server with Azure Container Apps and includes three example clients, each using a different agent framework to interact with the server. What’s Inside MCP Server with Sport News ToolsThe sample server, built using the fastmcp package, exposes tools like “Get NFL News”. It supports API key authentication and is designed to be easily extended with additional tools or data sources. You can run it locally or deploy it to Azure Container Apps for scalable, cloud-native hosting.   Three Client SamplesEach example demonstrates how different agent frameworks can consume tools from the MCP server. All the examples use Azure-OpenAI as a…  ( 25 min )

  • Open

    Deprecation of MS-APP-ACTS-AS header in Shifts Management Microsoft Graph APIs
    In app-only access scenarios, Shifts Management Graph APIs previously required the MS-APP-ACTS-AS: userId header to indicate the user on whose behalf the application was acting. However, this conflicted with the Microsoft Graph permission model where there is no signed-in user for app-only access scenarios. To align Shifts Graph APIs with this model, the MS-APP-ACTS-AS header […] The post Deprecation of MS-APP-ACTS-AS header in Shifts Management Microsoft Graph APIs appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Simplifying Secrets Management in Strapi on Azure App Service
    We’re excited to announce a major enhancement to the deployment experience for Strapi on Azure App Service. Building on the foundation laid out in our  overview, quick start, and FAQ , this update introduces automated and secure secrets management using Azure Key Vault. What’s New? The updated ARM template now provisions an Azure Key Vault instance alongside your Strapi application. This integration enables secure storage of sensitive credentials such as database passwords and Strapi-specific secrets. Here’s what makes this enhancement powerful: Secure by Default: Public access to the Key Vault is disabled out of the box. Instead, private endpoints are configured to ensure secure communication within your virtual network. Auto-Generated Secrets: Strapi secrets are now automatically genera…  ( 24 min )
  • Open

    How to make your SQL scalar user-defined function (UDF) inlineable in Microsoft Fabric Warehouse
    In our previous blog post Inline Scalar user-defined functions in Microsoft Fabric Warehouse (Preview) we have announced the availability of SQL native scalar UDFs. We also emphasized the importance of inlining and how that can affect scenarios in which UDF can be used. In this post, we aim to highlight common patterns that prevent inlining … Continue reading “How to make your SQL scalar user-defined function (UDF) inlineable in Microsoft Fabric Warehouse “  ( 9 min )
    Inline Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse (Preview)
    SQL native Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse and SQL analytics endpoint are now in preview. A scalar UDF is a custom code implemented in T-SQL that accepts parameters, performs an action such as complex calculation, and returns a result of that action as a single value. They can contain local variables, calls … Continue reading “Inline Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse (Preview)”  ( 8 min )
  • Open

    Quest 7: Create an AI Agent with Tools from an MCP Server
    The JS AI Build-a-thon is in full swing — and we’re turning up the power in Quest 7!If you're just joining us, this is part of an ongoing challenge to help JavaScript and TypeScript developers build AI-powered apps from scratch. Catch up here and join the conversation on Discord.   Last quest, you dipped your toes into agentic development — giving your AI the ability to act and reason.This time, we’re taking it further. In Quest 7, you’ll explore the Model Context Protocol (MCP) — a growing protocol in Agentic development that unlocks standardized tool-usage in AI agents via an MCP server. Available tools for the OS Patrol agent 🎯 What You’ll Build This quest is all about connecting your AI agent to tools that do real things.You’ll: Create and spin up an MCP server using the MCP TypeScri…  ( 25 min )

  • Open

    💻 Spring Cleaning for Dev Boxes: Mastering Manual & Automatic Offboarding
    Let’s face it. Sometimes your Dev Box just… hangs around too long. Whether you’ve moved to a new project, left the company, want to create a new dev box with the latest tools, it’s time to clean things up. 🎯 With Dev Box Auto-Deletion now in public preview, offboarding just got a whole lot easier. […] The post 💻 Spring Cleaning for Dev Boxes: Mastering Manual & Automatic Offboarding appeared first on Develop from the cloud.  ( 24 min )

  • Open

    S2E01 Recap: Advanced Reasoning Session
    About Model Mondays Want to know what Reasoning models are and how you can build advanced reasoning scenarios like a Deep Research agent using Azure AI Foundry? Check out this recap from Model Mondays Season 2 Ep 1. Model Mondays is a weekly series to help you build your model IQ in three steps:1. Catch the 5-min Highlights on Monday, to get up to speed on model news2. Catch the 15-min Spotlight on Monday, for a deep-dive into a model or tool3. Catch the 30-min AMA on Friday, for a Q&A session with subject matter experts Want to follow along? Register Here- to watch upcoming livestreams for Season 2 Visit The Forum- to see the full AMA schedule for Season 2 Register Here - to join the AMA on Friday Jun 20 Spotlight On: Advanced Reasoning This week, the Model Mondays spotlight was on Adv…  ( 31 min )
    Getting Started with the AI Toolkit: A Beginner’s Guide with Demos and Resources
    If you're curious about building AI solutions but don’t know where to start, Microsoft’s AI Toolkit is a great place to begin. Whether you’re a student, developer, or just someone exploring AI for the first time, this toolkit helps you build real-world solutions using Microsoft’s powerful AI services. In this blog, I’ll Walk you through what the AI Toolkit is, how you can get started, and where you can find helpful demos and ready-to-use code samples. What is the AI Toolkit? The AI Toolkit is a collection of tools, templates, and sample apps that make it easier to build AI-powered applications and copilots using Microsoft Azure. With the AI Toolkit, you can: Build intelligent apps without needing deep AI expertise. Use templates and guides that show you how everything works. Quickly proto…  ( 24 min )
  • Open

    Boosting Productivity with Ansys RedHawk-SC and Azure NetApp Files Intelligent Data Infrastructure
    Table of Contents Abstract Introduction Using Ansys Access with Azure NetApp Files Architecture Diagram Ansys Redhawk Scenario Details Overview and Context HPC Simulation Environment Cloud Shift Drivers Why Azure NetApp Files Capacity and Scale Fits for Ansys Access to support RedHawk-SC Use Cases Power Integrity Simulations Transient Simulations with Frequent Checkpoints Multiple Concurrent Simulation Runs End-to-End Engineering Workflows Azure Well-Architected Pillars And Considerations Performance Efficiency Parallel I/O and Low Latency Large Volumes Dynamic Service Levels and Volume Resizing Protocol Optimization Performance Isolation and QoS Cluster Right-Sizing Cost Optimization Pay-As-You-Go Model Storage Efficiency for data protection Reserved Capacity Tiering for Cold Data Archiva…  ( 41 min )
  • Open

    Balance governance and flexibility with Dev Box project policies
    As organizations scale their development efforts, managing access to cloud resources becomes critical. Platform engineers need to strike a balance between enforcing governance and enabling developer agility. At Build 2025, we announced the general availability of project policies in Microsoft Dev Box, which provides a powerful way to improve resource control and governance for cloud […] The post Balance governance and flexibility with Dev Box project policies appeared first on Develop from the cloud.  ( 24 min )
    Control cloud costs with Dev Box hibernation features
    Cost is one of the most important concerns in any cloud-native rollout. IT admins need powerful tools to control costs without slowing development. At Build 2025, we were excited to announce the general availability of hibernation in Microsoft Dev Box. This feature empowers platform engineers to optimize resource usage while empowering developers to get what […] The post Control cloud costs with Dev Box hibernation features appeared first on Develop from the cloud.  ( 23 min )

  • Open

    Engaging Employees: A Journey Through Data Analytics
    Our Team (Sorted Alphabetically): Ashkan Allahyari| ashkan.allahayri@ru.nl Ole Bekker| ole.bekker2@ru.nl Robin Elster| robin.elster@ru.nl Waad Hegazy| waad.hegazy@ru.nl Lea Hierl| lea.hierl@ru.nl Linda Pham| linda.pham@ru.nl  Master Business Analysis & Modelling, Radboud University Master Strategic Human Resources Leadership, Radboud University Student Exchange at Radboud University Project Overview At Radboud University, our team in the course Data-Driven Analytics for Responsible Business Solutions, embraced a unique opportunity to apply our passion for data analytics to a real-world challenge. Tasked with analyzing employee turnover at VenturaGear—a company committed to fostering a thriving workplace—we conducted an in-depth study to uncover the root causes of attrition. By le…  ( 36 min )
  • Open

    Announcing Public Preview of the Root Cert API in App Service Environment v3
    What is the Root Cert API? The Root Cert API allows customers to programmatically add root certificates to their ASE, making them available during the startup of apps. Root certificates are public certificates that identify a root certificate authority (CA). These are essential for establishing trust in secure communications. By adding root certificates to your ASE, all web apps hosted within that ASE will have them installed in their root store. This ensures that apps can securely communicate with internal services or APIs that use certificates issued by private or enterprise CAs. Previously, this functionality was only available in private preview through a workaround involving certificate uploads and a special app setting and included a number of limitations. With the new Root Cert API,…  ( 27 min )

  • Open

    Mastering Model Context Protocol (MCP): Building Multi Server MCP with Azure OpenAI
    The Model Context Protocol (MCP) is rapidly becoming the prominent framework for building truly agentic, interoperable AI applications.   Many articles document MCP servers for single-server use, this project stands out as the starter template that combines Azure OpenAI integration with a Multi-Server MCP architecture on a custom interface, enabling you to connect and orchestrate multiple tool servers—through a customized UI.   Here, we will deep dive into Multi Server MCP implementation, connecting both local custom and ready MCP Servers in a single client session through the MultiServerMCP library from Langchain adapters, enabling agentic orchestration across different domains. While most triggers to OOB MCP servers leveraged Github Copilot for input, this project allows for custom app i…  ( 31 min )
    Announcing the Extension of Some Language Understanding Intelligence Service (LUIS) Functionality
    In 2022, we announced the deprecation of LUIS by September 30, 2025 with a recommendation to migrate to conversational language understanding (CLU). In response to feedback from our valued customers, we have decided to extend the availability of certain functionalities in LUIS until March 31, 2026. This extension aims to support our customers in their smooth migration to CLU, ensuring minimal disruption to their operations.  Extension Details  Here are some details on when and how the LUIS functionality will change:  October 2022: LUIS resource creation is no longer available. October 31, 2025: The LUIS portal will no longer be available.  LUIS Authoring (via REST API only) will continue to be available. March 31, 2026: LUIS Authoring, including via REST API, will no longer be available.  LUIS Runtime will no longer be available.  Before these retirement dates, please migrate to conversational language understanding (CLU), a capability of Azure AI Service for Language. CLU provides many of the same capabilities as LUIS, plus enhancements such as:  Enhanced AI quality using state-of-the-art machine learning models  The LLM-powered Quick Deploy feature to deploy a CLU model with no training Multilingual capabilities that allow you to train in one language and predict in 99+ others  Built-in routing between conversational language understanding and custom question answering projects using orchestration workflow  Access to a suite of features available on Azure AI Service for Language in the Azure AI Foundry  Looking Ahead  On March 31, 2026, LUIS will be fully deprecated, and any LUIS inferencing requests will return an error message. We encourage all our customers to complete their migration to CLU as soon as possible to avoid any disruptions.  We appreciate your understanding and cooperation as we work together to ensure a smooth migration.   Thank you for your continued support and trust in our services.  ( 21 min )
  • Open

    Drive carbon reductions in cloud migrations with Sustainability insights in Azure Migrate
    Introduction As sustainability becomes a core priority for organizations worldwide, Azure Migrate now empowers customers to quantify environmental impact alongside cost savings when planning their cloud journey. With the new Sustainability Benefits capability in Azure Migrate's Business Case, customers can now view estimated emissions savings when migrating from on-premises environments to Azure — making sustainability a first-class consideration in cloud transformation. Align with Global Sustainability Goals With governments and enterprises racing to meet net-zero targets — including a 55% emissions reduction target by 2030 in the EU and net-zero goals in the US and UK by 2050 — cloud migration offers a meaningful path to emissions reduction. With Azure’s carbon-efficient infrastructure p…  ( 25 min )
  • Open

    Connect Spring AI to Local AI Models with Foundry Local
    What is Azure AI Foundry and Foundry Local? Azure AI Foundry is Microsoft’s comprehensive platform for enterprise AI development and deployment, enabling organizations to build, customize, and operate AI solutions at scale. It provides tools, services, and infrastructure to develop, fine-tune and deploy AI models in production environments with enterprise-grade security and compliance. Foundry Local […] The post Connect Spring AI to Local AI Models with Foundry Local appeared first on Microsoft for Java Developers.  ( 25 min )
  • Open

    Azure DevOps MCP Server, Public Preview
    A few weeks ago at BUILD, we announced the upcoming Azure DevOps MCP Server: 👉 Azure DevOps with GitHub Repositories – Your path to Agentic AI Today, we’re excited to share that the local Azure DevOps MCP Server is now available in public preview. This lets GitHub Copilot in Visual Studio and Visual Studio Code […] The post Azure DevOps MCP Server, Public Preview appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Validating Change Requests with Kubernetes Admission Controllers
    Promoting an application or infrastructure change into production often comes with a requirement to follow a change control process. This ensures that changes to production are properly reviewed and that they adhere to required approvals, change windows and QA process. Often this change request (CR) process will be conducted using a system for recording and auditing the change request and the outcome. When deploying a release, there will often be places in the process to go through this change control workflow. This may be as part of a release pipeline, it may be managed in a pull request or it may be a manual process. Ultimately, by the time the actual changes are made to production infrastructure or applications, they should already be approved. This relies on the appropriate controls an…  ( 40 min )
  • Open

    Connecting Azure Kubernetes Service Cluster to Azure Machine Learning for Multi-Node GPU Training
    TLDR Create an Azure Kubernetes Service cluster with GPU nodes and connect it to Azure Machine Learning to run distributed ML training workloads. This integration provides a managed data science platform while maintaining Kubernetes flexibility under the hood, enables multi-node training that spans multiple GPUs, and bridges the gap between infrastructure and ML teams. The solution works for both new and existing clusters, supporting specialized GPU hardware and hybrid scenarios. Why Should You Care? Integrating Azure Kubernetes Service (AKS) clusters with GPUs into Azure Machine Learning (AML) offers several key benefits: Utilize existing infrastructure: Leverage your existing AKS clusters with GPUs via a managed data science platform like AML Flexible resource sharing: Allow both AKS wo…  ( 34 min )
  • Open

    Introducing MCP Support for Real-Time Intelligence (RTI)
    Co-author: Alexei Robsky, Data Scientist Manager Overview  As organizations increasingly rely on real-time data to drive decisions, the need for intelligent, responsive systems has never been greater. At the heart of this transformation is Fabric Real-Time Intelligence (RTI), a platform that empowers users to act on data as it arrives. Today, we’re excited to announce … Continue reading “Introducing MCP Support for Real-Time Intelligence (RTI) “  ( 7 min )
    Fabric Eventhouse now supports Eventstream Derived Streams in Direct Ingestion mode (Preview)
    The Eventstreams artifact in the Microsoft Fabric Real-Time Intelligence experience lets you bring real-time events into Fabric, transform them, and then route them to various destinations such as Eventhouse, without writing any code (no-code). You can ingest data from an Eventstream to Eventhouse seamlessly either from Eventstream artifact or Eventhouse Get Data Wizard. This capability … Continue reading “Fabric Eventhouse now supports Eventstream Derived Streams in Direct Ingestion mode (Preview)”  ( 7 min )
    Introducing new item creation experience in Fabric
    Have you ever found yourself frustrated by inconsistent item creation? Maybe you’ve struggled to select the right workspace or folder when creating a new item or ended up with a cluttered workspace due to accidental item creation. We hear you—and we’re excited to introduce the new item creation experience in Fabric! This update is designed … Continue reading “Introducing new item creation experience in Fabric”  ( 6 min )
    Surge Protection for Background Operation (Generally Available)
    We’re excited to announce Surge Protection for background operations is now Generally Available (GA). Using surge protection, capacity admins can limit overuse by background operations in their capacities.  ( 6 min )
  • Open

    Modernizing Loan Processing with Gen AI and Azure AI Foundry Agentic Service
    Scenario Once a loan application is submitted, financial institutions must process a variety of supporting documents—including pay stubs, tax returns, credit reports, and bank statements—before a loan can be approved. This post-application phase is often fragmented and manual, involving data retrieval from multiple systems, document verification, eligibility calculations, packet compilation, and signing. Each step typically requires coordination between underwriters, compliance teams, and loan processors, which can stretch the processing time to several weeks. This solution automates the post-application loan processing workflow using Azure services and Generative AI agents. Intelligent agents retrieve and validate applicant data, extract and summarize document contents, calculate loan eli…  ( 42 min )
  • Open

    Python in Visual Studio Code – June 2025 Release
    The June 2025 release includes Copilot chat tools in the Python extension, project creation from a template, language server based terminal suggest, and more! The post Python in Visual Studio Code – June 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )

  • Open

    Result Set Caching for Microsoft Fabric Data Warehouse (Preview)
    Result Set Caching is now available in preview for Microsoft Fabric Data Warehouse and Lakehouse SQL analytics endpoint. This performance optimization works transparently to cache the results of eligible T-SQL queries. When the same query is issued again, it directly retrieves the stored result, instead of recompiling and recomputing the original query. This operation drastically … Continue reading “Result Set Caching for Microsoft Fabric Data Warehouse (Preview)”  ( 6 min )
  • Open

    Quest 4 - I want to connect my AI prototype to external data using RAG
    In this quest, you'll teach your AI app to talk to external data using the Retreival Augmented Generation (RAG) technique. You'll overcome the limitations of pre-trained language models by allowing them to reference your own data, using it as context to deliver accurate, fact-based responses.   👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🔧 What You’ll Build In this quest, you’ll: Connect your AI app to external documents (like PDFs) Allow your app to “read” and respond using your real-world content Why does this matter? Because LLMs are powerful, but they don’t know your business, reports, or research papers, etc. With RAG, yo…  ( 25 min )
    Use Prompty with Foundry Local
    Prompty is a powerful tool for managing prompts in AI applications. Not only does it allow you to easily test your prompts during development, but it also provides observability, understandability and portability. Here's how to use Prompty with Foundry Local to support your AI applications with on-device inference. Foundry Local At the Build '25 conference, Microsoft announced Foundry Local, a new tool that allows developers to run AI models locally on their devices. Foundry Local offers developers several benefits, including performance, privacy, and cost savings. Why Prompty? When you build AI applications with Foundry Local, but also other language model hosts, consider using Prompty to manage your prompts. With Prompty, you store your prompts in separate files, making it easy to test a…  ( 27 min )
  • Open

    AI Automation in Azure Foundry through turnkey MCP Integration and Computer Use Agent Models
    The Fashion Trends Discovery Scenario In this walkthrough, we'll explore a sample application that demonstrates the power of combining Computer Use (CUA) models with Playwright browser automation to autonomously compile trend information from the internet, while leveraging MCP integration to intelligently catalog and store insights in Azure Blob Storage. The User Experience A fashion analyst simply provides a query like "latest trends in sustainable fashion" to our command-line interface. What happens next showcases the power of agentic AI—the system requires no further human intervention to: Autonomous Web Navigation: The agent launches Pinterest, intelligently locates search interfaces, and performs targeted queries Intelligent Content Discovery: Systematically identifies and interacts …  ( 43 min )

  • Open

    12 GitLens Features that Revolutionized My Coding Workflow in VS Code
    Let me walk you through 12 features that have become indispensable in my daily coding life. Understanding Code Changes & History Inline Blame Annotations: GitLens adds small annotations at the end of each line of code, showing who last modified that line, when, and in which commit. This feature provides instant context about the code's history without leaving your editor. While working on an e-commerce platform's checkout process, I noticed an unexpected behavior. With inline blame, I quickly identified that the change was introduced in a recent sprint, saving hours of backtracking.  Heatmap: This feature adds a color-coded heatmap to the scroll bar, visually representing the age of the code. Newer changes appear in warmer colors, while older code is shown in cooler colors, helping you q…  ( 30 min )
  • Open

    Quest 3 - I want to add a simple chat interface to my AI prototype
    In this quest, you’ll give your Gen AI prototype a polished chat interface using Vite and Lit. Along the way, you’ll also manage application infrastructure with Bicep and Azure Developer CLI (azd), making your prototype more structured and ready for deployment. This step is all about UX, making your AI prototype not just functional, but interactive and user-friendly. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🔧 What You’ll Build By the end of this quest, you’ll have: A chat UI built with Vite and Lit  A structured codebase with infrastructure-as-code (IaC) using Bicep  Seamless local deployment workflow using the Azure Develop…  ( 26 min )
  • Open

    Cohere Models Now Available on Managed Compute in Azure AI Foundry Models
    Over the course of the last year, we have launched several Cohere models on Azure as Serverless Standard (pay-go) offering. We’re excited to announce that Cohere's latest models—Command A, Rerank 3.5, and Embed 4—are now available on Azure AI Foundry models via Managed Compute.   This launch allows enterprises and developers to now deploy Cohere models instantly with their own Azure quota, with per-hour GPU pricing that compensates the model provider—unlocking a scalable, low-friction path to production-ready GenAI.    What is Managed Compute?  Managed Compute is a deployment option within Azure AI Foundry Models that lets you run large language models (LLMs), SLMs, HuggingFace models and custom models fully hosted on Azure infrastructure.    Why Use Managed Compute?  Azure Managed Compute…  ( 23 min )

  • Open

    Monitor your Quarkus native application on Azure
    Introduction Quarkus is a general-purpose Java framework focused on efficient use of resources, fast startup, and rapid development. It allows developers to create and run services in the Java Virtual Machine (JVM) or native binary executables (native mode). In this blog post we are going to focus on using Quarkus to create and monitor a […] The post Monitor your Quarkus native application on Azure appeared first on Microsoft for Java Developers.  ( 25 min )
  • Open

    Handling unexpected job terminations on Azure Container Apps
    This post goes over situations where you may notice jobs being terminated or stopped briefly and some things that can be done to help alleviate interruptions.  ( 7 min )
  • Open

    Integrating Azure API Management with Fabric API for GraphQL
    Introduction Integrating Azure API Management (APIM) with Microsoft Fabric’s API for GraphQL can significantly enhance your API’s capabilities by providing robust scalability and security features such as identity management, rate limiting, and caching. This post will guide you through the process of setting up and configuring these features. You may not be familiar with API … Continue reading “Integrating Azure API Management with Fabric API for GraphQL”  ( 8 min )
    Privacy by Design: PII Detection and Anonymization with PySpark on Microsoft Fabric
    Introduction Whether you’re building analytics pipelines or conversational AI systems, the risk of exposing sensitive data is real. AI models trained on unfiltered datasets can inadvertently memorize and regurgitate PII, leading to compliance violations and reputational damage. This blog explores how to build scalable, secure, and compliant data workflows using PySpark, Microsoft Presidio, and Faker—covering … Continue reading “Privacy by Design: PII Detection and Anonymization with PySpark on Microsoft Fabric”  ( 9 min )
  • Open

    AgentCon arriva a Milano Martedì 17 Giugno
    📍 Milano, Sala Luigiana @ Coperni.co Centrale - Via Copernico 38📅 17 Giugno 2025 dalle 9:00 alle 17:30🧠 Focus: AI Agents per sviluppatori, ricercatori, imprenditori e professionisti IT La serie globale AgentCon – AI Agents World Tour arriva a Milano per una giornata interamente dedicata a chi costruisce il futuro con l’intelligenza artificiale. Dopo il successo delle tappe internazionali in Kenya, Olanda e Brasile, il tour approda in Italia per offrire contenuti tecnici, demo dal vivo e networking con esperti del settore. Perché partecipare? AgentCon non è una conferenza qualsiasi. È il luogo dove si parla di casi d'uso concreti e esempi pratici di codice. L’evento è pensato per sviluppatori, architetti software, data scientist e professionisti IT che vogliono andare oltre i modelli e s…  ( 22 min )

  • Open

    Introducing upgrades to AI functions for better performance—and lower costs
    Earlier this year, we released AI functions in public preview, allowing Fabric customers to apply LLM-powered transformations to OneLake data simply and seamlessly, in a single line of code. Since then, we’ve continued iterating on AI functions in response to your feedback. Let’s explore the latest updates, which make AI functions more powerful, more cost-effective, … Continue reading “Introducing upgrades to AI functions for better performance—and lower costs”  ( 7 min )
  • Open

    Configure Embedding Models on Azure AI Foundry with Open Web UI
    Introduction Let’s take a closer look at an exciting development in the AI space. Embedding models are the key to transforming complex data into usable insights, driving innovations like smarter chatbots and tailored recommendations. With Azure AI Foundry, Microsoft’s powerful platform, you’ve got the tools to build and scale these models effortlessly. Add in Open Web UI, a intuitive interface for engaging with AI systems, and you’ve got a winning combo that’s hard to beat. In this article, we’ll explore how embedding models on Azure AI Foundry, paired with Open Web UI, are paving the way for accessible and impactful AI solutions for developers and businesses. Let’s dive in!   To proceed with configuring the embedding model from Azure AI Foundry on Open Web UI, please firstly configure the…  ( 24 min )
  • Open

    June Patches for Azure DevOps Server
    Today we are releasing patches that impact the latest version of our self-hosted product, Azure DevOps Server. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps Server download page. […] The post June Patches for Azure DevOps Server appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Fix Identity Sprawl + Optimize Microsoft Entra
    Enforce MFA, block legacy authentication, and apply risk-based Conditional Access policies to reduce exposure from stale accounts and weak authentication methods. Use built-in tools for user, group, and device administration to detect and clean up identity sprawl — like unused credentials, inactive accounts, and expired apps — before they become vulnerabilities.  Jeremy Chapman, Microsoft 365 Director, shares steps to clean up your directory, strengthen authentication, and improve overall identity security.  Prioritize top risks.  Take action across MFA, risk policies, and stale objects with Microsoft Entra recommendations. Start here. Block over 99% of identity attacks.  Enforce MFA for admins and users in Microsoft Entra. Detect and delete stale user accounts.  See how to fix account …  ( 45 min )

  • Open

    Drasi accepted into CNCF sandbox for change-driven solutions
    The Azure Incubations team is proud to share that Drasi has officially been accepted into the Cloud Native Computing Foundation Sandbox. The post Drasi accepted into CNCF sandbox for change-driven solutions  appeared first on Microsoft Open Source Blog.  ( 13 min )
  • Open

    Running Self-hosted APIM Gateways in Azure Container Apps with VNet Integration
    With Azure Container Apps we can run containerized applications, completely serverless. The platform itself handles all the orchestration needed to dynamically scale based on your set triggers (such as KEDA) and even scale-to-zero! I have been working a lot with customers recently on using Azure API Management (APIM) and the topic of how we can leverage Azure APIM to manage our internal APIs without having to expose a public IP and stay within compliance from a security standpoint, which leads to the use of a Self-Hosted Gateway. This offers a managed gateway deployed within their network, allowing a unified approach in managing their APIs while keeping all API communication in-network. The self-hosted gateway is deployed as a container and in this article, we will go through how to provis…  ( 27 min )

  • Open

    Pre-Migration Vulnerability Scans:
    Migrating applications to the cloud or modernizing infrastructure requires thorough preparation. Whether it’s a cloud platform, a new data center, or a hybrid infrastructure — is a complex process. While organizations focus on optimizing performance, costs, and scalability, security often takes a backseat, leading to potential risks post-migration. One crucial step before migration is conducting a pre-migration scan to identify security vulnerabilities, licensing risks, and code quality issues. Several tools help in pre-migration scanning, including Blackduck, Coverity, Gitleaks, and Semgrep. In this article, we will explore the role of these tools in migration readiness. Why Perform a Pre-Migration Scan? When an application moves from an on-premises environment to the cloud, it interacts…  ( 31 min )
    Introducing Azure Migrate Explore with AI Assistant
    We're thrilled to announce the Public Preview of Azure Migrate Explore with an AI assistant! This exciting update enhances our existing Azure Migrate Explore (AME) utility, making migration assessments smarter, faster, and more impactful. What is Azure Migrate? Azure Migrate serves as a comprehensive hub designed to simplify the migration journey of on-premises infrastructure, including servers, databases, and web applications, to Azure Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) targets at scale. It provides a unified platform with a suite of tools and features to help you identify the best migration path, assess Azure readiness, estimate the cost of hosting workloads on Azure, and execute the migration with minimal downtime and risk. Revolutionizing Executive Pres…  ( 22 min )
  • Open

    Microsoft and F5 join forces on OpenTelemetry with Apache Arrow in Rust
    Microsoft and F5 are collaborating on Phase 2 of the OpenTelemetry with Apache Arrow project. The post Microsoft and F5 join forces on OpenTelemetry with Apache Arrow in Rust appeared first on Microsoft Open Source Blog.  ( 12 min )
  • Open

    What's new in SQL Server 2025
    Add deep AI integration with built-in vector search and DiskANN optimizations, plus native support for large object JSON and new Change Event Streaming for live data updates.  Join and analyze data faster with the Lakehouse shortcuts in Microsoft Fabric that unify multiple databases — across different SQL Server versions, clouds, and on-prem — into a single, logical schema without moving data. Build intelligent apps, automate workflows, and unlock rich insights with Copilot and the unified Microsoft data platform, including seamless Microsoft Fabric integration, all while leveraging your existing SQL skills and infrastructure.  Bob Ward, lead SQL engineer, joins Jeremy Chapman to share how the latest SQL Server 2025 innovations simplify building complex, high-performance workloads with les…  ( 57 min )
  • Open

    Refresh SQL analytics endpoint Metadata REST API (Preview)
    We’re excited to announce that the long-awaited refresh SQL analytics endpoint metadata REST API is now available in preview. You can now programmatically trigger a refresh of your SQL analytics endpoint to keep tables in sync with any changes made in the parent artifact, ensuring that you can keep your data up to date as needed. … Continue reading “Refresh SQL analytics endpoint Metadata REST API (Preview)”  ( 6 min )
    How to debug user data functions locally in VS Code
    Debugging your code is a big deal, especially when you’re working with user data functions. You want to make sure everything works as it should and that’s where local debugging lets you catch problems in your code without messing with the live environment. In this blog post, I’ll walk you through the steps to make local debugging easier, faster, and less of a headache.  ( 7 min )
  • Open

    A Recap of the Build AI Agents with Custom Tools Live Session
    Artificial Intelligence is evolving, and so are the ways we build intelligent agents. On a recent Microsoft YouTube Live session, developers and AI enthusiasts gathered to explore the power of custom tools in AI agents using Azure AI Studio. The session walked through concepts, use cases, and a live demo that showed how integrating custom tools can bring a new level of intelligence and adaptability to your applications. 🎥 Watch the full session here:  https://www.youtube.com/live/MRpExvcdxGs?si=X03wsQxQkkshEkOT What Are AI Agents with Custom Tools? AI agents are essentially smart workflows that can reason, plan, and act — powered by large language models (LLMs). While built-in tools like search, calculator, or web APIs are helpful, custom tools allow developers to tailor agents for busin…  ( 23 min )
  • Open

    Quest 1 – I Want to Build a Local Gen AI Prototype
    Part of the JS AI Build-a-thon, this quest is where the rubber meets the road. Whether you're here to explore the raw power of local models or just curious what Generative AI looks like under the hood, this guide is for you. 👉 Want to catch up on the full program or grab more quests? Start here💬 Got questions or just want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🧩 What You’ll Build in This Quest   In this quest, you’ll build a fully working local Gen AI prototype — no cloud APIs, no credits needed. You’ll explore the GitHub Models catalog, run local inference using an open model, and convert a hand-drawn UI sketch into a working webpage. Along the way, you’ll get hands-on with AI developer tooling built right into Visual Studio Code. Wh…  ( 31 min )
    Quest – I Want to Build a Local Gen AI Prototype
    Part of the JS AI Build-a-thon, this quest is where the rubber meets the road. Whether you're here to explore the raw power of local models or just curious what Generative AI looks like under the hood, this guide is for you. 👉 Want to catch up on the full program or grab more quests? Start here💬 Got questions or just want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🧩 What You’ll Build in This Quest   In this quest, you’ll build a fully working local Gen AI prototype — no cloud APIs, no credits needed. You’ll explore the GitHub Models catalog, run local inference using an open model, and convert a hand-drawn UI sketch into a working webpage. Along the way, you’ll get hands-on with AI developer tooling built right into Visual Studio Code. Wh…  ( 31 min )
    JS AI Build-a-thon Setup in 5 Easy Steps
    Want to build next-gen apps using AI and JavaScript (or TypeScript)? The JS AI Build-a-thon is your launchpad — a hands-on, quest-driven journey designed to take you from curious coder to AI-powered app builder. It’s self-paced, community-backed, and packed with real-world use cases. No fluff. Just code, context, and creative exploration. 👾 Ready to roll? Here's your setup guide: 🔧 Step 1: Start Your Journey Head to aka.ms/JSAIBuild-a-thon and find the GitHub repository. Before you hit “Start Course,” take a moment to scroll through the README — it’s packed with useful info, including a global list of Study Jams where you can connect with other developers learning alongside you. Find one near you or join virtually to level up with the community. Screenshot off available study jams Once y…  ( 25 min )

  • Open

    HCX 4.11.0 Upgrade and What it means for Current HCX Users
    Overview Azure VMware Solution is a VMware validated first party Azure service from Microsoft that provides private clouds containing VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. It enables customers to leverage their existing investments in VMware skills and tools, allowing them to focus on developing and running their VMware-based workloads on Azure. VMware HCX is the mobility and migration software used by the Azure VMware Solution to connect remote VMware vSphere environments to the Azure VMware Solution. These remote VMware vSphere environments can be on-premises, co-location or cloud-based instances.     Figure 1 – Azure VMware Solution with VMware HCX Service Mesh Broadcom has announced the end-of-life (EOL) for VMware HCX version 4.10.x, effective …  ( 27 min )
    Azure VMware Solution now available in Korea Central
    We are pleased to announce that Azure VMware Solution is now available in Korea Central. Now in 34 Azure regions, Azure VMware Solution empowers you to seamlessly extend or migrate existing VMware workloads to Azure without the cost, effort or risk of re-architecting applications or retooling operations.  Azure VMware Solution supports: Rapid cloud migration of VMware-based workloads to Azure without refactoring. Datacenter exit while maintaining operational consistency for the VMware environment. Business continuity and disaster recovery for on-premises VMware environments. Attach Azure services and innovate applications at your own pace. Includes the VMware technology stack and lets you leverage existing Microsoft licenses for Windows Server and SQL Server. For updates on current and upcoming region availability, visit the product by region page here.   Streamline migration with new offers and licensing benefits, including a 20% discount. We recently announced the VMware Rapid Migration Plan, where Microsoft provides a comprehensive set of licensing benefits and programs to give you price protection and savings as you migrate to Azure VMware Solution. Azure VMware Solution is a great first step to the cloud for VMware customers, and this plan can help you get there. Learn More  ( 19 min )
  • Open

    From Cloud to Edge: Navigating the Future of AI with LLMs, SLMs, and Azure AI Foundry
    Use Cases: From Automation to Edge AI   Generative AI is transforming industries through: Content creation, summarization, and translation Customer engagement via chatbots and personalization Edge deployment for low-latency, privacy-sensitive applications Domain-specific tasks like legal, medical, or technical document processing LLMs vs. SLMs: Choosing the Right Fit FeatureLLMsSLMs ParametersBillions (e.g., GPT-4)Millions PerformanceHigh accuracy, nuanced understandingFast, efficient for simpler tasks DeploymentCloud-based, resource-intensiveIdeal for edge and mobile CostHigh compute and energyCost-effective SLMs are increasingly viable thanks to optimized runtimes and hardware, making them perfect for on-device AI. Azure AI Foundry: Your AI Launchpad Azure AI Foundry offers: A model catalogue with open-source and proprietary models   Tools for fine-tuning, evaluation, and deployment Integration with GitHub, VS Code, and Azure DevOps Support for both cloud and local inferencing Local AI: The Edge Advantage With tools like Foundry Local and Windows AI Foundry, developers can: Run models on-device with ONNX Runtime Use APIs for summarization, translation, and more Optimize for CPU, GPU, and NPU Ensure privacy, low latency, and offline capability Customization: RAG vs. Fine-Tuning FeatureRAGFine-Tuning Knowledge UpdatesDynamicStatic InterpretabilityHighLow LatencyHigherLower Hallucination RiskLowerModerate Use CaseReal-time, external dataDomain-specific tasks Both methods enhance model relevance RAG by retrieving external data, and fine-tuning by adapting model weights. Developer Resources Get started with: Foundry Local SDK Windows AI Foundry | Microsoft Developer AI Toolkit for VS Code Windows ML Azure AI Learn Courses Join the Azure AI Discord Community  ( 21 min )
  • Open

    Restricting PAT Creation in Azure DevOps Is Now in Preview
    As organizations continue to strengthen their security posture, restricting usage of personal access tokens (PATs) has become a critical area of focus. With the latest public preview of the Restrict personal access token creation policy in Azure DevOps, Project Collection Administrators (PCAs) now have another powerful tool to reduce unnecessary PAT usage and enforce tighter […] The post Restricting PAT Creation in Azure DevOps Is Now in Preview appeared first on Azure DevOps Blog.  ( 25 min )
  • Open

    Meet the Supercomputer that runs ChatGPT, Sora & DeepSeek on Azure (feat. Mark Russinovich)
    Orchestrate multi-agent apps and high-scale inference solutions using open-source and proprietary models, no infrastructure management needed. With Azure, connect frameworks like Semantic Kernel to models from DeepSeek, Llama, OpenAI’s GPT-4o, and Sora, without provisioning GPUs or writing complex scheduling logic. Just submit your prompt and assets, and the models do the rest. Using Azure’s Model as a Service, access cutting-edge models, including brand-new releases like DeepSeek R1 and Sora, as managed APIs with autoscaling and built-in security. Whether you’re handling bursts of demand, fine-tuning models, or provisioning compute, Azure provides the capacity, efficiency, and flexibility you need. With industry-leading AI silicon, including H100s, GB200s, and advanced cooling, your solut…  ( 57 min )

  • Open

    Enhancing Plugin Metadata Management with SemanticPluginForge
    In the world of software development, flexibility and adaptability are key. Developers often face challenges when it comes to updating plugin metadata dynamically without disrupting services or requiring redeployment. This is where SemanticPluginForge, an open-source project, steps in to improve the way we manage plugin metadata. LLM Function Calling Feature The function calling feature in LLMs […] The post Enhancing Plugin Metadata Management with SemanticPluginForge appeared first on Semantic Kernel.  ( 25 min )
    Smarter SK Agents with Contextual Function Selection
    Smarter SK Agents with Contextual Function Selection In today’s fast-paced AI landscape, developers are constantly seeking ways to make AI interactions more efficient and relevant. The new Contextual Function Selection feature in the Semantic Kernel Agent Framework is here to address this need. By dynamically selecting and advertising only the most relevant functions based on […] The post Smarter SK Agents with Contextual Function Selection appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Secure Your Data from Day One: Best Practices for Success with Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric
    As data volume and complexity soar, protecting sensitive information has become non-negotiable. With the latest enhancements to Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric, organizations now have the power to proactively secure their data in Onelake. Whether you’re just getting started or looking to take your data governance to the next level, following … Continue reading “Secure Your Data from Day One: Best Practices for Success with Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric”  ( 7 min )
  • Open

    Skill Up On The Latest AI Models & Tools on Model Mondays - Season 2 starts Jun 16!
    Quick Links To RSVP for each episode: EP1: Advanced Reasoning Models: https://developer.microsoft.com/en-us/reactor/events/25905/  EP2: Model Context Protocol:https://developer.microsoft.com/en-us/reactor/events/25906/  EP3: SLMs (and Reasoning):https://developer.microsoft.com/en-us/reactor/events/25907/  Get All The Details: https://aka.ms/model-mondays    Azure AI Foundry offers the best model choice  Did you manage to catch up on all the talks from Microsoft Build 2025? If, like me, you are interested in building AI-driven applications on Azure, you probably started by looking at what’s new in Azure AI Foundry.  I recommend you read Asha Sharma’s post for the top 10 things you need to know in this context. And it starts with New Models & Smarter Models!  New Models | Azure AI Foundr…  ( 28 min )
  • Open

    Dev Proxy v0.28 with LLM usage and costs tracking
    The latest version of Dev Proxy introduces a new ability to help you understand language models’ usage and costs in your applications, alongside many improvements to mocking, TypeSpec generation, and plugin flexibility. The post Dev Proxy v0.28 with LLM usage and costs tracking appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    GraphRAG and PostgreSQL integration in docker with Cypher query and AI agents
    Why should I care?   ⚡️In under 15 minutes, you’ll have a Cypher-powered, semantically rich knowledge graph you can query interactively.   How can graphRAG help? GraphRAG extracts structured knowledge from raw, unstructured data like .txt files by building a knowledge graph. This enables more precise and context-aware retrieval, making it easier to surface relevant insights from messy or disconnected content. What are the challenges? While the standard graphRAG indexing process typically expects input and output directories, some users already store their data in a DB (database) and prefer to run graphRAG directly against using the DB for both input and output. This eliminates the need for intermediate blob storage and simplifies the pipeline. Additionally, customers often request support …  ( 48 min )
    GraphRAG and PostgreSQL integration in docker with Cypher query and AI agents
    Why should I care?   ⚡️In under 15 minutes, you’ll have a Cypher-powered, semantically rich knowledge graph you can query interactively.   How can graphRAG help? GraphRAG extracts structured knowledge from raw, unstructured data like .txt files by building a knowledge graph. This enables more precise and context-aware retrieval, making it easier to surface relevant insights from messy or disconnected content. What are the challenges? While the standard graphRAG indexing process typically expects input and output directories, some users already store their data in a DB (database) and prefer to run graphRAG directly against using the DB for both input and output. This eliminates the need for intermediate blob storage and simplifies the pipeline. Additionally, customers often request support …
  • Open

    Host Remote MCP Servers on App Service: Updated samples now with new languages and auth support
    If you haven't seen my previous blog post introducing MCP on Azure App Service, check that out here for a quick overview and getting started. In this blog post, I’m excited to share some updates for our App Service MCP samples: new language samples, updated functionality to replace deprecated methods, and built-in authentication and authorization—all designed to make it easier for developers to host MCP servers on Azure App Service. 🔄 Migrating from SSE to Streamable HTTP The original .NET sample I shared used Server-Sent Events (SSE) for streaming responses. However, SSE has since been deprecated in favor of streamable HTTP, which offers better compatibility and performance across platforms. To align with the latest MCP specification, I’ve updated the .NET sample to use streamable HTTP: …  ( 23 min )
    Host Remote MCP Servers on App Service: Updated samples now with new languages and auth support
    If you haven't seen my previous blog post introducing MCP on Azure App Service, check that out here for a quick overview and getting started. In this blog post, I’m excited to share some updates for our App Service MCP samples: new language samples, updated functionality to replace deprecated methods, and built-in authentication and authorization—all designed to make it easier for developers to host MCP servers on Azure App Service. 🔄 Migrating from SSE to Streamable HTTP The original .NET sample I shared used Server-Sent Events (SSE) for streaming responses. However, SSE has since been deprecated in favor of streamable HTTP, which offers better compatibility and performance across platforms. To align with the latest MCP specification, I’ve updated the .NET sample to use streamable HTTP: …

  • Open

    Build like Microsoft: Developer agents in action
    Take a deep dive into Athena, an AI-powered collaborative agent, to learn how it was built and how to create your own version of Athena right within Microsoft Teams. The post Build like Microsoft: Developer agents in action appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Announcing Azure Command Launcher for Java
    Optimizing JVM Configuration for Azure Deployments Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, therefore relying on the default settings of the HotSpot JVM.  The default settings in OpenJDK are intentionally conservative, designed to work across a wide range of environments and scenarios. However, these defaults often lead to suboptimal resource utilization in cloud-based deployments, where memory and CPU tend to be dedicated for application workloads (use of containers and VMs) but still require intelligent management to maximize efficiency and cost-effectiveness. To address this, we are excited to introduce jaz, a new JVM launcher optimized specifically for …  ( 26 min )
    Announcing Azure Command Launcher for Java
    Optimizing JVM Configuration for Azure Deployments Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, therefore relying on the default settings of the HotSpot JVM.  The default settings in OpenJDK are intentionally conservative, designed to work across a wide range of environments and scenarios. However, these defaults often lead to suboptimal resource utilization in cloud-based deployments, where memory and CPU tend to be dedicated for application workloads (use of containers and VMs) but still require intelligent management to maximize efficiency and cost-effectiveness. To address this, we are excited to introduce jaz, a new JVM launcher optimized specifically for …
  • Open

    Intelligent Email Automation with Azure AI Agent Service
    Do you ever wish you could simply tell your agent to send an email, without the hassle of typing everything — from recipient list to the subject and the body? If so, this guide on building an email-sending agent might be exactly what you’re looking for. Technically, this guide won’t deliver a fully automated agent right out of the box - you’ll still need to add a speech-to-text layer and carefully curate your prompt instructions. By the end of this post, you’ll have an agent capable of interacting with users through natural conversation and generating emails with dynamic subject lines and content. Overview Azure AI Agent Services offers a robust framework for building conversational agents, making it an ideal choice for developers seeking enterprise-grade security and compliance. This ensu…  ( 29 min )
    Intelligent Email Automation with Azure AI Agent Service
    Do you ever wish you could simply tell your agent to send an email, without the hassle of typing everything — from recipient list to the subject and the body? If so, this guide on building an email-sending agent might be exactly what you’re looking for. Technically, this guide won’t deliver a fully automated agent right out of the box - you’ll still need to add a speech-to-text layer and carefully curate your prompt instructions. By the end of this post, you’ll have an agent capable of interacting with users through natural conversation and generating emails with dynamic subject lines and content. Overview Azure AI Agent Services offers a robust framework for building conversational agents, making it an ideal choice for developers seeking enterprise-grade security and compliance. This ensu…

  • Open

    Secure Mirrored Azure Databricks Data in Fabric with OneLake security
    We’re excited to announce that OneLake security capabilities have been extended to support mirrored data through Azure Mirrored Databricks Catalog. This enhancement brings the full suite of OneLake’s enterprise-grade security features to these mirrored assets, empowering organizations to manage access using table, column, or row level security across all engines.  What’s New?  With this update, Azure … Continue reading “Secure Mirrored Azure Databricks Data in Fabric with OneLake security “  ( 6 min )
  • Open

    How to add custom logging in Azure WebJobs Storage Extensions SDK in dotnet isolated function app
    In our previous blog, we discussed how to debug live issues in the Azure WebJobs Storage Extensions SDK. This approach is particularly effective when issues can be consistently reproduced. However, for intermittent problems, live debugging may not be the most efficient solution. In such cases, integrating custom logging can provide deeper insights and facilitate troubleshooting.   In this blog post, we will provide a step-by-step guide on how to implement custom logging within the Azure WebJobs Storage Extensions SDK. This will help you capture valuable information and better understand the behavior of your applications.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask…  ( 27 min )
    How to add custom logging in Azure WebJobs Storage Extensions SDK in dotnet isolated function app
    In our previous blog, we discussed how to debug live issues in the Azure WebJobs Storage Extensions SDK. This approach is particularly effective when issues can be consistently reproduced. However, for intermittent problems, live debugging may not be the most efficient solution. In such cases, integrating custom logging can provide deeper insights and facilitate troubleshooting.   In this blog post, we will provide a step-by-step guide on how to implement custom logging within the Azure WebJobs Storage Extensions SDK. This will help you capture valuable information and better understand the behavior of your applications.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask…
    Highlights from Microsoft Build 2025
    Microsoft just held its annual Microsoft Build event for developers. The live event might be over, but we have highlights and other content that will keep the excitement going. Explore on-demand sessions, learn about recent product announcements, watch deep technical demos, and discover fresh resources for learning cutting-edge developer skills.   Microsoft Build opening keynoteThe world of development—its tools and its possibilities—is rapidly evolving. In the Microsoft Build keynote, Satya Nadella discusses the agentic web, current dev tools, the dev landscape right now, and where it’s headed.   GitHub Copilot: Meet the new coding agentCheck out the exciting new coding agent for GitHub Copilot. Just assign a task or issue to Copilot and it will run in the background, pushing commits to a…  ( 25 min )
    Highlights from Microsoft Build 2025
    Microsoft just held its annual Microsoft Build event for developers. The live event might be over, but we have highlights and other content that will keep the excitement going. Explore on-demand sessions, learn about recent product announcements, watch deep technical demos, and discover fresh resources for learning cutting-edge developer skills.   Microsoft Build opening keynoteThe world of development—its tools and its possibilities—is rapidly evolving. In the Microsoft Build keynote, Satya Nadella discusses the agentic web, current dev tools, the dev landscape right now, and where it’s headed.   GitHub Copilot: Meet the new coding agentCheck out the exciting new coding agent for GitHub Copilot. Just assign a task or issue to Copilot and it will run in the background, pushing commits to a…
  • Open

    Teaching Python with GitHub Codespaces
    Whenever I teach Python workshops, tutorials, or classes, I love to use GitHub Codespaces. Every repository on GitHub can be opened inside a GitHub Codespace, which gives the student a full Python environment and a browser-based VS Code. Students spend less time setting up their environment and more time actually coding - the fun part! In this post, I'll walk through my tips for using Codespaces for teaching Python, particularly for classes about web apps, data science, or generative AI. Getting started You can start a GitHub Codespace from any repository. Navigate to the front page of the repository, then select "Code" > "Codespaces" > "Create codespace on main": By default, the Codespace will build an environment based off a universal Docker image, which includes Python, NodeJS, Java, a…  ( 51 min )
    Teaching Python with GitHub Codespaces
    Whenever I teach Python workshops, tutorials, or classes, I love to use GitHub Codespaces. Every repository on GitHub can be opened inside a GitHub Codespace, which gives the student a full Python environment and a browser-based VS Code. Students spend less time setting up their environment and more time actually coding - the fun part! In this post, I'll walk through my tips for using Codespaces for teaching Python, particularly for classes about web apps, data science, or generative AI. Getting started You can start a GitHub Codespace from any repository. Navigate to the front page of the repository, then select "Code" > "Codespaces" > "Create codespace on main": By default, the Codespace will build an environment based off a universal Docker image, which includes Python, NodeJS, Java, a…

  • Open

    Microsoft Fabric Community Conference Comes to Atlanta!
    The Microsoft Fabric Community Conference is back for its third year—and we’re bringing everything and everybody you’ve loved at past events with us to Atlanta, Georgia. After unforgettable experiences at FabCon in Las Vegas and Stockholm, the Fabric community proved just how powerful it can be when we come together. With more than 13,000 attendees across our last three conferences, it’s clear: the Microsoft Fabric community is here to drive the future of data!    And yes, we’re pleased to announce; it’s happening again! Mark your calendars … Continue reading “Microsoft Fabric Community Conference Comes to Atlanta!”  ( 6 min )
    Azure Synapse Runtime for Apache Spark 3.5 (Preview)
    We’re thrilled to announce that we have made Azure Synapse Runtime for Apache Spark 3.5 for our Azure Synapse Spark customers in preview, while they get ready and prepare for migrating to Microsoft Fabric Spark. Apache Spark 3.5 You can now create Azure Synapse Runtime for Apache Spark 3.5. The essential changes include features which come from … Continue reading “Azure Synapse Runtime for Apache Spark 3.5 (Preview)”  ( 5 min )
  • Open

    GitHub Secret Protection and GitHub Code Security for Azure DevOps
    Following the changes to GitHub Advanced Security on GitHub, we’re launching the standalone security products of GitHub Secret Protection and GitHub Code Security for Azure DevOps today. You can bring the protection of Advanced Security to your enterprise with the flexibility to enable the right level of protection for your repositories. GitHub Secret Protection for […] The post GitHub Secret Protection and GitHub Code Security for Azure DevOps appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    Using DeepSeek-R1 on Azure with JavaScript
    The pace at which innovative AI models are being developed is outstanding! DeepSeek-R1 is one such model that focuses on complex reasoning tasks, providing a powerful tool for developers to build intelligent applications. The week, we announced its availability on GitHub Models as well as on Azure AI Foundry. In this article, we’ll take a look at how you can deploy and use the DeepSeek-R1 models in your JavaScript applications. TL;DR key takeaways DeepSeek-R1 models focus on complex reasoning tasks, and is not designed for general conversation You can quickly switch your configuration to use Azure AI, GitHub Models, or even local models with Ollama. You can use OpenAI Node SDK or LangChain.js to interact with DeepSeek models. What you'll learn here Deploying DeepSeek-R1 model on Azure. …  ( 32 min )
    Using DeepSeek-R1 on Azure with JavaScript
    The pace at which innovative AI models are being developed is outstanding! DeepSeek-R1 is one such model that focuses on complex reasoning tasks, providing a powerful tool for developers to build intelligent applications. The week, we announced its availability on GitHub Models as well as on Azure AI Foundry. In this article, we’ll take a look at how you can deploy and use the DeepSeek-R1 models in your JavaScript applications. TL;DR key takeaways DeepSeek-R1 models focus on complex reasoning tasks, and is not designed for general conversation You can quickly switch your configuration to use Azure AI, GitHub Models, or even local models with Ollama. You can use OpenAI Node SDK or LangChain.js to interact with DeepSeek models. What you'll learn here Deploying DeepSeek-R1 model on Azure. …

  • Open

    Understanding Idle Usage in Azure Container Apps
    Introduction Azure Container Apps provides a serverless platform for running containers at scale, and one of the big benefits is that you can easily scale workloads to zero when they are not getting any traffic. Scaling to zero ensures you only pay when your workloads are actively receiving traffic or performing work. However, for some workloads, scaling to zero might not be possible for a variety of reasons. Some workloads must always be able to respond to requests quickly, and the time it takes to scale from 0 to 1 replicas, while short, is too long. Some applications need to be able to always respond to health checks, and so removing all replicas is not possible. In these scenarios, there may still be time periods where there is no traffic, or the application isn't doing any work. While…  ( 39 min )
    Understanding Idle Usage in Azure Container Apps
    Introduction Azure Container Apps provides a serverless platform for running containers at scale, and one of the big benefits is that you can easily scale workloads to zero when they are not getting any traffic. Scaling to zero ensures you only pay when your workloads are actively receiving traffic or performing work. However, for some workloads, scaling to zero might not be possible for a variety of reasons. Some workloads must always be able to respond to requests quickly, and the time it takes to scale from 0 to 1 replicas, while short, is too long. Some applications need to be able to always respond to health checks, and so removing all replicas is not possible. In these scenarios, there may still be time periods where there is no traffic, or the application isn't doing any work. While…
    Throughput Testing at Scale for Azure Functions
    Introduction Ensuring reliable, high-performance serverless applications is central to our work on Azure Functions. With new plans like Flex Consumption expanding the platform’s capabilities, it's critical to continuously validate that our infrastructure can scale—reliably and efficiently—under real-world load. To meet that need, we built PerfBench (Performance Benchmarker), a comprehensive benchmarking system designed to measure, monitor, and maintain our performance baselines—catching regressions before they impact customers. This infrastructure now runs close to 5,000 test executions every month, spanning multiple SKUs, regions, runtimes, and workloads—with Flex Consumption accounting for more than half of the total volume. This scale of testing helps us not only identify regressions ea…  ( 51 min )
    Throughput Testing at Scale for Azure Functions
    Introduction Ensuring reliable, high-performance serverless applications is central to our work on Azure Functions. With new plans like Flex Consumption expanding the platform’s capabilities, it's critical to continuously validate that our infrastructure can scale—reliably and efficiently—under real-world load. To meet that need, we built PerfBench (Performance Benchmarker), a comprehensive benchmarking system designed to measure, monitor, and maintain our performance baselines—catching regressions before they impact customers. This infrastructure now runs close to 5,000 test executions every month, spanning multiple SKUs, regions, runtimes, and workloads—with Flex Consumption accounting for more than half of the total volume. This scale of testing helps us not only identify regressions ea…
  • Open

    A visual introduction to vector embeddings
    Vector embeddings have become very popular over the last few years, but most of us developers are brand new to the concept. In this post, I'll give a high-level overview of embedding models, similarity metrics, vector search, and vector compression approaches. Vector embeddings A vector embedding is a mapping from an input (like a word, list of words, or image) into a list of floating point numbers. That list of numbers represents that input in the multidimensional embedding space of the model. We refer to the length of the list as its dimensions, so a list with 1024 numbers would have 1024 dimensions.   Embedding models Each embedding model has its own dimension length, allowed input types, similarity space, and other characteristics. word2vec For a long time, word2vec was the most well-…  ( 47 min )
    A visual introduction to vector embeddings
    Vector embeddings have become very popular over the last few years, but most of us developers are brand new to the concept. In this post, I'll give a high-level overview of embedding models, similarity metrics, vector search, and vector compression approaches. Vector embeddings A vector embedding is a mapping from an input (like a word, list of words, or image) into a list of floating point numbers. That list of numbers represents that input in the multidimensional embedding space of the model. We refer to the length of the list as its dimensions, so a list with 1024 numbers would have 1024 dimensions.   Embedding models Each embedding model has its own dimension length, allowed input types, similarity space, and other characteristics. word2vec For a long time, word2vec was the most well-…
  • Open

    Announcing enterprise-grade, Microsoft Entra-based document-level security in Azure AI Search
    Introduction AI agentic grounding, Retrieval-Augmented Generation (RAG) apps/copilots, and enterprise search are game-changers, but they stay safe only when every response obeys the file-level permissions you already set in your data source. Without built-in help, developers have to hand-code security trimming, unravel nested groups, and tweak the logic whenever someone’s role shifts. Starting with REST API version 2025-05-01-preview, Azure AI Search introduces native Microsoft Entra-based POSIX-style Access Control List (ACL) and Azure Role-Based Access Control (RBAC) support, alongside expanded capabilities in the Azure Data Lake Storage Gen2 (ADLS Gen2) built-in indexer. These enhancements make it easier to enforce document-level security across ingestion and query workflows, whether yo…  ( 40 min )
    Announcing enterprise-grade, Microsoft Entra-based document-level security in Azure AI Search
    Introduction AI agentic grounding, Retrieval-Augmented Generation (RAG) apps/copilots, and enterprise search are game-changers, but they stay safe only when every response obeys the file-level permissions you already set in your data source. Without built-in help, developers have to hand-code security trimming, unravel nested groups, and tweak the logic whenever someone’s role shifts. Starting with REST API version 2025-05-01-preview, Azure AI Search introduces native Microsoft Entra-based POSIX-style Access Control List (ACL) and Azure Role-Based Access Control (RBAC) support, alongside expanded capabilities in the Azure Data Lake Storage Gen2 (ADLS Gen2) built-in indexer. These enhancements make it easier to enforce document-level security across ingestion and query workflows, whether yo…

  • Open

    New pipeline Activities Now Support OPDG and VNET
    We’re excited to announce that Microsoft Fabric Data Pipelines now support both On-Premises Data Gateway (OPDG) and Virtual Network (VNET) Gateway across a broader set of external activity types. What’s New? You can now securely connect to on-premises and network-isolated resources using OPDG or VNET Gateway for the following Fabric Data Factory pipeline activities allowing for secure … Continue reading “New pipeline Activities Now Support OPDG and VNET”  ( 5 min )
    Integrating Fabric with Databricks using private network
    Microsoft Fabric and Azure Databricks are widely used data platforms. This article aims to address the requirement of customers who have large data estates in Azure databricks and want to unlock additional use cases in Microsoft Fabric arising out of different business teams. When integrating the two platforms, a crucial security requirement is to ensure … Continue reading “Integrating Fabric with Databricks using private network”  ( 7 min )
    Boost performance effortlessly with Automated Table Statistics in Microsoft Fabric
    We’re thrilled to introduce Automated Table Statistics in Microsoft Fabric Data Engineering — a major upgrade that helps you get blazing-fast query performance with zero manual effort. Whether you’re running complex joins, large aggregations, or heavy filtering workloads, Fabric’s new automated statistics will help Spark make smarter decisions, saving you time, compute, and money. What … Continue reading “Boost performance effortlessly with Automated Table Statistics in Microsoft Fabric”  ( 6 min )
    How to create a SQL database in Fabric using Fabric CLI
    Step into the future of simplicity with Fabric CLI. The Fabric Command Line Interface (CLI) has arrived in preview, bringing with it a seamless way to create SQL databases for your projects. This guide will take you through the steps to get started, ensuring you can leverage the power of Fabric CLI with ease. Prerequisites Before … Continue reading “How to create a SQL database in Fabric using Fabric CLI”  ( 6 min )
  • Open

    Troubleshooting Azure AI Foundry deployments on Azure App Service
    Troubleshooting Azure AI Foundry deployments on Azure App Service  ( 4 min )
  • Open

    Migration planning of MySQL workloads using Azure Migrate
    In our endeavor to increase coverage of OSS workloads in Azure Migrate, we are announcing discovery and modernization assessment of MySQL databases running on Windows and Linux servers. Customers previously had limited visibility into their MySQL workloads and often received generalized VM lift-and-shift recommendations. With this new capability, customers can now accurately identify their MySQL workloads and assess them for right-sizing into Azure Database for MySQL. MySQL workloads are a cornerstone of the LAMP stack, powering countless web applications with their reliability, performance, and ease of use. As businesses grow, the need for scalable and efficient database solutions becomes paramount. This is where Azure Database for MySQL comes into play. Migrating from on-premises to Azur…  ( 25 min )
    Migration planning of MySQL workloads using Azure Migrate
    In our endeavor to increase coverage of OSS workloads in Azure Migrate, we are announcing discovery and modernization assessment of MySQL databases running on Windows and Linux servers. Customers previously had limited visibility into their MySQL workloads and often received generalized VM lift-and-shift recommendations. With this new capability, customers can now accurately identify their MySQL workloads and assess them for right-sizing into Azure Database for MySQL. MySQL workloads are a cornerstone of the LAMP stack, powering countless web applications with their reliability, performance, and ease of use. As businesses grow, the need for scalable and efficient database solutions becomes paramount. This is where Azure Database for MySQL comes into play. Migrating from on-premises to Azur…
  • Open

    Jakarta EE and Quarkus on Azure – June 2025
    Hi everyone, welcome to the June 2025 update for Jakarta EE and Quarkus on Azure. It covers topics such as DevServices support of Quarkus Azure Extension, and comprehensive guides on implementing Quarkus applications monitoring and Liberty applications monitoring. If you’re interested in providing feedback or collaborating on migrating Java workloads to Azure with the engineering […] The post Jakarta EE and Quarkus on Azure – June 2025 appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    Ways to simplify your data ingestion pipeline with Azure AI Search
    We are introducing multiple features in Azure AI Search that make AI agent grounding and RAG data preparation easier. Here is what’s new:  The GenAI prompt skill in public preview accesses Azure AI Foundry chat-completion models to enrich content when it’s being indexed  Logic app integration with AI Search in Azure portal for simple data ingestion  Introduction  Azure AI Search is introducing new features and integrations designed to simplify and accelerate the creation of RAG-ready indexes. The GenAI Prompt Skill leverages generative AI during the indexing process, enabling advanced context expansion, image verbalization, and other transformations to enhance multimodal search relevance. The GenAI Prompt Skill enables sophisticated data transformations during the indexing process and fa…  ( 39 min )
    Ways to simplify your data ingestion pipeline with Azure AI Search
    We are introducing multiple features in Azure AI Search that make AI agent grounding and RAG data preparation easier. Here is what’s new:  The GenAI prompt skill in public preview accesses Azure AI Foundry chat-completion models to enrich content when it’s being indexed  Logic app integration with AI Search in Azure portal for simple data ingestion  Introduction  Azure AI Search is introducing new features and integrations designed to simplify and accelerate the creation of RAG-ready indexes. The GenAI Prompt Skill leverages generative AI during the indexing process, enabling advanced context expansion, image verbalization, and other transformations to enhance multimodal search relevance. The GenAI Prompt Skill enables sophisticated data transformations during the indexing process and fa…

  • Open

    Introducing Multi-Vector and Scoring Profile integration with Semantic Ranking in Azure AI Search
    We're excited to announce two powerful new enhancements in Azure AI Search: Multi-Vector Field Support and Scoring Profiles Integration with Semantic Ranking. Developed based on your feedback, these features unlock more control and enable additional scenarios in your search experiences   Why these Enhancements Matter As search experiences become increasingly sophisticated, handling complex, multimodal data and maintaining precise relevance is crucial. These new capabilities directly address common pain points: Multi-Vector Field Support helps you manage detailed, multimodal, and segmented content more effectively. Scoring Profiles Integration with Semantic Ranking ensures consistent relevance throughout your search pipeline. Multi-Vector Field Support Previously, vector fields `(Collecti…  ( 27 min )
    Introducing Multi-Vector and Scoring Profile integration with Semantic Ranking in Azure AI Search
    We're excited to announce two powerful new enhancements in Azure AI Search: Multi-Vector Field Support and Scoring Profiles Integration with Semantic Ranking. Developed based on your feedback, these features unlock more control and enable additional scenarios in your search experiences   Why these Enhancements Matter As search experiences become increasingly sophisticated, handling complex, multimodal data and maintaining precise relevance is crucial. These new capabilities directly address common pain points: Multi-Vector Field Support helps you manage detailed, multimodal, and segmented content more effectively. Scoring Profiles Integration with Semantic Ranking ensures consistent relevance throughout your search pipeline. Multi-Vector Field Support Previously, vector fields `(Collecti…
  • Open

    Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 2
    This is Part 2 of our series on integrating Microsoft.Extensions.AI with Semantic Kernel. In Part 1, we explored the relationship between these technologies and how they complement each other. Now, let’s dive into practical examples showing how to use Microsoft.Extensions.AI abstractions with Semantic Kernel in non-agent scenarios. Getting Started with Microsoft.Extensions.AI and Semantic Kernel Before we […] The post Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 2 appeared first on Semantic Kernel.  ( 25 min )
    Semantic Kernel: Multi-agent Orchestration
    The field of AI is rapidly evolving, and the need for more sophisticated, collaborative, and flexible agent-based systems is growing. With this in mind, Semantic Kernel introduces a new multi-agent orchestration framework that enables developers to build, manage, and scale complex agent workflows with ease. This post explores the new orchestration patterns, their capabilities, and […] The post Semantic Kernel: Multi-agent Orchestration appeared first on Semantic Kernel.  ( 26 min )
  • Open

    Understanding OneLake Security with Shortcuts
    OneLake allows for security to be defined once and enforced consistently across Microsoft Fabric. One of its standout features is its ability to work seamlessly with shortcuts, offering users the flexibility to access and organize data from different locations while maintaining robust security controls. In this blog post, we will look at how OneLake security … Continue reading “Understanding OneLake Security with Shortcuts”  ( 7 min )
    New regions supported for Fabric User Data Functions
    Fabric User Data Functions is a serverless platform that allows you to build and run functions on the Fabric platform. You can use your functions to interact with your data and the rest of your Fabric items via native integrations. After the Fabric User Data Functions preview, we have been working on increasing the number … Continue reading “New regions supported for Fabric User Data Functions”  ( 5 min )
    Creating SQL database workload in Fabric with Terraform: A Step-by-Step Guide (Preview)
    Infrastructure as Code (IaC) tools like Terraform have revolutionized the way developers and organizations deploy and manage infrastructure. With its declarative language and ability to automate provisioning, Terraform reduces human error, ensures consistency, and speeds up deployment across cloud and on-premises environments. In this document, we’ll explore how you can create SQL databases workloads in … Continue reading “Creating SQL database workload in Fabric with Terraform: A Step-by-Step Guide (Preview)”  ( 7 min )
    Introducing FinOps Toolkit in Fabric
    The FinOps toolkit is built on top of the FinOps Framework, providing you a collection of resources to help enterprises facilitate their FinOps Goals. Just announced in the May Updates is an exciting integration with Fabric Real-Time Intelligence. With the latest updates you can now analyze all your cloud spend utilizing Eventhouse. Unlocking the power … Continue reading “Introducing FinOps Toolkit in Fabric”  ( 5 min )
  • Open

    Azure DevOps with GitHub Repositories – Your path to Agentic AI
    GitHub Copilot has evolved beyond a coding assistant in the IDE into an agentic teammate – providing actionable feedback on pull requests, fixing bugs and implementing new features, creating pull requests and responding to feedback, and much more. These new capabilities will transform every aspect of the software development lifecycle, as we are already seeing […] The post Azure DevOps with GitHub Repositories – Your path to Agentic AI appeared first on Azure DevOps Blog.  ( 26 min )
  • Open

    New GitHub Copilot Global Bootcamp: Now with Virtual and In-Person Workshops!
    The GitHub Copilot Global Bootcamp started in February as a fully virtual learning journey — and it was a hit. More than 60,000 developers joined the first edition across multiple languages and regions. Now, we're excited to launch the second edition — bigger and better — featuring both virtual and in-person workshops, hosted by tech communities around the globe. This new edition arrives shortly after the announcements at Microsoft Build 2025, where the GitHub and Visual Studio Code teams revealed exciting news: The GitHub Copilot Chat extension is going open source, reinforcing transparency and collaboration. AI is being deeply integrated into Visual Studio Code, now evolving into an open source AI editor. New APIs and tools are making it easier than ever to build with AI and LLMs. This…  ( 30 min )
    New GitHub Copilot Global Bootcamp: Now with Virtual and In-Person Workshops!
    The GitHub Copilot Global Bootcamp started in February as a fully virtual learning journey — and it was a hit. More than 60,000 developers joined the first edition across multiple languages and regions. Now, we're excited to launch the second edition — bigger and better — featuring both virtual and in-person workshops, hosted by tech communities around the globe. This new edition arrives shortly after the announcements at Microsoft Build 2025, where the GitHub and Visual Studio Code teams revealed exciting news: The GitHub Copilot Chat extension is going open source, reinforcing transparency and collaboration. AI is being deeply integrated into Visual Studio Code, now evolving into an open source AI editor. New APIs and tools are making it easier than ever to build with AI and LLMs. This…

  • Open

    Azure Data Factory Item Mounting (Generally Available)
    We’re excited to announce the General Availability (GA) of the Azure Data Factory (Mounting) feature in Microsoft Fabric! This powerful capability allows you to seamlessly connect your existing Azure Data Factory (ADF) pipelines to Fabric workspaces, eliminating the need to manually rebuild or migrate them. What’s New with GA? Once your ADF instance is linked to a Fabric workspace, you … Continue reading “Azure Data Factory Item Mounting (Generally Available)”  ( 5 min )
    Introducing Aggregations in Fabric API for GraphQL: Query Smarter, Not Harder
    Summarize, group, and explore data in one step with the new GraphQL aggregations feature We’re excited to launch a powerful new capability in the Fabric API for GraphQL—Aggregations. This feature brings native support for summary-level insights directly into your GraphQL queries, making your data exploration faster, simpler, and more efficient. Why Aggregations? Until now, getting … Continue reading “Introducing Aggregations in Fabric API for GraphQL: Query Smarter, Not Harder”  ( 6 min )
    Updates to database development tools for SQL database in Fabric
    With SQL database in Fabric, the source control integration in Fabric enables you to keep your active work synced to git while following a branching strategy that best matches your team’s environments and deployment requirements. With the complexity of enterprise deployment scenarios, code-first deployment is also available for Fabric objects through tools like the Fabric-CICD … Continue reading “Updates to database development tools for SQL database in Fabric”  ( 6 min )
    Mirroring in Microsoft Fabric explained: benefits, use cases, and pricing demystified
    Co-Author: Maraki Ketema, Principal Product Manager Unlocking Data Value at Scale: Mirroring in Microsoft Fabric  In the modern data era, speed, scale, and simplicity are no longer luxuries—they’re expectations. Organizations want to harness the power of their operational data in real time, without the overhead of complex ETL pipelines or latency-filled data movement. This is … Continue reading “Mirroring in Microsoft Fabric explained: benefits, use cases, and pricing demystified”  ( 9 min )
    Eventhouse Accelerated OneLake Table Shortcuts – Generally Available
    Turbo charge queries over Delta Lake and Iceberg tables in OneLake Eventhouse accelerated OneLake Table shortcuts aka. Query Acceleration is now Generally Available! OneLake shortcuts are references from an Eventhouse that point to internal Fabric or external sources. Previously, queries run over OneLake shortcuts were less performant than on data that is ingested directly to … Continue reading “Eventhouse Accelerated OneLake Table Shortcuts – Generally Available”  ( 8 min )
    Intelligent Data Cleanup: Smart Purging for Smarter Data Warehouses
    In the era of Artificial Intelligence, organizations generate and accumulate large volumes of information every second. From transactional records to user logs and analytics data warehouses serve as a single source of truth that stores this information for a plethora of purposes. However, as data accumulates over time, not all remain relevant and valuable, leading … Continue reading “Intelligent Data Cleanup: Smart Purging for Smarter Data Warehouses”  ( 6 min )
    Eventhouse No-Code table creation and editing
    Creating and managing tables in Eventhouse just got even more flexible. While you can create tables using the Get Data wizard – which builds the structure based on a sample file or streaming data – there are times when you need to start from scratch. This could be during basic training, testing, or experimenting with … Continue reading “Eventhouse No-Code table creation and editing”  ( 6 min )
  • Open

    Public Preview: Granular RBAC in Azure Monitor Logs
    We are happy to announce our public preview for Granular RBAC in Azure Monitor Log Analytics! What is Granular RBAC in Azure Monitor Logs? Many organizations emphasize the need to segregate and control access to data in a fine-grained manner, while maintaining a centralized and consolidated logging platform.  On top of the existing capabilities of workspace and table level access provided over Azure RBAC, you can now maintain all your data in a single Log Analytics workspace and provide least privilege access at any level. This means you can control which users can access which tables and rows, based on your business or security needs and defined criteria, and completely separate data and control plane access, using Azure Attribute-based access control (ABAC) as part of your Azure RBAC role assignment. Granular RBAC in Azure Monitor Logs allows you to filter the data that each user can view or query, based on the conditions that you specify. Common examples are characteristics such as organizational roles and units, geographical locations, or data sensitivity levels.  How to set granular data access in Azure Monitor Logs To set up granular access: Create or edit an Azure role assignment. Under “Conditions”, select “Add condition”. In “Add action”, choose the new DataAction: “Read workspace data”. Under “Build expression”, click “Add expression” to define your access rules. You can use any combination of the “Table Name” and “Column Value” attributes to scope access, leveraging a wide range of supported operators to match your criteria. Once applied, users will only be able to access the data that matches the conditions you've configured.  Get started with Granular RBAC in Azure Monitor Logs Learn more about Granular RBAC and how to set it up in Azure Monitor Logs We hope you enjoy this new addition to Azure Monitor Log Analytics.  ( 21 min )
    Public Preview: Granular RBAC in Azure Monitor Logs
    We are happy to announce our public preview for Granular RBAC in Azure Monitor Log Analytics! What is Granular RBAC in Azure Monitor Logs? Many organizations emphasize the need to segregate and control access to data in a fine-grained manner, while maintaining a centralized and consolidated logging platform.  On top of the existing capabilities of workspace and table level access provided over Azure RBAC, you can now maintain all your data in a single Log Analytics workspace and provide least privilege access at any level. This means you can control which users can access which tables and rows, based on your business or security needs and defined criteria, and completely separate data and control plane access, using Azure Attribute-based access control (ABAC) as part of your Azure RBAC role assignment. Granular RBAC in Azure Monitor Logs allows you to filter the data that each user can view or query, based on the conditions that you specify. Common examples are characteristics such as organizational roles and units, geographical locations, or data sensitivity levels.  How to set granular data access in Azure Monitor Logs To set up granular access: Create or edit an Azure role assignment. Under “Conditions”, select “Add condition”. In “Add action”, choose the new DataAction: “Read workspace data”. Under “Build expression”, click “Add expression” to define your access rules. You can use any combination of the “Table Name” and “Column Value” attributes to scope access, leveraging a wide range of supported operators to match your criteria. Once applied, users will only be able to access the data that matches the conditions you've configured.  Get started with Granular RBAC in Azure Monitor Logs Learn more about Granular RBAC and how to set it up in Azure Monitor Logs We hope you enjoy this new addition to Azure Monitor Log Analytics.
  • Open

    From Zero to Hero: Build your first voice agent with Voice Live API
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Voice Live API developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening up endless possibilities for innovation. Gone are the days when building a voice bot required stitching together multiple models for transcription, inference, and text-to-speech conversion. With the Realtime API, developers can now streamline the entire process with a single API call, enabling fluid, natural speech-to-speech conversations. This is a game-changer for industries like customer support, education, and real-time language translation, where fast, seamless interactions are crucial. In this blog, we’…  ( 64 min )
    From Zero to Hero: Build your first voice agent with Voice Live API
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Voice Live API developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening up endless possibilities for innovation. Gone are the days when building a voice bot required stitching together multiple models for transcription, inference, and text-to-speech conversion. With the Realtime API, developers can now streamline the entire process with a single API call, enabling fluid, natural speech-to-speech conversations. This is a game-changer for industries like customer support, education, and real-time language translation, where fast, seamless interactions are crucial. In this blog, we’…

  • Open

    NLWeb Pioneers: Success Stories & Use Cases
    Imagine the web as a vast network of pages you click through—until HTML transformed them into living documents you could link and style with simple tags. Announced at Build 2025 during CEO Satya Nadella’s keynote with CTO Kevin Scott, NLWeb is the next leap for conversation—the “HTML for chat”—turning every site into a natural-language endpoint both people and AI agents can query directly. Watch Kevin’s announcement below, or keep reading to learn about our early adopters. NLWeb Pioneer Highlights Tripadvisor: Conversational travel planning, from “Where should I go this fall with kids?” to full itineraries in one go. Read the spotlight. Qdrant: Lightning-fast, intent-aware search via its vector database. Read the spotlight. O’Reilly Media: Queryable technical library, like chatting with a resident expert. Read the spotlight. Eventbrite: Discover events by intent, not keywords. Read the spotlight. Inception Labs: Sub-second conversational queries using diffusion LLMs. Read the spotlight. Delish (Hearst): Instant recipe matches—“quick vegan dinner with mushrooms and pasta”—tailored to your pantry. Read the spotlight. NLWeb instances also double as Model Context Protocol (MCP) servers—exposing your content to AI assistants and enabling peer-to-peer Agent-to-Agent (A2A) workflows across sites. More PioneersChicago Public Media, Common Sense Media, DDM (Allrecipes & Serious Eats), Milvus, Shopify and Snowflake. Together, these collaborators are laying the protocol-based foundation for an open, agentic web—where sites don’t just display content, they converse. Ready to Build?Get started with the new NLWeb GitHub repository.  ( 22 min )
    NLWeb Pioneers: Success Stories & Use Cases
    Imagine the web as a vast network of pages you click through—until HTML transformed them into living documents you could link and style with simple tags. Announced at Build 2025 during CEO Satya Nadella’s keynote with CTO Kevin Scott, NLWeb is the next leap for conversation—the “HTML for chat”—turning every site into a natural-language endpoint both people and AI agents can query directly. Watch Kevin’s announcement below, or keep reading to learn about our early adopters. NLWeb Pioneer Highlights Tripadvisor: Conversational travel planning, from “Where should I go this fall with kids?” to full itineraries in one go. Read the spotlight. Qdrant: Lightning-fast, intent-aware search via its vector database. Read the spotlight. O’Reilly Media: Queryable technical library, like chatting with a resident expert. Read the spotlight. Eventbrite: Discover events by intent, not keywords. Read the spotlight. Inception Labs: Sub-second conversational queries using diffusion LLMs. Read the spotlight. Delish (Hearst): Instant recipe matches—“quick vegan dinner with mushrooms and pasta”—tailored to your pantry. Read the spotlight. NLWeb instances also double as Model Context Protocol (MCP) servers—exposing your content to AI assistants and enabling peer-to-peer Agent-to-Agent (A2A) workflows across sites. More PioneersChicago Public Media, Common Sense Media, DDM (Allrecipes & Serious Eats), Milvus, Shopify and Snowflake. Together, these collaborators are laying the protocol-based foundation for an open, agentic web—where sites don’t just display content, they converse. Ready to Build?Get started with the new NLWeb GitHub repository.

  • Open

    That’s a wrap for Build 2025!
    Microsoft Build 2025 delivered a powerful vision for the future of data and AI, with Microsoft Fabric and Power BI at the heart of the story. From AI-powered productivity with Copilot to deep integration with Cosmos DB, this year’s announcements reinforced Microsoft’s commitment to unifying the data experience across roles, tools, and industries. Fabric: The … Continue reading “That’s a wrap for Build 2025!”  ( 7 min )
  • Open

    How Amdocs CCoE leveraged Azure AI Agent Service to build intelligent email support agent.
    This post is co-authored with Shlomi Elkayam and Henry Hernandez  from Amdocs CCoE. In this blogpost you will learn how Amdocs CCoE team improved their SLA by providing technical support for IT and cloud infrastructure questions and queries. They used Azure AI Agent Service to build an intelligent email agent that helps Amdocs employees with their technical issues. This post will describe the development phases, solution details and the roadmap ahead. About Amdocs CCoE Amdocs is a multinational telecommunications technology company. The company specializes in software and services for communications, media and financial services providers and digital enterprises. CCoE team is responsible for automation, infrastructure and design of all our Azure solutions, either for internal use cases or …  ( 37 min )
    How Amdocs CCoE leveraged Azure AI Agent Service to build intelligent email support agent.
    This post is co-authored with Shlomi Elkayam and Henry Hernandez  from Amdocs CCoE. In this blogpost you will learn how Amdocs CCoE team improved their SLA by providing technical support for IT and cloud infrastructure questions and queries. They used Azure AI Agent Service to build an intelligent email agent that helps Amdocs employees with their technical issues. This post will describe the development phases, solution details and the roadmap ahead. About Amdocs CCoE Amdocs is a multinational telecommunications technology company. The company specializes in software and services for communications, media and financial services providers and digital enterprises. CCoE team is responsible for automation, infrastructure and design of all our Azure solutions, either for internal use cases or …
  • Open

    Office Add-ins announces Copilot agents with add-in actions and more at Build 2025
    As part of the expanding capabilities for agents across Microsoft 365, Office Platform announces add-in actions for Copilot agents are available in preview. This blog is an overview of all the new capabilities across the platform: APIs, developer tools, and add-in distribution options—making it simpler to build new or iterate on JavaScript add-ins. The post Office Add-ins announces Copilot agents with add-in actions and more at Build 2025 appeared first on Microsoft 365 Developer Blog.  ( 31 min )
    Introducing the Agent Store: Build, publish, and discover agents in Microsoft 365 Copilot
    We’re excited to introduce the Agent Store — a centralized, curated marketplace that features agents built by Microsoft, trusted partners, and customers. The Agent Store offers a new, immersive experience within Microsoft 365 Copilot that enables users to browse, install, and try agents tailored to their needs. The post Introducing the Agent Store: Build, publish, and discover agents in Microsoft 365 Copilot appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    Enhance data prep with AI-powered capabilities in Data Wrangler (Preview)
    With AI-powered capabilities in Data Wrangler, you can now do even more to accelerate exploratory analysis and data preparation in Fabric.  ( 6 min )
    SharePoint files destination the first file-based destination for Dataflows Gen2
    The introduction of file-based destinations for Dataflows Gen2 marks a significant step forward in enhancing collaboration, historical tracking, and sharing data for business users. This development begins with SharePoint file destinations in CSV format, offering a streamlined way to share with users on the SharePoint platform. Overview of file-based destinations File-based destinations allow data to … Continue reading “SharePoint files destination the first file-based destination for Dataflows Gen2”  ( 6 min )
    AI-powered development with Copilot for Data pipeline – Boost your productivity in understanding and updating pipeline
    Understanding complex data pipelines created by others can often be a challenging task, especially when users must review each activity individually to understand its configurations, settings, and functions. Additionally, manul updates to general settings, such as timeout and retry parameters, across multiple activities can be time-consuming and tedious. Copilot for Data pipeline introduces advanced capabilities … Continue reading “AI-powered development with Copilot for Data pipeline – Boost your productivity in understanding and updating pipeline”  ( 6 min )
    Fabric CLI: explore and automate Microsoft Fabric from your terminal (Generally Available)
    During FabCon Las Vegas, we introduced the Fabric CLI — a developer-first command-line tool that brings a familiar, file-system-like experience to working with Microsoft Fabric. Since then, thousands of developers have jumped in: exploring, scripting, and embedding the CLI into local workflows. But for many enterprise teams, one question kept coming up: “When will it … Continue reading “Fabric CLI: explore and automate Microsoft Fabric from your terminal (Generally Available)”  ( 8 min )
    What’s new with Fabric CI/CD – May 2025
    As the capabilities of our Fabric Platform Git Integration continue to evolve, we’re happy to share some updates about Fabric’s latest CI/CD capabilities. These advancements are designed to enhance the developer experience and simplify the integration of DevOps practices into everyday workflows. With these innovations, Fabric continues to empower teams to build, test, and deploy … Continue reading “What’s new with Fabric CI/CD – May 2025 “  ( 6 min )
  • Open

    How to use OpenSSL to Send HTTP(S) Requests
    OpenSSL is a powerful tool for working with SSL/TLS, but it can also be used to send custom HTTP requests, which is very useful for debugging and learning how HTTP(S) works at a low level.   1. How It Works Normally, tools like curl or your browser handle everything behind the scenes: they perform the TLS handshake and build the HTTP request. With OpenSSL, you manually create the HTTP request, and OpenSSL only handles the encrypted connection. TLS Handshake: OpenSSL’s s_client command establishes a secure (TLS) connection with the server. Send Request: You pipe (or redirect) a raw HTTP request into s_client. The server processes it and sends a response over the secure channel.   2. Sending a GET Request Here’s how you send a simple GET request to www.bing.com: ( printf "GET / HTTP/1.1\…  ( 23 min )
    How to use OpenSSL to Send HTTP(S) Requests
    OpenSSL is a powerful tool for working with SSL/TLS, but it can also be used to send custom HTTP requests, which is very useful for debugging and learning how HTTP(S) works at a low level.   1. How It Works Normally, tools like curl or your browser handle everything behind the scenes: they perform the TLS handshake and build the HTTP request. With OpenSSL, you manually create the HTTP request, and OpenSSL only handles the encrypted connection. TLS Handshake: OpenSSL’s s_client command establishes a secure (TLS) connection with the server. Send Request: You pipe (or redirect) a raw HTTP request into s_client. The server processes it and sends a response over the secure channel.   2. Sending a GET Request Here’s how you send a simple GET request to www.bing.com: ( printf "GET / HTTP/1.1\…
    Introducing AI-Powered Actionable Insights in Azure Load Testing
    We’re excited to announce the preview of AI powered Actionable Insights in Azure Load Testing—a new capability that helps teams quickly identify performance issues and understand test results through AI-driven analysis.  Performance testing is an essential part of ensuring application reliability and responsiveness, but interpreting the results can often be challenging. It typically involves manually correlating client-side load test telemetry with backend service metrics, which can be both time-consuming and error-prone. Actionable Insights simplifies this process by automatically analyzing test data, surfacing key issues, and offering clear, actionable recommendations—so teams can focus on fixing what matters, not sifting through raw data.  AI-powered diagnostics  Actionable Insights use…  ( 23 min )
    Introducing AI-Powered Actionable Insights in Azure Load Testing
    We’re excited to announce the preview of AI powered Actionable Insights in Azure Load Testing—a new capability that helps teams quickly identify performance issues and understand test results through AI-driven analysis.  Performance testing is an essential part of ensuring application reliability and responsiveness, but interpreting the results can often be challenging. It typically involves manually correlating client-side load test telemetry with backend service metrics, which can be both time-consuming and error-prone. Actionable Insights simplifies this process by automatically analyzing test data, surfacing key issues, and offering clear, actionable recommendations—so teams can focus on fixing what matters, not sifting through raw data.  AI-powered diagnostics  Actionable Insights use…
  • Open

    Lifecycle Management of Blobs (Deletion) using Automation Tasks
    Background: We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure Portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI. However, in most cases, we deal with a large volume of blobs, making manual deletion impractical. In such situations, it's essential to leverage automation tools to streamline the deletion process. One effective option is using Automation Tasks, which can help schedule and manage blob deletions efficiently. Note: Behind the scenes, an automation task is actually a logic app resource that runs a workflow. So, the Consumption pricing model of logic-app applies to automation tasks.    Scenario’s where “Automa…  ( 30 min )
    Lifecycle Management of Blobs (Deletion) using Automation Tasks
    Background: We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure Portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI. However, in most cases, we deal with a large volume of blobs, making manual deletion impractical. In such situations, it's essential to leverage automation tools to streamline the deletion process. One effective option is using Automation Tasks, which can help schedule and manage blob deletions efficiently. Note: Behind the scenes, an automation task is actually a logic app resource that runs a workflow. So, the Consumption pricing model of logic-app applies to automation tasks.    Scenario’s where “Automa…

  • Open

    New Microsoft 365 Copilot Tuning | Create fine-tuned models to write like you do
    Fine-tuning adds new skills to foundational models, simulating experience in the tasks you teach the model to do. This complements Retrieval Augmented Generation, which in real-time uses search to find related information, then add that to your prompts for context. Fine-tuning helps ensure that responses meet your quality expectations for specific repeatable tasks, without needing to be prompting expert. It’s great for drafting complex legal agreements, writing technical documentation, authoring medical papers, and more — using detailed, often lengthy precedent files along with what you teach the model. Using Copilot Studio, anyone can create and deploy these fine-tuned models to use with agents without data science or coding expertise. There, you can teach models using data labeling, grou…  ( 48 min )
    New Microsoft 365 Copilot Tuning | Create fine-tuned models to write like you do
    Fine-tuning adds new skills to foundational models, simulating experience in the tasks you teach the model to do. This complements Retrieval Augmented Generation, which in real-time uses search to find related information, then add that to your prompts for context. Fine-tuning helps ensure that responses meet your quality expectations for specific repeatable tasks, without needing to be prompting expert. It’s great for drafting complex legal agreements, writing technical documentation, authoring medical papers, and more — using detailed, often lengthy precedent files along with what you teach the model. Using Copilot Studio, anyone can create and deploy these fine-tuned models to use with agents without data science or coding expertise. There, you can teach models using data labeling, grou…
  • Open

    Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 1
    This is the start of a series highlighting the integration between Microsoft Semantic Kernel and Microsoft.Extensions.AI. Future parts will provide detailed examples of using Semantic Kernel with Microsoft.Extensions.AI abstractions.  The most common questions are:  “Do Microsoft’s AI extensions replace Semantic Kernel?”  “When should I use Microsoft’s AI extensions instead of Semantic Kernel?”  This blog post […] The post Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 1 appeared first on Semantic Kernel.  ( 27 min )
    Transitioning to new Extensions AI IEmbeddingGenerator interface
    As Semantic Kernel shifts its foundational abstractions to Microsoft.Extensions.AI, we are obsoleting and moving away from our experimental embeddings interfaces to the new standardized abstractions that provide a more consistent and powerful way to work with AI services across the .NET ecosystem. The Evolution of Embedding Generation in Semantic Kernel Semantic Kernel has always aimed […] The post Transitioning to new Extensions AI IEmbeddingGenerator interface appeared first on Semantic Kernel.  ( 23 min )
    Vector Data Extensions are now Generally Available (GA)
    We’re excited to announce the release of Microsoft.Extensions.VectorData.Abstractions, a foundational library providing exchange types and abstractions for vector stores when working with vector data in AI-powered applications. This release is the result of a close collaboration between the Semantic Kernel and .NET teams, combining expertise in AI and developer tooling to deliver a robust, extensible […] The post Vector Data Extensions are now Generally Available (GA) appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Azure Kubernetes Service Baseline - The Hard Way, Third time's a charm
    1 Access management Azure Kubernetes Service (AKS) supports Microsoft Entra ID integration, which allows you to control access to your cluster resources using Azure role-based access control (RBAC). In this tutorial, you will learn how to integrate AKS with Microsoft Entra ID and assign different roles and permissions to three types of users: An admin user, who will have full access to the AKS cluster and its resources. A backend ops team, who will be responsible for managing the backend application deployed in the AKS cluster. They will only have access to the backend namespace and the resources within it. A frontend ops team, who will be responsible for managing the frontend application deployed in the AKS cluster. They will only have access to the frontend namespace and the resources wi…  ( 48 min )
    Azure Kubernetes Service Baseline - The Hard Way, Third time's a charm
    1 Access management Azure Kubernetes Service (AKS) supports Microsoft Entra ID integration, which allows you to control access to your cluster resources using Azure role-based access control (RBAC). In this tutorial, you will learn how to integrate AKS with Microsoft Entra ID and assign different roles and permissions to three types of users: An admin user, who will have full access to the AKS cluster and its resources. A backend ops team, who will be responsible for managing the backend application deployed in the AKS cluster. They will only have access to the backend namespace and the resources within it. A frontend ops team, who will be responsible for managing the frontend application deployed in the AKS cluster. They will only have access to the frontend namespace and the resources wi…
  • Open

    Bringing AI to the edge: Hackathon Windows ML
    AI Developer Hackathon Windows ML  Hosted by Qualcomm on SnapDragonX We’re excited to announce our support and participation for the upcoming global series of Edge AI hackathons, hosted by Qualcomm Technologies. The first is on June 14-15 in Bangalore.  We see a world of hybrid AI, developing rapidly as new generation of intelligent applications get built for diverse scenarios. These range from mobile, desktop, spatial computing and extending all the way to industrial and automotive. Mission critical workloads oscillate between decision-making in the moment, on device, to fine tuning models on the cloud. We believe we are in the early stages of development of agentic applications that efficiently run on the edge for scenarios needing local deployment and on-device inferencing.  Microsoft W…  ( 27 min )
    Bringing AI to the edge: Hackathon Windows ML
    AI Developer Hackathon Windows ML  Hosted by Qualcomm on SnapDragonX We’re excited to announce our support and participation for the upcoming global series of Edge AI hackathons, hosted by Qualcomm Technologies. The first is on June 14-15 in Bangalore.  We see a world of hybrid AI, developing rapidly as new generation of intelligent applications get built for diverse scenarios. These range from mobile, desktop, spatial computing and extending all the way to industrial and automotive. Mission critical workloads oscillate between decision-making in the moment, on device, to fine tuning models on the cloud. We believe we are in the early stages of development of agentic applications that efficiently run on the edge for scenarios needing local deployment and on-device inferencing.  Microsoft W…
  • Open

    How to Use Postgres MCP Server with GitHub Copilot in VS Code
    GitHub Copilot has changed how developers write code, but when combined with an MCP (Model Copilot Protocol) server, it also connects your services. With it, Copilot can understand your database schema and generate relevant code for your API, data models, or business logic. In this guide, you'll learn how to use the Neon Serverless Postgres MCP server with GitHub Copilot in VS Code to build a sample REST API quickly. We'll walk through how to create an Azure Function that fetches data from a Neon database, all with minimal setup and no manual query writing. From Code Generation to Database Management with GitHub Copilot AI agents are no longer just helping write code—they’re creating and managing databases. When a chatbot logs a customer conversation, or a new region spins up in the Azure …  ( 29 min )
    How to Use Postgres MCP Server with GitHub Copilot in VS Code
    GitHub Copilot has changed how developers write code, but when combined with an MCP (Model Copilot Protocol) server, it also connects your services. With it, Copilot can understand your database schema and generate relevant code for your API, data models, or business logic. In this guide, you'll learn how to use the Neon Serverless Postgres MCP server with GitHub Copilot in VS Code to build a sample REST API quickly. We'll walk through how to create an Azure Function that fetches data from a Neon database, all with minimal setup and no manual query writing. From Code Generation to Database Management with GitHub Copilot AI agents are no longer just helping write code—they’re creating and managing databases. When a chatbot logs a customer conversation, or a new region spins up in the Azure …
  • Open

    [AI Search] LockedSPLResourceFound error when deleting AI Search
    Are you unable to delete AI Search with the following error? LockedSPLResourceFound :Unable to verify management locks on Resource '$Resource_Path '. If you still want to delete the search service, manually delete the SPL resource first and try again.  If you are, this is the right place for you to find a quick resolution! Keep on reading through. [Solution – Delete the Shared Private Link] The error message will appear if you still have Shared Private Link configured in AI Search. AI Search will not let you delete the resource unless you delete the Shared Private Link first. You must delete the Shared Private Link in the Portal manually.Move to Settings > Networking > Shared private access tab.     Once the Shared Private Links are all deleted,  please try again to delete the AI Search. Also please give at least 15 minutes for the Shared Private Links to be deleted completely as it may take longer.   [Extra - I tried to delete Shared Private Links but it’s been pending for a long while] There are occasions where you will see your Shared Private Links are in a state of being deleted for a long time as below. (For example 3 hours plus or more) In this case, please open a case to our Support team mentioning you are having an issue with deleting AI Search due to Shared Private Link not being deleted properly. Our team will take care of the issue from that point on!  ( 20 min )
    [AI Search] LockedSPLResourceFound error when deleting AI Search
    Are you unable to delete AI Search with the following error? LockedSPLResourceFound :Unable to verify management locks on Resource '$Resource_Path '. If you still want to delete the search service, manually delete the SPL resource first and try again.  If you are, this is the right place for you to find a quick resolution! Keep on reading through. [Solution – Delete the Shared Private Link] The error message will appear if you still have Shared Private Link configured in AI Search. AI Search will not let you delete the resource unless you delete the Shared Private Link first. You must delete the Shared Private Link in the Portal manually.Move to Settings > Networking > Shared private access tab.     Once the Shared Private Links are all deleted,  please try again to delete the AI Search. Also please give at least 15 minutes for the Shared Private Links to be deleted completely as it may take longer.   [Extra - I tried to delete Shared Private Links but it’s been pending for a long while] There are occasions where you will see your Shared Private Links are in a state of being deleted for a long time as below. (For example 3 hours plus or more) In this case, please open a case to our Support team mentioning you are having an issue with deleting AI Search due to Shared Private Link not being deleted properly. Our team will take care of the issue from that point on!
  • Open

    On-premises data gateway May 2025 release
    Here is the May 2025 release of the on-premises data gateway (version 3000.270).  ( 6 min )
  • Open

    One Pipeline to Rule Them All: Ensuring CodeQL Scanning Results and Dependency Scanning Results Go to the Intended Repository
    “One Ring to rule them all, One Ring to find them, One Ring to bring them all, and in the darkness bind them.” – J.R.R. Tolkien, The Lord of the Rings In the world of code scanning and dependency scanning, your pipeline is the One Ring—a single definition that can orchestrate scans across multiple repositories. […] The post One Pipeline to Rule Them All: Ensuring CodeQL Scanning Results and Dependency Scanning Results Go to the Intended Repository appeared first on Azure DevOps Blog.  ( 26 min )
  • Open

    Introducing the Microsoft 365 Agents Toolkit
    Read how the Microsoft 365 Agents Toolkit, an evolution of Microsoft Teams Toolkit, is designed to help developers build agents and apps for Microsoft 365 Copilot, Microsoft Teams, and Microsoft 365. The post Introducing the Microsoft 365 Agents Toolkit appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Transforming Android Development: Unveiling MediaTek’s latest chipset with Microsoft's Phi models
    Imagine running advanced AI applications—like intelligent copilots and Retrieval-Augmented Generation (RAG)—directly on Android devices, completely offline. With the rapid evolution of Neural Processing Units (NPUs), this is no longer a future vision—it’s happening now. Optimized AI at the Edge: Phi-4-mini on MediaTek Thanks to MediaTek’s conversion and quantization tools, Microsoft’s Phi-4-mini and Phi-4-mini-reasoning models are now optimized for MediaTek NPUs. This collaboration empowers developers to build fast, responsive, and privacy-preserving AI experiences on Android—without needing cloud connectivity. MediaTek’s flagship Dimensity 9400 and 9400+ platform with Dimensity GenAI Toolkit 2.0 delivers excellent performance with the Phi-4 mini (3.8B) model where ​prefill speed is >800 t…  ( 29 min )
    Transforming Android Development: Unveiling MediaTek’s latest chipset with Microsoft's Phi models
    Imagine running advanced AI applications—like intelligent copilots and Retrieval-Augmented Generation (RAG)—directly on Android devices, completely offline. With the rapid evolution of Neural Processing Units (NPUs), this is no longer a future vision—it’s happening now. Optimized AI at the Edge: Phi-4-mini on MediaTek Thanks to MediaTek’s conversion and quantization tools, Microsoft’s Phi-4-mini and Phi-4-mini-reasoning models are now optimized for MediaTek NPUs. This collaboration empowers developers to build fast, responsive, and privacy-preserving AI experiences on Android—without needing cloud connectivity. MediaTek’s flagship Dimensity 9400 and 9400+ platform with Dimensity GenAI Toolkit 2.0 delivers excellent performance with the Phi-4 mini (3.8B) model where ​prefill speed is >800 t…
    Voice-enabled AI Agents: transforming customer engagement with Azure AI Speech
    We are seeing customers such as Indiana Pacers and Coca-Cola transform customer experiences using Azure AI Speech to power customer interactions. And in the new era of agentic AI, voice is increasingly becoming an important modality to interact with AI Agents in a natural way. Today, we are excited to announce a number to new capabilities in Azure AI Speech that will further propel our customers in the voice-enabled agentic AI era as AI Agents are being rapidly adopted by a wide range of enterprise customers across a wide variety of industries. The updates we are announcing today include the new Voice Live API (Public Preview) which can be used to help simplify creating voice agents that provide fluent and natural speech to speech conversational experiences. To provide a robust conversatio…  ( 47 min )
    Voice-enabled AI Agents: transforming customer engagement with Azure AI Speech
    We are seeing customers such as Indiana Pacers and Coca-Cola transform customer experiences using Azure AI Speech to power customer interactions. And in the new era of agentic AI, voice is increasingly becoming an important modality to interact with AI Agents in a natural way. Today, we are excited to announce a number to new capabilities in Azure AI Speech that will further propel our customers in the voice-enabled agentic AI era as AI Agents are being rapidly adopted by a wide range of enterprise customers across a wide variety of industries. The updates we are announcing today include the new Voice Live API (Public Preview) which can be used to help simplify creating voice agents that provide fluent and natural speech to speech conversational experiences. To provide a robust conversatio…

  • Open

    How Anker soundcore Uses Azure AI Speech for Seamless Multilingual Communication
    “We’re excited to be part of Microsoft Build and to demonstrate what’s possible when AI meets every day tech. Built on deep technical integration and shared innovation goals, we’re able to deliver smarter, more intuitive, and responsive audio products for users around the world.”— Dongping Zhao, President of Anker Innovations   Imagine talking to anyone, no matter the language. soundcore, Anker Innovations' audio brand, has incorporated Microsoft Azure AI Speech services into its new devices to eliminate language barriers. These wireless earbuds now offer real-time speech translation and voice interactions, showcasing how cloud-based AI speech technologies can create immersive, multilingual experiences on consumer devices.    Anker’s Mission and Challenges Anker Innovations is a global sma…  ( 28 min )
    How Anker soundcore Uses Azure AI Speech for Seamless Multilingual Communication
    “We’re excited to be part of Microsoft Build and to demonstrate what’s possible when AI meets every day tech. Built on deep technical integration and shared innovation goals, we’re able to deliver smarter, more intuitive, and responsive audio products for users around the world.”— Dongping Zhao, President of Anker Innovations   Imagine talking to anyone, no matter the language. soundcore, Anker Innovations' audio brand, has incorporated Microsoft Azure AI Speech services into its new devices to eliminate language barriers. These wireless earbuds now offer real-time speech translation and voice interactions, showcasing how cloud-based AI speech technologies can create immersive, multilingual experiences on consumer devices.    Anker’s Mission and Challenges Anker Innovations is a global sma…
    NLWeb Pioneer Q&A: Delish
    As a Product Marketing Director at Azure, I’ve had a front-row seat to the evolution of generative AI—from early text-based bots to today’s intelligent systems that reason across images, documents, and real-world context. But some of the most exciting shifts aren’t just about models—they’re about the web itself. Enter NLWeb, Microsoft’s newly announced open initiative to make websites conversational and AI-native. Imagine asking a recipe site like Delish, “What’s a quick, vegan dinner I can make with mushrooms and pasta?”—and getting a smart, tailored response that pulls directly from your structured content, without relying on rigid keyword search.  Built on familiar tools like Schema.org and vector search, NLWeb is designed to be easy for developers and impactful for users. It’s led by R.V. Guha—the mind behind Schema.org, RSS, and RDF—who recently joined Microsoft to help reimagine the open web for the AI era. The new GitHub repo is now live for developers to explore and build upon. Here’s how NLWeb pioneer Delish is thinking about this shift: Q1: What inspired your team to try NLWeb?  We saw an opportunity to improve discovery for consumers by delivering more relevant results through faceted, natural language queries. Q2: How did the setup process go? Any surprises? Collaborating with the Microsoft engineers was valuable during the test planning, and the initial prototype results were promising as we explored the potential. Q3: What query or interaction made NLWeb click for you?  Users can ask naturally phrased questions—based on cultural moments or food types—and get accurate (and delicious) results. Q4: How are you blending NLWeb with your current experience?  We will actively be testing NLWeb embedded as part of the discovery experience on the Delish site. Q5: If NLWeb reaches its full potential, what could it unlock for your users or the web?  We’re excited about the potential to better serve our customers, drive deeper engagement, increase time spent and grow higher LTV audiences.  ( 21 min )
    NLWeb Pioneer Q&A: Delish
    As a Product Marketing Director at Azure, I’ve had a front-row seat to the evolution of generative AI—from early text-based bots to today’s intelligent systems that reason across images, documents, and real-world context. But some of the most exciting shifts aren’t just about models—they’re about the web itself. Enter NLWeb, Microsoft’s newly announced open initiative to make websites conversational and AI-native. Imagine asking a recipe site like Delish, “What’s a quick, vegan dinner I can make with mushrooms and pasta?”—and getting a smart, tailored response that pulls directly from your structured content, without relying on rigid keyword search.  Built on familiar tools like Schema.org and vector search, NLWeb is designed to be easy for developers and impactful for users. It’s led by R.V. Guha—the mind behind Schema.org, RSS, and RDF—who recently joined Microsoft to help reimagine the open web for the AI era. The new GitHub repo is now live for developers to explore and build upon. Here’s how NLWeb pioneer Delish is thinking about this shift: Q1: What inspired your team to try NLWeb?  We saw an opportunity to improve discovery for consumers by delivering more relevant results through faceted, natural language queries. Q2: How did the setup process go? Any surprises? Collaborating with the Microsoft engineers was valuable during the test planning, and the initial prototype results were promising as we explored the potential. Q3: What query or interaction made NLWeb click for you?  Users can ask naturally phrased questions—based on cultural moments or food types—and get accurate (and delicious) results. Q4: How are you blending NLWeb with your current experience?  We will actively be testing NLWeb embedded as part of the discovery experience on the Delish site. Q5: If NLWeb reaches its full potential, what could it unlock for your users or the web?  We’re excited about the potential to better serve our customers, drive deeper engagement, increase time spent and grow higher LTV audiences.
    From Extraction to Insight: Evolving Azure AI Content Understanding with Reasoning and Enrichment
    First introduced in public preview last year, Azure AI Content Understanding enables you to convert unstructured content—documents, audio, video, text, and images—into structured data. The service is designed to support consistent, high-quality output, directed improvements, built-in enrichment, and robust pre-processing to accelerate workflows and reduce cost. A New Chapter in Content Understanding Since our launch we’ve seen customers pushing the boundaries to go beyond simple data extraction with agentic solutions fully automating decisions. This requires more than just extracting fields. For example, a healthcare insurance provider decision to pay a claim requires cross-checking against insurance policies, applicable contracts, patient’s medical history and prescription datapoints. To …  ( 32 min )
    From Extraction to Insight: Evolving Azure AI Content Understanding with Reasoning and Enrichment
    First introduced in public preview last year, Azure AI Content Understanding enables you to convert unstructured content—documents, audio, video, text, and images—into structured data. The service is designed to support consistent, high-quality output, directed improvements, built-in enrichment, and robust pre-processing to accelerate workflows and reduce cost. A New Chapter in Content Understanding Since our launch we’ve seen customers pushing the boundaries to go beyond simple data extraction with agentic solutions fully automating decisions. This requires more than just extracting fields. For example, a healthcare insurance provider decision to pay a claim requires cross-checking against insurance policies, applicable contracts, patient’s medical history and prescription datapoints. To …
    NLWeb Pioneer Q&A: Qdrant
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Qdrant is one of those early innovators shaping where this goes. Known for their open-source vector database purpose-built for semantic search, Qdrant is helping developers supercharge the intelligence of their search interfaces—without rebuilding their entire stack. By integrating with NLWeb, Qdrant makes it easy to add fast, intent-aware, and context-rich search to websites and apps of any size. Below, the Qdrant team shares how this integration came together, what developers can expect, and why NLWeb might be the unlock that brings semantic search to the mainstream.     Q1: Why Qdrant Sees NLWeb as a Practical St…  ( 27 min )
    NLWeb Pioneer Q&A: Qdrant
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Qdrant is one of those early innovators shaping where this goes. Known for their open-source vector database purpose-built for semantic search, Qdrant is helping developers supercharge the intelligence of their search interfaces—without rebuilding their entire stack. By integrating with NLWeb, Qdrant makes it easy to add fast, intent-aware, and context-rich search to websites and apps of any size. Below, the Qdrant team shares how this integration came together, what developers can expect, and why NLWeb might be the unlock that brings semantic search to the mainstream.     Q1: Why Qdrant Sees NLWeb as a Practical St…
    NLWeb Pioneer Q&A: Eventbrite
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Eventbrite is one of those early innovators shaping where this goes. Known for their global events platform that helps millions discover and attend unique experiences, Eventbrite is now exploring how NLWeb can make that discovery process even more personal, conversational, and precise. Whether you're looking for a date night idea or planning your next creative outing, NLWeb enables natural, expressive search queries that understand intent—not just keywords. Below, Eventbrite shares their journey piloting NLWeb: what made them say yes, how it’s performing in early tests, and what they see on the horizon. Q1: What ins…  ( 27 min )
    NLWeb Pioneer Q&A: Eventbrite
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Eventbrite is one of those early innovators shaping where this goes. Known for their global events platform that helps millions discover and attend unique experiences, Eventbrite is now exploring how NLWeb can make that discovery process even more personal, conversational, and precise. Whether you're looking for a date night idea or planning your next creative outing, NLWeb enables natural, expressive search queries that understand intent—not just keywords. Below, Eventbrite shares their journey piloting NLWeb: what made them say yes, how it’s performing in early tests, and what they see on the horizon. Q1: What ins…
    NLWeb Pioneer Q&A: Inception
    As a Director of AI Product Marketing at Azure, I have spent the last three years deep in the GenAI ecosystem. From internal chatbots that evolved into multi-model agent orchestrations, I feel like I am witnessing history in my job and life. Every day I wonder what is coming next, to build on top of everything we have developed in just a short amount of time. Now I am excited to share the newest chapter in this incredible run of innovation with the announcement of Microsoft’s launch of NLWeb today, an open project aimed at facilitating the creation of natural language interfaces for websites, enabling users to interact with site content through natural language queries. The initiative strives to empower web publishers by making it easier to develop AI-driven applications that enhance user …  ( 24 min )
    NLWeb Pioneer Q&A: Inception
    As a Director of AI Product Marketing at Azure, I have spent the last three years deep in the GenAI ecosystem. From internal chatbots that evolved into multi-model agent orchestrations, I feel like I am witnessing history in my job and life. Every day I wonder what is coming next, to build on top of everything we have developed in just a short amount of time. Now I am excited to share the newest chapter in this incredible run of innovation with the announcement of Microsoft’s launch of NLWeb today, an open project aimed at facilitating the creation of natural language interfaces for websites, enabling users to interact with site content through natural language queries. The initiative strives to empower web publishers by making it easier to develop AI-driven applications that enhance user …
    NLWeb Pioneer Q&A: O'Reilly
    Over the past three years working in generative AI at Microsoft, I’ve had a front-row seat to some of the most exciting shifts in tech. From chat assistants to multi-modal agents, every layer we’ve built has opened up new questions—and new responsibilities. One of the biggest ones: how do we make the web itself more useful to humans and AI? Enter NLWeb—a new open project from Microsoft that aims to make websites natively conversational. Instead of keyword searches or rigid menus, users (and agents) can query a site’s content in natural language. For developers and web publishers, NLWeb offers a practical way to tap into your existing structured data—like Schema.org markup or product catalogs—and expose that content intelligently, without reinventing your site. I’m especially excited that O…  ( 27 min )
    NLWeb Pioneer Q&A: O'Reilly
    Over the past three years working in generative AI at Microsoft, I’ve had a front-row seat to some of the most exciting shifts in tech. From chat assistants to multi-modal agents, every layer we’ve built has opened up new questions—and new responsibilities. One of the biggest ones: how do we make the web itself more useful to humans and AI? Enter NLWeb—a new open project from Microsoft that aims to make websites natively conversational. Instead of keyword searches or rigid menus, users (and agents) can query a site’s content in natural language. For developers and web publishers, NLWeb offers a practical way to tap into your existing structured data—like Schema.org markup or product catalogs—and expose that content intelligently, without reinventing your site. I’m especially excited that O…
    NLWeb Pioneer Q&A: O'Reilly Media
    Over the past three years working in generative AI at Microsoft, I’ve had a front-row seat to some of the most exciting shifts in tech. From chat assistants to multi-modal agents, every layer we’ve built has opened up new questions—and new responsibilities. One of the biggest ones: how do we make the web itself more useful to humans and AI? Enter NLWeb—a new open project from Microsoft that aims to make websites natively conversational. Instead of keyword searches or rigid menus, users (and agents) can query a site’s content in natural language. For developers and web publishers, NLWeb offers a practical way to tap into your existing structured data—like Schema.org markup or product catalogs—and expose that content intelligently, without reinventing your site. I’m especially excited that O…
    NLWeb Pioneer Q&A: Tripadvisor
    As a Product Marketing Director at Azure, I’ve spent the last few years in the thick of generative AI, from early chatbot experiments to sophisticated agentic systems that now reason across modalities. It’s felt like living inside a tech documentary, with every month adding a new chapter. Today, I’m excited to share what might be one of the most quietly profound shifts yet: NLWeb. Just announced by Microsoft, NLWeb is an open initiative to make the entire web more conversational—enabling users (and AI agents) to interact with websites via natural language instead of dropdowns and keyword searches. For developers and publishers, NLWeb simplifies how you expose your content to AI by using structured data formats like Schema.org, vector indexes, and LLM tools you probably already use. What ma…  ( 29 min )
    NLWeb Pioneer Q&A: Tripadvisor
    As a Product Marketing Director at Azure, I’ve spent the last few years in the thick of generative AI, from early chatbot experiments to sophisticated agentic systems that now reason across modalities. It’s felt like living inside a tech documentary, with every month adding a new chapter. Today, I’m excited to share what might be one of the most quietly profound shifts yet: NLWeb. Just announced by Microsoft, NLWeb is an open initiative to make the entire web more conversational—enabling users (and AI agents) to interact with websites via natural language instead of dropdowns and keyword searches. For developers and publishers, NLWeb simplifies how you expose your content to AI by using structured data formats like Schema.org, vector indexes, and LLM tools you probably already use. What ma…
    Announcing Azure AI Language new features to accelerate your agent development
    In today’s fast-moving AI landscape, businesses are racing to embed conversational intelligence and automation into every customer touchpoint. However, building a reliable and scalable agent from scratch remains complex and time-consuming. Developers tell us they need a streamlined way to map diverse user intents, craft accurate responses, and support global audiences without wrestling with ad-hoc integrations. At the same time, rising expectations around data privacy and compliance introduce yet another layer of overhead. To meet these challenges, today, we’re excited to announce a suite of powerful new tools and templates designed to help developers build intelligent agents faster than ever with our Azure AI Language service. Working together with Azure AI Agent Service, whether you’re t…  ( 37 min )
    Unlock new dimensions of creativity: Gpt-image-1 and Sora
    We're excited to announce Sora, now available in AI Foundry. Learn more about Sora in Azure OpenAI, and its API and video playground, here. To dive deeper into video playground experience, check out this blog post ​  Why does this matter?  AI is no longer *just* about text. Here’s why: multimodal models enable deeper understanding. Today AI doesn’t just understand words: it understands context, visuals, and motion. From prompt to product, imagine going from a product description to a full marketing campaign.  In many ways, the rise of multimodal AI today is comparable to the inception of photography in the 19th century—introducing a new creative medium that, like photography, didn’t replace painting but expanded the boundaries of artistic expression. Just a little over a month ago, we rele…  ( 29 min )
    Introducing Built-in AgentOps Tools in Azure AI Foundry Agent Service
    A New Era of Agent Intelligence We’re thrilled to announce the public preview of Tracing, Evaluation, and Monitoring in Azure AI Foundry Agent Service, features designed to revolutionize how developers build, debug, and optimize AI agents. With detailed traces and customizable evaluators, AgentOps is here to bridge the gap between observability and performance improvement. Whether you’re managing a simple chatbot or a complex multi-agent system, this is the tool you’ve been waiting for. What Makes AgentOps Unique? AgentOps offers an unparalleled suite of functionalities that cater to the challenges AI developers face today. Here are the two cornerstone features: 1. Integrated Tracing Functionality AgentOps provides full execution tracing, offering a detailed, step-by-step breakdown of how…  ( 25 min )
    Azure OpenAI Fine Tuning is Everywhere
    If you’re building an AI agent and need to customize its behavior to specific domains, its interaction tone, or improve its tool selection, you should be fine tuning! Our customers agree, but the challenge faced has been twofold: regional availability and the cost of experimentation. Today, we’re bringing fine tuning of Azure OpenAI models to a dozen new AI Foundry regions with the public previews of Global Training and Developer Tier. With reduced pricing and global availability, AI Foundry makes fine tuning the latest OpenAI models on Azure more accessible and affordable than ever to bring your agents to life. What is Global Training? Global Training expands the reach of model customization with the affordable pricing of our other Global offerings: 🏋️‍♂️ Train the latest OpenAI models …  ( 27 min )
    Expand Azure AI Foundry Agent Service with More Knowledge and Action Tools
    Customers often face challenges deploying AI agents that can handle complex, real-world scenarios due to limited tool access. To empower AI agents with more capabilities, Azure AI Foundry Agent Service now supports a growing set of integrated tools: Grounding with Bing Custom Search (preview), SharePoint (coming soon), Azure Logic Apps (preview), Triggers (preview), and third-party tools, enabling agents to retrieve richer information, take more meaningful actions, and deliver intelligent, goal-driven results. This expansion helps organizations build smarter, more adaptable agents that can better align with dynamic business needs.   Today, we are thrilled to announce Azure AI Foundry Agent Service is now Generally Available including following tools: Azure AI Search, Azure Functions, Code …  ( 43 min )
    Building a Digital Workforce with Multi-Agents in Azure AI Foundry Agent Service
    As organizations increasingly rely on AI to automate complex tasks and scale digital operations, the ability to coordinate multiple agents in a single, cohesive system is becoming critical. Moving beyond single-agent architectures to multi-agent systems enables richer, more dynamic automation – where specialized agents can collaborate, share context, and complete multi-step processes with minimal human intervention. This shift is unlocking the potential for organizations to build full digital workforces that can manage everything from customer support to supply chain automation.   Why Multi-Agents Matter Building a single agent is often straightforward – it’s designed to perform a specific task, like answering common support queries or generating summaries from documents. However, real-wor…  ( 49 min )
    Announcing General Availability of Azure AI Foundry Agent Service
    Agents are revolutionizing business automation by evolving from simple chatbots into sophisticated, collaborative systems. Fueled by advancements in model reasoning and efficiency, these agents can now handle complex, multi-step processes with speed and accuracy. This marks a shift from isolated tools to dynamic, scalable agent workforces that coordinate tasks, share context, and adapt in real time. This transformation enables businesses to optimize operations and elevate customer experiences through AI-driven workflows. Instead of relying on single-purpose bots, organizations are deploying ecosystems of specialized agents that can interact, reason, and respond to changing conditions with minimal oversight. Yet building and scaling these systems is not without challenges. Developers must i…  ( 39 min )
    Announcing Azure AI Language new features to accelerate your agent development
    In today’s fast-moving AI landscape, businesses are racing to embed conversational intelligence and automation into every customer touchpoint. However, building a reliable and scalable agent from scratch remains complex and time-consuming. Developers tell us they need a streamlined way to map diverse user intents, craft accurate responses, and support global audiences without wrestling with ad-hoc integrations. At the same time, rising expectations around data privacy and compliance introduce yet another layer of overhead. To meet these challenges, today, we’re excited to announce a suite of powerful new tools and templates designed to help developers build intelligent agents faster than ever with our Azure AI Language service. Working together with Azure AI Agent Service, whether you’re t…
    Unlock new dimensions of creativity: Gpt-image-1 and Sora
    We're excited to announce Sora, coming soon to AI Foundry. Learn more about Sora in Azure OpenAI, and its API and video playground, here.  Hear how T&Pm, a WPP company, uses Sora in Azure OpenAI to bring their media solutions to the next level   AI is no longer *just* about text. Here’s why: multimodal models enable deeper understanding. Today AI doesn’t just understand words – it understands context, visuals, and motion. From prompt to product, imagine going from a product description to a full marketing campaign.  This shift represents more than just a technical evolution—it’s a creative revolution. In many ways, the rise of multimodal AI today is comparable to the inception of photography in the 19th century—introducing a new creative medium that, like photography, didn’t replace pain…
    Introducing Built-in AgentOps Tools in Azure AI Foundry Agent Service
    A New Era of Agent Intelligence We’re thrilled to announce the public preview of Tracing, Evaluation, and Monitoring in Azure AI Foundry Agent Service, features designed to revolutionize how developers build, debug, and optimize AI agents. With detailed traces and customizable evaluators, AgentOps is here to bridge the gap between observability and performance improvement. Whether you’re managing a simple chatbot or a complex multi-agent system, this is the tool you’ve been waiting for. What Makes AgentOps Unique? AgentOps offers an unparalleled suite of functionalities that cater to the challenges AI developers face today. Here are the two cornerstone features: 1. Integrated Tracing Functionality AgentOps provides full execution tracing, offering a detailed, step-by-step breakdown of how…
    Azure OpenAI Fine Tuning is Everywhere
    If you’re building an AI agent and need to customize its behavior to specific domains, its interaction tone, or improve its tool selection, you should be fine tuning! Our customers agree, but the challenge faced has been twofold: regional availability and the cost of experimentation. Today, we’re bringing fine tuning of Azure OpenAI models to a dozen new AI Foundry regions with the public previews of Global Training and Developer Tier. With reduced pricing and global availability, AI Foundry makes fine tuning the latest OpenAI models on Azure more accessible and affordable than ever to bring your agents to life. What is Global Training? Global Training expands the reach of model customization with the affordable pricing of our other Global offerings: 🏋️‍♂️ Train the latest OpenAI models …
    Expand Azure AI Foundry Agent Service with More Knowledge and Action Tools
    Customers often face challenges deploying AI agents that can handle complex, real-world scenarios due to limited tool access. To empower AI agents with more capabilities, Azure AI Foundry Agent Service now supports a growing set of integrated tools: Grounding with Bing Custom Search (preview), SharePoint (coming soon), Azure Logic Apps (preview), Triggers (preview), and third-party tools, enabling agents to retrieve richer information, take more meaningful actions, and deliver intelligent, goal-driven results. This expansion helps organizations build smarter, more adaptable agents that can better align with dynamic business needs.   Today, we are thrilled to announce Azure AI Foundry Agent Service is now Generally Available including following tools: Azure AI Search, Azure Functions, Code …
    Building a Digital Workforce with Multi-Agents in Azure AI Foundry Agent Service
    As organizations increasingly rely on AI to automate complex tasks and scale digital operations, the ability to coordinate multiple agents in a single, cohesive system is becoming critical. Moving beyond single-agent architectures to multi-agent systems enables richer, more dynamic automation – where specialized agents can collaborate, share context, and complete multi-step processes with minimal human intervention. This shift is unlocking the potential for organizations to build full digital workforces that can manage everything from customer support to supply chain automation.   Why Multi-Agents Matter Building a single agent is often straightforward – it’s designed to perform a specific task, like answering common support queries or generating summaries from documents. However, real-wor…
    Announcing General Availability of Azure AI Foundry Agent Service
    Agents are revolutionizing business automation by evolving from simple chatbots into sophisticated, collaborative systems. Fueled by advancements in model reasoning and efficiency, these agents can now handle complex, multi-step processes with speed and accuracy. This marks a shift from isolated tools to dynamic, scalable agent workforces that coordinate tasks, share context, and adapt in real time. This transformation enables businesses to optimize operations and elevate customer experiences through AI-driven workflows. Instead of relying on single-purpose bots, organizations are deploying ecosystems of specialized agents that can interact, reason, and respond to changing conditions with minimal oversight. Yet building and scaling these systems is not without challenges. Developers must i…
  • Open

    Announcing Azure Command Launcher for Java
    Optimizing JVM Configuration for Azure Deployments Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, therefore relying on the default settings of the HotSpot JVM. The default settings in OpenJDK are intentionally conservative, designed to work across […] The post Announcing Azure Command Launcher for Java appeared first on Microsoft for Java Developers.  ( 24 min )
    Vibe coding with GitHub Copilot: Agent mode and MCP support in JetBrains and Eclipse
    Today, we’re excited to announce that GitHub Copilot Agent Mode and MCP support are now in public preview for both JetBrains and Eclipse! Whether you’re working in IntelliJ IDEA, PyCharm, WebStorm or Eclipse, you can now access Copilot’s intelligent agent features and seamlessly manage your project workflows, all from within your IDE.  In this post, we’ll […] The post Vibe coding with GitHub Copilot: Agent mode and MCP support in JetBrains and Eclipse appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    Efficient JSON loading to Eventhouse in Fabric Real-Time Intelligence
    In the era of big data, efficiently parsing and analyzing JSON data is critical for gaining actionable insights. Leveraging Kusto, a powerful query engine developed by Microsoft, enhances the efficiency of handling JSON data, making it simpler and faster to derive meaningful patterns and trends. Perhaps more importantly, Kusto’s ability to easily parse simple or … Continue reading “Efficient JSON loading to Eventhouse in Fabric Real-Time Intelligence”  ( 8 min )
    Simplifying Data Ingestion with Copy job – Introducing Change Data Capture (CDC) Support (Preview)
    Copy job is designed to simplify your data ingestion experience without compromise from any source to any destination. It supports multiple data delivery styles, including both batch and incremental copy, providing the flexibility to meet diverse needs. We are excited to introduce the preview of native Change Data Capture (CDC) support in Copy job that … Continue reading “Simplifying Data Ingestion with Copy job – Introducing Change Data Capture (CDC) Support (Preview)”  ( 7 min )
    New Copilot experience in Dataflow Gen2: Natural language to custom column
    Dataflows have great capabilities to help you add new columns. It has an extensive section in the ribbon that helps you create new columns based on the data of your table. For scenarios in which you wish to have full control over how the new column should be created, you can opt for the Custom … Continue reading “New Copilot experience in Dataflow Gen2: Natural language to custom column”  ( 6 min )
    Continuous Ingestion from Azure Storage to Eventhouse (Preview)
    The integration of Azure Storage with Fabric Eventhouse for continuous ingestion represents a significant simplification of data ingestion process from Azure Storage to Eventhouse in Fabric Real-Time Intelligence. It automates extraction and loading from Azure storage and facilitates near real-time updates to Eventhouse KQL DB tables. With this feature, it is now easier for organizations … Continue reading “Continuous Ingestion from Azure Storage to Eventhouse (Preview)”  ( 9 min )
    Extracting deeper insights with Fabric Data Agents in Copilot in Power BI
    Co-author: Joanne Wong We’re excited to announce the upcoming integration of Fabric data agent with Copilot in Power BI, enhancing your ability to extract insights seamlessly. What’s new? A new chat with your data experience is launching soon in Power BI– a full-screen Copilot for users to ask natural language questions and receive accurate, relevant … Continue reading “Extracting deeper insights with Fabric Data Agents in Copilot in Power BI”  ( 7 min )
  • Open

    Configuring Key Vault with Java App Service Linux
    In this blog post we’ll cover the process of integrating Key Vault in the Java Spring Boot app that runs on App Service Linux.  ( 6 min )
  • Open

    Spring AI 1.0 GA is Here - Build Java AI Apps End-to-End on Azure Today
    Spring AI 1.0 is now generally available, and it is ready to help Java developers bring the power of AI into their Spring Boot applications. This release is the result of open collaboration and contributions across the Spring and Microsoft Azure engineering teams. Together, they have made it simple for Java developers to integrate LLMs, vector search, memory, and agentic workflows using the patterns they already know. Why This Matters for Java Developers? Spring AI 1.0, built and maintained by the Spring team at Broadcom with active contributions from Microsoft Azure, delivers an intuitive and powerful foundation for building intelligent apps. You can plug AI into existing Spring Boot apps with minimal friction, using starters and conventions familiar to every Spring developer. Whether you…  ( 32 min )
    Spring AI 1.0 GA is Here - Build Java AI Apps End-to-End on Azure Today
    Spring AI 1.0 is now generally available, and it is ready to help Java developers bring the power of AI into their Spring Boot applications. This release is the result of open collaboration and contributions across the Spring and Microsoft Azure engineering teams. Together, they have made it simple for Java developers to integrate LLMs, vector search, memory, and agentic workflows using the patterns they already know. Why This Matters for Java Developers? Spring AI 1.0, built and maintained by the Spring team at Broadcom with active contributions from Microsoft Azure, delivers an intuitive and powerful foundation for building intelligent apps. You can plug AI into existing Spring Boot apps with minimal friction, using starters and conventions familiar to every Spring developer. Whether you…
    Red Hat OpenShift Virtualization on Azure Red Hat OpenShift in Public Preview
    Today we're excited to announce the public preview of Red Hat OpenShift Virtualization on Azure Red Hat OpenShift (ARO). This new capability brings you:  The best of both worlds - Run your virtual machines and containers on a single platform with unified management  Modernization at your pace - Keep critical VMs running while gradually moving components to containers when you're ready  Maximum value from Azure - Leverage your existing Azure benefits and commitments while optimizing resource usage across all workloads  This collaboration between Microsoft and Red Hat helps you simplify operations, reduce costs, and accelerate your cloud journey - all while preserving your existing VM investments.  Why Organizations Need a Unified VM and Container Platform  Organizations today face sig…  ( 39 min )
    Red Hat OpenShift Virtualization on Azure Red Hat OpenShift in Public Preview
    Today we're excited to announce the public preview of Red Hat OpenShift Virtualization on Azure Red Hat OpenShift (ARO). This new capability brings you:  The best of both worlds - Run your virtual machines and containers on a single platform with unified management  Modernization at your pace - Keep critical VMs running while gradually moving components to containers when you're ready  Maximum value from Azure - Leverage your existing Azure benefits and commitments while optimizing resource usage across all workloads  This collaboration between Microsoft and Red Hat helps you simplify operations, reduce costs, and accelerate your cloud journey - all while preserving your existing VM investments.  Why Organizations Need a Unified VM and Container Platform  Organizations today face sig…
    Configure health probes for Quarkus Apps on Azure Container Apps
    Overview This blog post shows you how to enable SmallRye Health in Quarkus applications with health probes in Azure Container Apps. The techniques shown in this blog post show you how to effectively monitor the health and status of your Quarkus instances. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app uses SmallRye Health to expose application health to Azure Container Apps.   The Quarkus SmallRye Health extension provides the following REST endpoints:   /q/health/live:  Indicates that the application is up and running.   /q/health/ready:  Indicates that the application is ready to accept requests.   /q/health/started:  Indicates that the application starts…  ( 49 min )
    Configure health probes for Quarkus Apps on Azure Container Apps
    Overview This blog post shows you how to enable SmallRye Health in Quarkus applications with health probes in Azure Container Apps. The techniques shown in this blog post show you how to effectively monitor the health and status of your Quarkus instances. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app uses SmallRye Health to expose application health to Azure Container Apps.   The Quarkus SmallRye Health extension provides the following REST endpoints:   /q/health/live:  Indicates that the application is up and running.   /q/health/ready:  Indicates that the application is ready to accept requests.   /q/health/started:  Indicates that the application starts…
    How to debug Azure WebJobs Storage Extensions SDK in dotnet isolated blob trigger function app
    This blog post provides a step-to-step guidance for how to debug the Azure WebJobs Extension SDK both locally and remotely on the function app.   As you all know, Azure Function Apps are built on top of the Azure WebJobs SDK, which is an open-source framework that simplifies the task of writing background processing code that runs in Azure. The Azure WebJobs SDK includes a declarative binding and trigger system, commonly referred to as Azure WebJobs Extensions. These extensions allow developers to easily integrate with various Azure services and define how their background tasks respond to events.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask questions there. When r…  ( 26 min )
    How to debug Azure WebJobs Storage Extensions SDK in dotnet isolated blob trigger function app
    This blog post provides a step-to-step guidance for how to debug the Azure WebJobs Extension SDK both locally and remotely on the function app.   As you all know, Azure Function Apps are built on top of the Azure WebJobs SDK, which is an open-source framework that simplifies the task of writing background processing code that runs in Azure. The Azure WebJobs SDK includes a declarative binding and trigger system, commonly referred to as Azure WebJobs Extensions. These extensions allow developers to easily integrate with various Azure services and define how their background tasks respond to events.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask questions there. When r…
    Public Preview: GitHub Copilot App Modernization for Java
    Modernizing Java applications and migrating to the cloud is typically a complex, labor-intensive, and fragmented process. GitHub Copilot App Modernization for Java is a powerful solution designed to simplify and accelerate your journey to the cloud. App Modernization and upgrade for Java offers an intelligent, guided approach that automates Java version upgrade and repetitive tasks and improves consistency — saving time, reducing risks, and accelerating time-to-cloud. GitHub Copilot App Modernization and upgrade for Java is in public preview and offered in a single extension pack, available in the VS Code marketplace. https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-app-mod-pack   The GitHub Copilot App Modernization for Java provides six distinct value pillars, each spe…  ( 27 min )
    Public Preview: GitHub Copilot App Modernization for Java
    Modernizing Java applications and migrating to the cloud is typically a complex, labor-intensive, and fragmented process. GitHub Copilot App Modernization for Java [and upgrade for Java] is a powerful solution designed to simplify and accelerate your journey to the cloud. App Modernization and upgrade for Java offers an intelligent, guided approach that automates Java version upgrade and repetitive tasks and improves consistency — saving time, reducing risks, and accelerating time-to-cloud. GitHub Copilot App Modernization and upgrade for Java is in public preview and offered in a single extension pack, available in the VS Code marketplace. https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-app-mod-pack   The GitHub Copilot App Modernization for Java provides six distinct …
    Powering the Next Generation of AI Apps and Agents on the Azure Application Platform
    Generative AI is already transforming how businesses operate, with organizations seeing an average return of 3.7x for every $1 of investment [The Business Opportunity of AI, IDC study commissioned by Microsoft]. Developers sit at the center of this transformation, and their need for speed, flexibility, and familiarity with existing tools is driving the demand for application platforms that integrate AI seamlessly into their current development workflows. To fully realize the potential of generative AI in applications, organizations must provide developers with frictionless access to AI models, frameworks, and environments that enable them to scale AI applications. We see this in action at organizations like Accenture, Assembly Software, Carvana, Coldplay (Pixel Lab), Global Travel Collecti…  ( 38 min )
    Powering the Next Generation of AI Apps and Agents on the Azure Application Platform
    Generative AI is already transforming how businesses operate, with organizations seeing an average return of 3.7x for every $1 of investment [The Business Opportunity of AI, IDC study commissioned by Microsoft]. Developers sit at the center of this transformation, and their need for speed, flexibility, and familiarity with existing tools is driving the demand for application platforms that integrate AI seamlessly into their current development workflows. To fully realize the potential of generative AI in applications, organizations must provide developers with frictionless access to AI models, frameworks, and environments that enable them to scale AI applications. We see this in action at organizations like Accenture, Assembly Software, Carvana, Coldplay (Pixel Lab), Global Travel Collecti…
    New Observability & Debugging Capabilities for Azure Container Apps
    Azure Container Apps gives you a strong foundation for monitoring and debugging, with built-in features that give you a holistic view of your container app’s health throughout its application lifecycle. As applications grow, developers need even deeper visibility and faster ways to troubleshoot issues. That’s why we’re excited to announce new observability and debugging features. These features will help you further monitor your environment and identify root causes faster. Generally Available: OpenTelemetry agent in Azure Container Apps OpenTelemetry agent in Azure Container Apps is now generally available. This feature enables you to use open-source standards to send your app’s data without setting up the OpenTelemetry collector yourself. You can use the managed agent to choose where to s…  ( 28 min )
    New Observability & Debugging Capabilities for Azure Container Apps
    Azure Container Apps gives you a strong foundation for monitoring and debugging, with built-in features that give you a holistic view of your container app’s health throughout its application lifecycle. As applications grow, developers need even deeper visibility and faster ways to troubleshoot issues. That’s why we’re excited to announce new observability and debugging features. These features will help you further monitor your environment and identify root causes faster. Generally Available: OpenTelemetry agent in Azure Container Apps OpenTelemetry agent in Azure Container Apps is now generally available. This feature enables you to use open-source standards to send your app’s data without setting up the OpenTelemetry collector yourself. You can use the managed agent to choose where to s…
    What's New in Azure App Service at #MSBuild 2025
    New App Service Premium v4 plan  The new App Service Premium v4 (Pv4) plan has entered public preview at Microsoft Build 2025 for both Windows and Linux!  This new plan is designed to support today's highly demanding application performance, scale, and budgets. Built on the latest "v6" general-purpose virtual machines and memory-optimized x64 Azure hardware with faster processors and NVMe temporary storage, it provides a noticeable performance uplift over prior generations of App Service Premium plans (over 25% in early testing). The Premium v4 offering includes nine new sizes ranging from P0v4 with a single virtual CPU and 4GB RAM all the way up through P5mv4, with 32 virtual CPUs and 256GB RAM, providing CPU and memory options to meet any business need. App Service Premium v4 plans provi…  ( 57 min )
    What's New in Azure App Service at #MSBuild 2025
    New App Service Premium v4 plan  The new App Service Premium v4 (Pv4) plan has entered public preview at Microsoft Build 2025 for both Windows and Linux!  This new plan is designed to support today's highly demanding application performance, scale, and budgets. Built on the latest "v6" general-purpose virtual machines and memory-optimized x64 Azure hardware with faster processors and NVMe temporary storage, it provides a noticeable performance uplift over prior generations of App Service Premium plans (over 25% in early testing). The Premium v4 offering includes nine new sizes ranging from P0v4 with a single virtual CPU and 4GB RAM all the way up through P5mv4, with 32 virtual CPUs and 256GB RAM, providing CPU and memory options to meet any business need. App Service Premium v4 plans provi…
    Unlocking new AI workloads in Azure Container Apps
    The rapid rise of AI has unlocked powerful new scenarios—from AI-powered chatbots and image generation to advanced agents. However, deploying AI models at scale presents real challenges including managing compute-intensive workloads, complexities with model deployment, and executing untrusted AI-generated code safely. Azure Container Apps addresses these challenges by offering a fully managed, flexible, serverless container platform designed for modern, cloud-native applications – now with GA of Dedicated GPUs, improved integrations for deploying Foundry models to Azure Container Apps, and the private preview of GPU powered dynamic sessions. Foundry Model Integration Azure AI Foundry Models support a wide collection of ready-to-deploy models. Traditionally, these models can be deployed wi…  ( 26 min )
    Unlocking new AI workloads in Azure Container Apps
    The rapid rise of AI has unlocked powerful new scenarios—from AI-powered chatbots and image generation to advanced agents. However, deploying AI models at scale presents real challenges including managing compute-intensive workloads, complexities with model deployment, and executing untrusted AI-generated code safely. Azure Container Apps addresses these challenges by offering a fully managed, flexible, serverless container platform designed for modern, cloud-native applications – now with GA of Dedicated GPUs, improved integrations for deploying Foundry models to Azure Container Apps, and the private preview of GPU powered dynamic sessions. Foundry Model Integration Azure AI Foundry Models support a wide collection of ready-to-deploy models. Traditionally, these models can be deployed wi…
    Azure App Service Premium v4 plan is now in public preview
    Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services. Key benefits: Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux. Built using next-gen Azure hardware for higher performance and reliability. Lower total cost of ownership with new pricing tailored for large-scale app modernization projects. and more to come! Fully managed platform-as-a-service (PaaS) As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather th…  ( 26 min )
    Azure App Service Premium v4 plan is now in public preview
    Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services. Key benefits: Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux. Built using next-gen Azure hardware for higher performance and reliability. Lower total cost of ownership with new pricing tailored for large-scale app modernization projects. and more to come! Fully managed platform-as-a-service (PaaS) As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather th…
    Azure Functions – Build 2025
    Azure Functions – Build 2025 update With Microsoft Build underway, the team is excited to provide an update on the latest releases in Azure Functions this year. Customers are leveraging Azure Functions to build AI solutions, thanks to its serverless capabilities that scale on demand and its native integration for processing real-time data. The newly launched capabilities enable the creation of AI and agentic applications with enhanced offerings, built-in security, and a pay-as-you-go model. Real-time retrieval augmented generation, making organizational data accessible through semantic search Native event driven tool function calling with the AI Foundry Agent service Hosted Model Context Protocol servers. Support for Flex consumption plans, including zone redundancy, increased regions, an…  ( 51 min )
    Azure Functions – Build 2025
    Azure Functions – Build 2025 update With Microsoft Build underway, the team is excited to provide an update on the latest releases in Azure Functions this year. Customers are leveraging Azure Functions to build AI solutions, thanks to its serverless capabilities that scale on demand and its native integration for processing real-time data. The newly launched capabilities enable the creation of AI and agentic applications with enhanced offerings, built-in security, and a pay-as-you-go model. Real-time retrieval augmented generation, making organizational data accessible through semantic search Native event driven tool function calling with the AI Foundry Agent service Hosted Model Context Protocol servers. Support for Flex consumption plans, including zone redundancy, increased regions, an…
    Build secure, flexible, AI-enabled applications with Azure Kubernetes Service
    Building AI applications has never been more accessible. With advancements in tools and platforms, developers can now create sophisticated AI solutions that drive innovation and efficiency across various industries. For many, Kubernetes stands out as natural choice for running AI applications and agents due to its robust orchestration capabilities, scalability, and flexibility.  In this blog, we will explore the latest advancements in Azure Kubernetes Service (AKS) we are announcing at Microsoft Build 2025, designed to enhance flexibility, bolster security, and seamlessly integrate AI capabilities into your Kubernetes environments. These updates will empower developers to create sophisticated AI solutions, improve operational efficiency, and drive innovation across various industries. Let'…  ( 38 min )
    Build secure, flexible, AI-enabled applications with Azure Kubernetes Service
    Building AI applications has never been more accessible. With advancements in tools and platforms, developers can now create sophisticated AI solutions that drive innovation and efficiency across various industries. For many, Kubernetes stands out as natural choice for running AI applications and agents due to its robust orchestration capabilities, scalability, and flexibility.  In this blog, we will explore the latest advancements in Azure Kubernetes Service (AKS) we are announcing at Microsoft Build 2025, designed to enhance flexibility, bolster security, and seamlessly integrate AI capabilities into your Kubernetes environments. These updates will empower developers to create sophisticated AI solutions, improve operational efficiency, and drive innovation across various industries. Let'…
    Announcing Workflow in Azure Container Apps with the Durable task scheduler – Now in Preview!
    We are thrilled to announce the durable workflow capabilities in Azure Container Apps with the Durable task scheduler (preview). This new feature brings powerful workflow capabilities to Azure Container Apps, enabling developers to build and manage complex, durable workflows as code with ease. What is Workflow and the Durable task scheduler? If you’ve missed the initial announcement of the durable task scheduler, please see these existing blog posts: https://aka.ms/dts-early-access https://aka.ms/dts-public-preview In summary, the Durable task scheduler is a fully managed backend for durable execution. Durable Execution is a fault-tolerant approach to running code, designed to handle failures gracefully through automatic retries and state persistence. It is built on three core principle…  ( 26 min )
    What's new in Azure Container Apps at Build'25
    Azure Container Apps is a fully managed serverless container service that runs microservices and containerized applications on Azure. It provides built-in autoscaling, including scale to zero, and offers simplified developer experience with support for multiple programming languages and frameworks, including special features built for .NET and Java. Container Apps also provides many advanced networking and monitoring capabilities, offering seamless deployment and management of containerized applications without the need to manage underlying infrastructure. Following the features announced at Ignite’24, we've continued to innovate and enhance Azure Container Apps. We announced the general availability of Serverless GPUs, enabling seamless AI workloads with automatic scaling, optimized cold …  ( 37 min )
    Building Durable and Deterministic Multi-Agent Orchestrations with Durable Execution
    Durable Execution Durable Execution is a reliable approach to running code, designed to handle failures smoothly with automatic retries and state persistence. It is built on three core principles: Incremental Execution: Each operation runs independently and in order. State Persistence: The output of each step is durably saved to ensure progress is not lost. Fault Tolerance: If a step fails, the operation is retried from the last successful step, skipping previously completed steps. Durable Execution is particularly beneficial for scenarios requiring stateful chaining of operations, such as order-processing applications, data processing pipelines, ETL (extract, transform, load), and as we'll get into in this post, intelligent applications with AI agents. Durable execution simplifies the i…  ( 32 min )
    Building the Agentic Future
    As a business built by developers, for developers, Microsoft has spent decades making it faster, easier and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub and more into the digital world we all live in today. But nothing compares to what’s on the horizon as agentic AI redefines both how we build and the apps we’re building. In fact, the promise of agentic AI is so strong that market forecasts predict we’re on track to reach 1.3 billion AI Agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top three priority when comparing deployments earlier this year to applications being define…  ( 42 min )
    New Networking Capabilities in Azure Container Apps
    New Networking Capabilities in Azure Container Apps Azure Container Apps is your go-to fully managed serverless container service that enables you to deploy and run containerized applications with per-second billing and autoscaling without having to manage infrastructure.   Today, Azure Container Apps is thrilled to announce several new enterprise capabilities that will take the flexibility, security, and manageability of your containerized applications to the next level. These capabilities include premium ingress, rule-based routing, private endpoints, Azure Arc integration, and planned maintenance. Let’s dive into the advanced networking features that Azure Container Apps has introduced. Public Preview: Premium Ingress in Azure Container Apps Azure Container Apps now supports premium ing…  ( 24 min )
    Reimagining App Modernization for the Era of AI
    If you’ve ever been to Microsoft Build, you know it’s not just another tech conference, it’s where ideas spark, connections happen, and the future of software takes shape. This year in Seattle, the energy is off the charts. And for those of us passionate about app modernization, it’s a moment we’ve been building toward (pun intended). At Microsoft, we believe modernization is more than just updating old systems—it’s about unlocking new possibilities with AI at the center. And at Build 2025, we’re showing exactly how we’re doing that. AI Is Changing Everything—And We’re Here for It Let’s start with the big picture: AI is transforming the entire software development lifecycle. From how we build apps to how we manage and scale them, AI is the force multiplier, reshaping how we work, build, de…  ( 35 min )
    Announcing Workflow in Azure Container Apps with the Durable task scheduler – Now in Preview!
    We are thrilled to announce the durable workflow capabilities in Azure Container Apps with the Durable task scheduler (preview). This new feature brings powerful workflow capabilities to Azure Container Apps, enabling developers to build and manage complex, durable workflows as code with ease. What is Workflow and the Durable task scheduler? If you’ve missed the initial announcement of the durable task scheduler, please see these existing blog posts: https://aka.ms/dts-early-access https://aka.ms/dts-public-preview In summary, the Durable task scheduler is a fully managed backend for durable execution. Durable Execution is a fault-tolerant approach to running code, designed to handle failures gracefully through automatic retries and state persistence. It is built on three core principle…
    What's new in Azure Container Apps at Build'25
    Azure Container Apps is a fully managed serverless container service that runs microservices and containerized applications on Azure. It provides built-in autoscaling, including scale to zero, and offers simplified developer experience with support for multiple programming languages and frameworks, including special features built for .NET and Java. Container Apps also provides many advanced networking and monitoring capabilities, offering seamless deployment and management of containerized applications without the need to manage underlying infrastructure. Following the features announced at Ignite’24, we've continued to innovate and enhance Azure Container Apps. We announced the general availability of Serverless GPUs, enabling seamless AI workloads with automatic scaling, optimized cold …
    Building Durable and Deterministic Multi-Agent Orchestrations with Durable Execution
    Durable Execution Durable Execution is a reliable approach to running code, designed to handle failures smoothly with automatic retries and state persistence. It is built on three core principles: Incremental Execution: Each operation runs independently and in order. State Persistence: The output of each step is durably saved to ensure progress is not lost. Fault Tolerance: If a step fails, the operation is retried from the last successful step, skipping previously completed steps. Durable Execution is particularly beneficial for scenarios requiring stateful chaining of operations, such as order-processing applications, data processing pipelines, ETL (extract, transform, load), and as we'll get into in this post, intelligent applications with AI agents. Durable execution simplifies the i…
    Building the Agentic Future
    As a business built by developers, for developers, Microsoft has spent decades making it faster, easier and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub and more into the digital world we all live in today. But nothing compares to what’s on the horizon as agentic AI redefines both how we build and the apps we’re building. In fact, the promise of agentic AI is so strong that market forecasts predict we’re on track to reach 1.3 billion AI Agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top three priority when comparing deployments earlier this year to applications being define…
    New Networking Capabilities in Azure Container Apps
    New Networking Capabilities in Azure Container Apps Azure Container Apps is your go-to fully managed serverless container service that enables you to deploy and run containerized applications with per-second billing and autoscaling without having to manage infrastructure.   Today, Azure Container Apps is thrilled to announce several new enterprise capabilities that will take the flexibility, security, and manageability of your containerized applications to the next level. These capabilities include premium ingress, rule-based routing, private endpoints, Azure Arc integration, and planned maintenance. Let’s dive into the advanced networking features that Azure Container Apps has introduced. Public Preview: Premium Ingress in Azure Container Apps Azure Container Apps now supports premium ing…
    Reimagining App Modernization for the Era of AI
    If you’ve ever been to Microsoft Build, you know it’s not just another tech conference, it’s where ideas spark, connections happen, and the future of software takes shape. This year in Seattle, the energy is off the charts. And for those of us passionate about app modernization, it’s a moment we’ve been building toward (pun intended). At Microsoft, we believe modernization is more than just updating old systems—it’s about unlocking new possibilities with AI at the center. And at Build 2025, we’re showing exactly how we’re doing that. AI Is Changing Everything—And We’re Here for It Let’s start with the big picture: AI is transforming the entire software development lifecycle. From how we build apps to how we manage and scale them, AI is the force multiplier, reshaping how we work, build, de…
  • Open

    Announcing new features and updates in Azure Event Grid
    Discover powerful new features in Azure Event Grid, enhancing its functionality and user experience. This fully managed event broker now supports multi-protocol interoperability, including MQTT, for scalable messaging. It seamlessly connects Microsoft-native and third-party services, enabling robust event-driven applications. Streamline event management with flexible push-pull communication patterns.   We are thrilled to announce General Availability of the Cross-tenant delivery to Event Hubs, Service Bus, Storage Queues, and dead letter storage using managed identity with federated identity credentials (FIC) from Azure Event Grid topics, domains, system topics, and partner topics. New cross-tenant scenarios, currently in Public Preview enable delivery to Event Hubs, webhooks, and dead let…  ( 23 min )
    Announcing new features and updates in Azure Event Grid
    Discover powerful new features in Azure Event Grid, enhancing its functionality and user experience. This fully managed event broker now supports multi-protocol interoperability, including MQTT, for scalable messaging. It seamlessly connects Microsoft-native and third-party services, enabling robust event-driven applications. Streamline event management with flexible push-pull communication patterns.   We are thrilled to announce General Availability of the Cross-tenant delivery to Event Hubs, Service Bus, Storage Queues, and dead letter storage using managed identity with federated identity credentials (FIC) from Azure Event Grid topics, domains, system topics, and partner topics. New cross-tenant scenarios, currently in Public Preview enable delivery to Event Hubs, webhooks, and dead let…
  • Open

    What’s new in Observability at Build 2025
    At Build 2025, we are excited to announce new features in Azure Monitor designed to enhance observability for developers and SREs, making it easier for you to streamline troubleshooting, improve monitoring efficiency, and gain deeper insights into application performance. With our new AI-powered tools, customizable alerts, and advanced visualization capabilities, we’re empowering developers to deliver high-quality, resilient applications with greater operational efficiency. AI-Powered Troubleshooting Capabilities We are excited to disclose two new AI-powered features, as well as share an update to a GA feature, which enhance troubleshooting and monitoring: AI-powered investigations (Public Preview): Identifies possible explanations for service degradations via automated analyses, consolid…  ( 29 min )
    What’s new in Observability at Build 2025
    At Build 2025, we are excited to announce new features in Azure Monitor designed to enhance observability for developers and SREs, making it easier for you to streamline troubleshooting, improve monitoring efficiency, and gain deeper insights into application performance. With our new AI-powered tools, customizable alerts, and advanced visualization capabilities, we’re empowering developers to deliver high-quality, resilient applications with greater operational efficiency. AI-Powered Troubleshooting Capabilities We are excited to disclose two new AI-powered features, as well as share an update to a GA feature, which enhance troubleshooting and monitoring: AI-powered investigations (Public Preview): Identifies possible explanations for service degradations via automated analyses, consolid…
    Announcing the Public Preview of Azure Monitor health models
    Troubleshooting modern cloud-native workloads has become increasingly complex. As applications scale across distributed services and regions, pinpointing the root cause of performance degradation or outages often requires navigating a maze of disconnected signals, metrics, and alerts. This fragmented experience slows down troubleshooting and burdens engineering teams with manual correlation work.  We address these challenges by introducing a unified, intelligent concept of workload health that’s enriched with application context. Health models streamline how you monitor, assess, and respond to issues affecting your workloads. Built on Azure Service Groups, they provide an out-of-the-box model tailored to your environment, consolidate signals to reduce alert noise, and surface actionable in…  ( 29 min )
    Announcing the Public Preview of Azure Monitor health models
    Troubleshooting modern cloud-native workloads has become increasingly complex. As applications scale across distributed services and regions, pinpointing the root cause of performance degradation or outages often requires navigating a maze of disconnected signals, metrics, and alerts. This fragmented experience slows down troubleshooting and burdens engineering teams with manual correlation work.  We address these challenges by introducing a unified, intelligent concept of workload health that’s enriched with application context. Health models streamline how you monitor, assess, and respond to issues affecting your workloads. Built on Azure Service Groups, they provide an out-of-the-box model tailored to your environment, consolidate signals to reduce alert noise, and surface actionable in…
    Enhance your Azure visualizations using Azure Monitor dashboards with Grafana
    In line with our commitment to open-source solutions, we are announcing the public preview of Azure Monitor dashboards with Grafana. This service offers a powerful solution for cloud-native monitoring and visualizing Prometheus metrics. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal without additional cost and less administrative overhead compared to self-hosting Grafana or using managed Grafana services. Start quickly with pre-built and community dashboards Pre-built Grafana dashboards for Azure Kubernetes Services, Azure Monitor, and dozens of other Azure resources are included and enabled by default. Additionally, you can import dashboards from thousands of publicly available Grafana community and open-source dashboards for Prometh…  ( 22 min )
    Enhance your Azure visualizations using Azure Monitor dashboards with Grafana
    In line with our commitment to open-source solutions, we are announcing the public preview of Azure Monitor dashboards with Grafana. This service offers a powerful solution for cloud-native monitoring and visualizing Prometheus metrics. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal without additional cost and less administrative overhead compared to self-hosting Grafana or using managed Grafana services. Start quickly with pre-built and community dashboards Pre-built Grafana dashboards for Azure Kubernetes Services, Azure Monitor, and dozens of other Azure resources are included and enabled by default. Additionally, you can import dashboards from thousands of publicly available Grafana community and open-source dashboards for Prometh…
    Public Preview: Simple Log Alerts in Azure Monitor
    Public Preview: Simple Log Alerts in Azure Monitor We are excited to announce the Public Preview of Simple Log Alerts in Azure Monitor, available starting in mid-May. This new feature is designed to provide a simplified and more intuitive experience for monitoring and alerting, enhancing your ability to detect and respond to issues in near real-time. Simple Log Alerts are a new type of Log Search Alerts in Azure Monitor, designed to provide a simpler and faster alternative to Log Search Alerts. Unlike Log Search Alerts that aggregate rows over a defined period, Simple Log Alerts evaluate each row individually. This feature is now available for customers using Basic Logs who want to enable alerting. Previously, when customers opted to configure the traces table in Azure Monitor Application …  ( 21 min )
    GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips
    We’re thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience. Troubleshooting Kubernetes clusters is often time-consuming and complex whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view reducing the time and effort it takes to diagnose, triage, and resolve problems so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance delivering both a powerful and cost-efficient way…  ( 25 min )
    Public Preview: Smarter Troubleshooting in Azure Monitor with AI-powered Investigation
    Investigate smarter – click, analyze, and easily mitigate with Azure Monitor investigations! We are excited to introduce the public preview of Azure Monitor issue and investigation.      These new capabilities are designed to enhance your troubleshooting experience and streamline the process of resolving health degradations in your application and infrastructure. What it is  Azure Monitor investigation is an AI-powered, automated analysis designed to scan the telemetry gathered by Azure Monitor to troubleshoot and mitigate potential service health issues. Investigation provides a list of AI-generated findings that include a summary of what happened, potential causes, and steps for further troubleshooting and mitigation. Azure Monitor issue contains all observability related data and proce…  ( 23 min )
    Announcing the Launch of Customizable Email Subjects for Log Search Alerts V2 in Azure Monitor
    We are thrilled to announce the launch of a new feature in Azure Monitor: Customizable Email Subjects for Log Search Alerts V2, available during May. What it is Customizable Email Subjects for Log Search Alerts V2 is a new feature that enables customers to personalize the subject lines of alert emails, making it easier to quickly identify and respond to alerts with more relevant and specific information. How it works This feature allows you to override email subjects with dynamic values by concatenating information from the common schema and custom text. For example, you can customize email subjects to include specific details such as the name of the virtual machine (VM) or patching details, allowing for quick identification without opening the email. Getting Started To get started with Cu…  ( 22 min )
    Public Preview: Simple Log Alerts in Azure Monitor
    Public Preview: Simple Log Alerts in Azure Monitor We are excited to announce the Public Preview of Simple Log Alerts in Azure Monitor, available starting in mid-May. This new feature is designed to provide a simplified and more intuitive experience for monitoring and alerting, enhancing your ability to detect and respond to issues in near real-time. Simple Log Alerts are a new type of Log Search Alerts in Azure Monitor, designed to provide a simpler and faster alternative to Log Search Alerts. Unlike Log Search Alerts that aggregate rows over a defined period, Simple Log Alerts evaluate each row individually. This feature is now available for customers using Basic Logs who want to enable alerting. Previously, when customers opted to configure the traces table in Azure Monitor Application …
    GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips
    We’re thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience. Troubleshooting Kubernetes clusters is often time-consuming and complex whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view reducing the time and effort it takes to diagnose, triage, and resolve problems so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance delivering both a powerful and cost-efficient way…
    Public Preview: Smarter Troubleshooting in Azure Monitor with AI-powered Investigation
    Investigate smarter – click, analyze, and easily mitigate with Azure Monitor investigations! We are excited to introduce the public preview of Azure Monitor issue and investigation.      These new capabilities are designed to enhance your troubleshooting experience and streamline the process of resolving health degradations in your application and infrastructure. What it is  Azure Monitor investigation is an AI-powered, automated analysis designed to scan the telemetry gathered by Azure Monitor to troubleshoot and mitigate potential service health issues. Investigation provides a list of AI-generated findings that include a summary of what happened, potential causes, and steps for further troubleshooting and mitigation. Azure Monitor issue contains all observability related data and proce…
    Announcing the Launch of Customizable Email Subjects for Log Search Alerts V2 in Azure Monitor
    We are thrilled to announce the launch of a new feature in Azure Monitor: Customizable Email Subjects for Log Search Alerts V2, available during May. What it is Customizable Email Subjects for Log Search Alerts V2 is a new feature that enables customers to personalize the subject lines of alert emails, making it easier to quickly identify and respond to alerts with more relevant and specific information. How it works This feature allows you to override email subjects with dynamic values by concatenating information from the common schema and custom text. For example, you can customize email subjects to include specific details such as the name of the virtual machine (VM) or patching details, allowing for quick identification without opening the email. Getting Started To get started with Cu…
  • Open

    Supercharge AI development with new AI-powered features in Microsoft Dev Box
    AI is reshaping how we build, deploy, and scale software. As more apps become AI-powered, developers need environments that can keep up with speed and demands of innovation. We’ve heard from customers just how critical it is to have a zero-config experience when building AI applications, with streamlined access to compute, prebuilt models, and environment […] The post Supercharge AI development with new AI-powered features in Microsoft Dev Box appeared first on Develop from the cloud.  ( 32 min )
    Unlock developer potential with Microsoft Dev Box
    AI agents, model context protocols (MCPs), and emerging AI workflows are fundamentally transforming software development paradigms and broadening the range of applications we can create. Developers are expected to keep up with the pace of innovation, but that’s hard to do when working from traditional development environments or legacy VDI solutions. With Microsoft Dev Box, […] The post Unlock developer potential with Microsoft Dev Box appeared first on Develop from the cloud.  ( 27 min )
  • Open

    Introducing Microsoft 365 Copilot APIs
    Learn how Microsoft 365 Copilot APIs allow you to build solutions grounded in your organization’s content, context, and permissions, without needing to relocate or duplicate data. The post Introducing Microsoft 365 Copilot APIs appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Introducing Azure SRE Agent
    Today we’re thrilled to introduce Azure SRE Agent, an AI-powered tool that makes it easier to sustain production cloud environments. SRE Agent helps respond to incidents quickly and effectively, alleviating the toil of managing production environments. Overall, it results in better service uptime and reduced operational costs. SRE Agent leverages the reasoning capabilities of LLMs to identify the logs and metrics necessary for rapid root cause analysis and issue mitigation. Its advanced AI capabilities transform incident and infrastructure management in Azure, freeing engineers to focus on more meaningful work.  Sign up for the SRE Agent preview click here As more companies move their services online, site reliability engineering (SRE) has become crucial to keeping critical systems reliab…  ( 28 min )
    Introducing Azure SRE Agent
    Today we’re thrilled to introduce Azure SRE Agent, an AI-powered tool that makes it easier to sustain production cloud environments. SRE Agent helps respond to incidents quickly and effectively, alleviating the toil of managing production environments. Overall, it results in better service uptime and reduced operational costs. SRE Agent leverages the reasoning capabilities of LLMs to identify the logs and metrics necessary for rapid root cause analysis and issue mitigation. Its advanced AI capabilities transform incident and infrastructure management in Azure, freeing engineers to focus on more meaningful work.  Sign up for the SRE Agent preview click here As more companies move their services online, site reliability engineering (SRE) has become crucial to keeping critical systems reliab…

  • Open

    Announcing Public Preview of the GitHub Copilot app modernization for Java
    Modernizing Java applications and migrating to the cloud is often a complex, time-consuming, and fragmented process. GitHub Copilot app modernization for Java [and upgrade for Java] is a powerful solution designed to simplify and accelerate your journey to the cloud. Available now in Public Preview as a single extension pack in the Visual Studio Code Marketplace, […] The post Announcing Public Preview of the GitHub Copilot app modernization for Java appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    From diagrams to dialogue: Introducing new multimodal functionality in Azure AI Search
    Introduction  We're thrilled to introduce a new suite of multimodal capabilities in Azure AI Search.     This set of features includes both new additions and incremental improvements that enable Azure AI Search to extract text from pages and inline images, generate image descriptions (verbalization), and create vision/text embeddings. It also facilitates storing cropped images in the knowledge store and returning text and image annotations in RAG (Retrieval Augmented Generation) applications to end users.  These features can be configured in our new Azure portal wizard with multimodal support or via the REST API 2025-05-01-preview version.  In addition, we're providing a new GitHub repo with sample code for a RAG app. This resource shows how you can use the index created by Azure AI Search…  ( 34 min )
    Introducing agentic retrieval in Azure AI Search
    Today we’re announcing agentic retrieval in Azure AI Search, a multiturn query engine that plans and runs its own retrieval strategy for improved answer relevance. Compared to traditional, single-shot RAG, agentic retrieval improves answer relevance to complex questions by up to 40%.  It transforms queries, runs parallel searches, and delivers results tuned for agents, along with references and a query activity log. Now available in public preview. What is agentic retrieval? Agentic retrieval in Azure AI Search uses a new query architecture that incorporates user conversation history and an Azure OpenAI model to plan, retrieve and synthesize queries. Here's how it works: An LLM analyzes the entire chat thread to identify the underlying information need. Instead of a single, catch-all que…  ( 26 min )
    Up to 40% better relevance for complex queries with new agentic retrieval engine
    By Alec Berntson, Alina Stoica Beck, Amaia Salvador Aguilera, Arnau Quindós Sánchez, Thibault Gisselbrecht and Xianshun Chen   Agentic retrieval in Azure AI Search is a new API built to effectively answer complex queries by extracting the right content needed. The API defines and runs a query plan, incorporating conversation history and an Azure OpenAI model. It transforms complex queries, then performs multiple searches at once, combining the final results and delivering ready-to-use content for answer generation. In this post we detail the operations that take place while the API is called and walk through the numerous experiments and datasets to evaluate its relevance performance. We learned the agentic retrieval API automates optimal retrieval for complex user queries, so you can get m…  ( 74 min )
    From diagrams to dialogue: Introducing new multimodal functionality in Azure AI Search
    Introduction  We're thrilled to introduce a new suite of multimodal capabilities in Azure AI Search.     This set of features includes both new additions and incremental improvements that enable Azure AI Search to extract text from pages and inline images, generate image descriptions (verbalization), and create vision/text embeddings. It also facilitates storing cropped images in the knowledge store and returning text and image annotations in RAG (Retrieval Augmented Generation) applications to end users.  These features can be configured in our new Azure portal wizard with multimodal support or via the REST API 2025-05-01-preview version.  In addition, we're providing a new GitHub repo with sample code for a RAG app. This resource shows how you can use the index created by Azure AI Search…
    Introducing agentic retrieval in Azure AI Search
    Today we’re announcing agentic retrieval in Azure AI Search, a multiturn query engine that plans and runs its own retrieval strategy for improved answer relevance. Compared to traditional, single-shot RAG, agentic retrieval improves answer relevance to complex questions by up to 40%.  It transforms queries, runs parallel searches, and delivers results tuned for agents, along with references and a query activity log. Now available in public preview. What is agentic retrieval? Agentic retrieval in Azure AI Search uses a new query architecture that incorporates user conversation history and an Azure OpenAI model to plan, retrieve and synthesize queries. Here's how it works: An LLM analyzes the entire chat thread to identify the underlying information need. Instead of a single, catch-all que…
    Up to 40% better relevance for complex queries with new agentic retrieval engine
    By Alec Berntson, Alina Stoica Beck, Amaia Salvador Aguilera, Arnau Quindós Sánchez, Thibault Gisselbrecht and Xianshun Chen   Agentic retrieval in Azure AI Search is a new API built to effectively answer complex queries by extracting the right content needed. The API defines and runs a query plan, incorporating conversation history and an Azure OpenAI model. It transforms complex queries, then performs multiple searches at once, combining the final results and delivering ready-to-use content for answer generation. In this post we detail the operations that take place while the API is called and walk through the numerous experiments and datasets to evaluate its relevance performance. We learned the agentic retrieval API automates optimal retrieval for complex user queries, so you can get m…
  • Open

    Microsoft Fabric Spark: Native Execution Engine now generally available
    The Fabric Spark Native Execution Engine (NEE) is now generally available (GA) as part of Fabric Runtime 1.3. This C++-based vectorized engine (built on Apache Gluten and Velox) runs Spark workloads directly on the lakehouse, requiring no code changes or new libraries. It supports Spark 3.5 APIs and both Parquet and Delta Lake formats, so … Continue reading “Microsoft Fabric Spark: Native Execution Engine now generally available”  ( 6 min )
    Simplify Your Data Strategy: Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Effortless Analytics on Transactional Data
    PostgreSQL is a popular relational database in application development, and Azure Database for PostgreSQL provides a fully managed service with enterprise-level security, availability, and scalability. Integrating Azure Database for PostgreSQL with Microsoft Fabric through Mirroring (Preview) enables seamless replication of transactional data for analytics, simplifying data processes and ensuring real-time insights and data consistency. Microsoft … Continue reading “Simplify Your Data Strategy: Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Effortless Analytics on Transactional Data”  ( 8 min )
    New features in Mirroring for Azure SQL Managed Instance – private endpoint support and more
    Mirroring in Microsoft Fabric allows you to seamlessly reflect your existing data estate from Azure SQL Managed Instance into OneLake. The mirrored data is automatically kept up to date in near real-time, enabling advanced analytics and reporting to generate essential business insights. Since our preview announcement of Mirroring for Azure SQL Managed Instance, we have listened … Continue reading “New features in Mirroring for Azure SQL Managed Instance – private endpoint support and more”  ( 6 min )
    What’s new with Mirroring in Fabric at Microsoft Build 2025
    At Microsoft Build 2025, we are thrilled to show you the latest innovations that we have delivered with Mirroring in Fabric. Mirroring is a powerful feature that allows you to seamlessly reflect your existing data estate continuously from any database or data warehouse into OneLake in Fabric. Once Mirroring starts the replication process, the mirrored … Continue reading “What’s new with Mirroring in Fabric at Microsoft Build 2025”  ( 9 min )
    Dataflow Gen2 CI/CD, GIT integration and Public APIs (Generally Available)
    We’re excited to announce the General Availability of Dataflow Gen2 CI/CD & Git integration support! With this set of features, you can now seamlessly integrate your Dataflow Gen2 items with your existing CI/CD pipelines and version control of your workspace in Fabric. This integration allows for better collaboration, versioning, and automation of your deployment process … Continue reading “Dataflow Gen2 CI/CD, GIT integration and Public APIs (Generally Available)”  ( 7 min )
    Encrypt data at rest in your Fabric workspaces using customer-managed keys (Preview)
    As organizations advance in their cloud platform journey, ensuring robust data security remains fundamental. Encryption plays a crucial role in defense-in-depth strategies used to safeguard sensitive information by adding a layer of protection against unauthorized access. In addition to strengthening your security posture, encryption helps you adhere to your organization’s internal security, data governance and … Continue reading “Encrypt data at rest in your Fabric workspaces using customer-managed keys (Preview)”  ( 7 min )
    Warehouse Snapshots in Microsoft Fabric (Preview)
    Maintaining data consistency during ETL (Extract, Transform, Load) processes has long been a critical challenge for data engineers. Whether it’s a nightly pipeline overwriting key records or a mid-day transformation introducing schema drift, the risk of disrupting downstream analytics is both real and costly. In today’s fast-paced, data-driven world, even brief inconsistencies can break dashboards, … Continue reading “Warehouse Snapshots in Microsoft Fabric (Preview)”  ( 11 min )
    Get to insights faster with SaaS databases and “chat with your data”
    A recent study from the Social Science Research Network looked at 5,000 developers using generative AI tools in their day-to-day work and found a 26% average increase in completed tasks. The massive opportunity generative AI presents for developers and data professionals was one of the key driving forces behind the initial development of Microsoft Fabric. … Continue reading “Get to insights faster with SaaS databases and “chat with your data””  ( 14 min )
    Simplifying Medallion Implementation with Materialized Lake Views in Fabric
    We are excited to announce Materialized Lake views (MLV) in Microsoft Fabric. Coming soon in preview, MLV is a new feature that allows you to build declarative data pipelines using SQL, complete with built-in data quality rules and automatic monitoring of data transformations. In essence, an MLV is a persisted, continuously updated view of your … Continue reading “Simplifying Medallion Implementation with Materialized Lake Views in Fabric”  ( 9 min )
    New Shortcut Type for Azure Blob Storage in OneLake shortcuts
    We’re excited to announce a new shortcut type for Azure Blob Storage in Microsoft Fabric! As a key platform for storing unstructured data — from images and documents to logs and media — Azure Blob Storage plays a vital role in powering AI and advanced analytics solutions. With this new shortcut type, you can easily … Continue reading “New Shortcut Type for Azure Blob Storage in OneLake shortcuts”  ( 6 min )
    Announcing Cosmos DB in Microsoft Fabric (Preview)
    Announcing preview of Cosmos DB in Microsoft Fabric. Cosmos DB makes it easy to build AI apps, offering a database that scales automatically, deeply integrated with OneLake ad and is secure right out of the box. It’s built on scalability and performance of Azure Cosmos DB.  ( 8 min )
  • Open

    Agent management updates in the Copilot Control System
    Control who can find, use, and create agents, define permissions, approve or block agent deployments, and configure billing models including pay-as-you-go or prepaid options. Get detailed visibility into how agents are used, which users and groups are driving consumption, and how much they’re costing you. With Microsoft Purview integration, get visibility into sensitive data exposure, track compliance risks, and audit agent activity to stay secure and aligned with your organization’s data policies. Jeremy Chapman, Director of Microsoft 365, shares how to configure, deploy, monitor, and secure AI agents at scale. Define agent access by group or user. Customize permissions with Microsoft 365 admin controls. See how to use the Copilot Control System. Enable pay-as-you-go agent billing with m…  ( 38 min )
    Agent management updates in the Copilot Control System
    Control who can find, use, and create agents, define permissions, approve or block agent deployments, and configure billing models including pay-as-you-go or prepaid options. Get detailed visibility into how agents are used, which users and groups are driving consumption, and how much they’re costing you. With Microsoft Purview integration, get visibility into sensitive data exposure, track compliance risks, and audit agent activity to stay secure and aligned with your organization’s data policies. Jeremy Chapman, Director of Microsoft 365, shares how to configure, deploy, monitor, and secure AI agents at scale. Define agent access by group or user. Customize permissions with Microsoft 365 admin controls. See how to use the Copilot Control System. Enable pay-as-you-go agent billing with m…
  • Open

    Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
    Table of Contents Abstract Introduction Enterprise RAG: Challenges and Requirements NVIDIA AI Blueprint for RAG Adapting the Blueprint for Azure Deployment Azure NetApp Files: Powering High-Performance RAG Workloads Why Azure NetApp Files works well for RAG Service Levels for RAG Workloads Dynamic Service Level Adjustments Snapshot Capabilities for ML Versioning Cost Optimization Strategies Azure Reference Architecture for Enterprise RAG End-to-End Workflow Implementation Guide: Building the Pipeline Setup your Bash shell Setup your Azure Account Set environment variables Evaluating Your Enterprise RAG Pipeline Retrieval Accuracy Latency and Throughput GPU Utilization Cost Analysis What’s Coming Next Enterprise Use Cases and Real-World Applications Enterprise Search Customer Support Regula…  ( 81 min )
    Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
    Table of Contents Abstract Introduction Enterprise RAG: Challenges and Requirements NVIDIA AI Blueprint for RAG Adapting the Blueprint for Azure Deployment Azure NetApp Files: Powering High-Performance RAG Workloads Why Azure NetApp Files works well for RAG Service Levels for RAG Workloads Dynamic Service Level Adjustments Snapshot Capabilities for ML Versioning Cost Optimization Strategies Azure Reference Architecture for Enterprise RAG End-to-End Workflow Implementation Guide: Building the Pipeline Setup your Bash shell Setup your Azure Account Set environment variables Evaluating Your Enterprise RAG Pipeline Retrieval Accuracy Latency and Throughput GPU Utilization Cost Analysis What’s Coming Next Enterprise Use Cases and Real-World Applications Enterprise Search Customer Support Regula…
  • Open

    Unlocking AI Potential: Exploring the Model Context Protocol with AI Toolkit
    In the ever-evolving world of Generative AI, the pace of innovation is nothing short of breathtaking. Just a few short months ago, Large Language Models (LLMs) and their smaller counterparts, Small Language Models (SLMs), were the talk of the town. Then came Retrieval Augmented Generation (RAG), offering a powerful way to ground these models to specific knowledge bases. The emergence of Agents and Agentic AI further opened doors for new possibilities by using the GenAI Model capabilities. Now, as the next step in this exciting journey, we're witnessing the arrival of a new open protocol – one that standardizes how applications provide crucial context to LLMs. This new protocol is MCP. Model Context Protocol: Currently there is a Challenge while integrating LLMs with specific tools (like da…  ( 42 min )
    Unlocking AI Potential: Exploring the Model Context Protocol with AI Toolkit
    In the ever-evolving world of Generative AI, the pace of innovation is nothing short of breathtaking. Just a few short months ago, Large Language Models (LLMs) and their smaller counterparts, Small Language Models (SLMs), were the talk of the town. Then came Retrieval Augmented Generation (RAG), offering a powerful way to ground these models to specific knowledge bases. The emergence of Agents and Agentic AI further opened doors for new possibilities by using the GenAI Model capabilities. Now, as the next step in this exciting journey, we're witnessing the arrival of a new open protocol – one that standardizes how applications provide crucial context to LLMs. This new protocol is MCP. Model Context Protocol: Currently there is a Challenge while integrating LLMs with specific tools (like da…
  • Open

    Getting Started with .NET Aspire on Azure App Service
    We’re laying the groundwork to bring .NET Aspire to Azure App Service. While this is just the beginning, we wanted to give you an early preview of how to set up a basic Aspire application on App Service.  ( 5 min )

  • Open

    Kickstart Your AI Development with the Model Context Protocol (MCP) Course
    Model Context Protocol is an open standard that acts as a universal connector between AI models and the outside world. Think of MCP as “the USB-C of the AI world,” allowing AI systems to plug into APIs, databases, files, and other tools seamlessly. By adopting MCP, developers can create smarter, more useful AI applications that access up-to-date information and perform actions like a human developer would. To help developers learn this game-changing technology, Microsoft has created the “MCP for Beginners” course a free, open-source curriculum that guides you from the basics of MCP to building real-world AI integrations. Below, we’ll explore what MCP is, who this course is for, and how it empowers both beginners and intermediate developers to get started with MCP. What is MCP and Why Shoul…  ( 43 min )
    Kickstart Your AI Development with the Model Context Protocol (MCP) Course
    Model Context Protocol is an open standard that acts as a universal connector between AI models and the outside world. Think of MCP as “the USB-C of the AI world,” allowing AI systems to plug into APIs, databases, files, and other tools seamlessly. By adopting MCP, developers can create smarter, more useful AI applications that access up-to-date information and perform actions like a human developer would. To help developers learn this game-changing technology, Microsoft has created the “MCP for Beginners” course a free, open-source curriculum that guides you from the basics of MCP to building real-world AI integrations. Below, we’ll explore what MCP is, who this course is for, and how it empowers both beginners and intermediate developers to get started with MCP. What is MCP and Why Shoul…
  • Open

    Data security for agents and 3rd party AI in Microsoft Purview
    With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security. Move from detection to prevention. Built-in, pre-configured policies you can activate in seconds. Check o…  ( 43 min )
    Data security for agents and 3rd party AI in Microsoft Purview
    With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security. Move from detection to prevention. Built-in, pre-configured policies you can activate in seconds. Check o…
    Data security controls in OneLake
    Unify and secure your data — no matter where it lives — without sacrificing control using OneLake security, part of Microsoft Fabric.  With granular permissions down to the row, column, and table level, you can confidently manage access across engines like Power BI, Spark, and T-SQL, all from one place. Discover, label, and govern your data with clarity using the integrated OneLake catalog that surfaces the right items fast. Aaron Merrill, Microsoft Fabric Principal Program Manager, shows how you can stay in control, from security to discoverability — owning, sharing, and protecting data on your terms. Protect sensitive information at scale. Set precise data access rules — down to individual rows. Check out OneLake security in Microsoft Fabric. No data duplication needed. Hide sensitive…  ( 41 min )
    Data security controls in OneLake
    Unify and secure your data — no matter where it lives — without sacrificing control using OneLake security, part of Microsoft Fabric.  With granular permissions down to the row, column, and table level, you can confidently manage access across engines like Power BI, Spark, and T-SQL, all from one place. Discover, label, and govern your data with clarity using the integrated OneLake catalog that surfaces the right items fast. Aaron Merrill, Microsoft Fabric Principal Program Manager, shows how you can stay in control, from security to discoverability — owning, sharing, and protecting data on your terms. Protect sensitive information at scale. Set precise data access rules — down to individual rows. Check out OneLake security in Microsoft Fabric. No data duplication needed. Hide sensitive…

  • Open

    Microsoft 365 Copilot Wave 2 Spring updates
    Streamline your day with new, user-focused updates to Microsoft 365 Copilot. Jump into work faster with a redesigned layout that puts Chat, Search, and your agents front and center. New Copilot Search lets you yse natural language to find files, emails, and conversations — even if you don’t remember exact keywords — and get instant summaries and previews without switching apps. Create high-impact visuals, documents, and videos in seconds with the new Copilot Create experience, complete with support for brand templates. Tap into powerful agents like Researcher and Analyst to handle deep tasks or build your own with ease. And if you manage Copilot across your organization, you now have better tools to deploy, monitor, and secure AI use — all from a single view. Describe what you want. Don’…  ( 42 min )
    Microsoft 365 Copilot Wave 2 Spring updates
    Streamline your day with new, user-focused updates to Microsoft 365 Copilot. Jump into work faster with a redesigned layout that puts Chat, Search, and your agents front and center. New Copilot Search lets you yse natural language to find files, emails, and conversations — even if you don’t remember exact keywords — and get instant summaries and previews without switching apps. Create high-impact visuals, documents, and videos in seconds with the new Copilot Create experience, complete with support for brand templates. Tap into powerful agents like Researcher and Analyst to handle deep tasks or build your own with ease. And if you manage Copilot across your organization, you now have better tools to deploy, monitor, and secure AI use — all from a single view. Describe what you want. Don’…
  • Open

    Announcing the General Availability of New Availability Zone Features for Azure App Service
    What are Availability Zones?  Availability Zones, or zone redundancy, refers to the deployment of applications across multiple availability zones within an Azure region. Each availability zone consists of one or more data centers with independent power, cooling, and networking. By leveraging zone redundancy, you can protect your applications and data from data center failures, ensuring uninterrupted service.  Key Updates  The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two, while still maintaining a 99.99% SLA.  Many existing App Service plans with two or more instances will automatically support Availability Zones without additional setup.  The zone redundant setting for App Service plans and App Service Environment v3 is now …  ( 32 min )
    Announcing the General Availability of New Availability Zone Features for Azure App Service
    What are Availability Zones?  Availability Zones, or zone redundancy, refers to the deployment of applications across multiple availability zones within an Azure region. Each availability zone consists of one or more data centers with independent power, cooling, and networking. By leveraging zone redundancy, you can protect your applications and data from data center failures, ensuring uninterrupted service.  Key Updates  The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two, while still maintaining a 99.99% SLA.  Many existing App Service plans with two or more instances will automatically support Availability Zones without additional setup.  The zone redundant setting for App Service plans and App Service Environment v3 is now …
    Diagnose Web App Issues Instantly—Just Drop a Screenshot into Conversational Diagnostics
    It’s that time of year again—Microsoft Build 2025 is here! And in the spirit of pushing boundaries with AI, we’re thrilled to introduce a powerful new preview feature in Conversational Diagnostics. 📸 Diagnose with a ScreenshotNo more struggling to describe a tricky issue or typing out long explanations. With this new capability, you can simply paste, upload, or drag a screenshot into the chat. Conversational Diagnostics will analyze the image, identify the context, and surface relevant diagnostics for your selected Azure Resource—all in seconds. Whether you're debugging a web app or triaging a customer issue, this feature helps you move from problem to insight faster than ever. Thank you!  ( 18 min )
    Diagnose Web App Issues Instantly—Just Drop a Screenshot into Conversational Diagnostics
    It’s that time of year again—Microsoft Build 2025 is here! And in the spirit of pushing boundaries with AI, we’re thrilled to introduce a powerful new preview feature in Conversational Diagnostics. 📸 Diagnose with a ScreenshotNo more struggling to describe a tricky issue or typing out long explanations. With this new capability, you can simply paste, upload, or drag a screenshot into the chat. Conversational Diagnostics will analyze the image, identify the context, and surface relevant diagnostics for your selected Azure Resource—all in seconds. Whether you're debugging a web app or triaging a customer issue, this feature helps you move from problem to insight faster than ever. Thank you!

  • Open

    Allocating Azure ML Costs with Kubecost
    Cost tracking is a critical aspect of cloud operations—it helps you understand not just how much you're spending, but also where that spend is going and which teams are responsible. When running a Machine Learning capability with multiple consumers across your organisation, it becomes especially challenging to attribute compute costs to the teams building and deploying models. With the extensive compute use in Machine Learning, these costs can add up quickly. In this article, we’ll explore how tools like Kubecost can help bring visibility and accountability to ML workloads. Tracking costs in Azure can mostly be done through Azure Cost Management, however when we are running these ML models as endpoints and deployments in a Kubernetes cluster, things can get a bit trickier. Azure Cost Manag…  ( 41 min )
    Allocating Azure ML Costs with Kubecost
    Cost tracking is a critical aspect of cloud operations—it helps you understand not just how much you're spending, but also where that spend is going and which teams are responsible. When running a Machine Learning capability with multiple consumers across your organisation, it becomes especially challenging to attribute compute costs to the teams building and deploying models. With the extensive compute use in Machine Learning, these costs can add up quickly. In this article, we’ll explore how tools like Kubecost can help bring visibility and accountability to ML workloads. Tracking costs in Azure can mostly be done through Azure Cost Management, however when we are running these ML models as endpoints and deployments in a Kubernetes cluster, things can get a bit trickier. Azure Cost Manag…
    Announcing Native Azure Functions Support in Azure Container Apps
    A New Way to Host Functions on ACA With the new native hosting model, Azure Functions are now fully integrated into ACA. This means you can deploy and run your functions directly on ACA, taking full advantage of the robust app platform.  Create via Portal: Option to optimize for Azure function If you are using CLI, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting “kind=functionapp” property on the Container App resource. Create via CLI: Set “kind=functionapp” property Please note, in the new native hosting model, Azure Functions extensions will continue to work as before. Auto-scaling will remain available. Deployments are supported through ARM templates, Bicep, Azure CLI, and the Azure portal. Monitoring using Applicat…  ( 27 min )
    Announcing Native Azure Functions Support in Azure Container Apps
    A New Way to Host Functions on ACA With the new native hosting model, Azure Functions are now fully integrated into ACA. This means you can deploy and run your functions directly on ACA, taking full advantage of the robust app platform.  Create via Portal: Option to optimize for Azure function If you are using CLI, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting “kind=functionapp” property on the Container App resource. Create via CLI: Set “kind=functionapp” property Please note, in the new native hosting model, Azure Functions extensions will continue to work as before. Auto-scaling will remain available. Deployments are supported through ARM templates, Bicep, Azure CLI, and the Azure portal. Monitoring using Applicat…
  • Open

    Protect AI apps with Microsoft Defender
    Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats.  Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry.  Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment…  ( 59 min )
    Protect AI apps with Microsoft Defender
    Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats.  Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry.  Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment…
    How Microsoft 365 Backup works and how to set it up
    Protect your Microsoft 365 data and stay in control with Microsoft 365 Backup — whether managing email, documents, or sites across Exchange, OneDrive, and SharePoint. Define exactly what you want to back up and restore precisely what you need to with speeds reaching 2TB per hour at scale. With flexible policies, dynamic rules, and recovery points up to 365 days back, you can stay resilient and ready.  In this introduction, I'll show you how to minimize disruption and keep your organization moving forward even in the event of a disaster with Microsoft 365 Backup.  Fine-tune what gets backed up.  Back up by user, site, group, or file type — to meet your exact needs. Get started with Microsoft 365 Backup.  Restore data in-place or to a new location.  Compare versions before committing. Tak…  ( 45 min )
    How Microsoft 365 Backup works and how to set it up
    Protect your Microsoft 365 data and stay in control with Microsoft 365 Backup — whether managing email, documents, or sites across Exchange, OneDrive, and SharePoint. Define exactly what you want to back up and restore precisely what you need to with speeds reaching 2TB per hour at scale. With flexible policies, dynamic rules, and recovery points up to 365 days back, you can stay resilient and ready.  In this introduction, I'll show you how to minimize disruption and keep your organization moving forward even in the event of a disaster with Microsoft 365 Backup.  Fine-tune what gets backed up.  Back up by user, site, group, or file type — to meet your exact needs. Get started with Microsoft 365 Backup.  Restore data in-place or to a new location.  Compare versions before committing. Tak…
  • Open

    Maximum Allowed Cores exceeded for the Managed Environment
    This post will discuss this message that may be seen on Azure Container Apps.  ( 8 min )
  • Open

    RAG Virtual Assistant - Built with Microsoft Fabric and Azure OpenAI
    Introduction In Kenya's evolving higher education landscape, access to accurate information about funding opportunities remains a critical challenge for students and their families. When the New Funding Model (NFM) was introduced in May 2023, many applicants found themselves navigating unfamiliar processes with limited guidance. This knowledge gap inspired our team to develop a solution that would bridge this divide using cutting-edge AI technology. Our project, RAG-Powered Virtual Assistant for the Higher Education Fund (HEF), emerged as the overall winner at the Microsoft Data + AI Hack Kenya 2025. By leveraging Microsoft Fabric's powerful Eventhouse capabilities alongside Azure OpenAI services, we created an intelligent assistant that provides instant, accurate responses to queries abou…  ( 40 min )
    RAG Virtual Assistant - Built with Microsoft Fabric and Azure OpenAI
    Introduction In Kenya's evolving higher education landscape, access to accurate information about funding opportunities remains a critical challenge for students and their families. When the New Funding Model (NFM) was introduced in May 2023, many applicants found themselves navigating unfamiliar processes with limited guidance. This knowledge gap inspired our team to develop a solution that would bridge this divide using cutting-edge AI technology. Our project, RAG-Powered Virtual Assistant for the Higher Education Fund (HEF), emerged as the overall winner at the Microsoft Data + AI Hack Kenya 2025. By leveraging Microsoft Fabric's powerful Eventhouse capabilities alongside Azure OpenAI services, we created an intelligent assistant that provides instant, accurate responses to queries abou…
    Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
    Introduction Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice. By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in. Why Azure AI Speech? We use Azure AI Speech service in Open Web UI to enable voice interactions directly within web applications. This …  ( 27 min )
    Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
    Introduction Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice. By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in. Why Azure AI Speech? We use Azure AI Speech service in Open Web UI to enable voice interactions directly within web applications. This …
  • Open

    Semantic Kernel: Package previews, Graduations & Deprecations
    Semantic Kernel: Package Previews, Graduations & Deprecations We are excited to share a summary of recent updates and continuous clean-up efforts across the Semantic Kernel .NET codebase. These changes focus on improving maintainability, aligning with the latest APIs, and ensuring a consistent experience for users. Below you’ll find details on package graduations, deprecations, and a […] The post Semantic Kernel: Package previews, Graduations & Deprecations appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Smart Mutations in Microsoft Fabric API for GraphQL with Stored Procedures
    Overview Microsoft Fabric API for GraphQL makes it easy to query and mutate data from a Fabric- SQL database and other Fabric data sources such as Data Warehouse and Lakehouse, with strongly typed schemas and a rich query language allowing developers to create an intuitive API without writing custom server code. While you can’t customize … Continue reading “Smart Mutations in Microsoft Fabric API for GraphQL with Stored Procedures”  ( 7 min )
    Updates to Fabric Copilot Capacity
    Fabric Copilot Capacities are making changes to be more streamlined and easier to use.  ( 6 min )
    Improving productivity in Fabric Notebooks with Inline Code Completion
    We are excited to introduce Copilot Inline Code Completion, an AI-powered feature that helps data scientists and engineers write high-quality Python code faster and with greater ease. Inspired by GitHub Copilot, this feature offers intelligent code suggestions as you type, with no commands needed. By understanding the context of your notebook, Copilot Inline Code Completion … Continue reading “Improving productivity in Fabric Notebooks with Inline Code Completion”  ( 6 min )
  • Open

    Learn How to Build Smarter AI Agents with Microsoft’s MCP Resources Hub
    If you've been curious about how to build your own AI agents that can talk to APIs, connect with tools like databases, or even follow documentation you're in the right place. Microsoft has created something called MCP, which stands for Model‑Context‑Protocol. And to help you learn it step by step, they’ve made an amazing MCP Resources Hub on GitHub. In this blog, I’ll Walk you through what MCP is, why it matters, and how to use this hub to get started, even if you're new to AI development. What is MCP (Model‑Context‑Protocol)? Think of MCP like a communication bridge between your AI model and the outside world. Normally, when we chat with AI (like ChatGPT), it only knows what’s in its training data. But with MCP, you can give your AI real-time context from: APIs Documents Databases Websit…  ( 29 min )
    Learn How to Build Smarter AI Agents with Microsoft’s MCP Resources Hub
    If you've been curious about how to build your own AI agents that can talk to APIs, connect with tools like databases, or even follow documentation you're in the right place. Microsoft has created something called MCP, which stands for Model‑Context‑Protocol. And to help you learn it step by step, they’ve made an amazing MCP Resources Hub on GitHub. In this blog, I’ll Walk you through what MCP is, why it matters, and how to use this hub to get started, even if you're new to AI development. What is MCP (Model‑Context‑Protocol)? Think of MCP like a communication bridge between your AI model and the outside world. Normally, when we chat with AI (like ChatGPT), it only knows what’s in its training data. But with MCP, you can give your AI real-time context from: APIs Documents Databases Websit…

  • Open

    How to Choose the Right Hosting Plan – WordPress on App Service
    Choosing the right hosting plan for your WordPress site on Azure App Service can feel overwhelming—but it doesn’t have to be. Whether you're just exploring WordPress or launching a high-traffic production site, we’ve created four tailored hosting plans to help you get started quickly and confidently. Let’s walk through how to pick the right plan for your needs. Which Hosting Plan Should You Choose? We’ve simplified the decision-making process with a clear recommendation based on your use case: Use CaseRecommended Plan Hobby or exploratory siteFree or Basic Small production websiteStandard High-load production websitePremium   💡 Important: Only the Premium plan supports High Availability (HA). This is the only setting that cannot be changed after deployment. If HA is a requirement,…  ( 25 min )
    How to Choose the Right Hosting Plan – WordPress on App Service
    Choosing the right hosting plan for your WordPress site on Azure App Service can feel overwhelming—but it doesn’t have to be. Whether you're just exploring WordPress or launching a high-traffic production site, we’ve created four tailored hosting plans to help you get started quickly and confidently. Let’s walk through how to pick the right plan for your needs. Which Hosting Plan Should You Choose? We’ve simplified the decision-making process with a clear recommendation based on your use case: Use CaseRecommended Plan Hobby or exploratory siteFree or Basic Small production websiteStandard High-load production websitePremium   💡 Important: Only the Premium plan supports High Availability (HA). This is the only setting that cannot be changed after deployment. If HA is a requirement,…
  • Open

    Orchestrate your Databricks Jobs with Fabric Data pipelines
    We’re excited to announce that you can now orchestrate Azure Databricks Jobs from your Microsoft Fabric data pipelines! Databrick Jobs allow you to schedule and orchestrate a task or multiple tasks in a workflow in your Databricks workspace. Since any operation in Databricks can be a task, this means you can now run anything in … Continue reading “Orchestrate your Databricks Jobs with Fabric Data pipelines”  ( 6 min )
  • Open

    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series
    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series   Is your team or open-source project drowning in the endless cycle of manually translating documentation? Every update to your source content triggers a wave of tedious, error-prone work across multiple languages, slowing down knowledge sharing and hindering your global impact.   This challenge became particularly clear within large-scale Microsoft educational projects like the "For Beginners" series, where manual translation simply couldn't keep pace. A scalable, automated solution was needed to ensure valuable technical knowledge reaches learners and developers worldwide, breaking down language barriers that limit participation and slow innovation.   Co-op Translator, a Microsoft Azure open-sour…  ( 29 min )
    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series
    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series   Is your team or open-source project drowning in the endless cycle of manually translating documentation? Every update to your source content triggers a wave of tedious, error-prone work across multiple languages, slowing down knowledge sharing and hindering your global impact.   This challenge became particularly clear within large-scale Microsoft educational projects like the "For Beginners" series, where manual translation simply couldn't keep pace. A scalable, automated solution was needed to ensure valuable technical knowledge reaches learners and developers worldwide, breaking down language barriers that limit participation and slow innovation.   Co-op Translator, a Microsoft Azure open-sour…
  • Open

    Mastering Query Fields in Azure AI Document Intelligence with C#
    Introduction Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.   Use case scenario During the cause of Azure AI Document Intelligence software engineering code tasks or review, many developers encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`: The error might be similar to Inner Error: The parameter urlSource or base64Source is required. This is a challenge referred to as parameter errors and SDK changes. Most problematic code are looks like below…  ( 23 min )
    Mastering Query Fields in Azure AI Document Intelligence with C#
    Introduction Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.   Use case scenario During the cause of Azure AI Document Intelligence software engineering code tasks or review, many developers encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`: The error might be similar to Inner Error: The parameter urlSource or base64Source is required. This is a challenge referred to as parameter errors and SDK changes. Most problematic code are looks like below…
  • Open

    Mastering Query Fields in Azure AI Document Intelligence with C#
    Introduction Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.   Use case scenario During the cause of Azure AI Document Intelligence software engineering code tasks or review, many developers encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`: The error might be similar to Inner Error: The parameter urlSource or base64Source is required. This is a challenge referred to as parameter errors and SDK changes. Most problematic code are looks like below…  ( 23 min )
    Mastering Query Fields in Azure AI Document Intelligence with C#
    Introduction Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.   Use case scenario During the cause of Azure AI Document Intelligence software engineering code tasks or review, many developers encountered an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`: The error might be similar to Inner Error: The parameter urlSource or base64Source is required. This is a challenge referred to as parameter errors and SDK changes. Most problematic code are looks like below…
  • Open

    Azure NetApp Files solutions for three EDA Cloud-Compute scenarios
    Table of Contents Abstract Introduction EDA Cloud-Compute scenarios Scenario 1: Burst to Azure from on-premises Data Center Scenario 2: “24x7 Single Set Workload” Scenario 3: "Data Center Supplement" Summary Abstract Azure NetApp Files (ANF) is transforming Electronic Design Automation (EDA) workflows in the cloud by delivering unparalleled performance, scalability, and efficiency. This blog explores how ANF addresses critical challenges in three cloud compute scenarios: Cloud Bursting, 24x7 All-in-Cloud, and Cloud-based Data Center Supplement. These solutions are tailored to optimize EDA processes, which rely on high-performance NFS file systems to design advanced semiconductor products. With the ability to support clusters exceeding 50,000 cores, ANF enhances productivity, shortens desi…  ( 45 min )
    Azure NetApp Files solutions for three EDA Cloud-Compute scenarios
    Table of Contents Abstract Introduction EDA Cloud-Compute scenarios Scenario 1: Burst to Azure from on-premises Data Center Scenario 2: “24x7 Single Set Workload” Scenario 3: "Data Center Supplement" Summary Abstract Azure NetApp Files (ANF) is transforming Electronic Design Automation (EDA) workflows in the cloud by delivering unparalleled performance, scalability, and efficiency. This blog explores how ANF addresses critical challenges in three cloud compute scenarios: Cloud Bursting, 24x7 All-in-Cloud, and Cloud-based Data Center Supplement. These solutions are tailored to optimize EDA processes, which rely on high-performance NFS file systems to design advanced semiconductor products. With the ability to support clusters exceeding 50,000 cores, ANF enhances productivity, shortens desi…
    Natural Language to SQL Semantic Kernel Multi-Agent System
    In today’s data-driven landscape, the ability to access and interpret information in a human-readable format is increasingly valuable. Being able to interact with and query your database in natural language is the game changer. In this post, we’ll walk through how to build a SQL agent using the Semantic Kernel framework to interact with a PostgreSQL database containing DVD rental data. I’ll explain how to define Semantic Kernel functions through plugins and how to incorporate them into agents. We’ll also look at how to set up these agents and guide them with well-structured instructions. Our example uses a PostgreSQL database that stores detailed information about DVD rentals. This sample database contains 15 tables and can be found here: PostgreSQL Sample Database. With a natural language…  ( 41 min )
    Natural Language to SQL Semantic Kernel Multi-Agent System
    In today’s data-driven landscape, the ability to access and interpret information in a human-readable format is increasingly valuable. Being able to interact with and query your database in natural language is the game changer. In this post, we’ll walk through how to build a SQL agent using the Semantic Kernel framework to interact with a PostgreSQL database containing DVD rental data. I’ll explain how to define Semantic Kernel functions through plugins and how to incorporate them into agents. We’ll also look at how to set up these agents and guide them with well-structured instructions. Our example uses a PostgreSQL database that stores detailed information about DVD rentals. This sample database contains 15 tables and can be found here: PostgreSQL Sample Database. With a natural language…
  • Open

    High-volume batch transaction processing
    The architecture uses AKS to implement compute clusters of the applications that process high-volume batches of transactions. The applications receive the transactions in messages from Service Bus topics or queues. The topics and queues can be at Azure datacenters in different geographic regions, and multiple AKS clusters can read input from them.   Architecture       Workflow The numbered circles in the diagram correspond to the numbered steps in the following list. The architecture uses Service Bus topics and queues to organize the batch processing input and to pass it downstream for processing. Azure Load Balancer, a Layer 4 (TCP, UDP) load balancer, distributes incoming traffic among healthy instances of services defined in a load-balanced set. Load balancing and management of connec…  ( 59 min )
    High-volume batch transaction processing
    The architecture uses AKS to implement compute clusters of the applications that process high-volume batches of transactions. The applications receive the transactions in messages from Service Bus topics or queues. The topics and queues can be at Azure datacenters in different geographic regions, and multiple AKS clusters can read input from them.   Architecture       Workflow The numbered circles in the diagram correspond to the numbered steps in the following list. The architecture uses Service Bus topics and queues to organize the batch processing input and to pass it downstream for processing. Azure Load Balancer, a Layer 4 (TCP, UDP) load balancer, distributes incoming traffic among healthy instances of services defined in a load-balanced set. Load balancing and management of connec…
  • Open

    Introducing Azure AI Content Understanding for Beginners
    Enterprises today face several challenges in processing and extracting insights from multimodal data, like managing diverse data formats, ensuring data quality, and streamlining workflows efficiently. Ensuring the accuracy and usability of extracted insights often requires advanced AI techniques, while inefficiencies in managing large data volumes increase costs and delay results. Azure AI Content Understanding addresses these pain points by offering a unified solution to transform unstructured data into actionable insights, improve data accuracy with schema extraction and confidence scoring, and integrate seamlessly with Azure’s ecosystem to enhance efficiency and reduce costs. Content Understanding makes it easy to extract custom task-specific output without advanced GenAI skills. It ena…  ( 25 min )
    Introducing Azure AI Content Understanding for Beginners
    Enterprises today face several challenges in processing and extracting insights from multimodal data, like managing diverse data formats, ensuring data quality, and streamlining workflows efficiently. Ensuring the accuracy and usability of extracted insights often requires advanced AI techniques, while inefficiencies in managing large data volumes increase costs and delay results. Azure AI Content Understanding addresses these pain points by offering a unified solution to transform unstructured data into actionable insights, improve data accuracy with schema extraction and confidence scoring, and integrate seamlessly with Azure’s ecosystem to enhance efficiency and reduce costs. Content Understanding makes it easy to extract custom task-specific output without advanced GenAI skills. It ena…

  • Open

    Query vs. Mutation in API for GraphQL – Understanding the difference
    GraphQL has revolutionized the way developers interact with APIs by offering a more flexible and efficient alternative to REST. Before getting started , Create an API for GraphQL in Fabric and add data to use GraphQL in Fabric. At the heart of GraphQL are two core operations: queries and mutations. While they may look similar on the surface, they serve very different purposes. Let’s explain it in detail.  ( 7 min )
    Evaluate your Fabric Data Agents programmatically with the Python SDK (Preview)
    We’re excited to announce that native support for evaluating Data Agents through the Fabric SDK is now available in Preview. You can now run structured evaluations of your agent’s responses using Python — directly from notebooks or your own automation pipelines. Whether you’re validating accuracy before deploying to production, tuning prompts for better performance, or … Continue reading “Evaluate your Fabric Data Agents programmatically with the Python SDK (Preview)”  ( 7 min )
  • Open

    Part 2 - How to Create a VS Code Extension for API Health Checks?
    Introduction Have you ever thought about to build a Visual Studio Code extension as your capstone project? That’s what I did: Part 1 - Develop a VS Code Extension for Your Capstone Project. I have created a Visual Studio Code Extension, API Guardian, that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing issues caused by breaking or non-breaking changes and to alleviate the difficulties in performing maintenance due to unclear or outdated documentation. Let's build your very own extension! Now, let’s do it step by step.   Step 1 – Install the NPM package for generator-code Ensure Node.js is installed before proceeding. Verify by running node -v, which will display the installed versi…  ( 27 min )
    Part 2 - How to Create a VS Code Extension for API Health Checks?
    Introduction Have you ever thought about to build a Visual Studio Code extension as your capstone project? That’s what I did: Part 1 - Develop a VS Code Extension for Your Capstone Project. I have created a Visual Studio Code Extension, API Guardian, that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing issues caused by breaking or non-breaking changes and to alleviate the difficulties in performing maintenance due to unclear or outdated documentation. Let's build your very own extension! Now, let’s do it step by step.   Step 1 – Install the NPM package for generator-code Ensure Node.js is installed before proceeding. Verify by running node -v, which will display the installed versi…
  • Open

    Create Your First AI Agent with JavaScript and Azure AI Agent Service!
    Introduction: The Era of AI Agents in JavaScript During the AI Agents Hackathon, one of the most anticipated sessions was presented by Wassim Chegham, Senior AI Developer Advocate for JavaScript at Microsoft. The topic? "How to Create Your First AI Agent with JavaScript and Azure AI Agent Service" — a powerful tool designed for modern developers looking to build AI-first applications with security, scalability, and productivity in mind. In this article, we explore the main highlights of the session, focusing on how you can create your own AI agent using JavaScript and Azure AI Agent Service. The video’s goal is clear: walk through the step-by-step process of creating AI agents using JavaScript and TypeScript with Azure AI Foundry, and explain all the key concepts behind this new developmen…  ( 35 min )
    Create Your First AI Agent with JavaScript and Azure AI Agent Service!
    Introduction: The Era of AI Agents in JavaScript During the AI Agents Hackathon, one of the most anticipated sessions was presented by Wassim Chegham, Senior AI Developer Advocate for JavaScript at Microsoft. The topic? "How to Create Your First AI Agent with JavaScript and Azure AI Agent Service" — a powerful tool designed for modern developers looking to build AI-first applications with security, scalability, and productivity in mind. In this article, we explore the main highlights of the session, focusing on how you can create your own AI agent using JavaScript and Azure AI Agent Service. The video’s goal is clear: walk through the step-by-step process of creating AI agents using JavaScript and TypeScript with Azure AI Foundry, and explain all the key concepts behind this new developmen…
  • Open

    Azure Migrate - Build 2025 updates
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Cost-effective and sustainable innovation. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and gain access to innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help you with insights and actions that lead to better outcom…  ( 27 min )
    Azure Migrate - Build 2025 updates
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Cost-effective and sustainable innovation. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and gain access to innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help you with insights and actions that lead to better outcom…
  • Open

    Innovation in Action: Azure Red Hat OpenShift at Build and Red Hat Summit 2025
    The strategic partnership between Microsoft Azure and Red Hat continues to flourish in 2025, with both companies showcasing their joint innovations at two major tech events: Microsoft Build (May 19-22 in Seattle) and Red Hat Summit 2025. This collaboration represents one of tech's most impactful partnerships, combining Microsoft's cloud expertise with Red Hat's open-source leadership to create solutions that drive digital transformation across industries. Microsoft Build 2025: AI Innovation Meets Open Source Microsoft Build 2025 will take place from May 19-22 at the Seattle Convention Center, bringing together developers, creators, and AI innovators from around the world. This year's event has been extended to four days, offering more opportunities for learning and networking. A key highl…  ( 34 min )
    Innovation in Action: Azure Red Hat OpenShift at Build and Red Hat Summit 2025
    The strategic partnership between Microsoft Azure and Red Hat continues to flourish in 2025, with both companies showcasing their joint innovations at two major tech events: Microsoft Build (May 19-22 in Seattle) and Red Hat Summit 2025. This collaboration represents one of tech's most impactful partnerships, combining Microsoft's cloud expertise with Red Hat's open-source leadership to create solutions that drive digital transformation across industries. Microsoft Build 2025: AI Innovation Meets Open Source Microsoft Build 2025 will take place from May 19-22 at the Seattle Convention Center, bringing together developers, creators, and AI innovators from around the world. This year's event has been extended to four days, offering more opportunities for learning and networking. A key highl…

  • Open

    The State of Coding the Future with Java and AI – May 2025
    Software development is changing fast, and Java developers are right in the middle of it – especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the […] The post The State of Coding the Future with Java and AI – May 2025 appeared first on Microsoft for Java Developers.  ( 44 min )
  • Open

    Accelerate AI on Oracle Databases with Open Mirroring, Fabric Data Agent, and Azure AI Foundry
    Additional contributors: Venkat Ramakrishnan, Amir Jafari, Wilson Lee, Maraki Ketema As organizations accelerate their hybrid cloud adoption strategies, Oracle Database@Azure has emerged as a critical platform for running Oracle database workloads using Exadata, Autonomous, Exadata Exascale and Base databases for the enterprise. However, deriving real-time, AI-powered insights in hybrid settings has long been challenging due … Continue reading “Accelerate AI on Oracle Databases with Open Mirroring, Fabric Data Agent, and Azure AI Foundry”  ( 9 min )
    Announcing Copilot for SQL Analytics Endpoint in Microsoft Fabric (Preview)
    We’re excited to introduce Copilot for SQL Analytics Endpoint, now in preview – a transformative, AI-powered assistant built to change how you query, explore, and analyze data within Microsoft Fabric’s SQL experience. With Copilot integrated directly into the SQL Analytics Endpoint, users can now express intent in natural language and instantly receive ready-to-run T-SQL. Whether … Continue reading “Announcing Copilot for SQL Analytics Endpoint in Microsoft Fabric (Preview)”  ( 7 min )
  • Open

    Build your code-first agent with Azure AI Foundry: Self-Guided Workshop
    Build your first Agent App Agentic AI is changing how we build intelligent apps - enabling software to reason, plan, and act for us. Learning to build AI agents is quickly becoming a must-have skill for anyone working with AI. Self-Guided Workshop Try our self-guided “Build your code-first agent with Azure AI Foundry” workshop to get hands-on with Azure AI Agent Service. You’ll learn to build, deploy, and interact with agents using Azure’s powerful tools. What is Azure AI Agent Service? Azure AI Agent Service lets you create, orchestrate, and manage AI-powered agents that can handle complex tasks, integrate with tools, and deploy securely. What Will You Learn? The basics of agentic AI apps and how they differ from traditional apps How to set up your Azure environment How to build your first agent How to test and interact with your agent Advanced features like tool integration and memory management Who Is This For? Anyone interested in building intelligent, goal-oriented agents — developers, data scientists, students, and AI enthusiasts. No prior experience with Azure AI Agent Service required. How Does the Workshop Work? Tip: Select the self-guided tab in Getting Started for the right instructions. Step-by-step guides at your own pace Code samples and templates Real-world scenarios Get Started See what agentic AI can do for you with the self-guided “Build your code-first agent with Azure AI Foundry” workshop. Build practical skills in one of AI’s most exciting areas. Try the workshop and start building agents that make a difference! Additional Resources Azure AI Foundry Documentation Azure AI Agent Service Overview Questions or feedback Questions or feedback? Visit the issues page. Happy learning and building with Azure AI Agent Service!  ( 21 min )
    Build your code-first agent with Azure AI Foundry: Self-Guided Workshop
    Build your first Agent App Agentic AI is changing how we build intelligent apps - enabling software to reason, plan, and act for us. Learning to build AI agents is quickly becoming a must-have skill for anyone working with AI. Self-Guided Workshop Try our self-guided “Build your code-first agent with Azure AI Foundry” workshop to get hands-on with Azure AI Agent Service. You’ll learn to build, deploy, and interact with agents using Azure’s powerful tools. What is Azure AI Agent Service? Azure AI Agent Service lets you create, orchestrate, and manage AI-powered agents that can handle complex tasks, integrate with tools, and deploy securely. What Will You Learn? The basics of agentic AI apps and how they differ from traditional apps How to set up your Azure environment How to build your first agent How to test and interact with your agent Advanced features like tool integration and memory management Who Is This For? Anyone interested in building intelligent, goal-oriented agents — developers, data scientists, students, and AI enthusiasts. No prior experience with Azure AI Agent Service required. How Does the Workshop Work? Tip: Select the self-guided tab in Getting Started for the right instructions. Step-by-step guides at your own pace Code samples and templates Real-world scenarios Get Started See what agentic AI can do for you with the self-guided “Build your code-first agent with Azure AI Foundry” workshop. Build practical skills in one of AI’s most exciting areas. Try the workshop and start building agents that make a difference! Additional Resources Azure AI Foundry Documentation Azure AI Agent Service Overview Questions or feedback Questions or feedback? Visit the issues page. Happy learning and building with Azure AI Agent Service!
  • Open

    Dynamic Tool Discovery: Azure AI Agent Service + MCP Server Integration
    At the time of this writing, Azure AI Agent Service does not offer turnkey integration with Model Context Protocol (MCP) Servers. Discussed here is a solution that helps to leverage MCP's powerful capabilities while working within the Azure ecosystem. The integration approach piggybacks on the Function integration capability in the Azure AI Agent Service. By utilizing an MCP Client to discover and register tools from an MCP Server as Functions with the Agent Service, we create a seamless integration layer between the two systems. Built using the Microsoft Bot Framework, this application can be published as an AI Assistant across numerous channels like Microsoft Teams, Slack, and others. For development and testing purposes, we've used the Bot Framework Emulator to run and validate the appl…  ( 29 min )
    Dynamic Tool Discovery: Azure AI Agent Service + MCP Server Integration
    At the time of this writing, Azure AI Agent Service does not offer turnkey integration with Model Context Protocol (MCP) Servers. Discussed here is a solution that helps to leverage MCP's powerful capabilities while working within the Azure ecosystem. The integration approach piggybacks on the Function integration capability in the Azure AI Agent Service. By utilizing an MCP Client to discover and register tools from an MCP Server as Functions with the Agent Service, we create a seamless integration layer between the two systems. Built using the Microsoft Bot Framework, this application can be published as an AI Assistant across numerous channels like Microsoft Teams, Slack, and others. For development and testing purposes, we've used the Bot Framework Emulator to run and validate the appl…

  • Open

    From Complexity to Simplicity: The ASC and Azure AI Partnership
    ASC Technologies, a leader in compliance recording and AI-driven data analytics, provides cutting-edge software solutions for capturing and analyzing communication channels. Their innovative technology empowers more than 500 customers worldwide to record communications legally while extracting valuable insights and helping to prevent fraudulent activities. Many of their customers operate in heavily regulated industries where compliance recording is mandatory. These organizations rely on the ability to consolidate and analyze information shared across multiple channels including voice recordings, chat logs, speaker recognition, video analysis, and document and screen activity. As ASC’s customer base expanded, and their clients accumulated millions of calls and vast amounts of conversation m…  ( 25 min )
    From Complexity to Simplicity: The ASC and Azure AI Partnership
    ASC Technologies, a leader in compliance recording and AI-driven data analytics, provides cutting-edge software solutions for capturing and analyzing communication channels. Their innovative technology empowers more than 500 customers worldwide to record communications legally while extracting valuable insights and helping to prevent fraudulent activities. Many of their customers operate in heavily regulated industries where compliance recording is mandatory. These organizations rely on the ability to consolidate and analyze information shared across multiple channels including voice recordings, chat logs, speaker recognition, video analysis, and document and screen activity. As ASC’s customer base expanded, and their clients accumulated millions of calls and vast amounts of conversation m…

  • Open

    Seamlessly Integrating Azure Document Intelligence with Azure API Management (APIM)
    In today’s data-driven world, organizations are increasingly turning to AI for document understanding. Whether it's extracting invoices, contracts, ID cards, or complex forms, Azure Document Intelligence (formerly known as Form Recognizer) provides a robust, AI-powered solution for automated document processing. But what happens when you want to scale, secure, and load balance your document intelligence backend for high availability and enterprise-grade integration? Enter Azure API Management (APIM) — your gateway to efficient, scalable API orchestration. In this blog, we’ll explore how to integrate Azure Document Intelligence with APIM using a load-balanced architecture that works seamlessly with the Document Intelligence SDK — without rewriting your application logic. Azure Doc Intellige…  ( 36 min )
    Seamlessly Integrating Azure Document Intelligence with Azure API Management (APIM)
    In today’s data-driven world, organizations are increasingly turning to AI for document understanding. Whether it's extracting invoices, contracts, ID cards, or complex forms, Azure Document Intelligence (formerly known as Form Recognizer) provides a robust, AI-powered solution for automated document processing. But what happens when you want to scale, secure, and load balance your document intelligence backend for high availability and enterprise-grade integration? Enter Azure API Management (APIM) — your gateway to efficient, scalable API orchestration. In this blog, we’ll explore how to integrate Azure Document Intelligence with APIM using a load-balanced architecture that works seamlessly with the Document Intelligence SDK — without rewriting your application logic. Azure Doc Intellige…
  • Open

    Java monitoring over SSH
    This post will cover how to remotely connect to the JVM when running on Azure App Service with Java.  ( 3 min )
  • Open

    Azure Container Apps with Application Gateway and custom domain: hostname mismatch
    Introduction Azure Container Apps offers a robust platform for deploying microservices and containerized applications. When integrating with Azure Application Gateway, an internal container app environment can be accessed via public internet. Users often bind custom domains to enhance accessibility and user experience. A common challenge arises when we bind the custom domain on Application Gateway and try to access container app. Container app is acting as a middleware service and needs to forward request to another API server or finish authentication process, users may encountered HTTP 403 forbidden error which is caused by hostname/redirect URL mismatch. What's more, you definitely don't want to expose your backend service default domain. This blog explores these challenges and offers pr…  ( 22 min )
    Azure Container Apps with Application Gateway and custom domain: hostname mismatch
    Introduction Azure Container Apps offers a robust platform for deploying microservices and containerized applications. When integrating with Azure Application Gateway, an internal container app environment can be accessed via public internet. Users often bind custom domains to enhance accessibility and user experience. A common challenge arises when we bind the custom domain on Application Gateway and try to access container app. Container app is acting as a middleware service and needs to forward request to another API server or finish authentication process, users may encountered HTTP 403 forbidden error which is caused by hostname/redirect URL mismatch. What's more, you definitely don't want to expose your backend service default domain. This blog explores these challenges and offers pr…
  • Open

    Python in Visual Studio Code – May 2025 Release
    The May 2025 release includes updates in the Python Environments extension, a new color picker added by Pylance, branch coverage support, and more! The post Python in Visual Studio Code – May 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )
  • Open

    Exchange Web Services code analyzer and usage report
    We are less than 18 months away from the retirement of Exchange Web Services. Start planning your migration from EWS to Microsoft Graph. The post Exchange Web Services code analyzer and usage report appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    JWT it like it’s hot: A practical guide for Kubernetes Structured Authentication
    With this practical guide, you now know how to secure your Kubernetes cluster using the structured-authentication feature, offering flexible integration with any JWT-compliant token provider. The post JWT it like it’s hot: A practical guide for Kubernetes Structured Authentication appeared first on Microsoft Open Source Blog.  ( 16 min )
  • Open

    Shortcut cache and on-prem gateway support (Generally Available)
    Shortcut cache and on-prem gateway support are now generally available (GA) Shortcut cache Shortcuts in OneLake allow you to quickly and easily source data from external cloud providers and use it across all Fabric workloads such as Power BI reports, SQL, Spark and Kusto.  However, each time these workloads read data from cross-cloud sources, the … Continue reading “Shortcut cache and on-prem gateway support (Generally Available)”  ( 6 min )
    Manage connections for shortcuts
    Shortcuts in OneLake provide a quick and easy way to make your data available in Fabric without having to copy it.  Simply create a new shortcut and your data is instantly available to all Fabric workloads. When you first create a new shortcut, you also set up a shared cloud connection. These are the same connections … Continue reading “Manage connections for shortcuts”  ( 6 min )
  • Open

    Build faster with this simple AZD template for FastAPI on Azure App Service
    I’ve made this Simple FastAPI AZD template for Azure App Service to help you get to the fun part, and to cut out all the extra infrastructure that you don’t necessarily want or need. This FastAPI template for Azure App Service gives you all the infrastructure as code to deploy a basic “Hello World” FastAPI web app that you can spin up using AZD (Azure Developer SDK) with just three commands. How to do it To get started, you just need to install AZD, a command-line tool you can use right there in VS Code. Then you’re ready to grab the template and deploy. Run these commands and follow the prompts as you go.  Grab our new simple FastAPI template for Azure App Service: azd init --template Azure-Samples/azd-simple-fastapi-appservice It will ask you to give your environment a name. This will b…  ( 33 min )
    Build faster with this simple AZD template for FastAPI on Azure App Service
    I’ve made this Simple FastAPI AZD template for Azure App Service to help you get to the fun part, and to cut out all the extra infrastructure that you don’t necessarily want or need. This FastAPI template for Azure App Service gives you all the infrastructure as code to deploy a basic “Hello World” FastAPI web app that you can spin up using AZD (Azure Developer SDK) with just three commands. How to do it To get started, you just need to install AZD, a command-line tool you can use right there in VS Code. Then you’re ready to grab the template and deploy. Run these commands and follow the prompts as you go.  Grab our new simple FastAPI template for Azure App Service: azd init --template Azure-Samples/azd-simple-fastapi-appservice It will ask you to give your environment a name. This will b…
  • Open

    Unlocking the Power of Model Distillation through Azure AI Foundry
    AI distillation is a powerful technique in machine learning that involves transferring knowledge from a large, complex model (often called the "teacher") to a smaller, more efficient "student" model. The goal is to retain the performance and accuracy of the larger model while drastically reducing computational requirements, making AI systems faster, cheaper, and more deployable especially in real-time or resource-constrained environments. In this post, we'll explore what AI distillation is, why it's gaining traction, and how it's being used to bring the power of advanced AI models to everyday applications. Stored completions in Azure OpenAI's AI Foundry provide a structured way to capture and reuse high-quality model responses, streamlining the model distillation process. By logging curate…  ( 38 min )
    Unlocking the Power of Model Distillation through Azure AI Foundry
    AI distillation is a powerful technique in machine learning that involves transferring knowledge from a large, complex model (often called the "teacher") to a smaller, more efficient "student" model. The goal is to retain the performance and accuracy of the larger model while drastically reducing computational requirements, making AI systems faster, cheaper, and more deployable especially in real-time or resource-constrained environments. In this post, we'll explore what AI distillation is, why it's gaining traction, and how it's being used to bring the power of advanced AI models to everyday applications. Stored completions in Azure OpenAI's AI Foundry provide a structured way to capture and reuse high-quality model responses, streamlining the model distillation process. By logging curate…
    Navigating AI Solutions: Microsoft Copilot Studio vs. Azure AI Foundry
    Are you looking to build custom Copilots but unsure about the differences between Copilot Studio and Azure AI Foundry? As a Microsoft Technical Trainer with over a decade of experience, I've spent the last 18 months focusing on Azure AI Solutions and Copilot. Through numerous workshops, I've seen firsthand how customers benefit from AI solutions beyond Microsoft Copilot. Microsoft 365 Copilot Chat offers seamless integration with Generative AI for tasks like document creation, content summarization, and insights from M365 solutions such as Email, OneDrive, SharePoint, and Teams. It ensures compliance with organizational security, governance, and privacy policies, making it ideal for immediate AI assistance without customization. On the other hand, platforms like Copilot Studio and Azure AI…  ( 37 min )
    Navigating AI Solutions: Microsoft Copilot Studio vs. Azure AI Foundry
    Are you looking to build custom Copilots but unsure about the differences between Copilot Studio and Azure AI Foundry? As a Microsoft Technical Trainer with over a decade of experience, I've spent the last 18 months focusing on Azure AI Solutions and Copilot. Through numerous workshops, I've seen firsthand how customers benefit from AI solutions beyond Microsoft Copilot. Microsoft 365 Copilot Chat offers seamless integration with Generative AI for tasks like document creation, content summarization, and insights from M365 solutions such as Email, OneDrive, SharePoint, and Teams. It ensures compliance with organizational security, governance, and privacy policies, making it ideal for immediate AI assistance without customization. On the other hand, platforms like Copilot Studio and Azure AI…
    Azure OpenAI o-series & GPT-4.1 Models Now Available in Azure AI Agent Service
    New Models Available!   We’re excited to announce the preview availability of the following Azure OpenAI Service models for use in the Azure AI Agent Service, starting 5/7:  o1  o3-mini  gpt-4.1  gpt-4.1-mini  gpt-4.1-nano    Azure OpenAI o-Series Models  Azure OpenAI o-series models are designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.    o1: The most capable model in the o1 series, offering enhanced reasoning abilities.  o3 (coming soon): The most capable reasoning model in the o model series, and the first one to offer full tools support for agentic so…  ( 30 min )
    Azure OpenAI o-series & GPT-4.1 Models Now Available in Azure AI Agent Service
    New Models Available!   We’re excited to announce the preview availability of the following Azure OpenAI Service models for use in the Azure AI Agent Service, starting 5/7:  o1  o3-mini  gpt-4.1  gpt-4.1-mini  gpt-4.1-nano    Azure OpenAI o-Series Models  Azure OpenAI o-series models are designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.    o1: The most capable model in the o1 series, offering enhanced reasoning abilities.  o3 (coming soon): The most capable reasoning model in the o model series, and the first one to offer full tools support for agentic so…
  • Open

    Prepare your Office Add-in for the European Accessibility Act (EAA)
    Starting June 28, 2025, the European Accessibility Act (EAA) takes effect, requiring all digital products and services offered to EU customers to meet comprehensive accessibility standards. If you're developing Office Add-ins, this may impact you. This blog explains what you need to know to ensure your Office Add-ins are compliant. The post Prepare your Office Add-in for the European Accessibility Act (EAA) appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    The State of Coding the Future with Java and AI – May 2025
    Software development is changing fast, and Java developers are right in the middle of it - especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the biggest takeaways is this: Java developers do not need to be experts in AI, machine learning, or Python. With tools like the Model Context Protocol (MCP) Java SDK, Spring AI, and LangChain4j, they can start adding smart features to their apps using the skills they already have. Whether it is making recommendations, spotting fraud, supporting natural language search or a world of possibilities, AI can be part of everyday Java development. The report walks through real-world…  ( 102 min )
    The State of Coding the Future with Java and AI – May 2025
    Software development is changing fast, and Java developers are right in the middle of it - especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the biggest takeaways is this: Java developers do not need to be experts in AI, machine learning, or Python. With tools like the Model Context Protocol (MCP) Java SDK, Spring AI, and LangChain4j, they can start adding smart features to their apps using the skills they already have. Whether it is making recommendations, spotting fraud, supporting natural language search or a world of possibilities, AI can be part of everyday Java development. The report walks through real-world…

  • Open

    Enabling broader adoption of XMLA-based tools and scenarios
    Starting on June 9, 2025, all Power BI and Fabric capacity SKUs will support XMLA read/write operations by default. This change is intended to assist customers using XMLA-based tools to create, edit, and maintain semantic models, such as DAX Query View in the web, Live Editing in Power BI Desktop, SQL Server Management Studio (SSMS), … Continue reading “Enabling broader adoption of XMLA-based tools and scenarios”  ( 5 min )
  • Open

    Part 1 - Develop a VS Code Extension for Your Capstone Project
    API Guardian - My Capstone Project As software and APIs evolve, developers encounter significant difficulties in maintaining and updating API endpoints. Breaking changes can lead to system instability, while outdated or unclear documentation makes maintenance less efficient. These challenges are further compounded by the time-consuming nature of updating dependencies and the tendency to prioritize new features over maintenance tasks. The absence of effective tools and processes to tackle these issues reduces overall productivity and developer efficiency. To address this, API Guardian was created as a Visual Studio Code extension that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing …  ( 26 min )
    Part 1 - Develop a VS Code Extension for Your Capstone Project
    API Guardian - My Capstone Project As software and APIs evolve, developers encounter significant difficulties in maintaining and updating API endpoints. Breaking changes can lead to system instability, while outdated or unclear documentation makes maintenance less efficient. These challenges are further compounded by the time-consuming nature of updating dependencies and the tendency to prioritize new features over maintenance tasks. The absence of effective tools and processes to tackle these issues reduces overall productivity and developer efficiency. To address this, API Guardian was created as a Visual Studio Code extension that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing …
  • Open

    Custom Tracing in API Management
    Scenario: In case of encountering error in API management, request tracing is an invaluable feature that serves as a debugger. It allows for tracking the flow of requests as they pass through various policy logic, providing detailed insights into the complete API Management (APIM) processing. Here is a link if you would like to read more on how to enable request tracing in API management.  However, it is the most common way to debug your API, let's assume a real-life scenario where you encounter a sporadic error or unexpected response while processing the live APIM calls and need to drill down the issue. In such cases, attaching a debugger or running request traces can be challenging, especially when the issue is intermittent or requires checking the specific code logic. This often necessi…  ( 29 min )
    Custom Tracing in API Management
    Scenario: In case of encountering error in API management, request tracing is an invaluable feature that serves as a debugger. It allows for tracking the flow of requests as they pass through various policy logic, providing detailed insights into the complete API Management (APIM) processing. Here is a link if you would like to read more on how to enable request tracing in API management.  However, it is the most common way to debug your API, let's assume a real-life scenario where you encounter a sporadic error or unexpected response while processing the live APIM calls and need to drill down the issue. In such cases, attaching a debugger or running request traces can be challenging, especially when the issue is intermittent or requires checking the specific code logic. This often necessi…
  • Open

    Nested App Authentication: Now generally available across Microsoft 365
    Get started with Nested App Authentication, a modern protocol for simplifying authentication for Personal Tab Teams apps that run across Microsoft 365. The post Nested App Authentication: Now generally available across Microsoft 365 appeared first on Microsoft 365 Developer Blog.  ( 23 min )

  • Open

    RC1: Semantic Kernel for Java Agents API
    We’re excited to announce the release candidate of the Semantic Kernel for Java Agents API! This marks a major step forward in bringing the power of intelligent agents to Java developers, enabling them to build rich, contextual, and interactive AI experiences using the Semantic Kernel framework. What Are Agents in Semantic Kernel? Agents are intelligent, autonomous […] The post RC1: Semantic Kernel for Java Agents API appeared first on Semantic Kernel.  ( 22 min )
  • Open

    Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
    In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works.  So, what's the answer for teams seeking simplicity without sacrificing effectiveness? This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process au…  ( 43 min )
    Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
    In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works.  So, what's the answer for teams seeking simplicity without sacrificing effectiveness? This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process au…
  • Open

    Hubs and Workspaces on Azure Machine Learning – General Availability
    We are pleased to announce that hubs and workspaces is now generally available on Azure machine learning allowing users to use hub for team’s collaboration environment for machine learning applications.  Azure Hubs and Workspaces provide a centralized platform capability for Azure Machine Learning. This feature enables developers to innovate faster by creating project workspaces and accessing shared company resources without needing repeated assistance from IT administrators.  Quick Model Building and Experimentation without IT bottleneck  Hubs and Workspaces in Azure Machine Learning provide a centralized solution for managing machine learning resources. Hubs act as a central resource management construct that oversees security, connectivity, computing resources, and team quotas. Once cre…  ( 26 min )
    Hubs and Workspaces on Azure Machine Learning – General Availability
    We are pleased to announce that hubs and workspaces is now generally available on Azure machine learning allowing users to use hub for team’s collaboration environment for machine learning applications.  Azure Hubs and Workspaces provide a centralized platform capability for Azure Machine Learning. This feature enables developers to innovate faster by creating project workspaces and accessing shared company resources without needing repeated assistance from IT administrators.  Quick Model Building and Experimentation without IT bottleneck  Hubs and Workspaces in Azure Machine Learning provide a centralized solution for managing machine learning resources. Hubs act as a central resource management construct that oversees security, connectivity, computing resources, and team quotas. Once cre…
  • Open

    Announcing the updated Teams AI Library and MCP support
    Discover the new and improved Teams AI Library, designed to help developers create even more powerful agents for Microsoft Teams. The post Announcing the updated Teams AI Library and MCP support appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    Help Shape the Future of Log Analytics: Your Feedback Matters
    We’re launching a quick survey to gather your feedback on Azure Monitor Log Analytics. Your input directly impacts our product roadmap and helps us prioritize the features and improvements that matter most to you. The survey takes just a few minutes, and your responses will remain confidential. Take the Survey   New to Log Analytics? Start here: Get Started with Azure Monitor Log Analytics Overview of Log Analytics in Azure Monitor - Azure Monitor | Microsoft Learn For questions or additional feedback, feel free to reach out to Noyablanga@microsoft.com.Thank you for being part of this journey!  ( 18 min )
    Help Shape the Future of Log Analytics: Your Feedback Matters
    We’re launching a quick survey to gather your feedback on Azure Monitor Log Analytics. Your input directly impacts our product roadmap and helps us prioritize the features and improvements that matter most to you. The survey takes just a few minutes, and your responses will remain confidential. Take the Survey   New to Log Analytics? Start here: Get Started with Azure Monitor Log Analytics Overview of Log Analytics in Azure Monitor - Azure Monitor | Microsoft Learn For questions or additional feedback, feel free to reach out to Noyablanga@microsoft.com.Thank you for being part of this journey!
  • Open

    Applications (and revisions) stuck in activating state on Azure Container Apps
    This post refers to issues where you may see revisions “stuck in ‘Activating’ state” when using Azure Container Apps - and what are some common causes and explainations behind this.  ( 6 min )
  • Open

    Where Does an LLM Keep All That Knowledge? A Peek into the Physical Side of AI
    We often hear about Large Language Models (LLMs) like GPT-4 having billions of parameters and being trained on massive datasets. But have you ever wondered: Where is all that data actually stored? And more fundamentally, how does a computer even store knowledge in the first place? Let’s take a journey from the world of electric charges to the vast neural networks powering today’s AI. Data at the Most Basic Level: 0s and 1s At its core, all digital data is just binary — a series of 0s and 1s. These bits are represented physically using electric charges or magnetic states: In RAM or CPU/GPU memory, bits are stored using transistors and capacitors that are either charged (1) or not charged (0). In SSDs, data is stored using floating-gate transistors that trap electrons to represent binary st…  ( 23 min )
    Where Does an LLM Keep All That Knowledge? A Peek into the Physical Side of AI
    We often hear about Large Language Models (LLMs) like GPT-4 having billions of parameters and being trained on massive datasets. But have you ever wondered: Where is all that data actually stored? And more fundamentally, how does a computer even store knowledge in the first place? Let’s take a journey from the world of electric charges to the vast neural networks powering today’s AI. Data at the Most Basic Level: 0s and 1s At its core, all digital data is just binary — a series of 0s and 1s. These bits are represented physically using electric charges or magnetic states: In RAM or CPU/GPU memory, bits are stored using transistors and capacitors that are either charged (1) or not charged (0). In SSDs, data is stored using floating-gate transistors that trap electrons to represent binary st…
  • Open

    Activator as an Orchestrator of the Fabric Event Driven flows
    With Fabric Events general availability, the role of Activator expands from setting notifications and acting on your data in real time to becoming an orchestration centerpiece. Activator acts as a connecting tissue enabling complex event-driven and data-driven flows. Let’s look at a very common architecture we often see our customers implement: In this architecture we … Continue reading “Activator as an Orchestrator of the Fabric Event Driven flows”  ( 6 min )
    Task flows in Microsoft Fabric (Generally Available)
    Task flows feature is now generally available! Task flows streamline the design of your data solutions and ensure consistency between design and development efforts. It also allows you to navigate items and manage your workspace more easily, even as it becomes more complex over time. Since its preview last May, we have received a great … Continue reading “Task flows in Microsoft Fabric (Generally Available)”  ( 6 min )
  • Open

    AI Agents in Production: From Prototype to Reality - Part 10
    Hi everyone, Shivam Goyal here! This marks the final installment in our AI Agents for Beginners series, based on the awesome repository (link to the repo). I hope you've enjoyed this journey into the world of agentic AI! In previous posts ([links to parts 1-9 at the end]), we've covered the fundamentals and key design patterns. Now, let's explore the practical considerations of deploying AI agents to production, focusing on performance, cost management, and evaluation. As an active member of the AI community, I'm excited to share these insights to help you bring your agentic AI projects to life. From Lab to Production: Key Considerations Successfully deploying AI agents requires careful planning and attention to detail. We need to consider: How to plan the deployment of your AI Agent to p…  ( 26 min )
    AI Agents in Production: From Prototype to Reality - Part 10
    Hi everyone, Shivam Goyal here! This marks the final installment in our AI Agents for Beginners series, based on the awesome repository (link to the repo). I hope you've enjoyed this journey into the world of agentic AI! In previous posts ([links to parts 1-9 at the end]), we've covered the fundamentals and key design patterns. Now, let's explore the practical considerations of deploying AI agents to production, focusing on performance, cost management, and evaluation. As an active member of the AI community, I'm excited to share these insights to help you bring your agentic AI projects to life. From Lab to Production: Key Considerations Successfully deploying AI agents requires careful planning and attention to detail. We need to consider: How to plan the deployment of your AI Agent to p…

  • Open

    Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
    When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources. High-level steps: Step 1: Enable Managed Identity (or App Registration) Step 2: Grant Sites.Selected Permission in Microsoft Entra ID Step 3: Assign SharePoint Site-Level Permission Step 1: Enable Managed Identity (or App Registration) For your Azure resource (e.g., Logic App): Navigate to the Azure portal. Go to the resource (e.g., Logic App). Under Identity, enable System-assigned Managed Identity. Note the Object ID and Client ID (you’ll need the Client ID later). Alternat…  ( 23 min )
    Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
    When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources. High-level steps: Step 1: Enable Managed Identity (or App Registration) Step 2: Grant Sites.Selected Permission in Microsoft Entra ID Step 3: Assign SharePoint Site-Level Permission Step 1: Enable Managed Identity (or App Registration) For your Azure resource (e.g., Logic App): Navigate to the Azure portal. Go to the resource (e.g., Logic App). Under Identity, enable System-assigned Managed Identity. Note the Object ID and Client ID (you’ll need the Client ID later). Alternat…
  • Open

    Guest Blog: Orchestrating AI Agents with Semantic Kernel Plugins: A Technical Deep Dive
    Today we’re excited to welcome Jarre Nejatyab as a guest blog to highlight a technical deep dive on orchestrating AI Agents with Semantic Kernel Plugins. In the rapidly evolving world of Large Language Models (LLMs), orchestrating specialized AI agents has become crucial for building sophisticated cognitive architectures capable of complex reasoning and task execution. While […] The post Guest Blog: Orchestrating AI Agents with Semantic Kernel Plugins: A Technical Deep Dive appeared first on Semantic Kernel.  ( 28 min )

  • Open

    Real-time Speech Transcription with GPT-4o-transcribe and GPT-4o-mini-transcribe using WebSocket
    Azure OpenAI has expanded its speech recognition capabilities with two powerful models: GPT-4o-transcribe and GPT-4o-mini-transcribe. These models also leverage WebSocket connections to enable real-time transcription of audio streams, providing developers with cutting-edge tools for speech-to-text applications. In this technical blog, we'll explore how these models work and demonstrate a practical implementation using Python. Understanding OpenAI's Realtime Transcription API Unlike the regular REST API for audio transcription, Azure OpenAI's Realtime API enables continuous streaming of audio data through WebSockets or WebRTC connections. This approach is particularly valuable for applications requiring immediate transcription feedback, such as live captioning, meeting transcription, or voi…  ( 31 min )
    Real-time Speech Transcription with GPT-4o-transcribe and GPT-4o-mini-transcribe using WebSocket
    Azure OpenAI has expanded its speech recognition capabilities with two powerful models: GPT-4o-transcribe and GPT-4o-mini-transcribe. These models also leverage WebSocket connections to enable real-time transcription of audio streams, providing developers with cutting-edge tools for speech-to-text applications. In this technical blog, we'll explore how these models work and demonstrate a practical implementation using Python. Understanding OpenAI's Realtime Transcription API Unlike the regular REST API for audio transcription, Azure OpenAI's Realtime API enables continuous streaming of audio data through WebSockets or WebRTC connections. This approach is particularly valuable for applications requiring immediate transcription feedback, such as live captioning, meeting transcription, or voi…
    A Microsoft Fabric Template for Azure AI Content Understanding is Now Available
    We are excited to share that we have released a new Microsoft Fabric pipeline template that helps you easily send the results from Azure AI Content Understanding into a Fabric Lakehouse! This template makes it easier than ever to harvest information from multimodal content and turn it into structured data using Content Understanding and perform further analysis in Microsoft Fabric. Whether you are looking to extract insights from a contract, call transcript, or video footage, this template simplifies the process and gives you fast access to Fabric’s powerful data tools. Why It Matters Azure AI Content Understanding uses powerful large language models (LLMs) to extract key information from documents, videos, audio, and image files. For example, it can identify key phrases in documents, extract tables from invoices, or generate video chapters and summaries. This template lets you seamlessly feed structured JSON outputs from Content Understanding into a Fabric Lakehouse, where you can immediately use Power BI, Dataflows, and other tools to analyze and make sense of the data.   Key Benefits Quick Setup: Move from unstructured content to structured data in no time—no complicated setup required! Seamless Integration: Connect Azure AI and Microsoft Fabric effortlessly. Secure & Scalable: Every component is built on Microsoft’s cloud, ensuring your data is safe and scalable as your needs grow. Try It Now You can find the template and setup instructions on GitHub. We would love to hear how you are using it! Feel free to leave any questions or feedback in the comments below or send us an email.  Resources & Documentation Explore the following resources to learn more about Azure AI Content Understanding and Microsoft Fabric Azure Content Understanding Overview Microsoft Fabric Overview Azure Content Understanding in Azure AI Foundry Azure Content Understanding FAQs  ( 21 min )
    A Microsoft Fabric Template for Azure AI Content Understanding is Now Available
    We are excited to share that we have released a new Microsoft Fabric pipeline template that helps you easily send the results from Azure AI Content Understanding into a Fabric Lakehouse! This template makes it easier than ever to harvest information from multimodal content and turn it into structured data using Content Understanding and perform further analysis in Microsoft Fabric. Whether you are looking to extract insights from a contract, call transcript, or video footage, this template simplifies the process and gives you fast access to Fabric’s powerful data tools. Why It Matters Azure AI Content Understanding uses powerful large language models (LLMs) to extract key information from documents, videos, audio, and image files. For example, it can identify key phrases in documents, extract tables from invoices, or generate video chapters and summaries. This template lets you seamlessly feed structured JSON outputs from Content Understanding into a Fabric Lakehouse, where you can immediately use Power BI, Dataflows, and other tools to analyze and make sense of the data.   Key Benefits Quick Setup: Move from unstructured content to structured data in no time—no complicated setup required! Seamless Integration: Connect Azure AI and Microsoft Fabric effortlessly. Secure & Scalable: Every component is built on Microsoft’s cloud, ensuring your data is safe and scalable as your needs grow. Try It Now You can find the template and setup instructions on GitHub. We would love to hear how you are using it! Feel free to leave any questions or feedback in the comments below or send us an email.  Resources & Documentation Explore the following resources to learn more about Azure AI Content Understanding and Microsoft Fabric Azure Content Understanding Overview Microsoft Fabric Overview Azure Content Understanding in Azure AI Foundry Azure Content Understanding FAQs
  • Open

    Introducing Azure DevOps ID Token Refresh and Terraform Task Version 5
    We are excited to share some recent updates that improve the experience of using Workload identity federation (OpenID Connect) with Azure DevOps and Terraform on Microsoft Azure. Many working parts have come together to make this possible and we’ll share those here. We are also very pleased to announce version 5 of the Microsoft DevLabs […] The post Introducing Azure DevOps ID Token Refresh and Terraform Task Version 5 appeared first on Azure DevOps Blog.  ( 25 min )
  • Open

    How Networking setting of Batch Account impacts simplified communication mode Batch pool
    As described in our official document, the classic communication mode of Batch node will be retired on 31 March 2026. Instead, it’s recommended to use simplified communication mode while creating Batch pool.   But while user changes their Batch pool communication mode from classic to simplified and applies the necessary changes of network security group per documentation, they will find out that the node is still stuck in unusable status.   A very possible cause of this issue is due to the bad networking setting of Batch Account.   This blog will mainly talk about why networking setting can cause node using simplified communication mode stuck in unusable status and how to configure correct networking setting under different user scenarios.   Cause: As described in this document, the differ…  ( 23 min )
    How Networking setting of Batch Account impacts simplified communication mode Batch pool
    As described in our official document, the classic communication mode of Batch node will be retired on 31 March 2026. Instead, it’s recommended to use simplified communication mode while creating Batch pool.   But while user changes their Batch pool communication mode from classic to simplified and applies the necessary changes of network security group per documentation, they will find out that the node is still stuck in unusable status.   A very possible cause of this issue is due to the bad networking setting of Batch Account.   This blog will mainly talk about why networking setting can cause node using simplified communication mode stuck in unusable status and how to configure correct networking setting under different user scenarios.   Cause: As described in this document, the differ…
  • Open

    Get ready for Microsoft Build 2025
    Microsoft Build is just a few weeks away. To celebrate, we’re highlighting resources that will help you get ready for the big event. Explore some of the exciting sessions you can join in-person or online, learn new skills before jumping into live deep-dive sessions, brush up on best practices, and get up to speed on the latest developer tools so you can hit the event ready to take your knowledge (and your applications) to the next level. Connect, code, and grow at BuildIt’s almost time for Microsoft Build! Can’t join the event live in-person? No problem. You can still experience the event streaming live online for free (May 19-22). Watch the keynote, join live sessions, learn new skills, and watch in-depth demos. Join the .NET & C# teams at Microsoft Build 2025Don’t miss this opportunity t…  ( 28 min )
    Get ready for Microsoft Build 2025
    Microsoft Build is just a few weeks away. To celebrate, we’re highlighting resources that will help you get ready for the big event. Explore some of the exciting sessions you can join in-person or online, learn new skills before jumping into live deep-dive sessions, brush up on best practices, and get up to speed on the latest developer tools so you can hit the event ready to take your knowledge (and your applications) to the next level. Connect, code, and grow at BuildIt’s almost time for Microsoft Build! Can’t join the event live in-person? No problem. You can still experience the event streaming live online for free (May 19-22). Watch the keynote, join live sessions, learn new skills, and watch in-depth demos. Join the .NET & C# teams at Microsoft Build 2025Don’t miss this opportunity t…
  • Open

    Introducing Cloud Accelerate Factory: Unlock zero cost deployment assistance for Azure
    As AI reshapes how businesses operate and deliver value, many organizations are seeking ways to modernize their infrastructure — quickly, securely, and at scale. With the right tools and support, cloud adoption becomes an empowering step toward innovation and growth. Azure Innovate & Azure Migrate and Modernize were designed to support your entire cloud journey, providing expert guidance, funding, and comprehensive resources all in one place to help you maximize the value of Azure to boost business growth. We’re continuously evolving these offerings to deliver even more value at scale. That’s why we created Cloud Accelerate Factory, a new benefit of Azure Innovate & Azure Migrate and Modernize, built on the patterns and insights from thousands of customer deployments. The Factory provides …  ( 23 min )
    Introducing Cloud Accelerate Factory: Unlock zero cost deployment assistance for Azure
    As AI reshapes how businesses operate and deliver value, many organizations are seeking ways to modernize their infrastructure — quickly, securely, and at scale. With the right tools and support, cloud adoption becomes an empowering step toward innovation and growth. Azure Innovate & Azure Migrate and Modernize were designed to support your entire cloud journey, providing expert guidance, funding, and comprehensive resources all in one place to help you maximize the value of Azure to boost business growth. We’re continuously evolving these offerings to deliver even more value at scale. That’s why we created Cloud Accelerate Factory, a new benefit of Azure Innovate & Azure Migrate and Modernize, built on the patterns and insights from thousands of customer deployments. The Factory provides …
  • Open

    Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
    Table of Contents Abstract Introduction Prerequisites Workstation setup Repository directory contents Terraform variables file Credentials Azure settings Instaclustr settings VNet settings AKS cluster settings Azure NetApp Files settings PostgreSQL settings OpenSearch settings Authorized networks Infrastructure deployment Application deployment Using OpenMetadata Adding a service Adding an ingestion Cleanup Summary Additional Information Abstract This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. T…  ( 61 min )
    Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
    Table of Contents Abstract Introduction Prerequisites Workstation setup Repository directory contents Terraform variables file Credentials Azure settings Instaclustr settings VNet settings AKS cluster settings Azure NetApp Files settings PostgreSQL settings OpenSearch settings Authorized networks Infrastructure deployment Application deployment Using OpenMetadata Adding a service Adding an ingestion Cleanup Summary Additional Information Abstract This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. T…

  • Open

    Authenticate to Fabric data connections using Azure Key Vault stored secrets (Preview)
    Azure Key Vault support in Fabric Data connections is now in preview! With this capability, we are introducing a new concept called ‘Azure Key Vault references’ in Microsoft Fabric, using which, users can reuse their existing Azure key vault secrets for authentication to data source connections instead of copy-pasting passwords, slashing credential-management effort and audit risk. … Continue reading “Authenticate to Fabric data connections using Azure Key Vault stored secrets (Preview)”  ( 7 min )
    Announcing the winners of “Hack Together: The Microsoft Data + AI Kenya Hack”
    We are excited to announce the winners of the winners of Hack Together: The Microsoft Data + AI Kenya Hack! About the Hackathon The HackTogether was an exciting opportunity to bring bold ideas to life by building Data & AI solutions using Microsoft Fabric and Azure AI Services. Organized exclusively for participants from Kenya, the … Continue reading “Announcing the winners of “Hack Together: The Microsoft Data + AI Kenya Hack””  ( 10 min )
    Introducing new OpenAI Plugins for Eventhouse (Preview)
    We are excited to announce the release of two powerful AI plugins for Eventhouse: AI Embed Text Plugin and AI Chat Completion Prompt Plugin. These plugins are designed to enhance your data analysis capabilities and augment your workflows with OpenAI models, providing more granular control to power users who seek precision over model output or wish to fine-tune … Continue reading “Introducing new OpenAI Plugins for Eventhouse (Preview)”  ( 7 min )
  • Open

    Microsoft.Extensions.AI: Integrating AI into your .NET applications
    Artificial Intelligence (AI) is transforming the way we build applications. With the introduction of Microsoft.Extensions.AI, integrating AI services into .NET applications has never been easier. In this blog, we'll explore Microsoft.Extensions.AI, why .NET developers should try it out and how to get started using it to build a simple text generation application. Why Microsoft.Extensions.AI? Microsoft.Extensions.AI provides unified abstractions and middleware for integrating AI services into .NET applications. This means you can work with AI capabilities like chat features, embedding generation, and tool calling without worrying about specific platform implementations. Whether you're using Azure AI, OpenAI, or other AI services, Microsoft.Extensions.AI ensures seamless integration and coll…  ( 36 min )
    Microsoft.Extensions.AI: Integrating AI into your .NET applications
    Artificial Intelligence (AI) is transforming the way we build applications. With the introduction of Microsoft.Extensions.AI, integrating AI services into .NET applications has never been easier. In this blog, we'll explore Microsoft.Extensions.AI, why .NET developers should try it out and how to get started using it to build a simple text generation application. Why Microsoft.Extensions.AI? Microsoft.Extensions.AI provides unified abstractions and middleware for integrating AI services into .NET applications. This means you can work with AI capabilities like chat features, embedding generation, and tool calling without worrying about specific platform implementations. Whether you're using Azure AI, OpenAI, or other AI services, Microsoft.Extensions.AI ensures seamless integration and coll…
  • Open

    How to use DefaultAzureCredential across multiple tenants
    If you are using the DefaultAzureCredential class from the Azure Identity SDK while your user account is associated with multiple tenants, you may find yourself frequently running into API authentication errors (such as HTTP 401/Unauthorized). This post is for you! These are your two options for successful authentication from a non-default tenant: Setup your environment precisely to force DefaultAzureCredential to use the desired tenant Use a specific credential class and explicitly pass in the desired tenant ID Option 1: Get DefaultAzureCredential working The DefaultAzureCredential class is a credential chain, which means that it tries a sequence of credential classes until it finds one that can authenticate successfully. The current sequence is: EnvironmentCredential WorkloadIdentityC…  ( 26 min )
    How to use DefaultAzureCredential across multiple tenants
    If you are using the DefaultAzureCredential class from the Azure Identity SDK while your user account is associated with multiple tenants, you may find yourself frequently running into API authentication errors (such as HTTP 401/Unauthorized). This post is for you! These are your two options for successful authentication from a non-default tenant: Setup your environment precisely to force DefaultAzureCredential to use the desired tenant Use a specific credential class and explicitly pass in the desired tenant ID Option 1: Get DefaultAzureCredential working The DefaultAzureCredential class is a credential chain, which means that it tries a sequence of credential classes until it finds one that can authenticate successfully. The current sequence is: EnvironmentCredential WorkloadIdentityC…
    Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers
    Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers Introduction Phi-4-Reasoning is a state-of-the-art AI model developed by Microsoft Research, designed to excel in complex reasoning tasks. With its advanced capabilities, Phi-4-Reasoning is a powerful tool for AI developers, enabling them to tackle intricate problems with ease and precision.     What is Phi-4-Reasoning? Phi-4-Reasoning is a 14-billion parameter open-weight reasoning model that has been fine-tuned from the Phi-4 model using supervised fine-tuning on a dataset of chain-of-thought traces..  We are also releasing Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. This model is designed to handle …  ( 34 min )
    Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers
    Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers Introduction Phi-4-Reasoning is a state-of-the-art AI model developed by Microsoft Research, designed to excel in complex reasoning tasks. With its advanced capabilities, Phi-4-Reasoning is a powerful tool for AI developers, enabling them to tackle intricate problems with ease and precision.     What is Phi-4-Reasoning? Phi-4-Reasoning is a 14-billion parameter open-weight reasoning model that has been fine-tuned from the Phi-4 model using supervised fine-tuning on a dataset of chain-of-thought traces..  We are also releasing Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. This model is designed to handle …
  • Open

    Feedback Loops in GenAI with Azure Functions, Azure OpenAI and Neon serverless Postgres
    As vector search and Retrieval Augmented Generation(RAG) become mainstream for Generative AI (GenAI) use cases, we’re looking ahead to what’s next. GenAI primarily operates in a one-way direction, generating content based on input data.GenAI must have a secret for production data. Generative Feedback Loops (GFL) are focused on optimizing and improving the AI’s outputs over time through a cycle of feedback and learnings based on the production data. In GFL, results generated from Large Language Models (LLMs) like GPT are vectorized, indexed, and saved back into vector storage for better-filtered semantic search operations. This creates a dynamic cycle that adapts LLMs to new and continuously changing data, and user needs. GFL offers personalized, up-to-date summaries and suggestions. A good…  ( 50 min )
    Feedback Loops in GenAI with Azure Functions, Azure OpenAI and Neon serverless Postgres
    As vector search and Retrieval Augmented Generation(RAG) become mainstream for Generative AI (GenAI) use cases, we’re looking ahead to what’s next. GenAI primarily operates in a one-way direction, generating content based on input data.GenAI must have a secret for production data. Generative Feedback Loops (GFL) are focused on optimizing and improving the AI’s outputs over time through a cycle of feedback and learnings based on the production data. In GFL, results generated from Large Language Models (LLMs) like GPT are vectorized, indexed, and saved back into vector storage for better-filtered semantic search operations. This creates a dynamic cycle that adapts LLMs to new and continuously changing data, and user needs. GFL offers personalized, up-to-date summaries and suggestions. A good…
  • Open

    Dev Proxy v0.27 with generating TypeSpec files and configuring using natural language
    Dev Proxy v0.27 is even more developer-friendly, helping you generate API specs faster, improving suggestions while editing, and laying the foundation for more flexible AI integrations. The post Dev Proxy v0.27 with generating TypeSpec files and configuring using natural language appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    General Availability: App Service Webjobs on Linux
    Last year, we introduced Webjobs on Linux as a preview feature. We are now excited to announce General Avilability for Webjobs on App Service Linux for both code an containers scenarios.  ( 2 min )

  • Open

    Streamline & Modernise ASP.NET Auth: Moving enterprise apps from IIS to App Service with Easy Auth
    Introduction When modernising your enterprise ASP.NET (.NET Framework) or ASP.NET Core applications and moving them from IIS over to Azure App Service, one of the aspects you will have to take into consideration is how you will manage authentication (AuthN) and authorisation (AuthZ). Specifically, for applications that leverage on-premises auth mechanisms such as Integrated Windows Authentication, you will need to start considering more modern auth protocols such as OpenID Connect/OAuth which are more suited to the cloud. Fortunately, App Service includes built-in authentication and authorisation support also known as 'Easy Auth', which requires minimal to zero code changes. This feature is integrated into the platform, includes a built-in token store, and operates as a middleware running …  ( 40 min )
    Streamline & Modernise ASP.NET Auth: Moving enterprise apps from IIS to App Service with Easy Auth
    Introduction When modernising your enterprise ASP.NET (.NET Framework) or ASP.NET Core applications and moving them from IIS over to Azure App Service, one of the aspects you will have to take into consideration is how you will manage authentication (AuthN) and authorisation (AuthZ). Specifically, for applications that leverage on-premises auth mechanisms such as Integrated Windows Authentication, you will need to start considering more modern auth protocols such as OpenID Connect/OAuth which are more suited to the cloud. Fortunately, App Service includes built-in authentication and authorisation support also known as 'Easy Auth', which requires minimal to zero code changes. This feature is integrated into the platform, includes a built-in token store, and operates as a middleware running …
    How to use Azure Table Storage with .NET Aspire and a Minimal API
    Azure Storage is a versatile cloud storage solution that I've used in many projects. In this post, I'll share my experience integrating it into a .NET Aspire project through two perspectives: first, by building a simple demo project to learn the basics, and then by applying those learnings to migrate a real-world application, AzUrlShortener. This post is part of a series about modernizing the AzUrlShortener project: Migrating AzUrlShortener from Azure SWA to Azure Container Apps Converting a Blazor WASM to FluentUI Blazor server Azure Developer CLI (azd) in a real-life scenario How to use Azure Table Storage with .NET Aspire and a Minimal API Part 1: Learning using a simple project For this post we will be using a simpler project instead of the full AzUrlShortener solution to make it eas…  ( 36 min )
    How to use Azure Table Storage with .NET Aspire and a Minimal API
    Azure Storage is a versatile cloud storage solution that I've used in many projects. In this post, I'll share my experience integrating it into a .NET Aspire project through two perspectives: first, by building a simple demo project to learn the basics, and then by applying those learnings to migrate a real-world application, AzUrlShortener. This post is part of a series about modernizing the AzUrlShortener project: Migrating AzUrlShortener from Azure SWA to Azure Container Apps Converting a Blazor WASM to FluentUI Blazor server Azure Developer CLI (azd) in a real-life scenario How to use Azure Table Storage with .NET Aspire and a Minimal API Part 1: Learning using a simple project For this post we will be using a simpler project instead of the full AzUrlShortener solution to make it eas…
    Tracking Kubernetes Updates in AKS Clusters
    When you support Azure Kubernetes Service (AKS) clusters, keeping up with new versions of Kubernetes being released, and ensuring that your clusters are on a supported version can be difficult. If you have one or two clusters it might be OK, but as your estate grows it can be difficult to keep on top of which clusters have which version of Kubernetes and which needs updates. One way of dealing with this could be to implement Azure Kubernetes Fleet Manager (Fleet). Fleet provides a comprehensive solution for monitoring Kubernetes and Node Image versions in your clusters, and rolling out updates across your estate. You can read more details on Fleet for update management here. However, if you're not ready to implement Fleet, or your AKS estate isn't large enough to warrant it, we can build a…  ( 35 min )
    Tracking Kubernetes Updates in AKS Clusters
    When you support Azure Kubernetes Service (AKS) clusters, keeping up with new versions of Kubernetes being released, and ensuring that your clusters are on a supported version can be difficult. If you have one or two clusters it might be OK, but as your estate grows it can be difficult to keep on top of which clusters have which version of Kubernetes and which needs updates. One way of dealing with this could be to implement Azure Kubernetes Fleet Manager (Fleet). Fleet provides a comprehensive solution for monitoring Kubernetes and Node Image versions in your clusters, and rolling out updates across your estate. You can read more details on Fleet for update management here. However, if you're not ready to implement Fleet, or your AKS estate isn't large enough to warrant it, we can build a…
  • Open

    Using network troubleshooting tools with Azure Container Apps
    This post will go over using networking troubleshooting tools and which scenarios they may be best for  ( 8 min )
  • Open

    Medallion Architecture in Fabric Real-Time Intelligence
    Introduction Building a multi-layer, medallion architecture using Fabric Real-Time Intelligence (RTI) requires a different approach compared to traditional data warehousing techniques. But even transactional source systems can be effectively processed in RTI. To demonstrate, we’ll look at how sales orders (created in a relational database) can be continuously ingested and transformed through a RTI bronze, … Continue reading “Medallion Architecture in Fabric Real-Time Intelligence”  ( 9 min )
    Fabric April 2025 Feature Summary
    Welcome to the Fabric April 2025 Feature Summary! This update brings exciting advancements across various workloads, including Low-code AI tools to accelerate productivity in notebooks (Preview), session Scoped distributed #temp table in Fabric Data Warehouse (Generally Available) and the Migration assistant for Fabric Data Warehouse (Preview) to simplify your migration experience. Contents Community & Events … Continue reading “Fabric April 2025 Feature Summary”  ( 16 min )
  • Open

    AI Sparks: Unleashing Agents with the AI Toolkit
    The final episode of our "AI Sparks" series delved deep into the exciting world of AI Agents and their practical implementation. We also covered a fair part of MCP with Microsoft AI Toolkit extension for VS Code.  We kicked off by charting the evolutionary path of intelligent conversational systems. Starting with the rudimentary rule-based Basic Chatbots, we then explored the advancements brought by Basic Generative AI Chatbots, which offered contextually aware interactions. Then we explored the Retrieval-Augmented Generation (RAG), highlighting its ability to ground generative models in specific knowledge bases, significantly enhancing accuracy and relevance. The limitations were also discussed for the above mentioned techniques. The session was then centralized to the theme – Agents and …  ( 28 min )
    AI Sparks: Unleashing Agents with the AI Toolkit
    The final episode of our "AI Sparks" series delved deep into the exciting world of AI Agents and their practical implementation. We also covered a fair part of MCP with Microsoft AI Toolkit extension for VS Code.  We kicked off by charting the evolutionary path of intelligent conversational systems. Starting with the rudimentary rule-based Basic Chatbots, we then explored the advancements brought by Basic Generative AI Chatbots, which offered contextually aware interactions. Then we explored the Retrieval-Augmented Generation (RAG), highlighting its ability to ground generative models in specific knowledge bases, significantly enhancing accuracy and relevance. The limitations were also discussed for the above mentioned techniques. The session was then centralized to the theme – Agents and …
  • Open

    Getting Started with Azure MCP Server: A Guide for Developers
    The world of cloud computing is growing rapidly, and Azure is at the forefront of this innovation. If you're a student developer eager to dive into Azure and learn about Model Context Protocol (MCP), the Azure MCP Server is your perfect starting point. This tool, currently in Public Preview, empowers AI agents to seamlessly interact with Azure services like Azure Storage, Cosmos DB, and more. Let's explore how you can get started! 🎯 Why Use the Azure MCP Server? The Azure MCP Server revolutionizes how AI agents and developers interact with Azure services. Here's a glimpse of what it offers: Exploration Made Easy: List storage accounts, databases, resource groups, tables, and more with natural language commands. Advanced Operations: Manage configurations, query analytics, and execute comp…  ( 24 min )
    Getting Started with Azure MCP Server: A Guide for Developers
    The world of cloud computing is growing rapidly, and Azure is at the forefront of this innovation. If you're a student developer eager to dive into Azure and learn about Model Context Protocol (MCP), the Azure MCP Server is your perfect starting point. This tool, currently in Public Preview, empowers AI agents to seamlessly interact with Azure services like Azure Storage, Cosmos DB, and more. Let's explore how you can get started! 🎯 Why Use the Azure MCP Server? The Azure MCP Server revolutionizes how AI agents and developers interact with Azure services. Here's a glimpse of what it offers: Exploration Made Easy: List storage accounts, databases, resource groups, tables, and more with natural language commands. Advanced Operations: Manage configurations, query analytics, and execute comp…
  • Open

    GenAIOps and Evals Best Practices
    Contributors and Reviewers: Jay Sen (C), Anthony Nevico (C), Chris Kahrs (C), Anurag Karuparti (C), John De Havilland (R) Key drivers behind GenAI Evaluation Risk Mitigation and Reliability: Proactively identifying issues to ensure GenAI models perform safely and consistently in critical environments.  Iterative Improvement: Leveraging continuous feedback loops and tests to refine models and maintain alignment with evolving business objectives.  Transparency and Accountability: Establishing clear, shared metrics that build trust between technical teams and business stakeholders, ensuring AI deployments are safe, ethical, and outcome driven.  Speed to market: Evaluation frameworks allow the adoption of GenAI technologies across business processes in a safe, sane and validated manner, allow…  ( 37 min )
    GenAIOps and Evals Best Practices
    Contributors and Reviewers: Jay Sen (C), Anthony Nevico (C), Chris Kahrs (C), Anurag Karuparti (C), John De Havilland (R) Key drivers behind GenAI Evaluation Risk Mitigation and Reliability: Proactively identifying issues to ensure GenAI models perform safely and consistently in critical environments.  Iterative Improvement: Leveraging continuous feedback loops and tests to refine models and maintain alignment with evolving business objectives.  Transparency and Accountability: Establishing clear, shared metrics that build trust between technical teams and business stakeholders, ensuring AI deployments are safe, ethical, and outcome driven.  Speed to market: Evaluation frameworks allow the adoption of GenAI technologies across business processes in a safe, sane and validated manner, allow…
  • Open

    Elevate Your Virtual Machine Management with Multi-Select, Sorting, Grouping, and Tags in Azure DevTest Labs
    We are thrilled to unveil exciting new enhancements in the My Virtual Machine view within Azure DevTest Labs that will revolutionize your VM management experience. With these updates, managing your virtual machines has never been easier or more efficient. Imagine being able to multi-select VMs to start, stop, restart, or delete them all at once […] The post Elevate Your Virtual Machine Management with Multi-Select, Sorting, Grouping, and Tags in Azure DevTest Labs appeared first on Develop from the cloud.  ( 22 min )

  • Open

    FSI Knowledge Mining and Intelligent Document Process Reference Architecture
    FSI customers such as insurance companies and banks rely on their vast amounts of data to provide sometimes hundreds of individual products to their customers. From assessing product suitability, underwriting, fraud investigations, and claims handling, many employees and applications depend on accessing this data to do their jobs efficiently. Since the capabilities of GenAI have been realised, we have been helping our customers in this market transform their business with unified systems that simplify access to this data and speed up the processing times of these core tasks, while remaining compliant with the numerous regulations that govern the FSI space. Combining the use of Knowledge Mining with Intelligent Document processing provides a powerful solution to reduce the manual effort and…  ( 39 min )
    FSI Knowledge Mining and Intelligent Document Process Reference Architecture
    FSI customers such as insurance companies and banks rely on their vast amounts of data to provide sometimes hundreds of individual products to their customers. From assessing product suitability, underwriting, fraud investigations, and claims handling, many employees and applications depend on accessing this data to do their jobs efficiently. Since the capabilities of GenAI have been realised, we have been helping our customers in this market transform their business with unified systems that simplify access to this data and speed up the processing times of these core tasks, while remaining compliant with the numerous regulations that govern the FSI space. Combining the use of Knowledge Mining with Intelligent Document processing provides a powerful solution to reduce the manual effort and…
    Add-ins and more – WordPress on App Service
    The WordPress on App Service create flow offers a streamlined process to set up your site along with all the necessary Azure resources. Let's learn more about add-ins that can enhance your WordPress experience and help you decide which ones to opt for. Deploying WordPress on App Service is a breeze thanks to the ARM template approach, which ties together Azure applications to ensure a seamless experience for developers. Whether you're a seasoned pro or new to the create flow, this guide will demystify these additional settings and help you make informed choices. Add-ins tab Managed Identity: Say goodbye to managing credentials! Managed Identities provide secure access to Azure resources without storing sensitive credentials. Enabling this option creates a user-assigned managed identity, c…  ( 25 min )
    Add-ins and more – WordPress on App Service
    The WordPress on App Service create flow offers a streamlined process to set up your site along with all the necessary Azure resources. Let's learn more about add-ins that can enhance your WordPress experience and help you decide which ones to opt for. Deploying WordPress on App Service is a breeze thanks to the ARM template approach, which ties together Azure applications to ensure a seamless experience for developers. Whether you're a seasoned pro or new to the create flow, this guide will demystify these additional settings and help you make informed choices. Add-ins tab Managed Identity: Say goodbye to managing credentials! Managed Identities provide secure access to Azure resources without storing sensitive credentials. Enabling this option creates a user-assigned managed identity, c…
  • Open

    Fabric SQL Database Integration: Unlocking New Possibilities with Power BI desktop
    Introducing Seamless Connectivity for Enhanced Data Analytics and Reporting. We are thrilled to announce a new SQL database integration with Power BI Desktop! This innovative feature is designed to empower users with streamlined access to their SQL databases, providing greater flexibility and precision for building insightful reports and dashboards. With this integration, users can now … Continue reading “Fabric SQL Database Integration: Unlocking New Possibilities with Power BI desktop”  ( 6 min )
  • Open

    AI Agents Readiness and skilling on Demand Events
    2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. On Demand content now available TopicTrack AI Agents Hackathon KickoffAll Build your code-first app with Azure AI Agent ServicePython AI Agents for Java using Azure AI FoundryJava Build your code-first app with Azure AI Agent ServicePython Build and extend agents for Microsoft 365 CopilotCopilots Transforming business processes with multi-agent AI using Semantic KernelPython Build your code-first app with Azure AI Agent Service (.NET)C# Building custom engine agents with Azure AI Foundry and Visual Studio CodeCopilots Your first AI Agent in JS with …  ( 24 min )
    AI Agents Readiness and skilling on Demand Events
    2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. On Demand content now available TopicTrack AI Agents Hackathon KickoffAll Build your code-first app with Azure AI Agent ServicePython AI Agents for Java using Azure AI FoundryJava Build your code-first app with Azure AI Agent ServicePython Build and extend agents for Microsoft 365 CopilotCopilots Transforming business processes with multi-agent AI using Semantic KernelPython Build your code-first app with Azure AI Agent Service (.NET)C# Building custom engine agents with Azure AI Foundry and Visual Studio CodeCopilots Your first AI Agent in JS with …
  • Open

    Routine Planned Maintenance Notifications Improvements for App Service
    As of April 2025, we are happy to announce major improvements to App Service routine maintenance notifications.  ( 3 min )
  • Open

    Advancing Fine-Tuning in Azure AI Foundry: April 2025 Updates
    As organizations increasingly tailor foundation models to meet their domain-specific needs, Azure AI Foundry continues to deliver new capabilities that streamline, scale, and enhance the fine-tuning experience. One such organization, Decagon AI, fine-tuned GPT-4o-mini using Azure OpenAI Service’s supervised fine-tuning for their customer service agents. They were able to improve model accuracy and observed substantially lower latency for inferencing. This is one of my favorite use cases because it combines two cutting edge techniques - agents and fine tuning - for better results! “Fine-tuning GPT-4o-mini on Azure dramatically accelerated our delivery timeline,” said Ashwin Sreenvias, CEO at Decagon AI. “The training performance, simplicity of the pipeline, and integrated tooling gave us a …  ( 26 min )
    Advancing Fine-Tuning in Azure AI Foundry: April 2025 Updates
    As organizations increasingly tailor foundation models to meet their domain-specific needs, Azure AI Foundry continues to deliver new capabilities that streamline, scale, and enhance the fine-tuning experience. One such organization, Decagon AI, fine-tuned GPT-4o-mini using Azure OpenAI Service’s supervised fine-tuning for their customer service agents. They were able to improve model accuracy and observed substantially lower latency for inferencing. This is one of my favorite use cases because it combines two cutting edge techniques - agents and fine tuning - for better results! “Fine-tuning GPT-4o-mini on Azure dramatically accelerated our delivery timeline,” said Ashwin Sreenvias, CEO at Decagon AI. “The training performance, simplicity of the pipeline, and integrated tooling gave us a …
  • Open

    Guest Blog: Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser
    Today we’re excited to welcome Jonathan David, as a guest author on the Semantic Kernel blog. We’ll turn it over to Jonathan to dive into Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser.   With the European Accessibility Act and Germany’s Barrierefreiheitsstärkungsgesetz (which translates to Barrier […] The post Guest Blog: Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser appeared first on Semantic Kernel.  ( 34 min )
  • Open

    Application Awareness in Azure Migrate
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Intuitive and cost-effective migrations. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve unparalleled cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and access to secure innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help provide insights and lead to actions with b…  ( 23 min )
    Application Awareness in Azure Migrate
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Intuitive and cost-effective migrations. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve unparalleled cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and access to secure innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help provide insights and lead to actions with b…

  • Open

    Using Azure Machine Learning (AML) for Medical Imaging Vision Model Training and Fine-tuning
    Vision Model Architectures At present, Transformer-based vision model architecture is considered the forefront of advanced vision modeling.  These models are exceptionally versatile, capable of handling a wide range of applications, from object detection and image segmentation to contextual classification. Two popular Transformer-based model implementations are often used in real-world applications.  These are: Masked Autoencoders (MAE) and Vision Transformer (ViT) Masked Autoencoders (MAE) Masked Autoencoders (MAE) are a type of Transformer-based vision model architecture. They are designed to handle large-scale vision tasks by leveraging the power of self-supervised learning. The key idea behind MAE is to mask a portion of the input image and then train the model to reconstruct the mis…  ( 42 min )
    Using Azure Machine Learning (AML) for Medical Imaging Vision Model Training and Fine-tuning
    Vision Model Architectures At present, Transformer-based vision model architecture is considered the forefront of advanced vision modeling.  These models are exceptionally versatile, capable of handling a wide range of applications, from object detection and image segmentation to contextual classification. Two popular Transformer-based model implementations are often used in real-world applications.  These are: Masked Autoencoders (MAE) and Vision Transformer (ViT) Masked Autoencoders (MAE) Masked Autoencoders (MAE) are a type of Transformer-based vision model architecture. They are designed to handle large-scale vision tasks by leveraging the power of self-supervised learning. The key idea behind MAE is to mask a portion of the input image and then train the model to reconstruct the mis…
  • Open

    Service principal and private library support for Fabric User data functions
    Using Service Principal and Managed Identity, along with private libraries for Fabric user data functions, makes working with data much easier and more secure. These features let developers customize workflows and use their own code to solve problems, boosting productivity and creativity in teams. As businesses grow and rely more on unique analytics and automation, these tools help simplify data management and improve operations. Check this blog post to learn more  ( 6 min )
  • Open

    AI Agents: Metacognition for Self-Aware Intelligence - Part 9
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with an exciting topic: Metacognition in AI Agents. In previous posts ([links to parts 1-8 at the end]), we've covered fundamental concepts and design patterns. Now, we'll explore how to equip AI agents with the ability to "think about thinking," enabling them to evaluate, adapt, and improve their own cognitive processes. What is Metacognition? Metacognition, often described as "thinking about thinking," refers to higher-order cognitive processes that involve self-awareness and self-regulation of one's cognitive activities. In AI, this means enabling agents to evaluate their actions, identify errors, and adjust strategies based on past experiences. This self-awareness allows…  ( 26 min )
    AI Agents: Metacognition for Self-Aware Intelligence - Part 9
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with an exciting topic: Metacognition in AI Agents. In previous posts ([links to parts 1-8 at the end]), we've covered fundamental concepts and design patterns. Now, we'll explore how to equip AI agents with the ability to "think about thinking," enabling them to evaluate, adapt, and improve their own cognitive processes. What is Metacognition? Metacognition, often described as "thinking about thinking," refers to higher-order cognitive processes that involve self-awareness and self-regulation of one's cognitive activities. In AI, this means enabling agents to evaluate their actions, identify errors, and adjust strategies based on past experiences. This self-awareness allows…
  • Open

    "Appointment Booking Assistant"—an AI-powered voice agent
    Introduction Imagine having an intelligent assistant that can schedule appointments for you over a phone call. The Appointment Booking Assistant is exactly that – a voice-driven AI agent that answers calls, converses naturally with users, and books appointments in a calendar. This solution showcasing how modern cloud services and AI can streamline scheduling tasks. It brings together real-time voice interaction with the power of AI and Microsoft 365 integration, allowing users to simply speak with an assistant to set up meetings or appointments. The result is a faster, more accessible way to manage bookings without needing a human receptionist or manual coordination. Technologies Involved Building this assistant required combining several key technologies, each playing a specific role: Az…  ( 59 min )
    "Appointment Booking Assistant"—an AI-powered voice agent
    Introduction Imagine having an intelligent assistant that can schedule appointments for you over a phone call. The Appointment Booking Assistant is exactly that – a voice-driven AI agent that answers calls, converses naturally with users, and books appointments in a calendar. This solution showcasing how modern cloud services and AI can streamline scheduling tasks. It brings together real-time voice interaction with the power of AI and Microsoft 365 integration, allowing users to simply speak with an assistant to set up meetings or appointments. The result is a faster, more accessible way to manage bookings without needing a human receptionist or manual coordination. Technologies Involved Building this assistant required combining several key technologies, each playing a specific role: Az…

  • Open

    Azure Kubernetes Fleet Manager Demo with Terraform Code
    Introduction Azure Kubernetes Fleet Manager (Fleet Manager) simplifies the at-scale management of multiple Azure Kubernetes Service (AKS) clusters by treating them as a coordinated “fleet.” One Fleet Manager hub can manage up to 100 AKS clusters in a single Azure AD tenant and region scope, so you can register, organize, and operate a large number of clusters from a single control plane. In this walkthrough, we’ll explore: The key benefits and considerations of using Fleet Manager A real-world e-commerce use case How to deploy a Fleet Manager hub, AKS clusters, and Azure Front Door with Terraform How everything looks and works in the Azure portal Along the way, you’ll see screenshots from my demo environment to illustrate each feature.   Why Use Fleet Manager? Managing dozens or even hun…  ( 31 min )
    Azure Kubernetes Fleet Manager Demo with Terraform Code
    Introduction Azure Kubernetes Fleet Manager (Fleet Manager) simplifies the at-scale management of multiple Azure Kubernetes Service (AKS) clusters by treating them as a coordinated “fleet.” One Fleet Manager hub can manage up to 100 AKS clusters in a single Azure AD tenant and region scope, so you can register, organize, and operate a large number of clusters from a single control plane. In this walkthrough, we’ll explore: The key benefits and considerations of using Fleet Manager A real-world e-commerce use case How to deploy a Fleet Manager hub, AKS clusters, and Azure Front Door with Terraform How everything looks and works in the Azure portal Along the way, you’ll see screenshots from my demo environment to illustrate each feature.   Why Use Fleet Manager? Managing dozens or even hun…
  • Open

    Guest Blog: SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel
    Today we’re excited to welcome Vic Perdana, as a guest author on the Semantic Kernel blog today to cover his work on a SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel. We’ll turn it over to Vic to dive in further. Everywhere you look lately, the buzz is about AI agents. But […] The post Guest Blog: SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel appeared first on Semantic Kernel.  ( 28 min )
  • Open

    How Xi’an Jiaotong-Liverpool University scaled hands-on learning with Microsoft Dev Box
    As AI and data science rapidly reshape industries, universities worldwide are rethinking how they deliver hands-on learning. At Xi’an Jiaotong-Liverpool University (XJTLU) in China, the School of AI and Advanced Computing embraced Microsoft Dev Box to give students a modern, scalable, and real-world development environment—right from their first year. Here’s how XJTLU transformed their curriculum […] The post How Xi’an Jiaotong-Liverpool University scaled hands-on learning with Microsoft Dev Box appeared first on Develop from the cloud.  ( 23 min )

  • Open

    Understanding Azure OpenAI Service Quotas and Limits: A Beginner-Friendly Guide
    Azure OpenAI Service allows developers, researchers, and students to integrate powerful AI models like GPT-4, GPT-3.5, and DALL·E into their applications. But with great power comes great responsibility and limits. Before you dive into building your next AI-powered solution, it's crucial to understand how quotas and limits work in the Azure OpenAI ecosystem. This guide is designed to help students and beginners easily understand the concept of quotas, limits, and how to manage them effectively. What Are Quotas and Limits? Think of Azure's quotas as your "AI data pack." It defines how much you can use the service. Meanwhile, limits are hard boundaries set by Azure to ensure fair use and system stability. Quota The maximum number of resources (e.g., tokens, requests) allocated to your Az…  ( 24 min )
    Understanding Azure OpenAI Service Quotas and Limits: A Beginner-Friendly Guide
    Azure OpenAI Service allows developers, researchers, and students to integrate powerful AI models like GPT-4, GPT-3.5, and DALL·E into their applications. But with great power comes great responsibility and limits. Before you dive into building your next AI-powered solution, it's crucial to understand how quotas and limits work in the Azure OpenAI ecosystem. This guide is designed to help students and beginners easily understand the concept of quotas, limits, and how to manage them effectively. What Are Quotas and Limits? Think of Azure's quotas as your "AI data pack." It defines how much you can use the service. Meanwhile, limits are hard boundaries set by Azure to ensure fair use and system stability. Quota The maximum number of resources (e.g., tokens, requests) allocated to your Az…
  • Open

    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough
    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough  Over the past few months, we’ve been collaborating closely with AMD to deliver a new level of performance for large-scale inference—starting with the DeepSeek-R1 and DeepSeek-V3 models on Azure AI Foundry.  Through day-by-day improvements on inference frameworks and major kernels and shared engineering investment, we’ve significantly accelerated inference on AMD MI300 hardware, reaching competitive performance with traditional NVIDIA alternatives. The result? Faster output, and more flexibility for Models-as-a-Service (MaaS) customers using DeepSeek models.  Why AMD MI300?  While many enterprise workloads are optimized for NVIDIA GPUs, AMD’s MI300 architecture has proven to be a strong contender—especially for la…  ( 31 min )
    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough
    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough  Over the past few months, we’ve been collaborating closely with AMD to deliver a new level of performance for large-scale inference—starting with the DeepSeek-R1 and DeepSeek-V3 models on Azure AI Foundry.  Through day-by-day improvements on inference frameworks and major kernels and shared engineering investment, we’ve significantly accelerated inference on AMD MI300 hardware, reaching competitive performance with traditional NVIDIA alternatives. The result? Faster output, and more flexibility for Models-as-a-Service (MaaS) customers using DeepSeek models.  Why AMD MI300?  While many enterprise workloads are optimized for NVIDIA GPUs, AMD’s MI300 architecture has proven to be a strong contender—especially for la…
  • Open

    Migrating Cloud-Based Databases from AWS to Azure: Key Insights and Best Practices
    Migrating your cloud-based databases to Microsoft Azure can be a transformative journey, offering enhanced performance, scalability, and security. I’ve had the opportunity to dive deep into this process, and I’m excited to share the key points that can make your migration smooth and successful. If you're using the Azure Migration Hub as your starting point, you're already ahead. But when migrating to cloud-based databases, a few key details can make or break your deployment. Some key considerations for migrating are infrastructure and configuration, web application firewall configuration, DNS, hostnames, and session management. Databases you can migrate to Azure PostgreSQL Advanced Security Features: Integration with Azure Key Vault ensures secure storage and management of encryption keys …  ( 30 min )
    Migrating Cloud-Based Databases from AWS to Azure: Key Insights and Best Practices
    Migrating your cloud-based databases to Microsoft Azure can be a transformative journey, offering enhanced performance, scalability, and security. I’ve had the opportunity to dive deep into this process, and I’m excited to share the key points that can make your migration smooth and successful. If you're using the Azure Migration Hub as your starting point, you're already ahead. But when migrating to cloud-based databases, a few key details can make or break your deployment. Some key considerations for migrating are infrastructure and configuration, web application firewall configuration, DNS, hostnames, and session management. Databases you can migrate to Azure PostgreSQL Advanced Security Features: Integration with Azure Key Vault ensures secure storage and management of encryption keys …
  • Open

    Public Preview: Metrics usage insights for Azure Monitor Workspace
    As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data.   Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. In a time where every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential.   To …  ( 23 min )
    Public Preview: Metrics usage insights for Azure Monitor Workspace
    As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data.   Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. In a time where every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential.   To …
  • Open

    AI Resilience: Strategies to Keep Your Intelligent App Running at Peak Performance
    Stay Online Reliability. It's one of the 5 pillars of Azure Well-Architect Framework.  When starting to implement and go-to-market any new product witch has any integration with Open AI Service you can face spikes of usage in your workload and, even having everything scaling correctly in your side, if you have an Azure Open AI Services deployed using PTU you can reach the PTU threshold and them start to experience some 429 response code. You also will receive some important information about the when you can retry the request in the header of the response and with this information you can implement in your business logic a solution. Here in this article I will show how to use the API Management Service policy to handle this and also explore the native cache to save some tokens! Architectur…  ( 25 min )
    AI Resilience: Strategies to Keep Your Intelligent App Running at Peak Performance
    Stay Online Reliability. It's one of the 5 pillars of Azure Well-Architect Framework.  When starting to implement and go-to-market any new product witch has any integration with Open AI Service you can face spikes of usage in your workload and, even having everything scaling correctly in your side, if you have an Azure Open AI Services deployed using PTU you can reach the PTU threshold and them start to experience some 429 response code. You also will receive some important information about the when you can retry the request in the header of the response and with this information you can implement in your business logic a solution. Here in this article I will show how to use the API Management Service policy to handle this and also explore the native cache to save some tokens! Architectur…

  • Open

    Customer Case Study: Microsoft Store Assistant — bringing multi expert intelligence to Microsoft Store chat with Semantic Kernel and Azure AI
    Introduction In October 2024 Microsoft replaced a legacy rule‑based chat bot on Microsoft Store with Microsoft Store Assistant, powered by Azure Open AI, Semantic Kernel, and real‑time page context. The transformation changed a scripted, button-driven experience into a conversation that comprehends the entire public Microsoft portfolio, including Surface and Xbox products, Microsoft 365 subscriptions, Azure services, and the Dynamics and Power Platform […] The post Customer Case Study: Microsoft Store Assistant — bringing multi expert intelligence to Microsoft Store chat with Semantic Kernel and Azure AI appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Microsoft 365 Certification control spotlight: HIPAA
    Learn how Microsoft 365 Certification verifies that ISVs have established protocols for managing health information, dealing with emergencies and service disruptions, and complying with key HIPAA regulations. The post Microsoft 365 Certification control spotlight: HIPAA appeared first on Microsoft 365 Developer Blog.  ( 23 min )
    Announcing SharePoint Framework 1.21 with updates on building enterprise extensibility within Microsoft 365
    We are excited to announce general availability for the SharePoint Framework 1.21. This time focus is primarily on technical platform updates and new UX options in SharePoint and in Viva Connections. The post Announcing SharePoint Framework 1.21 with updates on building enterprise extensibility within Microsoft 365 appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Mastering Getting Started with Agents: Your On-Demand Resource Hub
    What’s Included in Your Learning Journey? Explore on-demand sessions broken down by week, topic, and track, providing targeted guidance for developers using Python, Java, C#, JavaScript, and more. Here's a peek at what you can expect:Foundational insights into building agents TopicTrack Build your code-first app with Azure AI Agent ServicePython AI Agents for Java using Azure AI FoundryJava Build your code-first app with Azure AI Agent ServicePython Build and extend agents for Microsoft 365 CopilotCopilots Transforming business processes with multi-agent AI using Semantic KernelPython Build your code-first app with Azure AI Agent Service (.NET)C# Build more sophisticated agents and explore advanced capabilities: TopicTrack Building custom engine agents with Azure AI Foundry and Vis…  ( 23 min )
    Mastering Getting Started with Agents: Your On-Demand Resource Hub
    What’s Included in Your Learning Journey? Explore on-demand sessions broken down by week, topic, and track, providing targeted guidance for developers using Python, Java, C#, JavaScript, and more. Here's a peek at what you can expect:Foundational insights into building agents TopicTrack Build your code-first app with Azure AI Agent ServicePython AI Agents for Java using Azure AI FoundryJava Build your code-first app with Azure AI Agent ServicePython Build and extend agents for Microsoft 365 CopilotCopilots Transforming business processes with multi-agent AI using Semantic KernelPython Build your code-first app with Azure AI Agent Service (.NET)C# Build more sophisticated agents and explore advanced capabilities: TopicTrack Building custom engine agents with Azure AI Foundry and Vis…
  • Open

    Week 3 . Microsoft Agents Hack Online Events and Readiness Resources
    Readiness and skilling events for Week 3: Microsoft AI Agents Hack Register Now at https://aka.ms/agentshack https://aka.ms/agentshack     2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development.Register Now: https://aka.ms/agentshack 🔥 Learn from expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, Autogen, the new Azure AI Agents SDK and the Microsoft 365 Agents SDK. Week 3: April 21st-25th LIVE & ONDEMAND Day/TimeTopicTrack 4/21 12:00 PM PTKnowledge-augmented agents with LlamaIndex.TSJS 4/22 06:00 AM PTBuilding a AI Agent with Prompty and Azure AI FoundryPython 4/22 09:00 AM PTReal-time Multi-Agent LLM solutions with SignalR, gRPC, and HTTP based on Semantic KernelC# 4/22 10:30 AM PTLearn Live: Fundamentals of AI agents on Azure- 4/22 12:00 PM PTDemystifying Agents: Building an AI Agent from Scratch on Your Own Data using Azure SQLC# 4/22 03:00 PM PTVoiceRAG: talk to your dataPython 4/23 09:00 AM PTBuilding Multi-Agent Apps on top of Azure PostgreSQLPython 4/23 12:00 PM PTAgentic RAG with reflectionPython 4/23 03:00 PM PTMulti-source data patterns for modern RAG appsC# 4/24 06:00 AM PTEngineering agents that Think, Act, and Govern themselvesC# 4/24 09:00 AM PTExtending AI Agents with Azure FunctionsPython, C# 4/24 12:00 PM PTBuild real time voice agents with Azure Communication ServicesPython 🌟 Join the Conversation on Azure AI Foundry Discussions! 🌟Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that’s shaping the future of artificial intelligence. 🧠✨👉 Click here to join the discussion!  ( 21 min )
    Week 3 . Microsoft Agents Hack Online Events and Readiness Resources
    Readiness and skilling events for Week 3: Microsoft AI Agents Hack Register Now at https://aka.ms/agentshack https://aka.ms/agentshack     2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development.Register Now: https://aka.ms/agentshack 🔥 Learn from expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, Autogen, the new Azure AI Agents SDK and the Microsoft 365 Agents SDK. Week 3: April 21st-25th LIVE & ONDEMAND Day/TimeTopicTrack 4/21 12:00 PM PTKnowledge-augmented agents with LlamaIndex.TSJS 4/22 06:00 AM PTBuilding a AI Agent with Prompty and Azure AI FoundryPython 4/22 09:00 AM PTReal-time Multi-Agent LLM solutions with SignalR, gRPC, and HTTP based on Semantic KernelC# 4/22 10:30 AM PTLearn Live: Fundamentals of AI agents on Azure- 4/22 12:00 PM PTDemystifying Agents: Building an AI Agent from Scratch on Your Own Data using Azure SQLC# 4/22 03:00 PM PTVoiceRAG: talk to your dataPython 4/23 09:00 AM PTBuilding Multi-Agent Apps on top of Azure PostgreSQLPython 4/23 12:00 PM PTAgentic RAG with reflectionPython 4/23 03:00 PM PTMulti-source data patterns for modern RAG appsC# 4/24 06:00 AM PTEngineering agents that Think, Act, and Govern themselvesC# 4/24 09:00 AM PTExtending AI Agents with Azure FunctionsPython, C# 4/24 12:00 PM PTBuild real time voice agents with Azure Communication ServicesPython 🌟 Join the Conversation on Azure AI Foundry Discussions! 🌟Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that’s shaping the future of artificial intelligence. 🧠✨👉 Click here to join the discussion!
    VS Code Live: Agent Mode Day Highlights
    🎙️ Featuring: Olivia McVicker, Cassidy Williams, Burke Holland, Harald Kirschner, Toby Padilla, Rob Lourens, Tim Rogers, James Montemagno, Don Jayamanne, Brigit Murtaugh, Chris Harrison. What is Agent Mode? Agent Mode in VS Code represents a leap beyond traditional AI code completion. Instead of simply suggesting code snippets, Agent Mode empowers the AI to: Write, edit, and iterate on code Run terminal commands autonomously Fix its own mistakes during the workflow Interact with external tools, APIs, and services This creates a more dynamic, "agentic" coding partner that can automate complex tasks, reduce manual intervention, and keep developers in their flow. Agent Mode is accessible directly in VS Code and integrates seamlessly with GitHub Copilot, making advanced AI capabilities avai…  ( 26 min )
    VS Code Live: Agent Mode Day Highlights
    🎙️ Featuring: Olivia McVicker, Cassidy Williams, Burke Holland, Harald Kirschner, Toby Padilla, Rob Lourens, Tim Rogers, James Montemagno, Don Jayamanne, Brigit Murtaugh, Chris Harrison. What is Agent Mode? Agent Mode in VS Code represents a leap beyond traditional AI code completion. Instead of simply suggesting code snippets, Agent Mode empowers the AI to: Write, edit, and iterate on code Run terminal commands autonomously Fix its own mistakes during the workflow Interact with external tools, APIs, and services This creates a more dynamic, "agentic" coding partner that can automate complex tasks, reduce manual intervention, and keep developers in their flow. Agent Mode is accessible directly in VS Code and integrates seamlessly with GitHub Copilot, making advanced AI capabilities avai…
  • Open

    Streaming and Analyzing Azure Storage Diagnostic Logs via Event Hub using Service Bus Explorer
    Monitoring Azure Storage operations is crucial for ensuring performance, compliance, and security. Azure provides various options to collect and route diagnostic logs. One powerful option is sending logs to Azure Event Hub, which allows real-time streaming and integration with external tools and analytics platforms. In this blog, we’ll walk through setting up diagnostic logging for an Azure Storage account with Event Hub as the destination, and then demonstrate how to analyse incoming logs using Service Bus Explorer.   Prerequisites Before we begin, make sure you have the following set up: 1. Azure Event Hub Configuration An Event Hub namespace and instance set up in your Azure subscription. 2. Service Bus Explorer Tool We'll use Service Bus Explorer to connect to Event Hub and analyse l…  ( 25 min )
    Streaming and Analyzing Azure Storage Diagnostic Logs via Event Hub using Service Bus Explorer
    Monitoring Azure Storage operations is crucial for ensuring performance, compliance, and security. Azure provides various options to collect and route diagnostic logs. One powerful option is sending logs to Azure Event Hub, which allows real-time streaming and integration with external tools and analytics platforms. In this blog, we’ll walk through setting up diagnostic logging for an Azure Storage account with Event Hub as the destination, and then demonstrate how to analyse incoming logs using Service Bus Explorer.   Prerequisites Before we begin, make sure you have the following set up: 1. Azure Event Hub Configuration An Event Hub namespace and instance set up in your Azure subscription. 2. Service Bus Explorer Tool We'll use Service Bus Explorer to connect to Event Hub and analyse l…
    Tips for Migrating Azure Event Hub from Standard to Basic Tier Using Scripts
    Introduction What are Event Hubs? Azure Event Hub is a big data streaming platform and event ingestion service by Microsoft Azure. It’s designed to ingest, buffer, store, and process millions of events per second in real time.   Feature Comparison The Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following features are included with Standard:   Feature Basic Tier Standard Tier Capture Feature                         ❌ Not available ✅ Available Virtual Network Integration ❌ Not available ✅ Available Auto-Inflate ❌ Not available ✅ Available Consumer Groups Limited (only 1 group) Up to 20 consumer groups Message Retention Up to 1 day Up to 7 days   Many organizations or users choose to downgrade their Event Hu…  ( 24 min )
    Tips for Migrating Azure Event Hub from Standard to Basic Tier Using Scripts
    Introduction What are Event Hubs? Azure Event Hub is a big data streaming platform and event ingestion service by Microsoft Azure. It’s designed to ingest, buffer, store, and process millions of events per second in real time.   Feature Comparison The Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following features are included with Standard:   Feature Basic Tier Standard Tier Capture Feature                         ❌ Not available ✅ Available Virtual Network Integration ❌ Not available ✅ Available Auto-Inflate ❌ Not available ✅ Available Consumer Groups Limited (only 1 group) Up to 20 consumer groups Message Retention Up to 1 day Up to 7 days   Many organizations or users choose to downgrade their Event Hu…
  • Open

    Using the CUA model in Azure OpenAI for procure to Pay Automation
    Solution Architecture   The solution leverages a comprehensive stack of Azure technologies: **Azure OpenAI Service**: Powers core AI capabilities Responses API: Orchestrates the workflow, by calling the tools below and performing actions automatically.  Computer Using Agent (CUA) model: Enables browser automation. This is called through Function Calling, since there are other steps to be performed between the calls to this model, where the gpt-4o model is used, like reasoning through vision, performing vector search and evaluating business rules for anomalies detection. GPT-4o: Processes invoice images with vision capabilities Vector store: Maintains business rules and documentation Azure Container Apps: Hosts procurement web applications Azure SQL Database: Stores contract and procur…  ( 39 min )
    Using the CUA model in Azure OpenAI for procure to Pay Automation
    Solution Architecture   The solution leverages a comprehensive stack of Azure technologies: **Azure OpenAI Service**: Powers core AI capabilities Responses API: Orchestrates the workflow, by calling the tools below and performing actions automatically.  Computer Using Agent (CUA) model: Enables browser automation. This is called through Function Calling, since there are other steps to be performed between the calls to this model, where the gpt-4o model is used, like reasoning through vision, performing vector search and evaluating business rules for anomalies detection. GPT-4o: Processes invoice images with vision capabilities Vector store: Maintains business rules and documentation Azure Container Apps: Hosts procurement web applications Azure SQL Database: Stores contract and procur…
    Using Azure OpenAI's Computer Using Agent for procure to Pay Automation
    Solution Architecture   The solution leverages a comprehensive stack of Azure technologies: **Azure OpenAI Service**: Powers core AI capabilities Responses API: Orchestrates the workflow, by calling the tools below and performing actions automatically.  Computer Using Agent (CUA) model: Enables browser automation. This is called through Function Calling, since there are other steps to be performed between the calls to this model, where the gpt-4o model is used, like reasoning through vision, performing vector search and evaluating business rules for anomalies detection. GPT-4o: Processes invoice images with vision capabilities Vector store: Maintains business rules and documentation Azure Container Apps: Hosts procurement web applications Azure SQL Database: Stores contract and procur…
    SLM Model Weight Merging for Federated Multi-tenant Requirements
    Model merging is a technique for combining the model parameters of multiple models, specifically finetuned variants of a common base model, into a single unified model. In the context of Small Language Models (SLMs), which are lightweight and efficient, merging allows us to have variants of a domain specialized base model to suit different tenant-specific requirements (like fine-tune the base model on their own data set), and enable transfer of the model parameters to the base model without the need to expose the data used for tenant-specific requirements. Model merging operates at the parameter level, using techniques such as weighted averaging, SLERP (Spherical Linear Interpolation), task arithmetic, or advanced methods like TIES leading to a model that preserves both the general abiliti…  ( 39 min )
    SLM Model Weight Merging for Federated Multi-tenant Requirements
    Model merging is a technique for combining the model parameters of multiple models, specifically finetuned variants of a common base model, into a single unified model. In the context of Small Language Models (SLMs), which are lightweight and efficient, merging allows us to have variants of a domain specialized base model to suit different tenant-specific requirements (like fine-tune the base model on their own data set), and enable transfer of the model parameters to the base model without the need to expose the data used for tenant-specific requirements. Model merging operates at the parameter level, using techniques such as weighted averaging, SLERP (Spherical Linear Interpolation), task arithmetic, or advanced methods like TIES leading to a model that preserves both the general abiliti…
    Tracing your Semantic Kernel Agents with Azure AI Foundry
    Many of us have encountered questions about monitoring Semantic Kernel Agents. As developers, we want to understand several aspects: the prompts sent by the Kernel to the AI Service, the behind-the-scenes processes when the Kernel calls the functions we added as plugins, and the token usage during the communication between the AI Service and the Kernel. These are all excellent questions that boil down to how we can observe Semantic Kernel. We can start answering these questions with the use of Azure AI Foundry. So let's dive into it! Adding the Azure AI Inference Connector to the Kernel  The key aspect is to replace the chat completion service that we normally add to the Kernel, by  the Azure AI Inference connector from the Azure Inference Client Library. This connector will automatically…  ( 24 min )
    Tracing your Semantic Kernel Agents with Azure AI Foundry
    Many of us have encountered questions about monitoring Semantic Kernel Agents. As developers, we want to understand several aspects: the prompts sent by the Kernel to the AI Service, the behind-the-scenes processes when the Kernel calls the functions we added as plugins, and the token usage during the communication between the AI Service and the Kernel. These are all excellent questions that boil down to how we can observe Semantic Kernel. We can start answering these questions with the use of Azure AI Foundry. So let's dive into it! Adding the Azure AI Inference Connector to the Kernel  The key aspect is to replace the chat completion service that we normally add to the Kernel, by  the Azure AI Inference connector from the Azure Inference Client Library. This connector will automatically…
  • Open

    Introduction of item limits in a Fabric workspace
    Previously, there were no restrictions on the number of Fabric items that could be created in a workspace, with a limit for Power BI items already being enforced. Even though this allows flexibility for our users, having too many items in workspaces reduces the overall user friendliness and effectiveness of the platform. As of April … Continue reading “Introduction of item limits in a Fabric workspace”  ( 6 min )
    Passing parameter values to refresh a Dataflow Gen2 (Preview)
    Parameters in Dataflow Gen2 enhance flexibility by allowing dynamic adjustments without altering the dataflow itself. They simplify organization, reduce redundancy, and centralize control, making workflows more efficient and adaptable to varying inputs and scenarios. Leveraging query parameters while authoring Dataflows Gen2 has been possible for a long time, however, it was not possible to override … Continue reading “Passing parameter values to refresh a Dataflow Gen2 (Preview)”  ( 7 min )
  • Open

    Important Updates to Container Images of Microsoft Build of OpenJDK
    Mariner Linux 2.0 will reach its End-Of-Life (EOL) in July of 2025 and will be replaced with Azure Linux (version 3.0). To ensure a smooth transition for our customers and partners, the Java Engineering Group (DevDiv JEG) behind the Microsoft Build of OpenJDK has developed a migration aligned with this timeline. This strategy takes effect […] The post Important Updates to Container Images of Microsoft Build of OpenJDK appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Announcing Public Preview of Larger Container Sizes on Azure Container Instances
    ACI provides a fast and simple way to run containers in the cloud. As a serverless solution, ACI eliminates the need to manage underlying infrastructure, automatically scaling to meet application demands. Customers benefit from using ACI because it offers flexible resource allocation, pay-per-use pricing, and rapid deployment, making it easier to focus on development and innovation without worrying about infrastructure management.  Today, we are excited to announce the public preview of larger container sizes on Azure Container Instances (ACI). Customers can now deploy workloads with higher vCPU and memory for standard containers, confidential containers, containers with virtual networks, and containers utilizing virtual nodes to connect to Azure Kubernetes Service (AKS). ACI now supports …  ( 26 min )
    Announcing Public Preview of Larger Container Sizes on Azure Container Instances
    ACI provides a fast and simple way to run containers in the cloud. As a serverless solution, ACI eliminates the need to manage underlying infrastructure, automatically scaling to meet application demands. Customers benefit from using ACI because it offers flexible resource allocation, pay-per-use pricing, and rapid deployment, making it easier to focus on development and innovation without worrying about infrastructure management.  Today, we are excited to announce the public preview of larger container sizes on Azure Container Instances (ACI). Customers can now deploy workloads with higher vCPU and memory for standard containers, confidential containers, containers with virtual networks, and containers utilizing virtual nodes to connect to Azure Kubernetes Service (AKS). ACI now supports …
  • Open

    Spring Cleaning: A CTA for Azure DevOps OAuth Apps with expired or long-living secrets
    Today, we officially closed the doors on any new Azure DevOps OAuth app registrations. As we prepare for the end-of-life for Azure DevOps OAuth apps in 2026, we’ll begin outreach to engage existing app owners and support them through the migration process to use the Microsoft Identity platform instead for future app development with Azure […] The post Spring Cleaning: A CTA for Azure DevOps OAuth Apps with expired or long-living secrets appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Migrate or modernize your applications using Azure Migrate
    Introduction The journey to the cloud is an essential step for modern enterprises looking to leverage the benefits of security, innovation (AI), scalability, flexibility, and cost-efficiency.  To help unlock these benefits migration or modernization to Azure is critical for reasons such as colocation of IT assets. A crucial part of this transformation is understanding the current state of your IT infrastructure, including workloads, applications, and their interdependencies. Often, organizations aim to set their migration goals based on the applications they want to move to the cloud, rather than focusing on individual servers or databases in isolation. In our endeavour to both simplify and enrich your cloud adoption journey. We are introducing new capabilities in Azure Migrate to help you…  ( 28 min )
    Migrate or modernize your applications using Azure Migrate
    Introduction The journey to the cloud is an essential step for modern enterprises looking to leverage the benefits of security, innovation (AI), scalability, flexibility, and cost-efficiency.  To help unlock these benefits migration or modernization to Azure is critical for reasons such as colocation of IT assets. A crucial part of this transformation is understanding the current state of your IT infrastructure, including workloads, applications, and their interdependencies. Often, organizations aim to set their migration goals based on the applications they want to move to the cloud, rather than focusing on individual servers or databases in isolation. In our endeavour to both simplify and enrich your cloud adoption journey. We are introducing new capabilities in Azure Migrate to help you…

  • Open

    Build data-driven agents with curated data from OneLake
    Innovation doesn’t always happen in a straight line. From the invention of the World Wide Web, to the introduction of smartphones, technology often makes massive leaps that transform how we interact with the world almost overnight. Now we’re seeing the next great shift: the era of AI. This shift has been decades in the making, … Continue reading “Build data-driven agents with curated data from OneLake”  ( 9 min )
    Best practices for Microsoft Fabric GraphQL API performance
    Microsoft Fabric’s GraphQL API offers a powerful way to query data efficiently, but performance optimization is key to ensuring smooth and scalable applications. In this blog, we’ll explore best practices to maximize the efficiency of your Fabric GraphQL API. Whether you’re handling complex queries or optimizing response times, these strategies will help you get the best performance out of your GraphQL implementation  ( 7 min )
    Develop, test and deploy a user data functions in Microsoft Fabric using Visual Studio Code
    In this blog post, we will walk through the process of creating, testing, and deploying a user data function using Visual Studio Code, based on the official Microsoft Fabric documentation. This guide will help you understand the steps involved and provide a practical example to get you started.  ( 7 min )
    Build a Data Warehouse schema with Copilot for Data Warehouse
    As a data engineer, it is important to be able to efficiently organize, analyze and derive insights from your data so that you can drive informed and data-driven decisions across your organization. Having a well set up Data Warehouse, you can ensure data integrity, improve your query performance and support advanced analytics. Optimizing a Data … Continue reading “Build a Data Warehouse schema with Copilot for Data Warehouse”  ( 8 min )
    On-premises data gateway April 2025 release
    Here is the April2025 release of the on-premises data gateway (version 3000.266).  ( 5 min )
  • Open

    Exciting updates coming to the Microsoft 365 Developer Program
    We are excited to share a preview of upcoming updates to the Microsoft 365 Developer Program. The post Exciting updates coming to the Microsoft 365 Developer Program appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Learn Generative AI with JavaScript: Free and Interactive Course! 💡🤖
    In the latest video on my YouTube Channel, the Microsoft JavaScript + A.I Advocacy team presents an innovative initiative for developers who want to take their first steps with Artificial Intelligence: the free course Generative AI with JavaScript. Combining technical learning with a gamified experience, the course is an excellent gateway for those who want to explore Generative AI using JavaScript/TypeScript.   Let’s talk a bit more about the course and how it can help you become a more skilled and up-to-date developer with the latest tech trends. About the Course: Generative AI with JavaScript I recorded a video where I explain the main concepts covered in the course, including Generative AI techniques, practical examples, and tips to maximize your learning. If you haven’t watched it ye…  ( 29 min )
    Learn Generative AI with JavaScript: Free and Interactive Course! 💡🤖
    In the latest video on my YouTube Channel, the Microsoft JavaScript + A.I Advocacy team presents an innovative initiative for developers who want to take their first steps with Artificial Intelligence: the free course Generative AI with JavaScript. Combining technical learning with a gamified experience, the course is an excellent gateway for those who want to explore Generative AI using JavaScript/TypeScript.   Let’s talk a bit more about the course and how it can help you become a more skilled and up-to-date developer with the latest tech trends. About the Course: Generative AI with JavaScript I recorded a video where I explain the main concepts covered in the course, including Generative AI techniques, practical examples, and tips to maximize your learning. If you haven’t watched it ye…
  • Open

    Everything You Need to Know About Reasoning Models: o1, o3, o4-mini and Beyond
    Think AI has hit a wall? The latest reasoning models will make you reconsider everything. Contributors: Rafal Rutyna, Brady Leavitt, Julia Heseltine, Tierney Morgan, Liam Cavanagh, Riccardo Chiodaroli  There are new models coming out every week. So why should you care about reasoning models?Unlike previous AI offerings, reasoning models such as o1, o3, and o4-mini mark a fundamental shift in enterprise automation. For the first time, organizations can access AI with PhD-level intelligence—capable of automating business processes that require multi-step reasoning, expert-level analysis, and contextual decision making. Tasks that previously relied on human judgment—such as processing complex cases, analyzing fraud, or generating insights from data—can now be handled transparently, accurately…  ( 64 min )
    Everything You Need to Know About Reasoning Models: o1, o3, o4-mini and Beyond
    Think AI has hit a wall? The latest reasoning models will make you reconsider everything. Contributors: Rafal Rutyna, Brady Leavitt, Julia Heseltine, Tierney Morgan, Liam Cavanagh, Riccardo Chiodaroli  There are new models coming out every week. So why should you care about reasoning models?Unlike previous AI offerings, reasoning models such as o1, o3, and o4-mini mark a fundamental shift in enterprise automation. For the first time, organizations can access AI with PhD-level intelligence—capable of automating business processes that require multi-step reasoning, expert-level analysis, and contextual decision making. Tasks that previously relied on human judgment—such as processing complex cases, analyzing fraud, or generating insights from data—can now be handled transparently, accurately…
    Memory Management for AI Agents
    When we think about how humans function daily, memory plays a critical role beyond mere cognition. The brain has two primary types of memory: short-term and long-term. Short-term memory allows us to temporarily hold onto information, such as conversations or names, while long-term memory is where important knowledge and skills—like learning to walk or recalling a conversation from two weeks ago—are stored.   Memory operates by strengthening neural connections between events, facts, or concepts. These connections are reinforced by relevance and frequency of use, making frequently accessed memories easier to recall. Over time, we might forget information we no longer use because the brain prunes unused neural pathways, prioritizing the memories we frequently rely on. This can explain why rec…  ( 35 min )
    Memory Management for AI Agents
    When we think about how humans function daily, memory plays a critical role beyond mere cognition. The brain has two primary types of memory: short-term and long-term. Short-term memory allows us to temporarily hold onto information, such as conversations or names, while long-term memory is where important knowledge and skills—like learning to walk or recalling a conversation from two weeks ago—are stored.   Memory operates by strengthening neural connections between events, facts, or concepts. These connections are reinforced by relevance and frequency of use, making frequently accessed memories easier to recall. Over time, we might forget information we no longer use because the brain prunes unused neural pathways, prioritizing the memories we frequently rely on. This can explain why rec…
  • Open

    1-Bit Brilliance: BitNet on Azure App Service with Just a CPU
    In a world where running large language models typically demands GPUs and hefty cloud bills, Microsoft Research is reshaping the narrative with BitNet — a compact, 1-bit quantized transformer that delivers surprising capabilities even when deployed on modest hardware.  ( 4 min )
  • Open

    Java OpenJDK April 2025 Patch & Security Update
    Hello Java customers! We are happy to announce the latest April 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today. OpenJDK 21.0.7 OpenJDK 17.0.15 OpenJDK 11.0.27 Check our release notes page for details on fixes and enhancements. The source code of our builds is available now on GitHub for […] The post Java OpenJDK April 2025 Patch & Security Update appeared first on Microsoft for Java Developers.  ( 23 min )

  • Open

    Microsoft Purview protections for Copilot
    Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats. Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment. Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments. Control what content can be referenced in generated responses. Check out Microsoft 365 Copilot security and privacy basics. Uncover risky or sensitive interactions. Use DSP…  ( 41 min )
    Microsoft Purview protections for Copilot
    Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats. Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment. Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments. Control what content can be referenced in generated responses. Check out Microsoft 365 Copilot security and privacy basics. Uncover risky or sensitive interactions. Use DSP…
  • Open

    Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)
    Introducing distributed session-scoped temporary (#temp) tables in Fabric Data Warehouse and Fabric Lakehouse SQL Endpoints. #temp tables have been a feature of Microsoft SQL Server (and other database systems) for many years. In the current implementation of Fabric data warehouse, #temp tables are session scoped or local temp tables. Global temp tables are not included … Continue reading “Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)”  ( 7 min )
  • Open

    Combating Digitally Altered Images: Deepfake Detection
    In today's digital age, the rise of deepfake technology poses significant threats to credibility, privacy, and security. This article delves into our Deepfake Detection Project, a robust solution designed to combat the misuse of AI-generated content. Our team, comprising Microsoft Learn Student Ambassadors - Saksham Kumar and Rhythm Narang both from India embarked on this journey to create a tool that helps users verify the authenticity of digital images. Project Overview The Deepfake Detection Project aims to provide a reliable tool for detecting and classifying images as either real or deepfake. Our primary goal is to reduce the spread of misinformation, protect individuals from identity theft, and prevent the malicious use of AI technologies. By implementing this tool, we hope to safegu…  ( 29 min )
    Combating Digitally Altered Images: Deepfake Detection
    In today's digital age, the rise of deepfake technology poses significant threats to credibility, privacy, and security. This article delves into our Deepfake Detection Project, a robust solution designed to combat the misuse of AI-generated content. Our team, comprising Microsoft Learn Student Ambassadors - Saksham Kumar and Rhythm Narang both from India embarked on this journey to create a tool that helps users verify the authenticity of digital images. Project Overview The Deepfake Detection Project aims to provide a reliable tool for detecting and classifying images as either real or deepfake. Our primary goal is to reduce the spread of misinformation, protect individuals from identity theft, and prevent the malicious use of AI technologies. By implementing this tool, we hope to safegu…
  • Open

    How to get started quickly with setting up the Microsoft Dev Box service
    Last updated: April 21, 2025 🎯 Introduction Are you an IT admin, platform engineer, or a developer looking to explore a developer-centric, cloud-powered workstation experience? In this post, I’ll show you how to start quickly and try out Microsoft Dev Box using either a Microsoft 365 Business Premium or Microsoft 365 E3 plan — both […] The post How to get started quickly with setting up the Microsoft Dev Box service appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Guest Blog: Build an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel — in Just 4 Steps
    Today we’re excited to feature a returning guest author, Akshay Kokane to share his recent Medium article on Building an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel. We’ll turn it over to him to dive in! MCP! It’s the new buzzword in the AI world. So, I thought […] The post Guest Blog: Build an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel — in Just 4 Steps appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Introducing the Microsoft Graph API usage report
    Learn more about our new journey to give customers more insight and control over how applications access their data through Microsoft Graph. The post Introducing the Microsoft Graph API usage report appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    AI Agents: The Multi-Agent Design Pattern - Part 8
    Hi everyone, Shivam Goyal here! This blog series exploring AI agents, based on Microsoft's AI Agents for Beginners repository, continues. In previous posts ([links to parts 1-7 at the end]), we've built a solid foundation, exploring agent fundamentals, frameworks, and design principles. Now, we'll delve into the Multi-Agent Design Pattern, a powerful approach for tackling complex tasks by leveraging the collective intelligence of multiple specialized agents. Introduction to Multi-Agent Systems As you progress in building AI agent applications, you'll inevitably encounter scenarios where a single agent isn't enough. This is where the Multi-Agent Design Pattern comes into play. But how do you know when to transition to a multi-agent system and what are the benefits? When to Use Multi-Agent S…  ( 25 min )
    AI Agents: The Multi-Agent Design Pattern - Part 8
    Hi everyone, Shivam Goyal here! This blog series exploring AI agents, based on Microsoft's AI Agents for Beginners repository, continues. In previous posts ([links to parts 1-7 at the end]), we've built a solid foundation, exploring agent fundamentals, frameworks, and design principles. Now, we'll delve into the Multi-Agent Design Pattern, a powerful approach for tackling complex tasks by leveraging the collective intelligence of multiple specialized agents. Introduction to Multi-Agent Systems As you progress in building AI agent applications, you'll inevitably encounter scenarios where a single agent isn't enough. This is where the Multi-Agent Design Pattern comes into play. But how do you know when to transition to a multi-agent system and what are the benefits? When to Use Multi-Agent S…

  • Open

    Using Azure Functions to read Azure VMware Solution data via Powershell, PowerCLI, and the API
    Gotcha’s Functions are region specific. Functions require a small (ie /28) Do not use “ –  “ if Azure functions, use “ _ “ if you need a spacer. Pay attention to the sections on RBAC and system managed identities.   Build Order   Create or identify the Log Analytics Workspace that will be used for storage diagnostics.   Create a KeyVault and set the RBAC properties to allow creation of secrets.   Create KeyVault secrets to hold the account IDs and account passwords that will be used to authenticate to the vCenter Server, NSX and HCX appliances.   Create/identify a storage account and an empty subnet that will be used by the function.   Create the Function App   Creating the Functions App common variable retrieval code (shared by all functions)   Example 1: Retrieving vCenter…  ( 60 min )
    Using Azure Functions to read Azure VMware Solution data via Powershell, PowerCLI, and the API
    Gotcha’s Functions are region specific. Functions require a small (ie /28) Do not use “ –  “ if Azure functions, use “ _ “ if you need a spacer. Pay attention to the sections on RBAC and system managed identities.   Build Order   Create or identify the Log Analytics Workspace that will be used for storage diagnostics.   Create a KeyVault and set the RBAC properties to allow creation of secrets.   Create KeyVault secrets to hold the account IDs and account passwords that will be used to authenticate to the vCenter Server, NSX and HCX appliances.   Create/identify a storage account and an empty subnet that will be used by the function.   Create the Function App   Creating the Functions App common variable retrieval code (shared by all functions)   Example 1: Retrieving vCenter…
    Using Azure Functions to read AVS data via Powershell, PowerCLI, and the API
    Gotcha’s Functions are region specific. Functions require a small (ie /28) Do not use “ –  “ if Azure functions, use “ _ “ if you need a spacer. Pay attention to the sections on RBAC and system managed identities.   Build Order   Create or identify the Log Analytics Workspace that will be used for storage diagnostics.   Create a KeyVault and set the RBAC properties to allow creation of secrets.   Create KeyVault secrets to hold the account IDs and account passwords that will be used to authenticate to the vCenter, NSX-T and HCX appliances.   Create/identify a storage account and an empty subnet that will be used by the function.   Create the Function App   Creating the Functions App common variable retrieval code (shared by all functions)   Example 1: Retrieving vCenter data…

  • Open

    Arizona Department of Transportation Innovates with Azure AI Vision
    The Arizona Department of Transportation (ADOT) is committed to providing safe and efficient transportation services to the residents of Arizona. With a focus on innovation and customer service, ADOT’s Motor Vehicle Division (MVD) continually seeks new ways to enhance its services and improve the overall experience for its residents. The challenge ADOT MVD had a tough challenge to ensure the security and authenticity of transactions, especially those involving sensitive information. Every day, the department needs to verify thousands of customers seeking to use its online services to perform activities like updating customer information including addresses, renewing vehicle registrations, ordering replacement driver licenses, and ordering driver and vehicle records. Traditional methods of …  ( 32 min )
    Arizona Department of Transportation Innovates with Azure AI Vision
    The Arizona Department of Transportation (ADOT) is committed to providing safe and efficient transportation services to the residents of Arizona. With a focus on innovation and customer service, ADOT’s Motor Vehicle Division (MVD) continually seeks new ways to enhance its services and improve the overall experience for its residents. The challenge ADOT MVD had a tough challenge to ensure the security and authenticity of transactions, especially those involving sensitive information. Every day, the department needs to verify thousands of customers seeking to use its online services to perform activities like updating customer information including addresses, renewing vehicle registrations, ordering replacement driver licenses, and ordering driver and vehicle records. Traditional methods of …

  • Open

    Azure Boards + GitHub: Recent Updates
    Over the past several months, we’ve delivered a series of improvements to the Azure Boards + GitHub integration. Whether you’re tracking code, managing pull requests, or connecting pipelines, these updates aim to simplify and strengthen the link between your work items and your GitHub activity. Here’s a recap of everything we’ve released (or are just […] The post Azure Boards + GitHub: Recent Updates appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Why Azure AI Is Retail’s Secret Sauce
    Executive Summary Leading RCG enterprises are standardizing on Azure AI—specifically Azure OpenAI Service, Azure Machine Learning, Azure AI Search, and Azure AI Vision—to increase digital‑channel conversion, sharpen demand forecasts, automate store execution, and accelerate product innovation. Documented results include up to 30 percent uplift in search conversion, 10 percent reduction in stock‑outs, and multimillion‑dollar productivity gains. This roadmap consolidates field data from CarMax, Kroger, Coca‑Cola, Estée Lauder, PepsiCo and Microsoft reference architectures to guide board‑level investment and technology planning. 1 Strategic Value of Azure AI Azure AI delivers state‑of‑the‑art language (GPT‑4o, GPT-4.1), reasoning (o1, o3, o4-mini) and multimodal (Phi‑3 Vision) models through …  ( 25 min )
    Why Azure AI Is Retail’s Secret Sauce
    Executive Summary Leading RCG enterprises are standardizing on Azure AI—specifically Azure OpenAI Service, Azure Machine Learning, Azure AI Search, and Azure AI Vision—to increase digital‑channel conversion, sharpen demand forecasts, automate store execution, and accelerate product innovation. Documented results include up to 30 percent uplift in search conversion, 10 percent reduction in stock‑outs, and multimillion‑dollar productivity gains. This roadmap consolidates field data from CarMax, Kroger, Coca‑Cola, Estée Lauder, PepsiCo and Microsoft reference architectures to guide board‑level investment and technology planning. 1 Strategic Value of Azure AI Azure AI delivers state‑of‑the‑art language (GPT‑4o, GPT-4.1), reasoning (o1, o3, o4-mini) and multimodal (Phi‑3 Vision) models through …
  • Open

    FabCon Las Vegas keynote recording now available
    Record-Breaking Attendance The Microsoft Fabric Community Conference (FabCon) was a monumental success with over 6,000 attendees, 200+ breakout sessions, 20 workshops, and 70+ sponsors. Day 1 keynotes at the T-Mobile Arena featured a packed auditorium of Fabric enthusiasts ready to discover the future of Microsoft’s unified AI platform. While FabCon was an in-person only event, … Continue reading “FabCon Las Vegas keynote recording now available”  ( 7 min )
  • Open

    Integrating Semantic Kernel Python with Google’s A2A Protocol
    Google’s Agent-to-Agent (A2A) protocol is designed to enable seamless interoperability among diverse AI agents. Microsoft’s Semantic Kernel (SK), an open-source platform for orchestrating intelligent agent interactions, is now being integrated into the A2A ecosystem. In this blog, we demonstrate how Semantic Kernel agents can easily function as an A2A Server, efficiently routing agent calls to […] The post Integrating Semantic Kernel Python with Google’s A2A Protocol appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Introducing MAI-DS-R1
    Authors: Samer Hassan, Doran Chakraborty, Qi Zhang ,Yuan Yu Today we’re releasing MAI-DS-R1, a new open weights DeepSeek R1 model variant, via both Azure AI Foundry and HuggingFace. This new model has been post-trained by the Microsoft AI team to improve its responsiveness on blocked topics and its risk profile, while maintaining its reasoning capabilities and competitive performance.  Key results:  MAI-DS-R1 successfully responds to 99.3% of prompts related to blocked topics, outperforming DeepSeek R1 by 2.2x, and matching Perplexity’s R1-1776.   MAI-DS-R1 also delivers higher satisfaction metrics on internal evals, outperforming DeepSeek R1 and R1-1776 by 2.1x and 1.3x, respectively.   MAI-DS-R1 outperforms both DeepSeek’s R1 and R1-1776 in reducing harmful content in both the “thin…  ( 86 min )
    Introducing MAI-DS-R1
    Authors: Samer Hassan, Doran Chakraborty, Qi Zhang ,Yuan Yu Today we’re releasing MAI-DS-R1, a new open weights DeepSeek R1 model variant, via both Azure AI Foundry and HuggingFace. This new model has been post-trained by the Microsoft AI team to improve its responsiveness on blocked topics and its risk profile, while maintaining its reasoning capabilities and competitive performance.  Key results:  MAI-DS-R1 successfully responds to 99.3% of prompts related to blocked topics, outperforming DeepSeek R1 by 2.2x, and matching Perplexity’s R1-1776.   MAI-DS-R1 also delivers higher satisfaction metrics on internal evals, outperforming DeepSeek R1 and R1-1776 by 2.1x and 1.3x, respectively.   MAI-DS-R1 outperforms both DeepSeek’s R1 and R1-1776 in reducing harmful content in both the “thin…

  • Open

    Microsoft 365 Copilot Power User Tips
    Take control of your workday — summarize long emails instantly, turn meeting transcripts into actionable plans, and build strategic documents in seconds using your own data with Microsoft 365 Copilot. Instead of chasing down context, ask natural prompts and get clear, detailed results complete with tone-matched writing, visual recaps, and real-time collaboration. Get up to speed on complex email threads, transform insights from missed meetings into next steps, and pull relevant content from across your calendar, inbox, and docs — all without switching tools or losing momentum. Mary Pasch, Microsoft 365 Principal PM, shows how whether you’re refining a plan in Word, responding in Outlook, or catching up in Teams, Copilot works behind the scenes to help you move faster and focus on what mat…  ( 43 min )
    Microsoft 365 Copilot Power User Tips
    Take control of your workday — summarize long emails instantly, turn meeting transcripts into actionable plans, and build strategic documents in seconds using your own data with Microsoft 365 Copilot. Instead of chasing down context, ask natural prompts and get clear, detailed results complete with tone-matched writing, visual recaps, and real-time collaboration. Get up to speed on complex email threads, transform insights from missed meetings into next steps, and pull relevant content from across your calendar, inbox, and docs — all without switching tools or losing momentum. Mary Pasch, Microsoft 365 Principal PM, shows how whether you’re refining a plan in Word, responding in Outlook, or catching up in Teams, Copilot works behind the scenes to help you move faster and focus on what mat…
    New reasoning agents: Researcher and Analyst in Microsoft 365 Copilot
    Analyze data and research with expertise on demand, and automate workflows with intelligent agents in Microsoft 365 Copilot. Analyst thinks like a data scientist and Researcher like an expert, so you can uncover insights, validate logic, and generate expert-level reports in minutes.  And using Microsoft Copilot Studio, build your own autonomous AI agents to streamline multi-step processes with deep reasoning, like responding to RFPs or synthesizing internal knowledge, incorporating Copilot Flows as automated actions. No need for perfect prompts — just describe what you need, and Copilot will reason through the task, surface key insights, and deliver actionable results faster than ever.  Jeremy Chapman, Microsoft 365 Director, walks you through how to use these AI-driven agents step-by-ste…  ( 41 min )
    New reasoning agents: Researcher and Analyst in Microsoft 365 Copilot
    Analyze data and research with expertise on demand, and automate workflows with intelligent agents in Microsoft 365 Copilot. Analyst thinks like a data scientist and Researcher like an expert, so you can uncover insights, validate logic, and generate expert-level reports in minutes.  And using Microsoft Copilot Studio, build your own autonomous AI agents to streamline multi-step processes with deep reasoning, like responding to RFPs or synthesizing internal knowledge, incorporating Copilot Flows as automated actions. No need for perfect prompts — just describe what you need, and Copilot will reason through the task, surface key insights, and deliver actionable results faster than ever.  Jeremy Chapman, Microsoft 365 Director, walks you through how to use these AI-driven agents step-by-ste…
    Microsoft Purview: New data security controls for the browser & network
    Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention  Keep sensitive data secure no matter where it …  ( 42 min )
    Microsoft Purview: New data security controls for the browser & network
    Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention  Keep sensitive data secure no matter where it …
  • Open

    Microsoft 365 Certification control spotlight: General Data Protection Regulation (GDPR)
    Read how Microsoft 365 Certification helps ISVs validate General Data Protection Regulation (GDPR) compliance. The post Microsoft 365 Certification control spotlight: General Data Protection Regulation (GDPR) appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Semantic Kernel adds Model Context Protocol (MCP) support for Python
    We are excited to announce that Semantic Kernel (SK) now has first-class support for the Model Context Protocol (MCP) — a standard created by Anthropic to enable models, tools, and agents to share context and capabilities seamlessly. With this release, SK can act as both an MCP host (client) and an MCP server, and you […] The post Semantic Kernel adds Model Context Protocol (MCP) support for Python appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Azure AI Search: Cut Vector Costs Up To 92.5% with New Compression Techniques
    TLDR: Key learnings from our compression technique evaluation Cost savings: Up to 92.5% reduction in monthly costs Storage efficiency: Vector index size reduced by up to 99% Speed improvement: Query response times up to 33% faster with compressed vectors Quality maintained: Many compression configurations maintain 99-100% of baseline relevance quality At scale, the cost of storing and querying large, high-dimensional vector indexes can balloon. The common trade-off? Either pay a premium to maintain top-tier search quality or sacrifice user experience to limit expenses. With Azure AI Search, you no longer have to choose. Through testing, we have identified ways to reduce system costs without compromising retrieval performance quality.   Our experiments show: 92.5% reduction in cost when…  ( 52 min )
    Azure AI Search: Cut Vector Costs Up To 92.5% with New Compression Techniques
    TLDR: Key learnings from our compression technique evaluation Cost savings: Up to 92.5% reduction in monthly costs Storage efficiency: Vector index size reduced by up to 99% Speed improvement: Query response times up to 33% faster with compressed vectors Quality maintained: Many compression configurations maintain 99-100% of baseline relevance quality At scale, the cost of storing and querying large, high-dimensional vector indexes can balloon. The common trade-off? Either pay a premium to maintain top-tier search quality or sacrifice user experience to limit expenses. With Azure AI Search, you no longer have to choose. Through testing, we have identified ways to reduce system costs without compromising retrieval performance quality.   Our experiments show: 92.5% reduction in cost when…
    Building an Interactive Feedback Review Agent with Azure AI Search and Haystack
    By Khye Wei (Azure AI Search) & Amna Mubashar (Haystack)   We’re excited to announce the integration of Haystack with Azure AI Search! To demonstrate its capabilities, we’ll walk you through building an interactive review agent to efficiently retrieve and analyze customer reviews. By combining Azure AI Search’s hybrid retrieval with Haystack’s flexible pipeline architecture, this agent provides deeper insights through sentiment analysis and intelligent summarization tools. Why Use Azure AI Search with Haystack? Azure AI Search offers an enterprise-grade retrieval system with battle-tested AI search technology, built for high performance GenAI applications at any scale: Hybrid Search: Combining keyword-based BM25 and vector-based searches with reciprocal rank fusion (RRF). Semantic Ranking…  ( 39 min )
    Building an Interactive Feedback Review Agent with Azure AI Search and Haystack
    We’re excited to announce the integration of Haystack with Azure AI Search! To demonstrate its capabilities, we’ll walk you through building an interactive review agent to efficiently retrieve and analyze customer reviews. By combining Azure AI Search’s hybrid retrieval with Haystack’s flexible pipeline architecture, this agent provides deeper insights through sentiment analysis and intelligent summarization tools. Why Use Azure AI Search with Haystack? Azure AI Search offers an enterprise-grade retrieval system with battle-tested AI search technology, built for high performance GenAI applications at any scale: Hybrid Search: Combining keyword-based BM25 and vector-based searches with reciprocal rank fusion (RRF). Semantic Ranking: Enhancing retrieval results using deep learning models. S…
    Bonus RAG Time Journey: Agentic RAG
    This is a bonus post for RAG Time, a 6-part educational series on retrieval-augmented generation (RAG). In this series, we explored topics such as indexing and retrieval techniques for RAG, data ingestion, and storage optimization. The final topic for this series covers agentic RAG, and how to use semi-autonomous agents to make a dynamic and self-refining retrieval system. What we'll cover: Overview and definition of agentic RAG Example of a single-shot RAG flow Two examples of agentic RAG: single-step and multi-step reflection What is agentic RAG? An agent is a component of an AI application that leverages generative models to make decisions and execute actions autonomously. Agentic RAG improves the traditional RAG flow by actively interacting with its environment using tools, memory, a…  ( 37 min )
    Bonus RAG Time Journey: Agentic RAG
    This is a bonus post for RAG Time, a 6-part educational series on retrieval-augmented generation (RAG). In this series, we explored topics such as indexing and retrieval techniques for RAG, data ingestion, and storage optimization. The final topic for this series covers agentic RAG, and how to use semi-autonomous agents to make a dynamic and self-refining retrieval system. What we'll cover: Overview and definition of agentic RAG Example of a single-shot RAG flow Two examples of agentic RAG: single-step and multi-step reflection What is agentic RAG? An agent is a component of an AI application that leverages generative models to make decisions and execute actions autonomously. Agentic RAG improves the traditional RAG flow by actively interacting with its environment using tools, memory, a…
  • Open

    Mastering SKU Estimations with the Microsoft Fabric SKU Estimator
    In today’s ever-changing analytics landscape it can be difficult to plan out your next project or your enterprise analytics roadmap. Designed to optimize data infrastructure planning, the Microsoft Fabric SKU Estimator helps customers and partners to accurately estimate capacity requirements and select the most suitable SKU for their workloads, protecting users from under-provisioning and overcommitment. … Continue reading “Mastering SKU Estimations with the Microsoft Fabric SKU Estimator”  ( 9 min )
    BULK INSERT statement is generally available!
    The BULK INSERT statement is generally available in Fabric Data Warehouse. The BULK INSERT statement enables you to ingest parquet or csv data into a table from the specified file stored in Azure Data Lake or Azure Blob storage: The BULK INSERT statement is very similar to the COPY INTO statement and enables you to … Continue reading “BULK INSERT statement is generally available!”  ( 7 min )
  • Open

    Step-by-Step Contact Center Chat Analysis with Azure OpenAI & Communication Services
    1. Introduction Contact centers are the front lines of customer interaction, generating vast amounts of valuable data through chat logs, call transcripts, and emails. However, manually sifting through this data to find actionable insights is often a monumental task. Imagine the scenario of a thriving online service, like a food delivery app: as usage climbs, so does the number of customer support chats, making it incredibly difficult to pinpoint recurring problems or gauge overall satisfaction from the sea of text. How can businesses effectively tap into this wealth of information? This post explores a powerful solution: building an automated analytics platform using Azure Communication Services (ACS) combined with the intelligence of Azure OpenAI Service. We'll outline how this integratio…  ( 72 min )
    Step-by-Step Contact Center Chat Analysis with Azure OpenAI & Communication Services
    1. Introduction Contact centers are the front lines of customer interaction, generating vast amounts of valuable data through chat logs, call transcripts, and emails. However, manually sifting through this data to find actionable insights is often a monumental task. Imagine the scenario of a thriving online service, like a food delivery app: as usage climbs, so does the number of customer support chats, making it incredibly difficult to pinpoint recurring problems or gauge overall satisfaction from the sea of text. How can businesses effectively tap into this wealth of information? This post explores a powerful solution: building an automated analytics platform using Azure Communication Services (ACS) combined with the intelligence of Azure OpenAI Service. We'll outline how this integratio…
  • Open

    Host Remote MCP Servers in Azure App Service
    My colleague, Anthony Chu, from Azure Container Apps, recently published an excellent blog post outlining how to get started with MCP servers in Azure Container Apps. I highly recommend reading it, as there are many similarities between hosting MCP servers on Azure Container Apps and Azure App Service. He also provides great background information on remote MCP servers and their future plans. In this article, I will build on that foundation and show you how to run remote MCP servers as web apps in Azure App Service, and how to connect to them with GitHub Copilot in Visual Studio Code. Quick Background on MCP Servers MCP (Model Context Protocol) servers are part of a rapidly evolving technology used for hosting and managing model-based contexts. These servers interact with clients like GitH…  ( 22 min )
    Host Remote MCP Servers in Azure App Service
    My colleague, Anthony Chu, from Azure Container Apps, recently published an excellent blog post outlining how to get started with MCP servers in Azure Container Apps. I highly recommend reading it, as there are many similarities between hosting MCP servers on Azure Container Apps and Azure App Service. He also provides great background information on remote MCP servers and their future plans. In this article, I will build on that foundation and show you how to run remote MCP servers as web apps in Azure App Service, and how to connect to them with GitHub Copilot in Visual Studio Code. Quick Background on MCP Servers MCP (Model Context Protocol) servers are part of a rapidly evolving technology used for hosting and managing model-based contexts. These servers interact with clients like GitH…
  • Open

    Automate creation of work items in ADO and Export/Import workflow packages
    This article would create multiple work items (tasks) for any specific User Story in a particular Backlog with a certain TAG value in Azure Devops. This article would also show steps to Export/Import a workflow package.PART 1: Create multiple items for a parent User StoryThis article uses Power Automate to do the same.There are certain Pre-requisites that have to be fulfilled,1. Be a part of an Azure Devops organization, have a project created along with some User Stories.2. Have access to Power Automate to create workflows, and make sure you are able to add connections from Automate to ADO (Not to worry we would check that in the below steps)Now go ahead and Follow the below steps to achieve the purpose.Step1: Open power automate portal and Click on "My Flows" Link: https://make.powerauto…  ( 27 min )
    Automate creation of work items in ADO and Export/Import workflow packages
    This article would create multiple work items (tasks) for any specific User Story in a particular Backlog with a certain TAG value in Azure Devops. This article would also show steps to Export/Import a workflow package.PART 1: Create multiple items for a parent User StoryThis article uses Power Automate to do the same.There are certain Pre-requisites that have to be fulfilled,1. Be a part of an Azure Devops organization, have a project created along with some User Stories.2. Have access to Power Automate to create workflows, and make sure you are able to add connections from Automate to ADO (Not to worry we would check that in the below steps)Now go ahead and Follow the below steps to achieve the purpose.Step1: Open power automate portal and Click on "My Flows" Link: https://make.powerauto…

  • Open

    NodeJs GitHub Action Deployment on App Service Linux Using Publish Profile
    Overview  ( 3 min )
  • Open

    Common use cases for building solutions with Microsoft Fabric User data functions (UDFs)
    Data engineering often presents challenges with data quality or complex data analytics processing that requires custom logic. This is where Fabric User data functions can be used to implement custom logic into your data processes or pipelines. Here are the most common cases where Fabric User Data Functions  ( 8 min )
  • Open

    Unlocking the Power of Azure: Mastering Resource Management in Kubernetes
    Hi, I’m Pranjal Mishra, a Student Ambassador from Galgotias University, pursuing B.Tech in Computer Science with a specialization in AI & ML. As someone passionate about cloud computing and DevOps, I often explore how platforms like Azure simplify complex infrastructure challenges—especially when working with containerized applications in Kubernetes. In this article, we’ll dive into resource management in Kubernetes, with a focus on implementing resource quotas and limits using Azure Kubernetes Service (AKS). Whether you're optimizing cost, ensuring performance, or avoiding resource contention, this guide is for you. Why Resource Management Matters ? In Kubernetes, resource limits and quotas are your best allies in controlling how much CPU and memory workloads consume. Without these contro…  ( 25 min )
    Unlocking the Power of Azure: Mastering Resource Management in Kubernetes
    Hi, I’m Pranjal Mishra, a Student Ambassador from Galgotias University, pursuing B.Tech in Computer Science with a specialization in AI & ML. As someone passionate about cloud computing and DevOps, I often explore how platforms like Azure simplify complex infrastructure challenges—especially when working with containerized applications in Kubernetes. In this article, we’ll dive into resource management in Kubernetes, with a focus on implementing resource quotas and limits using Azure Kubernetes Service (AKS). Whether you're optimizing cost, ensuring performance, or avoiding resource contention, this guide is for you. Why Resource Management Matters ? In Kubernetes, resource limits and quotas are your best allies in controlling how much CPU and memory workloads consume. Without these contro…
  • Open

    Customer Case Study: Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel
    Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel We’re excited to introduce the Neon Serverless Postgres Connector for Microsoft Semantic Kernel, enabling developers to seamlessly integrate Neon’s serverless Postgres capabilities with AI-driven vector search and retrieval workflows. By leveraging the **pgvector** extension in Neon and the existing Postgres Vector Store connector, this integration […] The post Customer Case Study: Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel appeared first on Semantic Kernel.  ( 26 min )
    Guest Blog: Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel Part 2
    Today we’d like to welcome back a team of internal Microsoft employees for part 2 of their guest blog series focused on Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel. We’ll turn it over to our authors – Samer El Housseini, Riccardo Chiodaroli, Daniel Labbe and Fabrizio Ruocco to dive […] The post Guest Blog: Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel Part 2 appeared first on Semantic Kernel.  ( 31 min )
  • Open

    Azure Monitor Application Insights Auto-Instrumentation for Java and Node Microservices on AKS
    Key Takeaways (TLDR) Monitor Java and Node applications with zero code changes Fast onboarding: just 2 steps Supports distributed tracing, logs, and metrics Correlates application-level telemetry in Application Insights with infrastructure-level telemetry in Container Insights Available today in public preview Introduction Monitoring your applications is now easier than ever with the public preview release of Auto-Instrumentation for Azure Kubernetes Service (AKS). You can now easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster.  This feature is ideal for developers or operators who are... Looking to add monitoring in the easiest way possible, without modifying code and avoiding ongoing SDK u…  ( 31 min )
    Azure Monitor Application Insights Auto-Instrumentation for Java and Node Microservices on AKS
    Monitoring your applications is now easier than ever with the public preview release of Auto-Instrumentation for Azure Kubernetes Service (AKS). You can now easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster.  This feature is ideal for developers or operators who are... Looking to add monitoring in the easiest way possible, without modifying code and avoiding ongoing SDK update maintenance. Starting out on their monitoring journey and looking to benefit from carefully chosen default configurations with the ability to tweak them over time. Working with someone else’s code and looking to instrument at scale. Or considering monitoring for the first time at the time of deployment. Before the intr…

  • Open

    Major Updates to VS Code Docker: Introducing Container Tools
    The first, most obvious thing is the introduction of the Container Tools extension to broaden our focus and open new extensibility opportunities. The existing extension code (and MIT license) will be migrated to the Container Tools extension, and the Docker extension will become an extension pack that includes the Docker DX and Container Tools extensions. For you, this means the ability to customize the tooling to meet your needs - choose your preferred container runtime and only the functionality that you need in the extension settings. This major update marks a significant step forward in enhancing the development experience when working with containers. Please comment here with any questions or feedback and stay tuned to experiment with the new features!   tl;dr  The Docker extension is becoming the Container Tools extension Still free and open source Podman support is coming No action is required  ( 19 min )
    Major Updates to VS Code Docker: Introducing Container Tools
    The first, most obvious thing is the introduction of the Container Tools extension to broaden our focus and open new extensibility opportunities. The existing extension code (and MIT license) will be migrated to the Container Tools extension, and the Docker extension will become an extension pack that includes the Docker DX and Container Tools extensions. For you, this means the ability to customize the tooling to meet your needs - choose your preferred container runtime and only the functionality that you need in the extension settings. This major update marks a significant step forward in enhancing the development experience when working with containers. Please comment here with any questions or feedback and stay tuned to experiment with the new features!   tl;dr  The Docker extension is becoming the Container Tools extension Still free and open source Podman support is coming No action is required
    Getting Started with .NET on Azure Container Apps
    Great news for .NET developers who would like to become familiar with containers and Azure Container Apps (ACA)! We just released a new Getting Started guide for .NET developers on Azure Container Apps. This guide is designed to help you get started with Azure Container Apps and understand how to build and deploy your applications using this service.   In a series of guided lessons, you will learn: All about container services on Azure - and where ACA fits in How to run a monolith on ACA How to add authentication to an app on ACA How to run microservices on ACA How to implement a CI/CD pipeline for ACA How to monitor and optimize your app for cost on ACA How to monitor the performance of your app on ACA How .NET Aspire helps to orchestrate your app on ACA All the code is available on Git…  ( 23 min )
    Getting Started with .NET on Azure Container Apps
    Great news for .NET developers who would like to become familiar with containers and Azure Container Apps (ACA)! We just released a new Getting Started guide for .NET developers on Azure Container Apps. This guide is designed to help you get started with Azure Container Apps and understand how to build and deploy your applications using this service.   In a series of guided lessons, you will learn: All about container services on Azure - and where ACA fits in How to run a monolith on ACA How to add authentication to an app on ACA How to run microservices on ACA How to implement a CI/CD pipeline for ACA How to monitor and optimize your app for cost on ACA How to monitor the performance of your app on ACA How .NET Aspire helps to orchestrate your app on ACA All the code is available on Git…
  • Open

    Azure VMware Solution approved DISA Provisional Authorization of Azure Government at IL5
    Today we are pleased to announce that Azure VMware Solution in Microsoft Azure Government was approved and added as a service within the DISA Provisional Authorization of Azure Government at Impact Level 5. Azure VMware Solution (AVS) is a fully managed service in Azure that customers can use to extend their on-premises VMware vSphere workloads more seamlessly to the cloud, while maintaining their existing skills and operational processes. Learn more about how you can streamline your migration efforts with Azure VMware Solution in Azure Government. Azure VMware Solution was already approved at DoD Impact Level 4 in Azure Government. With this latest approval, DoD customers and their partners who require the higher impact level can now meet those requirements. Customers and their partners who require DoD Impact Level 2 can use Azure VMware Solution in Azure Commercial or Azure Government. For details about availability and pricing, please reach out to your Microsoft account team, and to learn more about getting started on Azure VMware Solution you can visit the documentation page. To learn more about DoD Impact Level scope for Azure Commercial and Azure Government, you can visit the Azure compliance documentation.  ( 19 min )
    Azure VMware Solution approved DISA Provisional Authorization of Azure Government at IL5
    Today we are pleased to announce that Azure VMware Solution in Microsoft Azure Government was approved and added as a service within the DISA Provisional Authorization of Azure Government at Impact Level 5. Azure VMware Solution (AVS) is a fully managed service in Azure that customers can use to extend their on-premises VMware vSphere workloads more seamlessly to the cloud, while maintaining their existing skills and operational processes. Learn more about how you can streamline your migration efforts with Azure VMware Solution in Azure Government. Azure VMware Solution was already approved at DoD Impact Level 4 in Azure Government. With this latest approval, DoD customers and their partners who require the higher impact level can now meet those requirements. Customers and their partners who require DoD Impact Level 2 can use Azure VMware Solution in Azure Commercial or Azure Government. For details about availability and pricing, please reach out to your Microsoft account team, and to learn more about getting started on Azure VMware Solution you can visit the documentation page. To learn more about DoD Impact Level scope for Azure Commercial and Azure Government, you can visit the Azure compliance documentation.
    AVS was approved as a service within the DISA Provisional Authorization of Azure Government at IL5
    Today we are pleased to announce that Azure VMware Solution in Microsoft Azure Government was approved and added as a service within the DISA Provisional Authorization of Azure Government at Impact Level 5. Azure VMware Solution (AVS) is a fully managed service in Azure that customers can use to extend their on-premises VMware workloads more seamlessly to the cloud, while maintaining their existing skills and operational processes. Learn more about how you can streamline your migration efforts with Azure VMware Solution in Azure Government. AVS was already approved at DoD Impact Level 4 in Azure Government. With this latest approval, DoD customers and their partners who require the higher impact level can now meet those requirements. Customers and their partners who require DoD Impact Level 2 can use AVS in Azure Commercial or Azure Government. For details about availability and pricing, please reach out to your Microsoft account team, and to learn more about getting started on Azure VMware Solution you can visit the documentation page. To learn more about DoD Impact Level scope for Azure Commercial and Azure Government, you can visit the Azure compliance documentation.
    Azure VMware Solution now available in the new AV48 node size in Japan East.
    Today we're announcing the availability of the Azure VMware Solution AV48 SKU in Japan East.  This SKU modernizes the CPU to the Intel Sapphire Rapids architecture and increases the deployed cores and memory per server to better accommodate today’s workloads. Key features of the new AV48 in Japan East: Dual Intel Xeon Gold 6442Y CPUs (Sapphire Rapids microarchitecture) with 24 cores/CPU @ 2.6 GHz / 3.3Ghz All Core Turbo / 4.0 GHz Max Turbo, Total 48 physical cores (96 logical cores with hyperthreading) 1TB of DRAM Memory 19.2TB storage capacity with all NVMe based SSDs 1.5TB of NVMe Cache For pricing reach out to your Microsoft Account Team, or visit the Azure Portal quota request page. Learn More  ( 18 min )
    Azure VMware Solution now available in the new AV48 node size in Japan East.
    Today we're announcing the availability of the AVS AV48 SKU in Japan East.  This SKU modernizes the CPU to the Intel Sapphire Rapids architecture and increases the deployed cores and memory per server to better accommodate today’s workloads. Key features of the new AV48 in Japan East: Dual Intel Xeon Gold 6442Y CPUs (Sapphire Rapids microarchitecture) with 24 cores/CPU @ 2.6 GHz / 3.3Ghz All Core Turbo / 4.0 GHz Max Turbo, Total 48 physical cores (96 logical cores with hyperthreading) 1TB of DRAM Memory 19.2TB storage capacity with all NVMe based SSDs 1.5TB of NVMe Cache For pricing reach out to your Microsoft Account Team, or visit the Azure Portal quota request page. Learn More
    Forward Azure VMware Solution logs anywhere using Azure Logic Apps
    Overview  As enterprises scale their infrastructure in Microsoft Azure using Azure VMware Solution, gaining real-time visibility into the operational health of their private cloud environment becomes increasingly critical. Whether troubleshooting deployment issues, monitoring security events, or performing compliance audits, centralized logging is a must-have.  Azure VMware Solution offers flexible options for exporting syslogs from vCenter Server, ESXi Hosts, and NSX components. While many customers already use Log Analytics or third-party log platforms for visibility, some have unique operational or compliance requirements that necessitate forwarding logs to specific destinations outside the Microsoft ecosystem.  With the advent of VMware Cloud Foundation on Azure VMware Solution, custom…  ( 30 min )
    Forward Azure VMware Solution logs anywhere using Azure Logic Apps
    Overview  As enterprises scale their infrastructure in Microsoft Azure using Azure VMware Solution, gaining real-time visibility into the operational health of their private cloud environment becomes increasingly critical. Whether troubleshooting deployment issues, monitoring security events, or performing compliance audits, centralized logging is a must-have.  Azure VMware Solution offers flexible options for exporting syslogs from vCenter Server, ESXi Hosts, and NSX components. While many customers already use Log Analytics or third-party log platforms for visibility, some have unique operational or compliance requirements that necessitate forwarding logs to specific destinations outside the Microsoft ecosystem.  With the advent of VMware Cloud Foundation on Azure VMware Solution, custom…
    Migrating from EKS to AKS: What Actually Matters
    If you're using the Azure Migration Hub as your starting point, you're already ahead. But when it comes to migrating Kubernetes workloads from EKS to AKS, there are still a few key details that can make or break your deployment. We recently walked through a real-world migration of a typical web app from AWS EKS to Azure AKS, and while the core containers came over cleanly, the supporting architecture required some careful rework. Here’s what stood out during the process—no fluff, just what matters when you're doing the work. Mind the Infrastructure, Not Just the App The app itself (a voting tool using Redis and Postgres) migrated easily. But things got more complex when we looked at the surrounding infrastructure: Ingress and Load Balancing: AWS ALB maps loosely to Azure Application Gatew…  ( 23 min )
    Migrating from EKS to AKS: What Actually Matters
    If you're using the Azure Migration Hub as your starting point, you're already ahead. But when it comes to migrating Kubernetes workloads from EKS to AKS, there are still a few key details that can make or break your deployment. We recently walked through a real-world migration of a typical web app from AWS EKS to Azure AKS, and while the core containers came over cleanly, the supporting architecture required some careful rework. Here’s what stood out during the process—no fluff, just what matters when you're doing the work. Mind the Infrastructure, Not Just the App The app itself (a voting tool using Redis and Postgres) migrated easily. But things got more complex when we looked at the surrounding infrastructure: Ingress and Load Balancing: AWS ALB maps loosely to Azure Application Gatew…
  • Open

    Fabric Espresso – Episodes about Performance Optimization & Compute Management in Microsoft Fabric
    Fabric Espresso – Episodes About Performance Optimization & Compute Management in Microsoft Fabric  ( 6 min )

  • Open

    Azure Red Hat OpenShift: April 2025 Update
    Enterprise Kubernetes shouldn't be complicated or insecure. That's why our April 2025 update brings powerful new features to make your Azure Red Hat OpenShift experience even better. Here's what's new, with links to get you started right away!  🔐 Security Enhancements  Managed Identity & Workload Identity → Replace long-lived credentials with short-term tokens for enhanced security. Now in public preview! Only available for new cluster creation on ARO 4.13 and newer. Get implementation details and read the Red Hat blog.  Managed identity workload identity on Azure Red Hat OpenShift value proposition Cluster-Wide Proxy → Enable connectivity from ARO cluster components to external endpoints via corporate proxies. This feature is only for cluster components, not for customer workloads. Pe…  ( 23 min )
    Azure Red Hat OpenShift: April 2025 Update
    Enterprise Kubernetes shouldn't be complicated or insecure. That's why our April 2025 update brings powerful new features to make your Azure Red Hat OpenShift experience even better. Here's what's new, with links to get you started right away!  🔐 Security Enhancements  Managed Identity & Workload Identity → Replace long-lived credentials with short-term tokens for enhanced security. Now in public preview! Only available for new cluster creation on ARO 4.13 and newer. Get implementation details and read the Red Hat blog.  Managed identity workload identity on Azure Red Hat OpenShift value proposition Cluster-Wide Proxy → Enable connectivity from ARO cluster components to external endpoints via corporate proxies. This feature is only for cluster components, not for customer workloads. Pe…
  • Open

    Azure Firewall and Service Endpoints
    In my recent blog series Private Link reality bites I briefly mentioned the possibility of inspecting Service Endpoints with Azure Firewall, and many have asked for more details on that configuration. Here we go! First things first: what the heck am I talking about? Most Azure services such as Azure Storage, Azure SQL and many others can be accessed directly over the public Internet. However, there are two alternatives to access those services over Microsoft's backbone: Private Link and VNet Service Endpoints. Microsoft's overall recommendation is using private link, but some organizations prefer leveraging service endpoints. Feel free to read this post on a comparison of the two. You might want to inspect traffic to Azure services with network firewalls, even if that traffic is leveraging…  ( 27 min )
    Azure Firewall and Service Endpoints
    In my recent blog series Private Link reality bites I briefly mentioned the possibility of inspecting Service Endpoints with Azure Firewall, and many have asked for more details on that configuration. Here we go! First things first: what the heck am I talking about? Most Azure services such as Azure Storage, Azure SQL and many others can be accessed directly over the public Internet. However, there are two alternatives to access those services over Microsoft's backbone: Private Link and VNet Service Endpoints. Microsoft's overall recommendation is using private link, but some organizations prefer leveraging service endpoints. Feel free to read this post on a comparison of the two. You might want to inspect traffic to Azure services with network firewalls, even if that traffic is leveraging…
  • Open

    AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a focus on the Planning Design Pattern. In previous posts (links at the end!), we've built a strong foundation in AI agent concepts. Now, we'll explore how to design agents that can effectively plan and orchestrate complex tasks, breaking them down into manageable subtasks and coordinating their execution. Introduction to Planning Design The Planning Design Pattern helps AI agents tackle complex goals by providing a structured approach to task decomposition and execution. This involves: Defining a clear overall goal. Breaking down the task into smaller, manageable subtasks. Leveraging structured output for easier processing. Using an event-driven approach for dynamic a…  ( 24 min )
    AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a focus on the Planning Design Pattern. In previous posts (links at the end!), we've built a strong foundation in AI agent concepts. Now, we'll explore how to design agents that can effectively plan and orchestrate complex tasks, breaking them down into manageable subtasks and coordinating their execution. Introduction to Planning Design The Planning Design Pattern helps AI agents tackle complex goals by providing a structured approach to task decomposition and execution. This involves: Defining a clear overall goal. Breaking down the task into smaller, manageable subtasks. Leveraging structured output for easier processing. Using an event-driven approach for dynamic a…
  • Open

    Empowering businesses with smart capacity planning: Introducing the Microsoft Fabric SKU estimator (Preview)
    We’re excited to unveil the Microsoft Fabric SKU estimator, now available in preview—an enhanced version of the previously introduced Microsoft Fabric Capacity Calculator. This advanced tool has been refined based on extensive user feedback to provide tailored capacity estimations for businesses. Designed to optimize data infrastructure planning, the Microsoft Fabric SKU Estimator helps customers and … Continue reading “Empowering businesses with smart capacity planning: Introducing the Microsoft Fabric SKU estimator (Preview)”  ( 6 min )
    Purview DLP Policies with Restrict Access for Fabric Lakehouses (Preview)
    In today’s fast-paced data-driven world, enterprises are building more sophisticated data platforms to gain insights and drive innovation. Microsoft Fabric Lakehouses combine the scale of a data lake with the management finesse of a data warehouse – delivering unified analytics in an ever-evolving business landscape. But with great data comes great responsibility. Protecting sensitive information … Continue reading “Purview DLP Policies with Restrict Access for Fabric Lakehouses (Preview)”  ( 6 min )
    Microsoft Purview Data Loss Prevention policies for Fabric have been extended to KQL and Mirrored Databases (Preview)
    Microsoft Purview’s Data Loss Prevention (DLP) policies for Fabric now supports Fabric KQL and Mirrored DBs! Purview DLP policies help organizations to improve their data security posture and comply with governmental and industry regulations. Security teams use DLP policies to automatically detect upload of sensitive information to Microsoft 365 applications like SharePoint and Exchange, and … Continue reading “Microsoft Purview Data Loss Prevention policies for Fabric have been extended to KQL and Mirrored Databases (Preview)”  ( 6 min )
  • Open

    Evaluating Agentic AI Systems: A Deep Dive into Agentic Metrics
    In this post, we explore the latest Agentic metrics introduced in the Azure AI Evaluation library, a Python library designed to assess generative AI systems with both traditional NLP metrics (like BLEU and ROUGE) and AI-assisted evaluators (such as relevance, coherence, and safety). With the rise of agentic systems, the library now includes purpose-built evaluators for complex agent workflows. We’ll focus on three key metrics: Task Adherence, Tool Call Accuracy, and Intent Resolution—each capturing a critical dimension of an agent’s performance. To help illustrate these evaluation strategies, you can find AgenticEvals, a simple public repo that showcases these metrics in action using Semantic Kernel for the agentic/orchestration layer and Azure AI Evaluation library for the evaluation.   W…  ( 28 min )
    Evaluating Agentic AI Systems: A Deep Dive into Agentic Metrics
    In this post, we explore the latest Agentic metrics introduced in the Azure AI Evaluation library, a Python library designed to assess generative AI systems with both traditional NLP metrics (like BLEU and ROUGE) and AI-assisted evaluators (such as relevance, coherence, and safety). With the rise of agentic systems, the library now includes purpose-built evaluators for complex agent workflows. We’ll focus on three key metrics: Task Adherence, Tool Call Accuracy, and Intent Resolution—each capturing a critical dimension of an agent’s performance. To help illustrate these evaluation strategies, you can find AgenticEvals, a simple public repo that showcases these metrics in action using Semantic Kernel for the agentic/orchestration layer and Azure AI Evaluation library for the evaluation.   W…

  • Open

    Model Mondays: Bringing AI Home with Local Development
    As generative AI tools become more powerful, developers are looking for faster, more flexible ways to experiment, fine-tune, and deploy models. But not every workflow starts—or needs to stay—in the cloud. In this episode of Model Mondays, we explore how the AI Toolkit for Visual Studio Code is transforming the development experience by enabling local AI workflows that give you more control, faster iteration, and seamless integration with your existing tools. Whether you're working in a constrained environment, or just prefer to prototype locally, this toolkit makes it possible to run and refine AI models right from your own machine. Whether you're tinkering with models on a plane, prototyping in a coffee shop, or just want to test your prompts in peace, this toolkit has your back. It's like taking your favorite AI models on a road trip... and they actually behave. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4 : Visual Generative Models Episode 5 : Fine-Tuning Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub So, grab your laptop, launch VS Code, and let’s bring AI development home!!  ( 22 min )
    Model Mondays: Bringing AI Home with Local Development
    As generative AI tools become more powerful, developers are looking for faster, more flexible ways to experiment, fine-tune, and deploy models. But not every workflow starts—or needs to stay—in the cloud. In this episode of Model Mondays, we explore how the AI Toolkit for Visual Studio Code is transforming the development experience by enabling local AI workflows that give you more control, faster iteration, and seamless integration with your existing tools. Whether you're working in a constrained environment, or just prefer to prototype locally, this toolkit makes it possible to run and refine AI models right from your own machine. Whether you're tinkering with models on a plane, prototyping in a coffee shop, or just want to test your prompts in peace, this toolkit has your back. It's like taking your favorite AI models on a road trip... and they actually behave. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4 : Visual Generative Models Episode 5 : Fine-Tuning Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub So, grab your laptop, launch VS Code, and let’s bring AI development home!!
  • Open

    Host remote MCP servers in Azure Container Apps
    Whether you're building AI agents or using LLM powered tools like GitHub Copilot in Visual Studio Code, you're probably hearing a lot about MCP (Model Context Protocol) lately; maybe you're already using it. It's quickly becoming the standard interoperability layer between different components of the AI stack. In this article, we'll explore how to run remote MCP servers as serverless containers in Azure Container Apps and use them in GitHub Copilot in Visual Studio Code. MCP servers today MCP follows a client-server architecture. It all starts with a client, such as GitHub Copilot in VS Code or Claude Desktop. A client connects to one or more MCP servers. Servers are the main extensibility points in MCP. Each server provides new tools, skills, and capabilities to the client. For example, a…  ( 32 min )
    Host remote MCP servers in Azure Container Apps
    Whether you're building AI agents or using LLM powered tools like GitHub Copilot in Visual Studio Code, you're probably hearing a lot about MCP (Model Context Protocol) lately; maybe you're already using it. It's quickly becoming the standard interoperability layer between different components of the AI stack. In this article, we'll explore how to run remote MCP servers as serverless containers in Azure Container Apps and use them in GitHub Copilot in Visual Studio Code. MCP servers today MCP follows a client-server architecture. It all starts with a client, such as GitHub Copilot in VS Code or Claude Desktop. A client connects to one or more MCP servers. Servers are the main extensibility points in MCP. Each server provides new tools, skills, and capabilities to the client. For example, a…
  • Open

    Guest Blog: Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework
    Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework Step-by-Step guide on creating your first process with AI Microsoft’s AI Framework, Semantic Kernel, is an easy-to-use C#, Java, and Python-based AI framework that helps you quickly build AI solutions or integrate AI capabilities into your existing app. Semantic Kernel provides various […] The post Guest Blog: Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework appeared first on Semantic Kernel.  ( 26 min )
  • Open

    How to use any Python AI agent framework with free GitHub Models
    I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, when the desire to learn is high but the available cash is low. That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct) and even a few embedding models (from OpenAI and Cohere). With access to such a range of models, you can prototype complex multi-model workflows to im…  ( 32 min )
    How to use any Python AI agent framework with free GitHub Models
    I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, when the desire to learn is high but the available cash is low. That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct) and even a few embedding models (from OpenAI and Cohere). With access to such a range of models, you can prototype complex multi-model workflows to im…
  • Open

    General-Purpose vs Reasoning Models in Azure OpenAI
    1. Introduction Since Large Language Models (LLMs) have become mainstream, a wide range of models have emerged to serve different types of tasks—from casual chatbot interactions to advanced scientific reasoning. If you're familiar with GPT-3.5 and GPT-4, you'll know that these models set a high standard for general-purpose AI. But as the field evolves, the distinction between model types has become more pronounced. In this blog, we'll explore the differences between two major categories of LLMs: General-Purpose Models – Designed for broad tasks like conversation, content generation, and multimodal input processing. Reasoning Models – Optimized for tasks requiring logic, problem-solving, and step-by-step breakdowns. We'll use specific models available in Azure OpenAI as examples to illust…  ( 40 min )
    General-Purpose vs Reasoning Models in Azure OpenAI
    1. Introduction Since Large Language Models (LLMs) have become mainstream, a wide range of models have emerged to serve different types of tasks—from casual chatbot interactions to advanced scientific reasoning. If you're familiar with GPT-3.5 and GPT-4, you'll know that these models set a high standard for general-purpose AI. But as the field evolves, the distinction between model types has become more pronounced. In this blog, we'll explore the differences between two major categories of LLMs: General-Purpose Models – Designed for broad tasks like conversation, content generation, and multimodal input processing. Reasoning Models – Optimized for tasks requiring logic, problem-solving, and step-by-step breakdowns. We'll use specific models available in Azure OpenAI as examples to illust…

  • Open

    Resolving Microsoft Graph PowerShell 2.26+ compatibility issues with Azure Runbooks
    We know how important Azure Automation workflows and appreciate the critical role played by automation runbooks. Some customers have experienced issues with the release of 2.26.1 of the Microsoft Graph PowerShell SDK, particularly when running PowerShell 7.2 runbooks in Azure Automation. The core challenge is a conflict around .NET 6 where fixing Runbooks would break […] The post Resolving Microsoft Graph PowerShell 2.26+ compatibility issues with Azure Runbooks appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Configure Virtual Applications, Mounted Azure Files, and Static File Access in Azure App Service
    Background: In Azure App Service, developers often need to serve files (images, config files, data files, etc.) that are stored outside the app’s wwwroot folder — such as on an Azure File Share. This is especially useful when: You want to share files across multiple web apps Your files are too large to bundle with the app You need to manage files independently from the app deployment To accomplish this, Azure provides the ability to: Mount external Azure File Shares into your web app's file system Expose those mounted folders via virtual paths Configure directory browsing and MIME types to make files directly accessible over the browser Step-by-Step Configuration: ===Azure Storage Account part=== Create Azure File Share ===Azure App Service part=== 1.Mount Azure File Share Azure Porta…  ( 23 min )
    Configure Virtual Applications, Mounted Azure Files, and Static File Access in Azure App Service
    Background: In Azure App Service, developers often need to serve files (images, config files, data files, etc.) that are stored outside the app’s wwwroot folder — such as on an Azure File Share. This is especially useful when: You want to share files across multiple web apps Your files are too large to bundle with the app You need to manage files independently from the app deployment To accomplish this, Azure provides the ability to: Mount external Azure File Shares into your web app's file system Expose those mounted folders via virtual paths Configure directory browsing and MIME types to make files directly accessible over the browser Step-by-Step Configuration: ===Azure Storage Account part=== Create Azure File Share ===Azure App Service part=== 1.Mount Azure File Share Azure Porta…
    Announcing the Public Preview of the New Hybrid Connection Manager (HCM)
    Key Features and Improvements The new version of HCM introduces several enhancements aimed at improving usability, performance, and security: Cross-Platform Compatibility: The new HCM is now supported on both Windows and Linux clients, allowing for seamless management of hybrid connections across different platforms, providing users with greater flexibility and control. Enhanced User Interface: We have redesigned the GUI to offer a more intuitive and efficient user experience. In addition to a new and more accessible GUI, we have also introduced a CLI that includes all the functionality needed to manage connections, especially for our Linux customers who may solely use a CLI to manage their workloads. Improved Visibility: The new version offers enhanced logging and connection testing, whi…  ( 27 min )
    Announcing the Public Preview of the New Hybrid Connection Manager (HCM)
    Key Features and Improvements The new version of HCM introduces several enhancements aimed at improving usability, performance, and security: Cross-Platform Compatibility: The new HCM is now supported on both Windows and Linux clients, allowing for seamless management of hybrid connections across different platforms, providing users with greater flexibility and control. Enhanced User Interface: We have redesigned the GUI to offer a more intuitive and efficient user experience. In addition to a new and more accessible GUI, we have also introduced a CLI that includes all the functionality needed to manage connections, especially for our Linux customers who may solely use a CLI to manage their workloads. Improved Visibility: The new version offers enhanced logging and connection testing, whi…
  • Open

    Best Practices for Mitigating Hallucinations in Large Language Models (LLMs)
    Real-world AI Solutions: Lessons from the Field Overview  This document provides practical guidance for minimizing hallucinations—instances where models produce inaccurate or fabricated content—when building applications with Azure AI services. It targets developers, architects, and MLOps teams working with LLMs in enterprise settings.   Key Outcomes ✅ Reduce hallucinations through retrieval-augmented strategies and prompt engineering✅ Improve model output reliability, grounding, and explainability✅ Enable robust enterprise deployment through layered safety, monitoring, and security   Understanding Hallucinations Hallucinations come in different forms. Here are some realistic examples for each category to help clarify them: Type Description Example Factual Outputs are incorrect or…  ( 31 min )
    Best Practices for Mitigating Hallucinations in Large Language Models (LLMs)
    Real-world AI Solutions: Lessons from the Field Overview  This document provides practical guidance for minimizing hallucinations—instances where models produce inaccurate or fabricated content—when building applications with Azure AI services. It targets developers, architects, and MLOps teams working with LLMs in enterprise settings.   Key Outcomes ✅ Reduce hallucinations through retrieval-augmented strategies and prompt engineering✅ Improve model output reliability, grounding, and explainability✅ Enable robust enterprise deployment through layered safety, monitoring, and security   Understanding Hallucinations Hallucinations come in different forms. Here are some realistic examples for each category to help clarify them: Type Description Example Factual Outputs are incorrect or…
    New enhanced navigation in Azure AI Search
    Faceted navigation is a key component of search experiences, helping users intuitively drill down through large sets of search results by refining their queries quickly and efficiently.  We are announcing several improvements to facets in preview: Hierarchical facets enable developers to create multi-level navigation trees, offering a more organized view of search categories Facet filtering provides precision by allowing regular expressions to refine the facet values displayed Facet summing introduces the ability to aggregate numeric data within facet Hierarchical Facets Facets in Azure AI Search were previously limited to a flat, one layer model. Consider the following index which models a product catalog: Product ID Name Category Subcategory Price P001 UltraHD Smart TV Elect…  ( 25 min )
    New enhanced navigation in Azure AI Search
    Faceted navigation is a key component of search experiences, helping users intuitively drill down through large sets of search results by refining their queries quickly and efficiently.  We are announcing several improvements to facets in preview: Hierarchical facets enable developers to create multi-level navigation trees, offering a more organized view of search categories Facet filtering provides precision by allowing regular expressions to refine the facet values displayed Facet summing introduces the ability to aggregate numeric data within facet Hierarchical Facets Facets in Azure AI Search were previously limited to a flat, one layer model. Consider the following index which models a product catalog: Product ID Name Category Subcategory Price P001 UltraHD Smart TV Elect…
  • Open

    Azure Training Maps
    Overview The Azure Training Maps are a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training. Formats available include PDF, Visio, Excel, and Video.     Links: Each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn. Layers: You have the capability to filter layers to concentrate on segments of the course by modules. I.E.: Just day 1 of AZ-104, using filters in Visio and selecting modu…  ( 25 min )
    Azure Training Maps
    Overview The Azure Training Maps are a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training. Formats available include PDF, Visio, Excel, and Video.     Links: Each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn. Layers: You have the capability to filter layers to concentrate on segments of the course by modules. I.E.: Just day 1 of AZ-104, using filters in Visio and selecting modu…
    Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
    Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features: (Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal. (Deprecated) URL ping tests: These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries. With these features being phased out, we are left without built-in logic to test application health beyon…  ( 26 min )
    Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
    Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features: (Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal. (Deprecated) URL ping tests: These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries. With these features being phased out, we are left without built-in logic to test application health beyon…
  • Open

    Microsoft at PyTexas 2025: Join Us for a Celebration of Python and Innovation
    Microsoft is thrilled to announce our participation in PyTexas 2025, taking place this year in the vibrant city of Austin, Texas! At this year’s event, Microsoft is proud to contribute to the community’s growth and excitement by hosting a booth and delivering an engaging talk. The post Microsoft at PyTexas 2025: Join Us for a Celebration of Python and Innovation appeared first on Microsoft for Python Developers Blog.  ( 23 min )
  • Open

    New agents and Copilot Chat for frontline staff
    Stay productive and connected with AI-powered experiences for frontline workers using Android and iOS devices to quickly sign in, manage tasks, find information, and collaborate with team members. Easily access essential resources, check inventory, and get instant answers from Copilot Chat and AI agents that you can build in SharePoint and Microsoft Copilot Studio. Microsoft 365’s latest device experiences take frontline productivity and customer interactions to the next level — from improving customer interactions to managing worksites.  Avery Salumbides, Microsoft 365 Frontline Product Manager, demonstrates key updates across Microsoft Teams, Copilot, and AI agents on mobile, plus essential admin setup considerations to equip your frontline. Secure, fast sign-in. Less downtime and more…  ( 44 min )
    New agents and Copilot Chat for frontline staff
    Stay productive and connected with AI-powered experiences for frontline workers using Android and iOS devices to quickly sign in, manage tasks, find information, and collaborate with team members. Easily access essential resources, check inventory, and get instant answers from Copilot Chat and AI agents that you can build in SharePoint and Microsoft Copilot Studio. Microsoft 365’s latest device experiences take frontline productivity and customer interactions to the next level — from improving customer interactions to managing worksites.  Avery Salumbides, Microsoft 365 Frontline Product Manager, demonstrates key updates across Microsoft Teams, Copilot, and AI agents on mobile, plus essential admin setup considerations to equip your frontline. Secure, fast sign-in. Less downtime and more…

  • Open

    Microsoft 365 Certification control spotlight: Privacy
    Discover how Microsoft 365 Certification ensures ISVs use the latest privacy and personally identifiable information (PII) management controls to protect customer data. The post Microsoft 365 Certification control spotlight: Privacy appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Boost your development with Microsoft Fabric extensions for Visual Studio Code
    Microsoft Fabric is changing how we handle data engineering and data science. To make things easier, Microsoft added some cool extensions for Visual Studio Code (VS Code) that help you manage Fabric artifacts and build analytical applications. By adding these Microsoft Fabric extensions to VS Code, developers can quickly create Fabric solutions and manage their … Continue reading “Boost your development with Microsoft Fabric extensions for Visual Studio Code”  ( 7 min )
    Recap of Data Factory Announcements at Fabric Conference US 2025
    We had such an exciting week for Fabric during the Fabric Conference US, filled with several product announcements and sneak previews of upcoming new features. Thanks to all of you who participated in the conference, either in person or by being part of the many virtual conversations through blogs, Community forums, social media and other … Continue reading “Recap of Data Factory Announcements at Fabric Conference US 2025”  ( 16 min )
    Use Service Principals to create shortcuts to ADLS Gen2 storage accounts with trusted access
    You now have the capability with service principals to create shortcuts to Azure Data Lake Storage (ADLS) Gen2 storage accounts that have firewall enabled.  Previously, the creation of ADLS Gen2 shortcuts by service principals was restricted when firewall settings were active. However, with the latest changes, service principals will be able to navigate these restrictions … Continue reading “Use Service Principals to create shortcuts to ADLS Gen2 storage accounts with trusted access”  ( 6 min )
  • Open

    Resolving Azure App Service Mount Failures with File Share and Blob Storage
    When using Azure App Service to host web applications, it is common to mount file shares or blob storage hosted in an Azure Storage Account, a configuration also known as "Bring Your Own Storage (BYOS)". While the setup process is seamless, troubleshooting mount issues can be challenging due to different authentication, networking and other configuration aspects. Whether you are encountering errors during the application startup, or viewing any permission denied messages, this guide will help you by going through a step-by-step checklist, to validate the underlying dependencies and settings required for a successful mount: On Azure portal, open the Web App Configuration menu and select the Path Mappings tab. Confirm that the external storage is not being mounted to the unsupported filesys…  ( 27 min )
    Resolving Azure App Service Mount Failures with File Share and Blob Storage
    When using Azure App Service to host web applications, it is common to mount file shares or blob storage hosted in an Azure Storage Account, a configuration also known as "Bring Your Own Storage (BYOS)". While the setup process is seamless, troubleshooting mount issues can be challenging due to different authentication, networking and other configuration aspects. Whether you are encountering errors during the application startup, or viewing any permission denied messages, this guide will help you by going through a step-by-step checklist, to validate the underlying dependencies and settings required for a successful mount: On Azure portal, open the Web App Configuration menu and select the Path Mappings tab. Confirm that the external storage is not being mounted to the unsupported filesys…
    How can I hide the Server information in the response headers in PHP?
    In certain scenarios, you might want to remove the server information from your request header. Therefore, we might consider hiding that information. In Azure App Service for PHP, we are using Nginx, and we can modify configuration files if necessary. First, we need to locate the Nginx configuration file on the Kudu site, which can be found at the path /etc/nginx/nginx.conf. Then, perform cp /etc/nginx/nginx.conf /home/site/nginx.conf. We modify the configuration file under /home to retain our changes. We open the configuration file and uncomment the server_tokens off; directive in the http section of the Nginx configuration. Then you need to configure the startup command using Azure Portal from Configuration -> General Settings as below: cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload Checking again, we can see that the Nginx version is hidden. But what if we want to hide all the server information? To do this, follow these steps: (1) Copy the Nginx configuration file to the /home directory as we mentioned earlier. This is necessary because any files outside of /home will not be preserved after a restart. Use the following command: cp /etc/nginx/nginx.conf /home/site/nginx.conf (2) Open the Nginx configuration file located in /home, and add the following line in the http section. more_clear_headers 'server'. After adding it, save the file. (3) Update custom startup command using Azure Portal from Configuration -> General Settings as follows: apt update && apt install -y nginx-extras && cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload (4) Once done, and the request header should no longer display the Server information.   Reference: How to set Nginx headers -  ( 21 min )
    How can I hide the Server information in the response headers in PHP?
    In certain scenarios, you might want to remove the server information from your request header. Therefore, we might consider hiding that information. In Azure App Service for PHP, we are using Nginx, and we can modify configuration files if necessary. First, we need to locate the Nginx configuration file on the Kudu site, which can be found at the path /etc/nginx/nginx.conf. Then, perform cp /etc/nginx/nginx.conf /home/site/nginx.conf. We modify the configuration file under /home to retain our changes. We open the configuration file and uncomment the server_tokens off; directive in the http section of the Nginx configuration. Then you need to configure the startup command using Azure Portal from Configuration -> General Settings as below: cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload Checking again, we can see that the Nginx version is hidden. But what if we want to hide all the server information? To do this, follow these steps: (1) Copy the Nginx configuration file to the /home directory as we mentioned earlier. This is necessary because any files outside of /home will not be preserved after a restart. Use the following command: cp /etc/nginx/nginx.conf /home/site/nginx.conf (2) Open the Nginx configuration file located in /home, and add the following line in the http section. more_clear_headers 'server'. After adding it, save the file. (3) Update custom startup command using Azure Portal from Configuration -> General Settings as follows: apt update && apt install -y nginx-extras && cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload (4) Once done, and the request header should no longer display the Server information.   Reference: How to set Nginx headers -
    Connect Azure SQL Server via System Assigned Managed Identity under ASP.NET
    TOC Why we use it Architecture How to use it References   Why we use it This tutorial will introduce how to integrate Microsoft Entra with Azure SQL Server to avoid using fixed usernames and passwords. By utilizing System-assigned managed identities as a programmatic bridge, it becomes easier for Azure-related PaaS services (such as Container Apps) to communicate with the database without storing connection information in plain text.   Architecture I will introduce each service or component and their configurations in subsequent chapters according to the order of A-C: A: The company's account administrator needs to create or designate a user as the database administrator. This role can only be assigned to one person within the database and is responsible for basic configuration and the cr…  ( 33 min )
    Connect Azure SQL Server via System Assigned Managed Identity under ASP.NET
    TOC Why we use it Architecture How to use it References   Why we use it This tutorial will introduce how to integrate Microsoft Entra with Azure SQL Server to avoid using fixed usernames and passwords. By utilizing System-assigned managed identities as a programmatic bridge, it becomes easier for Azure-related PaaS services (such as Container Apps) to communicate with the database without storing connection information in plain text.   Architecture I will introduce each service or component and their configurations in subsequent chapters according to the order of A-C: A: The company's account administrator needs to create or designate a user as the database administrator. This role can only be assigned to one person within the database and is responsible for basic configuration and the cr…
  • Open

    The Smarter Way to Migrate to Azure
    Cut Through the Chaos If you’ve worked on a cloud migration, you already know—it’s not just about moving workloads. It’s about navigating ambiguity. Mapping services. Aligning teams. Untangling legacy architecture. Making decisions that won’t come back to bite you six months down the line. We built the Azure Migration Hub to bring order to that mess. It’s not a campaign or a marketing layer. It’s a guide—built by architects, engineers, and field teams who’ve done this work at scale—designed to help others do it better. The Migration Hub offers a structured path forward, connecting strategy, architecture patterns, and tooling guidance in the sequence real teams actually need. It also incorporates Azure Essentials best practices, which facilitate a smoother and more efficient transition to t…  ( 24 min )
    The Smarter Way to Migrate to Azure
    Cut Through the Chaos If you’ve worked on a cloud migration, you already know—it’s not just about moving workloads. It’s about navigating ambiguity. Mapping services. Aligning teams. Untangling legacy architecture. Making decisions that won’t come back to bite you six months down the line. We built the Azure Migration Hub to bring order to that mess. It’s not a campaign or a marketing layer. It’s a guide—built by architects, engineers, and field teams who’ve done this work at scale—designed to help others do it better. The Migration Hub offers a structured path forward, connecting strategy, architecture patterns, and tooling guidance in the sequence real teams actually need. It also incorporates Azure Essentials best practices, which facilitate a smoother and more efficient transition to t…
    Future-proof your workloads with Windows Server updates and Azure migration skilling
    Windows Server has been a trusted solution for businesses for over 30 years, providing reliability, security, and scalability for critical workloads. With the introduction of Windows Server 2025, Microsoft is enhancing security, performance, and cloud integration to help organizations focus on innovation rather than administrative overhead. Designed for seamless connectivity with Azure, Windows Server 2025 enables your business to leverage cloud-native tools and hybrid management capabilities effortlessly.   In this blog we’ll focus on the value of migrating your on-premises Windows Server workloads to Azure and review official Plans on Microsoft Learn—Migrate and Secure Windows Server Workloads on Azure—that will help your team succeed. We’re also excited to invite you to our upcoming Win…  ( 28 min )
    Future-proof your workloads with Windows Server updates and Azure migration skilling
    Windows Server has been a trusted solution for businesses for over 30 years, providing reliability, security, and scalability for critical workloads. With the introduction of Windows Server 2025, Microsoft is enhancing security, performance, and cloud integration to help organizations focus on innovation rather than administrative overhead. Designed for seamless connectivity with Azure, Windows Server 2025 enables your business to leverage cloud-native tools and hybrid management capabilities effortlessly.   In this blog we’ll focus on the value of migrating your on-premises Windows Server workloads to Azure and review official Plans on Microsoft Learn—Migrate and Secure Windows Server Workloads on Azure—that will help your team succeed. We’re also excited to invite you to our upcoming Win…
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events  Managing on-prem and hybrid Linux and PostgreSQL workloads presents ongoing challenges, from hardware maintenance and scalability limitations to security risks and operational complexity. As workloads grow, so do costs and administrative burdens of keeping them performant and resilient. Migrating to Azure provides a modern, cloud-based solution that enhances security, scalability, and cost efficiency—freeing teams to focus on innovation rather than infrastructure management.   In this blog, we’ll explore not only the value of migrating Linux and PostgreSQL workloads to Azure, but also some crucial, expert-curated skilling resources we provide to help your team master the process. Plus, we’ll disc…  ( 30 min )
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events  Managing on-prem and hybrid Linux and PostgreSQL workloads presents ongoing challenges, from hardware maintenance and scalability limitations to security risks and operational complexity. As workloads grow, so do costs and administrative burdens of keeping them performant and resilient. Migrating to Azure provides a modern, cloud-based solution that enhances security, scalability, and cost efficiency—freeing teams to focus on innovation rather than infrastructure management.   In this blog, we’ll explore not only the value of migrating Linux and PostgreSQL workloads to Azure, but also some crucial, expert-curated skilling resources we provide to help your team master the process. Plus, we’ll disc…
  • Open

    New scale options in Azure AI Search: change your pricing tier and service upgrade
    Introduction Azure AI Search is announcing two new preview features that make it easier to scale your search service to avoid production issues from storage limitations as your needs grow. Available in preview today: Change your pricing tier: Change the tier of your existing Azure AI Search service via the portal or management plane REST API. Self-service upgrade: Upgrade your search service to enable features previously only available in new services, such as the new storage limits released in April 2024. Change your pricing tier to scale up Now you can change your service tier from the Azure portal or management plane API. It's a simple scaling operation like adding partitions or replicas that ensures uninterrupted growth and operational continuity. Before, if you reached the maximum t…  ( 28 min )
    New scale options in Azure AI Search: change your pricing tier and service upgrade
    Introduction Azure AI Search is announcing two new preview features that make it easier to scale your search service to avoid production issues from storage limitations as your needs grow. Available in preview today: Change your pricing tier: Change the tier of your existing Azure AI Search service via the portal or management plane REST API. Self-service upgrade: Upgrade your search service to enable features previously only available in new services, such as the new storage limits released in April 2024. Change your pricing tier to scale up Now you can change your service tier from the Azure portal or management plane API. It's a simple scaling operation like adding partitions or replicas that ensures uninterrupted growth and operational continuity. Before, if you reached the maximum t…
  • Open

    Announcing Hybrid Search with Semantic Kernel for .NET
    Today we’re thrilled to announce support for Hybrid search with Semantic Kernel Vector Stores for .NET. What is Hybrid Search? Hybrid search performs two parallel searches on a vector database.  The union of the results of these two searches are then returned to callers with a combined rank, based on the rankings from each of […] The post Announcing Hybrid Search with Semantic Kernel for .NET appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Guest Blog: A Comprehensive Guide to Agentic AI with Semantic Kernel
    Today we’re excited to welcome Arafat Tehsin, who’s a Microsoft Most Valuable Professional (MVP) for AI. back as a guest author on the Semantic Kernel blog today to cover his work on a Comprehensive Guide to Agentic AI with Semantic Kernel. We’ll turn it over to Arafat to dive in further. The world of AI is evolving […] The post Guest Blog: A Comprehensive Guide to Agentic AI with Semantic Kernel appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing the agent debugging experience in Microsoft 365 Copilot
    Learn how to debug your agents within Microsoft 365 Copilot to streamline your workflow using the agent debugging experience, now generally available. The post Introducing the agent debugging experience in Microsoft 365 Copilot appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Lease Management in Azure Storage & Common troubleshooting scenarios
    The blog explains how lease management in Azure Storage works, covering the management of concurrent access to blobs and containers. It discusses key concepts such as acquiring, renewing, changing, releasing, and breaking leases, ensuring only the lease holder can modify or delete a resource for a specified duration. Additionally, it explores common troubleshooting scenarios in Azure Storage Lease Management. Lease management in Azure Storage allows you to create and manage locks on blobs for write and delete operations. This is particularly useful for ensuring that only one client can write to a blob at a time, preventing conflicts and ensuring data consistency. Key Concepts: Lease States: A blob can be in one of several lease states, such as Available, Leased, Expired, Breaking, and Bro…  ( 33 min )
    Lease Management in Azure Storage & Common troubleshooting scenarios
    The blog explains how lease management in Azure Storage works, covering the management of concurrent access to blobs and containers. It discusses key concepts such as acquiring, renewing, changing, releasing, and breaking leases, ensuring only the lease holder can modify or delete a resource for a specified duration. Additionally, it explores common troubleshooting scenarios in Azure Storage Lease Management. Lease management in Azure Storage allows you to create and manage locks on blobs for write and delete operations. This is particularly useful for ensuring that only one client can write to a blob at a time, preventing conflicts and ensuring data consistency. Key Concepts: Lease States: A blob can be in one of several lease states, such as Available, Leased, Expired, Breaking, and Bro…
    Performing Simple SFTP Operations on Azure Blob Storage using CURL Commands
    Introduction Azure Blob Storage now supports the SFTP protocol, making it easier to interact with blobs using standard tools like curl. This blog guides you through performing simple upload, download, delete, and list operations using curl over SFTP with Azure Blob Storage.   Pre-requisites Azure Blob Storage with SFTP enabled (Storage Account must have hierarchical namespace enabled) Enable SFTP Local user created in Azure Storage Account with SSH Key Pair as Authentication method and appropriate container permissions. Private key (.pem file) for SFTP authentication. Curl tool installed (version 7.55.0+ for SFTP support) Please note that the following tests are inclined towards Curl on Windows. The same can be performed with other OS with appropriate format changes.   Authentication Sup…  ( 25 min )
    Performing Simple SFTP Operations on Azure Blob Storage using CURL Commands
    Introduction Azure Blob Storage now supports the SFTP protocol, making it easier to interact with blobs using standard tools like curl. This blog guides you through performing simple upload, download, delete, and list operations using curl over SFTP with Azure Blob Storage.   Pre-requisites Azure Blob Storage with SFTP enabled (Storage Account must have hierarchical namespace enabled) Enable SFTP Local user created in Azure Storage Account with SSH Key Pair as Authentication method and appropriate container permissions. Private key (.pem file) for SFTP authentication. Curl tool installed (version 7.55.0+ for SFTP support) Please note that the following tests are inclined towards Curl on Windows. The same can be performed with other OS with appropriate format changes.   Authentication Sup…
  • Open

    Understanding Real-Time Intelligence CDC connector for PostgreSQL database
    Coauthor: Aazathraj, Chief Data Architect, Apollo Hospitals Real-Time Intelligence in Microsoft Fabric provides multiple database change data capture connectors including SQL database, MySQL, PostgreSQL, and Cosmos DB, which allows anyone to easily react and take actions on database changes in real-time. Each of the databases works differently when it comes to enabling CDC, giving permission … Continue reading “Understanding Real-Time Intelligence CDC connector for PostgreSQL database”  ( 9 min )
  • Open

    Strapi on App Service: Quick start
    Introduction Strapi is an open-source, headless CMS that is highly customizable and developer-friendly, making it a popular choice for content management. When it comes to Strapi hosting, deploying Strapi, or self hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability.  In this quick start guide, you will learn how to create and deploy your first Strapi site on Azure App Service Linux, using Azure Database for MySQL or PostgreSQL, along with other necessary Azure resources. Steps to deploy Strapi on App Service What is Strapi on App Service? App Service is a fully managed platform for building, deploying, and scaling web apps. Deploying…  ( 42 min )
    Strapi on App Service: Quick start
    Strapi is a widely used open-source headless CMS platform that empowers developers and content creators worldwide to manage and deliver content efficiently. It is known for its flexibility and robustness, having been continuously developed and improved by the community over the years. In this quick start guide, you will learn how to create and deploy your first Strapi site on Azure App Service Linux, using Azure Database for MySQL or PostgreSQL, along with other necessary Azure resources. This guide utilizes an ARM template to install the required resources for hosting your Strapi application, which will incur costs for your Azure Subscription. For pricing details, please refer to our section on estimating pricing. For more information read our Strapi on App Service overview blog. What is …
    Strapi on App Service: FAQ
    Where to host Strapi? How to self-host Strapi? When it comes to Strapi hosting, deploying Strapi, or self-hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability. Deploy Strapi on Azure App Service to leverage Strapi's flexible content management capabilities with the robust infrastructure of Microsoft's cloud. With greater customization control, global region availability, pre-built integration with other Azure services, Strapi on Azure App Service simplifies infrastructure management while ensuring high availability, security, and performance. Learn more from documentation below, Strapi on App Service - Overview How to deploy Strapi o…  ( 56 min )
    Strapi on App Service: FAQ
    How do I enable a custom domain for my Strapi website? Custom domains can be set up with these resources: Using custom domains on Azure App Service Configuring custom domains with Azure Front Door Does Strapi on App Service have email functionality? Yes, email functionality is supported through Azure Communication Services. You can also configure custom email domains.  In order to integrate Strapi application with Azure communication services, we use an Email plugin here. Can I use other databases with Strapi on Azure App Service? Strapi supports MySQL, PostgreSQL, MariaDB and SQL Lite. Our current ARM template solution supports using PostgreSQL or MySQL as database. To use another supported database, you could either modify the ARM template or even set up database manually and update th…
    Strapi on App Service: Overview
    What is Strapi on App Service? Strapi is an open-source, headless CMS that is highly customizable and developer-friendly, making it a popular choice for content management. When it comes to Strapi hosting, deploying Strapi, or self hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability. Deploy Strapi on Azure App Service to leverage Strapi's flexible content management capabilities with the robust infrastructure of Microsoft's cloud. Whether you're looking to self-host Strapi or find the best hosting options for Strapi, Azure App Service provides the ideal environment for high availability, security, and performance. This offering integ…  ( 36 min )
    Strapi on App Service: Overview
    What is Strapi on App Service? Strapi is an open-source, headless CMS that is highly customizable and developer-friendly. App Service is a fully managed platform for building, deploying, and scaling web apps. Deploying Strapi on App Service brings together the power of Strapi's flexible content management capabilities with the scalability and reliability of Microsoft's cloud infrastructure. This offering integrates key Azure services such as: Azure App Service: A scalable platform-as-a-service (PaaS) optimized for running Node.js applications such as Strapi. Azure Database for MySQL flexible server – A fully managed database service that offers high availability, automated maintenance, and elastic scaling for MySQL databases. Azure Database for PostgreSQL flexible server: A fully managed …
    Getting Started with Linux WebJobs on App Service - NodeJS
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the Node 20 LTS runtime stack, create your App Service Web App. The stack must be Node, since we plan on writing our WebJob using Node and a bash startup script. For this example, we’ll use Node 20 LTS. Next, we’ll add a basic Web…  ( 25 min )
    Getting Started with Linux WebJobs on App Service - NodeJS
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the Node 20 LTS runtime stack, create your App Service Web App. The stack must be Node, since we plan on writing our WebJob using Node and a bash startup script. For this example, we’ll use Node 20 LTS. Next, we’ll add a basic Web…
    Getting Started with Linux WebJobs on App Service – PHP
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the PHP 8.4 runtime stack, create your App Service Web App. The stack must be PHP, since we plan on writing our WebJob using PHP and a bash startup script. For this example, we’ll use PHP 8.4. Next, we’ll add a basic WebJob to our…  ( 25 min )
    Getting Started with Linux WebJobs on App Service – PHP
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the PHP 8.4 runtime stack, create your App Service Web App. The stack must be PHP, since we plan on writing our WebJob using PHP and a bash startup script. For this example, we’ll use PHP 8.4. Next, we’ll add a basic WebJob to our…
    Getting Started with Linux WebJobs on App Service - .NET 9
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the .NET 9 runtime stack, create your App Service Web App. The stack must be .NET, since we plan on writing our WebJob using .NET and a bash startup script. For this example, we’ll use .NET 9. Next, we’ll add a basic WebJob to our…  ( 26 min )
    Getting Started with Linux WebJobs on App Service - .NET 9
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and the .NET 9 runtime stack, create your App Service Web App. The stack must be .NET, since we plan on writing our WebJob using .NET and a bash startup script. For this example, we’ll use .NET 9. Next, we’ll add a basic WebJob to our…
  • Open

    Fast Stress Test of DeepSeek 671B on Azure AMD MI300X
    This artical is refer to this artical, welcome to read it: https://techcommunity.microsoft.com/blog/azurehighperformancecomputingblog/running-deepseek-r1-on-a-single-ndv5-mi300x-vm/4372726   Azure GPU VM Environment PreparationQuickly create a Spot VM, using Spot VM and password-based authentication: az vm create --name --resource-group --location --image microsoft-dsvm:ubuntu-hpc:2204-rocm:22.04.2025030701 --size Standard_ND96isr_MI300X_v5 --security-type Standard --priority Spot --max-price -1 --eviction-policy Deallocate --os-disk-size-gb 256 --os-disk-delete-option Delete --admin-username azureadmin --authentication-type password --admin-password The CLI command I used to create the VM: xinyu [ ~ ]$ az vm create --name mi300x-x…  ( 27 min )
    Fast Stress Test of DeepSeek 671B on Azure AMD MI300X
    This artical is refer to: https://techcommunity.microsoft.com/blog/azurehighperformancecomputingblog/running-deepseek-r1-on-a-single-ndv5-mi300x-vm/4372726   Azure GPU VM Environment PreparationQuickly create a Spot VM, using Spot VM and password-based authentication: az vm create --name --resource-group --location --image microsoft-dsvm:ubuntu-hpc:2204-rocm:22.04.2025030701 --size Standard_ND96isr_MI300X_v5 --security-type Standard --priority Spot --max-price -1 --eviction-policy Deallocate --os-disk-size-gb 256 --os-disk-delete-option Delete --admin-username azureadmin --authentication-type password --admin-password The CLI command I used to create the VM: xinyu [ ~ ]$ az vm create --name mi300x-xinyu --resource-group amdrg --loc…
  • Open

    Microsoft Dev Box subscriptions and licensing requirements demystified: What You Need and Why
    Thinking of deploying Microsoft Dev Box service but want to understand the licensing and subscription requirements first? Then, you have come to the right place. This blog post breaks it all down—clearly and simply—so you know exactly what you need and why, whether you’re an IT admin, platform engineer, developer, or a decision-maker. Microsoft Dev […] The post Microsoft Dev Box subscriptions and licensing requirements demystified: What You Need and Why appeared first on Develop from the cloud.  ( 24 min )
  • Open

    April Patches for Azure DevOps Server and Team Foundation Server
    Today we are releasing patches that impact our self-hosted product, Azure DevOps Server, as well as Team Foundation Server 2018.3.2. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps […] The post April Patches for Azure DevOps Server and Team Foundation Server appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Announcing CI/CD Enhancements for Azure Load Testing
    We are excited to announce a significant update to our Azure Load Testing service, aimed at enhancing the experience of setting up and running load tests from CI/CD systems, including Azure DevOps and GitHub. This update is a direct response to customer feedback and is designed to streamline the process, making it more efficient and user-friendly. Key Features and Improvements: Enhanced CI/CD Integration: Developers and testers can now configure application components and the metrics to monitor directly from a CI/CD pipeline. This integration allows monitoring the application infrastructure during the test run. You can make the following changes to your load test YAML config. appComponents: - resourceId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/samplerg/provid…  ( 23 min )
    Announcing CI/CD Enhancements for Azure Load Testing
    We are excited to announce a significant update to our Azure Load Testing service, aimed at enhancing the experience of setting up and running load tests from CI/CD systems, including Azure DevOps and GitHub. This update is a direct response to customer feedback and is designed to streamline the process, making it more efficient and user-friendly. Key Features and Improvements: Enhanced CI/CD Integration: Developers and testers can now configure application components and the metrics to monitor directly from a CI/CD pipeline. This integration allows monitoring the application infrastructure during the test run. You can make the following changes to your load test YAML config. appComponents: - resourceId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/samplerg/provid…
    Meet your hosts for JDConf 2025!
    JDConf 2025 is right around the corner and is set to be a global gathering for Java developers passionate about Cloud, AI, and the future of Java. With 22+ sessions and 10+ hours of live content streaming from April 9 - 10, plus additional on-demand sessions, this year’s event dives into app modernization, intelligent apps, frameworks, and AI-powered development with tools like Copilot and more. We are excited to invite you to join us with our three distinguished hosts: Bruno Borges, Sandra Ahlgrimm, and Rory Preddy. Meet Bruno Borges - Your host for JDConf 2025 Americas Bruno Borges is a seasoned professional with a rich background in the tech industry. Currently serving as Principal Product Manager at Microsoft, he focuses on Java developers' experience on Azure and beyond. With over tw…  ( 25 min )
    Meet your hosts for JDConf 2025!
    JDConf 2025 is right around the corner and is set to be a global gathering for Java developers passionate about Cloud, AI, and the future of Java. With 22+ sessions and 10+ hours of live content streaming from April 9 - 10, plus additional on-demand sessions, this year’s event dives into app modernization, intelligent apps, frameworks, and AI-powered development with tools like Copilot and more. We are excited to invite you to join us with our three distinguished hosts: Bruno Borges, Sandra Ahlgrimm, and Rory Preddy. Meet Bruno Borges - Your host for JDConf 2025 Americas Bruno Borges is a seasoned professional with a rich background in the tech industry. Currently serving as Principal Product Manager at Microsoft, he focuses on Java developers' experience on Azure and beyond. With over tw…
  • Open

    Python Vector Store Connectors update: Faiss, Azure SQL Server and Pinecone
    Announcing New Vector Stores: Faiss, SQL Server, and Pinecone We are thrilled to announce the availability of three new Vector Stores and Vector Store Record Collections: Faiss, SQL Server, and Pinecone. These new connectors will enable you to store and retrieve vector data efficiently, making it easier to work with your own data and data […] The post Python Vector Store Connectors update: Faiss, Azure SQL Server and Pinecone appeared first on Semantic Kernel.  ( 23 min )
    Guest Blog: Semantic Kernel and Copilot Studio Usage Series – Part 1
    Today on the Semantic Kernel blog we’re excited to welcome a group of guest authors from Microsoft. We’ll turn it over to Riccardo Chiodaroli, Samar El Housseini, Daniel Lavve and Fabrizio Ruocco to dive into their use cases with Semantic Kernel and Copilot Studio. In today’s fast-paced digital economy, intelligent automation is no longer optional—it’s […] The post Guest Blog: Semantic Kernel and Copilot Studio Usage Series – Part 1 appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Modo Agente: disponible para todos los usuarios y compatible con MCP
    ¡El modo Agente se está lanzando para todos los usuarios de VS Code! El agente actúa como un programador autónomo que realiza tareas de codificación en varias etapas bajo tu comando, como analizar tu base de código, proponer ediciones de archivos y ejecutar comandos en el terminal. Responde a errores de compilación y lint, monitorea la salida del terminal y corrige automáticamente en un bucle hasta que la tarea esté completada. El agente también puede usar herramientas contribuidas, lo que le permite interactuar con servidores MCP externos o extensiones de VS Code para realizar una amplia variedad de tareas.   Disponible para todos los usuarios Abre la vista de Chat, inicia sesión en GitHub, configura chat.agent.enabled en tus ajustes y selecciona Agente en el menú desplegable del modo de…  ( 36 min )
    Modo Agente: disponible para todos los usuarios y compatible con MCP
    ¡El modo Agente se está lanzando para todos los usuarios de VS Code! El agente actúa como un programador autónomo que realiza tareas de codificación en varias etapas bajo tu comando, como analizar tu base de código, proponer ediciones de archivos y ejecutar comandos en el terminal. Responde a errores de compilación y lint, monitorea la salida del terminal y corrige automáticamente en un bucle hasta que la tarea esté completada. El agente también puede usar herramientas contribuidas, lo que le permite interactuar con servidores MCP externos o extensiones de VS Code para realizar una amplia variedad de tareas.   Disponible para todos los usuarios Abre la vista de Chat, inicia sesión en GitHub, configura chat.agent.enabled en tus ajustes y selecciona Agente en el menú desplegable del modo de…
  • Open

    Announcing permission model changes for OneLake events in Fabric Real-Time Hub
    We are excited to announce the latest update to our permission model for OneLake events in the Fabric Real-Time Hub. Previously, users with the ReadAll permission, such as workspace admins, members, and contributors, could subscribe to OneLake events for items like lakehouses, warehouses, SQL databases, mirrored databases, and KQL databases. To provide more granular control, we … Continue reading “Announcing permission model changes for OneLake events in Fabric Real-Time Hub”  ( 6 min )
    Optimizing for CI/CD in Microsoft Fabric
    For nearly three years, Microsoft’s internal Azure Data team has been developing data engineering solutions using Microsoft Fabric. Throughout this journey, we’ve refined our Continuous Integration and Continuous Deployment (CI/CD) approach by experimenting with various branching models, workspace structures, and parameterization techniques. This article walks you through why we chose our strategy and how to implement it in … Continue reading “Optimizing for CI/CD in Microsoft Fabric”  ( 9 min )
    Implementing proactive monitoring with KQL query alerts with Activator
    Driving actions from real-time organizational data is important for making informed data-driven decisions and improving overall efficiency. By leveraging data effectively, organizations can gain insights into customer behaviour, operational performance, and market trends, enabling them to respond promptly to emerging issues and opportunities. Setting alerts on KQL queries can significantly enhance this proactive approach, especially … Continue reading “Implementing proactive monitoring with KQL query alerts with Activator”  ( 7 min )
    Content Sharing Report (Preview)
    Overview of the Admin monitoring workspace The admin monitoring workspace is an out-of-the-box solution that installs automatically when a Fabric admin accesses it. You can share the entire workspace or individual reports/semantic models with any user or group. Use the reports for insights on user activity, content sharing, capacity performance, and more in your Fabric … Continue reading “Content Sharing Report (Preview)”  ( 9 min )
  • Open

    Getting Results with AI Agents + Bing Grounding
    With the rise of Generative AI technologies, web search has never been more important—or more overwhelming. Modern search engines like Bing excel at delivering vast amounts of information quickly, but quantity alone doesn’t always guarantee the best possible insights. Enter the AI Agent Bing Grounding search tool: a specialized tool that uses Bing to refine, curate, and tailor results for easy consumption. In this post, we’ll explore what makes the Bing Grounding tool special, and how curated results can transform the way you use search results in your application. Taming the Overload with Curated Results It’s no secret that information overload is a real challenge in today’s fast-moving digital landscape. Even casual browsing can feel time-consuming when sifting through pages of search re…  ( 25 min )
    Getting Results with AI Agents + Bing Grounding
    With the rise of Generative AI technologies, web search has never been more important—or more overwhelming. Modern search engines like Bing excel at delivering vast amounts of information quickly, but quantity alone doesn’t always guarantee the best possible insights. Enter the AI Agent Bing Grounding search tool: a specialized tool that uses Bing to refine, curate, and tailor results for easy consumption. In this post, we’ll explore what makes the Bing Grounding tool special, and how curated results can transform the way you use search results in your application. Taming the Overload with Curated Results It’s no secret that information overload is a real challenge in today’s fast-moving digital landscape. Even casual browsing can feel time-consuming when sifting through pages of search re…
    Revolutionizing Retail: Meet the Two-Stage AI-Enhanced Search
    This article was written by the AI GBB Team: Samer El Housseini, Setu Chokshi, Aastha Madaan, and Ali Soliman. If you've ever struggled with a retailer's website search—typing in something simple like "snow boots" and getting random results, e.g. garden hoses—you're not alone. Traditional search engines often miss the mark because they're stuck in an outdated world of keyword matching. Modern shoppers want more. They want searches that understand context, intent, and personal preferences. Enter the game-changer: Two-Stage AI-Enhanced Search, powered by Azure AI Search and Azure OpenAI services. What's the Big Idea? Several retailers and e-commerce giants in the UK and Australia are already looking to transform customer experience using AI-enabled cutting-edge solutions. Customers often wis…  ( 38 min )
    Revolutionizing Retail: Meet the Two-Stage AI-Enhanced Search
    If you've ever struggled with a retailer's website search—typing in something simple like "snow boots" and getting random results, e.g. garden hoses—you're not alone. Traditional search engines often miss the mark because they're stuck in an outdated world of keyword matching. Modern shoppers want more. They want searches that understand context, intent, and personal preferences. Enter the game-changer: Two-Stage AI-Enhanced Search, powered by Azure AI Search and Azure OpenAI services. What's the Big Idea? Several retailers and e-commerce giants in the UK and Australia are already looking to transform customer experience using AI-enabled cutting-edge solutions. Customers often wish to search for products that they may want to give as a gift, something nice to wear for an occasion, somethin…

  • Open

    OPENROWSET function in Fabric Data Warehouse (Generally Available)
    The OPENROWSET function is generally available in Fabric Data Warehouse and Fabric SQL endpoints for Lakehouse and Mirrored databases. The OPENROWSET function enables you to easily read Parquet and CSV files stored in Azure Data Lake Storage and Azure Blob Storage: With the OPENROWSET function, you can easily browse files before loading them into the … Continue reading “OPENROWSET function in Fabric Data Warehouse (Generally Available)”  ( 8 min )
    Utilize User Data Functions in Data pipelines with the Functions activity (Preview)
    User Data Functions are now available in preview within data pipeline’s functions activity! This exciting new feature is designed to significantly enhance your data processing capabilities by allowing you to create and manage custom functions tailored to your specific needs. What is a functions activity? The functions activity in data pipelines is a powerful tool … Continue reading “Utilize User Data Functions in Data pipelines with the Functions activity (Preview)”  ( 9 min )
    Building an analytical web application with Microsoft Fabric
    Imagine a retail company that wants to gain insights into customer sentiment for each of its products. They also want to find their top-selling and least-selling products. Using Microsoft Fabric, they can build a powerful analytical application to transform their raw data into actionable insights. The process starts with ingesting raw data, such as customer reviews and sales figures, and ends with providing refined data through an API for internal use. This helps the company efficiently process customer feedback and make informed decisions to improve their products. In this blog post, we’ll dive into the architecture of an analytical application powered by Microsoft Fabric, as shown in the image, and provide a step-by-step guide on how to build it.  ( 8 min )
  • Open

    AI Agents: Building Trustworthy Agents- Part 6
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a critical topic: building trustworthy AI agents. In previous posts (links at the end!), we explored agent fundamentals, frameworks, design principles, tool usage, and Agentic RAG. Now, we'll focus on ensuring safety, security, and user privacy in your AI agent applications. Building Safe and Effective AI Agents Safety in AI agents means ensuring they behave as intended. A core component of this is a robust system message (or prompt) framework. Building a System Message Framework System messages define the rules, instructions, and guidelines for LLMs within agents. A scalable framework for crafting these messages is crucial: Meta System Message: A template prompt used …  ( 26 min )
    AI Agents: Building Trustworthy Agents- Part 6
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a critical topic: building trustworthy AI agents. In previous posts (links at the end!), we explored agent fundamentals, frameworks, design principles, tool usage, and Agentic RAG. Now, we'll focus on ensuring safety, security, and user privacy in your AI agent applications. Building Safe and Effective AI Agents Safety in AI agents means ensuring they behave as intended. A core component of this is a robust system message (or prompt) framework. Building a System Message Framework System messages define the rules, instructions, and guidelines for LLMs within agents. A scalable framework for crafting these messages is crucial: Meta System Message: A template prompt used …

  • Open

    Removal of deprecated DISCO & WSDL aspx pages from SharePoint Online
    We are removing DSICO and WSDL pages from the SharePoint Online by mid September 2025. The post Removal of deprecated DISCO & WSDL aspx pages from SharePoint Online appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Build AI agents with Python in #AgentsHack
    Microsoft is holding an AI Agents Hackathon, and we want to see what you can build with Python! We'll have 20+ live streams showing you how to build AI agents with Python using popular agent frameworks and Microsoft technologies. Then, you can submit your project for a chance to win prizes, including a Best in Python prize! The post Build AI agents with Python in #AgentsHack appeared first on Microsoft for Python Developers Blog.  ( 24 min )
    Python in Visual Studio Code – April 2025 Release
    The April 2025 release of the Python and Jupyter extensions for Visual Studio Code is now available. This update introduces enhancements to the Copilot experience in Notebooks, improved support for editable installs, faster and more reliable diagnostics, and the addition of custom Node.js arguments with Pylance, and more! The post Python in Visual Studio Code – April 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )
  • Open

    Model Mondays: Teaching your model new tricks with fine-tuning
    Whether you're optimizing a model for a specialized customer service bot, adapting tone for brand voice, or turbocharging domain-specific performance- fine-tuning unlocks the next level of precision and relevance. And this week, we're going hands-on with Mistral models, showing you just how simple, powerful, and cost-efficient fine-tuning can be in Azure AI Foundry. When: Monday, April 7Time: 1:30 PM ET | 10:30 AM PTWhere: Microsoft Reactor Live Show – RSVP Here   Why Fine-Tune Mistral? Mistral models are quickly gaining traction thanks to their speed, efficiency, and open-access architecture. But what makes them shine is how easily they can be adapted for your business context. In this episode, we’ll show you how to: Prepare datasets and configure training parameters Launch a fine-tuning job in Azure AI Foundry (yes, it’s as easy as a few clicks) Deploy your fine-tuned model with confidence And if you're wondering whether it's worth the effort, spoiler alert: early adopters are seeing dramatic boosts in accuracy and latency reduction, especially in niche or sensitive use cases. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4 : Visual Generative Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub The future of AI is happening every Monday—don’t miss it.  ( 22 min )
    Model Mondays: Teaching your model new tricks with fine-tuning
    Whether you're optimizing a model for a specialized customer service bot, adapting tone for brand voice, or turbocharging domain-specific performance- fine-tuning unlocks the next level of precision and relevance. And this week, we're going hands-on with Mistral models, showing you just how simple, powerful, and cost-efficient fine-tuning can be in Azure AI Foundry. When: Monday, April 7Time: 1:30 PM ET | 10:30 AM PTWhere: Microsoft Reactor Live Show – RSVP Here   Why Fine-Tune Mistral? Mistral models are quickly gaining traction thanks to their speed, efficiency, and open-access architecture. But what makes them shine is how easily they can be adapted for your business context. In this episode, we’ll show you how to: Prepare datasets and configure training parameters Launch a fine-tuning job in Azure AI Foundry (yes, it’s as easy as a few clicks) Deploy your fine-tuned model with confidence And if you're wondering whether it's worth the effort, spoiler alert: early adopters are seeing dramatic boosts in accuracy and latency reduction, especially in niche or sensitive use cases. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4 : Visual Generative Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub The future of AI is happening every Monday—don’t miss it.
    Serverless Fine Tuning Now In More US-Regions!
    We value your feedback and recognize the demand for fine tuning to be accessible in more regions. Today, we are excited to announce that serverless finetuning for Mistral, Phi, and NTT models is now available across all US regions where base model inferencing is also accessible. This expansion aims to provide greater flexibility and accessibility for users, ensuring that everyone can benefit from the enhanced capabilities of serverless finetuning.   Region Availability Cross region finetuning is now enabled in the following regions: EastUS  EastUS2 SouthCentralUS  NorthCentralUS  WestUS  WestUS3 Model Availability Mistral-Nemo  Mistral-Large-2411  Ministral-3B  Phi-3.5-mini-instruct  Phi-3.5-MoE-instruct  Phi-4-mini-instruct  Tsuzumi-7b     Looking Ahead: More Models and Regions As we continue to innovate and expand our model offerings, more models and regions will soon be supported. Our team is working diligently to ensure that users across various locations can benefit from the latest advancements in serverless finetuning. Stay tuned for updates as we roll out these enhancements, providing even greater flexibility and accessibility for our global user base. We appreciate your ongoing support and look forward to sharing more details in the near future.   Get started today!  Whether you're a newcomer to fine-tuning or an experienced developer, getting started with Azure AI Foundry is now more accessible than ever. Fine-tuning is available through both Azure AI Foundry and Azure ML Studio, offering a user-friendly interface for those who prefer a graphical user interface (GUI) and SDK’s and CLI for advanced users. Learn more!  Try it out with Azure AI Foundry Explore documentation for the model catalog in Azure AI Foundry Begin using the Finetuning SDK in the notebook Learn more about Azure AI Content Safety - Azure AI Content Safety – AI Content Moderation | Microsoft Azure   Get started with Finetuning on Azure AI Foundry Learn more about region availability  ( 21 min )
    Serverless Fine Tuning Now In More US-Regions!
    We value your feedback and recognize the demand for fine tuning to be accessible in more regions. Today, we are excited to announce that serverless finetuning for Mistral, Phi, and NTT models is now available across all US regions where base model inferencing is also accessible. This expansion aims to provide greater flexibility and accessibility for users, ensuring that everyone can benefit from the enhanced capabilities of serverless finetuning.   Region Availability Cross region finetuning is now enabled in the following regions: EastUS  EastUS2 SouthCentralUS  NorthCentralUS  WestUS  WestUS3 Model Availability Mistral-Nemo  Mistral-Large-2411  Ministral-3B  Phi-3.5-mini-instruct  Phi-3.5-MoE-instruct  Phi-4-mini-instruct  Tsuzumi-7b     Looking Ahead: More Models and Regions As we continue to innovate and expand our model offerings, more models and regions will soon be supported. Our team is working diligently to ensure that users across various locations can benefit from the latest advancements in serverless finetuning. Stay tuned for updates as we roll out these enhancements, providing even greater flexibility and accessibility for our global user base. We appreciate your ongoing support and look forward to sharing more details in the near future.   Get started today!  Whether you're a newcomer to fine-tuning or an experienced developer, getting started with Azure AI Foundry is now more accessible than ever. Fine-tuning is available through both Azure AI Foundry and Azure ML Studio, offering a user-friendly interface for those who prefer a graphical user interface (GUI) and SDK’s and CLI for advanced users. Learn more!  Try it out with Azure AI Foundry Explore documentation for the model catalog in Azure AI Foundry Begin using the Finetuning SDK in the notebook Learn more about Azure AI Content Safety - Azure AI Content Safety – AI Content Moderation | Microsoft Azure   Get started with Finetuning on Azure AI Foundry Learn more about region availability
  • Open

    Build AI agent tools using remote MCP with Azure Functions
    Model Context Protocol (MCP) is a way for apps to provide capabilities and context to a large language model. A key feature of MCP is the ability to define tools that AI agents can leverage to accomplish whatever tasks they’ve been given. MCP servers can be run locally, but remote MCP servers are important for sharing tools that work at cloud scale.  Today, we’re pleased to share an early experimental preview of triggers and bindings that allow you to build tools using remote MCP with server-sent events (SSE) with Azure Functions. Azure Functions lets you author focused, event-driven logic that scales automatically in response to demand. You just write code reflecting unique requirements of your tools, and Functions will take care of the rest. Remote MCP quickstarts for Azure Functions are…  ( 36 min )
    Build AI agent tools using remote MCP with Azure Functions
    Model Context Protocol (MCP) is a way for apps to provide capabilities and context to a large language model. A key feature of MCP is the ability to define tools that AI agents can leverage to accomplish whatever tasks they’ve been given. MCP servers can be run locally, but remote MCP servers are important for sharing tools that work at cloud scale.  Today, we’re pleased to share an early experimental preview of triggers and bindings that allow you to build tools using remote MCP with server-sent events (SSE) with Azure Functions. Azure Functions lets you author focused, event-driven logic that scales automatically in response to demand. You just write code reflecting unique requirements of your tools, and Functions will take care of the rest. Remote MCP quickstarts for Azure Functions are…
  • Open

    Boards Integration with GitHub Enterprise Cloud and Data Residency (Public Preview)
    Back in January, we launched a private preview of our Boards integration with GitHub Enterprise Cloud with data residency. If you’re unfamiliar with GitHub’s data residency option and what it means for your organization, you can learn more in the original announcement. Since the private preview launch, we’ve gathered valuable feedback from early adopters, and […] The post Boards Integration with GitHub Enterprise Cloud and Data Residency (Public Preview) appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Semantic Kernel Agents are now Generally Available
    The time is finally here, Semantic Kernel’s Agent framework is now Generally Available! Available today as part of Semantic Kernel 1.45 (.NET) and 1.27 (Python), the Semantic Kernel Agent framework makes it easier for agents to coordinate and dramatically reduces the code developers need to write to build amazing AI applications. What does Generally Available […] The post Semantic Kernel Agents are now Generally Available appeared first on Semantic Kernel.  ( 24 min )

  • Open

    How to set up Windows 365 (2025 tutorial)
    Set up and access your Cloud PCs from anywhere with a full Windows experience on any device using Windows 365. Whether you’re working from a browser, the Windows app, or Windows 365 Link, your desktop, apps, and settings are always available — just like a traditional PC. As an admin, you can quickly provision and manage Cloud PCs for multiple users with Microsoft Intune.  Scott Manchester, Windows Cloud Vice President, shows how easy it is to set up secure, scalable environments, ensure business continuity with built-in restore, and optimize performance with AI-powered insights. Work securely from anywhere.  Windows 365 acts like your personal PC in the cloud — scale CPU, RAM, and storage as needed. See it here. Deploy Cloud PCs in minutes.  Provision Cloud PCs in just a few clicks with…  ( 62 min )
    How to set up Windows 365 (2025 tutorial)
    Set up and access your Cloud PCs from anywhere with a full Windows experience on any device using Windows 365. Whether you’re working from a browser, the Windows app, or Windows 365 Link, your desktop, apps, and settings are always available — just like a traditional PC. As an admin, you can quickly provision and manage Cloud PCs for multiple users with Microsoft Intune.  Scott Manchester, Windows Cloud Vice President, shows how easy it is to set up secure, scalable environments, ensure business continuity with built-in restore, and optimize performance with AI-powered insights. Work securely from anywhere.  Windows 365 acts like your personal PC in the cloud — scale CPU, RAM, and storage as needed. See it here. Deploy Cloud PCs in minutes.  Provision Cloud PCs in just a few clicks with…
  • Open

    AI-powered development with Data Factory Microsoft Fabric
    In today’s data-driven landscape, organizations are constantly seeking ways to streamline their data integration processes, enhance productivity, and democratize access to powerful data engineering capabilities with Copilot for Data Factory. At Microsoft, we’re committed to empowering data engineers and analysts with intelligent tools that reduce complexity and accelerate development. We’re excited to share the latest … Continue reading “AI-powered development with Data Factory Microsoft Fabric”  ( 7 min )
    DataOps in Fabric Data Factory
    We’ve made several significant updates to our Fabric Data Factory artifacts stories, including Continuous Integration/Continuous Deployment (CI/CD) and APIs support! These updates are designed to automate the integration, testing, and deployment of code changes, ensuring efficient and reliable development. In Fabric Data Factory, we currently support two key features in collaboration with the Application Lifecycle … Continue reading “DataOps in Fabric Data Factory”  ( 6 min )
    Best-in-class connectivity and data movement with Data Factory in Fabric
    In the fast-evolving data integration landscape, Data Factory continues to enhance the existing connectors to provide high-throughput data ingestion experience with no-code, low-code experience. With a focus on improving connector efficiency and expanding capabilities, recent updates bring significant advancements to a number of connectors. These improvements focus on: Latest innovations 1. Lakehouse connector now supports … Continue reading “Best-in-class connectivity and data movement with Data Factory in Fabric”  ( 7 min )
  • Open

    Learn Python + AI from our video series!
    We just wrapped up our first Python + AI series, a six-part series showing how to use generative AI models from Python, with versions in both English and Spanish. We covered multiple kinds of models, like LLMs, embedding models, and multimodal models. We introduced popular approaches like RAG, function calling, and structured outputs. Finally, we discussed AI risk mitigation layers and showed how to evaluate AI quality and safety. To make it easy for everyone to follow along, we made sure all of our code examples work with GitHub Models, a service which provides free models for every GitHub account holder for experimentation and education. Even if you missed the live series, you can still go through all the material from the links below! If you're an instructor yourself, feel free to use t…  ( 38 min )
    Learn Python + AI from our video series!
    We just wrapped up our first Python + AI series, a six-part series showing how to use generative AI models from Python, with versions in both English and Spanish. We covered multiple kinds of models, like LLMs, embedding models, and multimodal models. We introduced popular approaches like RAG, function calling, and structured outputs. Finally, we discussed AI risk mitigation layers and showed how to evaluate AI quality and safety. To make it easy for everyone to follow along, we made sure all of our code examples work with GitHub Models, a service which provides free models for every GitHub account holder for experimentation and education. Even if you missed the live series, you can still go through all the material from the links below! If you're an instructor yourself, feel free to use t…
  • Open

    Code the Future with Java and AI – Join Me at JDConf 2025
    JDConf 2025 is just around the corner, and whether you’re a Java developer, architect, team leader, or decision maker I hope you’ll join me as we explore how Java is evolving with the power of AI and how you can start building the next generation of intelligent applications today.  Why JDConf 2025?  With over 22 expert-led sessions and 10+ hours of live content, JDConf is packed with learning, hands-on demos, and real-world solutions. You’ll hear from Java leaders and engineers on everything from modern application design to bringing AI into your Java stack. It’s free, virtual and your chance to connect from wherever you are. (On-demand sessions will also be available globally from April 9–10, so you can tune in anytime from anywhere.)  Bring AI into Java Apps  At JDConf 2025, we are going…  ( 27 min )
    Code the Future with Java and AI – Join Me at JDConf 2025
    JDConf 2025 is just around the corner, and whether you’re a Java developer, architect, team leader, or decision maker I hope you’ll join me as we explore how Java is evolving with the power of AI and how you can start building the next generation of intelligent applications today.  Why JDConf 2025?  With over 22 expert-led sessions and 10+ hours of live content, JDConf is packed with learning, hands-on demos, and real-world solutions. You’ll hear from Java leaders and engineers on everything from modern application design to bringing AI into your Java stack. It’s free, virtual and your chance to connect from wherever you are. (On-demand sessions will also be available globally from April 9–10, so you can tune in anytime from anywhere.)  Bring AI into Java Apps  At JDConf 2025, we are going…
  • Open

    Hola, Spain Central! Microsoft Dev Box Expands in Europe
    You asked, we built it. We’re thrilled to announce that Spain Central is now a supported region for Microsoft Dev Box! 🎉 That’s right — starting today, you can spin up Dev Boxes in Spain Central and get all the benefits of fast, secure, ready-to-code workstations, now closer to your European teams and data. To […] The post Hola, Spain Central! Microsoft Dev Box Expands in Europe appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Using OpenAI’s Audio-Preview Model with Semantic Kernel
    OpenAI’s gpt-4o-audio-preview is a powerful multimodal model that enables audio input and output capabilities, allowing developers to create more natural and accessible AI interactions. This model supports both speech-to-text and text-to-speech functionalities in a single API call through the Chat Completions API, making it suitable for building voice-enabled applications where turn-based interactions are appropriate. In this […] The post Using OpenAI’s Audio-Preview Model with Semantic Kernel appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Folder REST API (Preview)
    Workspace folders are an easy way for you to efficiently organize and manage items in the workspace. We’re pleased to share that Folder Rest API is now in preview. Create and manage folders in automation scenarios and seamlessly integrate with other systems and tools. What’s APIs are new? Updated existing APIs What’s coming soon? While the … Continue reading “Folder REST API (Preview)”  ( 5 min )
    New Eventstream sources: MQTT, Solace PubSub+, Azure Data Explorer, Weather & Azure Event Grid
    Eventstream is a data streaming service in Fabric Real-Time Intelligence, enables users to ingest, transform, and route real-time data streams from multiple sources into Fabric. We’re expanding Evenstream’s capabilities, and making real-time data integration even more seamless, by introducing five new source connectors and additional sample data streams.  With these new connectors, you can easily … Continue reading “New Eventstream sources: MQTT, Solace PubSub+, Azure Data Explorer, Weather & Azure Event Grid ”  ( 6 min )
    Workload Development Kit – OneLake support and Developer Experience enhancements
    We are excited to share several updates and enhancements for OneLake integration and the Workload Development Kit (WDK). These improvements aim to provide a smoother and more intuitive user experience, as well as new opportunities for monetization and real-time intelligence integration. OneLake integration All items now support storing data in OneLake. This means folders for … Continue reading “Workload Development Kit – OneLake support and Developer Experience enhancements”  ( 7 min )
  • Open

    Using Azure Monitor Workbook to calculate Azure Storage Used Capacity for all storage accounts
    In this blog, we will explore how to use Azure Monitor Workbook to collect and analyze metrics for all or selected storage accounts within a given subscription. We will walk through the steps to set up the workbook, configure metrics, and visualize storage account data to gain valuable insights into usage. Introduction For a given individual blob storage account, we can calculate the used capacity or transactions count or blob count by making use of PowerShell or Metrices available on Portal or Blob Inventory reports. However, if we are supposed to perform the same activity on all the storage accounts under a given subscription and create a consolidated report, it will be a huge task. For such cases, the Blob Inventory reports will not be of much help as it works on individual storage acco…  ( 26 min )
    Using Azure Monitor Workbook to calculate Azure Storage Used Capacity for all storage accounts
    In this blog, we will explore how to use Azure Monitor Workbook to collect and analyze metrics for all or selected storage accounts within a given subscription. We will walk through the steps to set up the workbook, configure metrics, and visualize storage account data to gain valuable insights into usage. Introduction For a given individual blob storage account, we can calculate the used capacity or transactions count or blob count by making use of PowerShell or Metrices available on Portal or Blob Inventory reports. However, if we are supposed to perform the same activity on all the storage accounts under a given subscription and create a consolidated report, it will be a huge task. For such cases, the Blob Inventory reports will not be of much help as it works on individual storage acco…
  • Open

    Microsoft 365 Certification control spotlight: Data access management
    Read how Microsoft 365 Certification ensures data access management best practices for Microsoft 365 apps, add-ins, and Copilot agents. The post Microsoft 365 Certification control spotlight: Data access management appeared first on Microsoft 365 Developer Blog.  ( 23 min )
    Dev Proxy v0.26 with improved mocking, plugin validation, and Docker support
    The latest version of Dev Proxy brings improved validation, plugin reliability, a brand-new Docker image and more. The post Dev Proxy v0.26 with improved mocking, plugin validation, and Docker support appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    CDN Domain URL change for Agents in Pipelines
    Introduction We have announced the retirement of Edgio CDN for Azure DevOps and are transitioning to a solution served by Akamai and Azure Front Door CDNs. This change affects Azure DevOps Pipelines customers. This article provides guidance for the Azure DevOps Pipelines customers to check if they are impacted by this change in CDN and […] The post CDN Domain URL change for Agents in Pipelines appeared first on Azure DevOps Blog.  ( 25 min )
    TFVC Policies Storage Updates
    TFVC Check-In Policies TFVC projects can have check-in policies such as Build (Require last build was successful), Work Item (Require associated work item), Changeset comments policy (Require users to add comment to their check-in), etc. We are changing the way we store these policies on the server. This change will slightly affect TFVC users since […] The post TFVC Policies Storage Updates appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Get Ready for .NET Conf: Focus on Modernization
    We’re excited to announce the topics and speakers for .NET Conf: Focus on Modernization, our latest virtual event on April 22-23, 2025! This event features live sessions from .NET and cloud computing experts, providing attendees with the latest insights into modernizing .NET applications, including technical upgrades, cloud migration, and tooling advancements. To get ready, visit the .NET Conf: Focus on Modernization home page and click Add to Calendar so you can save the date on your calendar. From this page, on the day of the event you’ll be able to join a live stream on YouTube and Twitch. We will also make the source code for the demos available on GitHub and the on-demand replays will be available on our YouTube channel. Learn more: https://focus.dotnetconf.net/ Why attend? In the fas…  ( 25 min )
    Get Ready for .NET Conf: Focus on Modernization
    We’re excited to announce the topics and speakers for .NET Conf: Focus on Modernization, our latest virtual event on April 22-23, 2025! This event features live sessions from .NET and cloud computing experts, providing attendees with the latest insights into modernizing .NET applications, including technical upgrades, cloud migration, and tooling advancements. To get ready, visit the .NET Conf: Focus on Modernization home page and click Add to Calendar so you can save the date on your calendar. From this page, on the day of the event you’ll be able to join a live stream on YouTube and Twitch. We will also make the source code for the demos available on GitHub and the on-demand replays will be available on our YouTube channel. Learn more: https://focus.dotnetconf.net/ Why attend? In the fas…
  • Open

    Join the UK University Cloud Challenge 2025
    DISCLAIMER: this is for UK only What is the UK University Cloud Challenge? The UK University Cloud Challenge 2025 is an exciting initiative aimed at enhancing employability and promoting friendly rivalry among students from various institutions. This challenge focuses on developing AI skills, which are in high demand, and provides participants with the opportunity to earn a Microsoft professional certification in AI.  Why You Should Join  AI Skills in Demand: Enhance your skillset with AI awareness.  Microsoft Certification: Earn a Microsoft AI Fundamentals certification (AI-900).  Access to Resources: Gain access to the recording of the  kick-off webinar session which was held on 26th March, 2025, and other valuable resources.  Friendly Rivalry: Compete against other students and institutions in a fun and engaging way.    Call to Actions  Register Now: Sign up for the University Cloud Challenge 2025 at https://aka.ms/UCC25.    Important: This is for UK only. Certification Exams: Take the certification exams between 2nd to 12th May 2025.  ( 19 min )
    Join the UK University Cloud Challenge 2025
    DISCLAIMER: this is for UK only What is the UK University Cloud Challenge? The UK University Cloud Challenge 2025 is an exciting initiative aimed at enhancing employability and promoting friendly rivalry among students from various institutions. This challenge focuses on developing AI skills, which are in high demand, and provides participants with the opportunity to earn a Microsoft professional certification in AI.  Why You Should Join  AI Skills in Demand: Enhance your skillset with AI awareness.  Microsoft Certification: Earn a Microsoft AI Fundamentals certification (AI-900).  Access to Resources: Gain access to the recording of the  kick-off webinar session which was held on 26th March, 2025, and other valuable resources.  Friendly Rivalry: Compete against other students and institutions in a fun and engaging way.    Call to Actions  Register Now: Sign up for the University Cloud Challenge 2025 at https://aka.ms/UCC25.    Important: This is for UK only. Certification Exams: Take the certification exams between 2nd to 12th May 2025.
    Exploring Azure Container Apps for Java developers: a must-watch video series
    Hi all! We are excited to share with you the second video in an ongoing series by Ayan Gupta, introducing Azure Container Apps (ACA) for Java developers. This video is titled "Java in Containers: Introduction to ACA's Architecture and Components" and is packed with valuable insights for anyone looking to take their Java applications to production. In this video, Ayan Gupta explains what ACA is and why it's an ideal platform for Java developers. You'll learn about ACA's architecture and components, and see various ways to quickly and easily deploy your Java apps using ACA. Don't miss out on this opportunity to enhance your skills and knowledge. Click here to subscribe to the Java at Microsoft YouTube channel to be notified of each new video in this series. Happy learning!  ( 19 min )
    Exploring Azure Container Apps for Java developers: a must-watch video series
    Hi all! We are excited to share with you the second video in an ongoing series by Ayan Gupta, introducing Azure Container Apps (ACA) for Java developers. This video is titled "Java in Containers: Introduction to ACA's Architecture and Components" and is packed with valuable insights for anyone looking to take their Java applications to production. In this video, Ayan Gupta explains what ACA is and why it's an ideal platform for Java developers. You'll learn about ACA's architecture and components, and see various ways to quickly and easily deploy your Java apps using ACA. Don't miss out on this opportunity to enhance your skills and knowledge. Click here to subscribe to the Java at Microsoft YouTube channel to be notified of each new video in this series. Happy learning!
  • Open

    RAG Time Journey 5: Enterprise-ready RAG
    Introduction Congratulations on making it this far and welcome to RAG Time Journey 5! This is the next step in our multi-format educational series on all things Retrieval Augmented Generation (RAG). Here we will explore how Azure AI Search integrates security measures while following safe AI principles to ensure secure RAG solutions.   Explore additional posts in our RAG Time series: Journey 1, Journey 2, Journey 3, Journey 4.   The development of AI and RAG is leading many companies to incorporate AI-driven solutions into their operations. This transition highlights the importance of embracing best practices for enterprise readiness to ensure long-term success.   But what is enterprise readiness? Enterprise readiness describes the process of being prepared to develop and manage a service,…  ( 46 min )
    RAG Time Journey 5: Enterprise-ready RAG
    Introduction Congratulations on making it this far and welcome to RAG Time Journey 5! This is the next step in our multi-format educational series on all things Retrieval Augmented Generation (RAG). Here we will explore how Azure AI Search integrates security measures while following safe AI principles to ensure secure RAG solutions.   Explore additional posts in our RAG Time series: Journey 1, Journey 2, Journey 3, Journey 4.   The development of AI and RAG is leading many companies to incorporate AI-driven solutions into their operations. This transition highlights the importance of embracing best practices for enterprise readiness to ensure long-term success.   But what is enterprise readiness? Enterprise readiness describes the process of being prepared to develop and manage a service,…

  • Open

    Announcing Fabric User Data Functions (Preview)
    We are excited to announce the preview of Fabric User Data Functions! This feature is made to empower developers to create functions that contain their business logic and connect to Fabric data sources, and/or invoke them from other Fabric items such as Data pipelines, Notebooks and Power BI reports. Fabric User Data Functions leverage the … Continue reading “Announcing Fabric User Data Functions (Preview)”  ( 7 min )
    On-premises data gateway February 2025 release
    Here is the February 2025 release of the on-premises data gateway.  ( 5 min )
    What’s new with OneLake shortcuts
    Microsoft Fabric shortcuts enable organizations to unify their data across various domains and clouds by creating a single virtual data lake. These shortcuts act as symbolic links to data in different storage locations, simplifying access and reducing the need for multiple copies. OneLake serves as the central hub for all analytics data. By using OneLake … Continue reading “What’s new with OneLake shortcuts”  ( 7 min )
    Hints in Fabric Data Warehouse
    What are hints? Hints are optional keywords that you can add to your SQL statements to provide additional information or instructions to the query optimizer. Hints can help you improve the performance, scalability, or consistency of your queries by overriding the default behavior of the query optimizer. For example, you can use hints to specify … Continue reading “Hints in Fabric Data Warehouse”  ( 8 min )
    Unlock the Power of Query insights and become a Fabric Data Warehouse performance detective
    In today’s data-driven landscape, optimizing query performance is paramount for organizations relying on data warehouses. Microsoft Fabric’s Query Insights emerges as a powerful tool, enabling data professionals to delve deep into query behaviors and enhance system efficiency. Understanding Query Insights Query Insights in Microsoft Fabric serves as a centralized repository, storing 30 days of historical … Continue reading “Unlock the Power of Query insights and become a Fabric Data Warehouse performance detective”  ( 7 min )
    SHOWPLAN_XML in Fabric Data Warehouse (Preview)
    We are excited to announce support for SHOWPLAN_XML in Microsoft Fabric Data Warehouse in preview. This capability allows users to generate and view the estimated query execution plan in XML format, a tool for analyzing and optimizing SQL queries. Whether you’re troubleshooting performance bottlenecks or refining query strategies during development, SHOWPLAN_XML offers a granular, detailed … Continue reading “SHOWPLAN_XML in Fabric Data Warehouse (Preview)”  ( 6 min )
    Playbook for metadata driven Lakehouse implementation in Microsoft Fabric
    Co-Author – Gyani Sinha, Abhishek Narain Overview A well-architected lakehouse enables organizations to efficiently manage and process data for analytics, machine learning, and reporting. To achieve governance, scalability, operational excellence, and optimal performance, adopting a structured, metadata-driven approach is crucial for lakehouse implementation. Building on our previous blog, Demystifying Data Ingestion in Fabric, this post … Continue reading “Playbook for metadata driven Lakehouse implementation in Microsoft Fabric”  ( 9 min )
    Secure, comply, collaborate: Item Permissions in Fabric Data Warehouse
    In today’s data-driven world, managing access to data is crucial for maintaining security, ensuring compliance, and optimizing collaboration. Item permissions play a vital role in controlling who can access, modify, and share data within an organization. This blog post will delve into the rationale behind the need for item permissions, what permissions can be assigned … Continue reading “Secure, comply, collaborate: Item Permissions in Fabric Data Warehouse”  ( 7 min )
    Introducing SQL Audit Logs for Fabric Data Warehouse
    Introducing SQL Audit Logs for Fabric Data Warehouse, a powerful new feature designed to enhance security, compliance, and operational insights for our users. The Role of SQL Audit Logs in Fabric Data Warehouse Security SQL Audit Logs in Microsoft Fabric Data Warehouse provide a comprehensive and immutable record of all database activities, capturing critical details … Continue reading “Introducing SQL Audit Logs for Fabric Data Warehouse”  ( 7 min )
    Introducing the Fabric CLI (Preview)
    ⚡️ TL;DR Give it a try. Break things. Tell us what you want next. 👉 Install the CLI and get started We’re excited to announce that the Fabric Command Line Interface (CLI) is now available in public preview — bringing a fast, flexible, and scriptable way to work with Microsoft Fabric from your terminal. What … Continue reading “Introducing the Fabric CLI (Preview)”  ( 8 min )
  • Open

    Understanding 'Always On' vs. Health Check in Azure App Service
    The 'Always On' feature in Azure App Service helps keep your app warm by ensuring it remains running and responsive, even during periods of inactivity with no incoming traffic. As this feature pings to root URI after every 5 minutes.              On Other hand Health-check feature helps pinging configured path every minute to monitor the application availability on each instance.   What is 'Always On' in Azure App Service? The Always On feature ensures that the host process of your web app stays running continuously. This results in better responsiveness after idle periods since the app doesn’t need to cold boot when a request arrives. How to enable Always On: Navigate to the Azure Portal and open your Web App. Go to Configuration > General Settings. Toggle Always On to On. What is Healt…  ( 23 min )
    Understanding 'Always On' vs. Health Check in Azure App Service
    The 'Always On' feature in Azure App Service helps keep your app warm by ensuring it remains running and responsive, even during periods of inactivity with no incoming traffic. As this feature pings to root URI after every 5 minutes.              On Other hand Health-check feature helps pinging configured path every minute to monitor the application availability on each instance.   What is 'Always On' in Azure App Service? The Always On feature ensures that the host process of your web app stays running continuously. This results in better responsiveness after idle periods since the app doesn’t need to cold boot when a request arrives. How to enable Always On: Navigate to the Azure Portal and open your Web App. Go to Configuration > General Settings. Toggle Always On to On. What is Healt…
    Discover new tools, skills, and best practices with Microsoft MVPs and Developer Influencers
    For April, we’re highlighting some of the great content being created by and featuring Microsoft MVPs, experts, and developer influencers. Dive deeper into GitHub Copilot, find hidden features in Visual Studio, learn how to bring GenAI into JavaScript apps, and more. We’ve got how-to videos, live events, demos, hackathons, and other learning resources that will help you level up your dev skills. You’ll find opportunities to connect with Microsoft experts, learn new skills, and explore the latest tools and feature updates for developers.     AI Agents Hackathon 2025 is here It’s time for AI Agents Hackathon 2025! Join this free hackathon to learn new skills, get hands-on experience, build an agent, and maybe even win a prize. Event runs April 8–30, 2025. Get details and register.    New Mic…  ( 35 min )
    Discover new tools, skills, and best practices with Microsoft MVPs and Developer Influencers
    For April, we’re highlighting some of the great content being created by and featuring Microsoft MVPs, experts, and developer influencers. Dive deeper into GitHub Copilot, find hidden features in Visual Studio, learn how to bring GenAI into JavaScript apps, and more. We’ve got how-to videos, live events, demos, hackathons, and other learning resources that will help you level up your dev skills. You’ll find opportunities to connect with Microsoft experts, learn new skills, and explore the latest tools and feature updates for developers.     AI Agents Hackathon 2025 is here It’s time for AI Agents Hackathon 2025! Join this free hackathon to learn new skills, get hands-on experience, build an agent, and maybe even win a prize. Event runs April 8–30, 2025. Get details and register.    New Mic…
  • Open

    Fast deploy and evaluate AI model performance on AML/AI Foundry
    This source code of this article: https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git    Note: This repository is designed to test the performance of open-source models from the Azure Machine Learning Model Catalog in Managed Compute. I tested the performance of nearly 20 AI models in my repository. Due to space limitations, this article only shows the testing of two models to help readers understand how to use my script for testing. More detailed info,please refer to https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git  Deploying models Methods https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview NameAzure OpenA…  ( 41 min )
    Fast deploy and evaluate AI model performance on AML/AI Foundry
    This source code of this article: https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git    Note: This repository is designed to test the performance of open-source models from the Azure Machine Learning Model Catalog in Managed Compute. I tested the performance of nearly 20 AI models in my repository. Due to space limitations, this article only shows the testing of two models to help readers understand how to use my script for testing. More detailed info,please refer to https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git  Deploying models Methods https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview NameAzure OpenA…
  • Open

    Automating PowerPoint Generation with AI: A Learn Live Series Case Study
    Introduction A Learn Live is a series of events where over a period of 45 to 60 minutes, a presenter walks attendees through a learning module or pathway. The show/series, takes you through a Microsoft Learn Module, Challenge or a particular sample. Between April 15 to May 13, we will be hosting a Learn Live series on "Master the Skills to Create AI Agents."  This premise is necessary for the blog because I was tasked with generating slides for the different presenters. Challenge: generation of the slides The series is based on the learning path: Develop AI agents on Azure and each session tackles one of the learn modules in the path. In addition, Learn Live series usually have a presentation template each speaker is provided with to help run their sessions. Each session has the same forma…  ( 31 min )
    Automating PowerPoint Generation with AI: A Learn Live Series Case Study
    Introduction A Learn Live is a series of events where over a period of 45 to 60 minutes, a presenter walks attendees through a learning module or pathway. The show/series, takes you through a Microsoft Learn Module, Challenge or a particular sample. Between April 15 to May 13, we will be hosting a Learn Live series on "Master the Skills to Create AI Agents."  This premise is necessary for the blog because I was tasked with generating slides for the different presenters. Challenge: generation of the slides The series is based on the learning path: Develop AI agents on Azure and each session tackles one of the learn modules in the path. In addition, Learn Live series usually have a presentation template each speaker is provided with to help run their sessions. Each session has the same forma…
  • Open

    Cut Costs and Speed Up AI API Responses with Semantic Caching in Azure API Management
    This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. We previously covered the hidden risks of AI APIs in today's AI-driven technological landscape. In this article, we dive deeper into one of the supported Gen AI policies in API Management, which allows you to minimize Azure OpenAI costs and make your applications more performant by reducing the number of calls sent to your LLM service. How does it currently work without the semantic caching policy? For simplicity, let's look at a scenario where we only have a single client app, a single user, and a single model deployment. This of course does not represent most real-wo…  ( 29 min )
    Cut Costs and Speed Up AI API Responses with Semantic Caching in Azure API Management
    This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. We previously covered the hidden risks of AI APIs in today's AI-driven technological landscape. In this article, we dive deeper into one of the supported Gen AI policies in API Management, which allows you to minimize Azure OpenAI costs and make your applications more performant by reducing the number of calls sent to your LLM service. How does it currently work without the semantic caching policy? For simplicity, let's look at a scenario where we only have a single client app, a single user, and a single model deployment. This of course does not represent most real-wo…
  • Open

    What’s new with Microsoft in open-source and Kubernetes at KubeCon + CloudNativeCon Europe 2025
    We are thrilled to join the community at this year’s KubeCon + CloudNativeCon Europe 2025 in London! The post What’s new with Microsoft in open-source and Kubernetes at KubeCon + CloudNativeCon Europe 2025 appeared first on Microsoft Open Source Blog.  ( 14 min )
  • Open

    Important Update: Server Name Indication (SNI) Now Mandatory for Azure DevOps Services
    Earlier this year, we announced an upgrade to our network infrastructure and the new IP addresses you need to allow list in your firewall – Update to Azure DevOps Allowed IP addresses – Azure DevOps Blog. This is our second blog post to inform you that starting from April 23rd, 2025, we will be requiring […] The post Important Update: Server Name Indication (SNI) Now Mandatory for Azure DevOps Services appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    Microsoft Graph APIs for permanent deletion of mailbox items now available
    We’re happy to announce the general availability (GA) of the permanent delete APIs for contacts, messages, and events as well as for contact folders, mail folders, and calendars. The post Microsoft Graph APIs for permanent deletion of mailbox items now available appeared first on Microsoft 365 Developer Blog.  ( 22 min )

  • Open

    Announcing the General Availability of CI/CD and REST APIs for Fabric Eventstream
    Collaborating on data streaming solutions can be challenging, especially when multiple developers work on the same Eventstream item. Version control challenges, deployment inefficiencies, and conflicts often slow down development. Since introducing Fabric CI/CD tools for Eventstream last year, many customers have streamlined their workflows, ensuring better source control and seamless versioning. Now, we’re excited to … Continue reading “Announcing the General Availability of CI/CD and REST APIs for Fabric Eventstream”  ( 6 min )
    Build event-driven workflows with Azure and Fabric Events (Generally Available)
    Business environments are more dynamic than ever, demanding real-time insights and automated responses to stay ahead. Organizations rely on event-driven solutions to detect changes, automate workflows, and drive intelligent actions as soon as events occur. Today, we’re excited to announce the general availability of Azure and Fabric Events, a powerful capability that allows organizations to … Continue reading “Build event-driven workflows with Azure and Fabric Events (Generally Available)”  ( 7 min )
    Seamlessly connect Azure Logic Apps to Fabric Eventstream using Managed Identity
    Eventstream’s Custom Endpoint is a powerful feature that allows users to send and fetch data from Eventstream. It provides two authentication methods for integrating external application:   Microsoft Entra ID Shared access signature (SAS) Keys While SAS Keys provide quick integration, they require users to store, rotate, and manage secrets manually, increasing security risks. On the … Continue reading “Seamlessly connect Azure Logic Apps to Fabric Eventstream using Managed Identity”  ( 7 min )
    Another dimension of Functions in Data Warehouse
    Today, we are announcing new types of Functions in Fabric Data Warehouse and Lakehouse SQL endpoint. Continue reading to find out more and if interested refer to sign up form for Functions preview in Fabric Data Warehouse. About functions Functions in SQL encapsulates specific logic that can be executed by invoking the function within queries, … Continue reading “Another dimension of Functions in Data Warehouse”  ( 8 min )
    Exciting New Features for Mirroring for Azure SQL in Fabric
    Attention data engineers, database developers, and data analysts! We’re pumped to reveal exciting upgrades to Mirroring for Azure SQL Database in Fabric today at the Fabric Conference in Las Vegas 2025. Since it became Generally Available, Mirroring for Azure SQL Database has been a game-changer, letting you replicate data seamlessly and integrate it within the … Continue reading “Exciting New Features for Mirroring for Azure SQL in Fabric”  ( 6 min )
    Revolutionizing Enterprise Network Security: support for VNET Data Gateway in Data pipeline and more (Preview)
    Virtual Network Data Gateway Support for Fabric Pipeline, Fast Copy in Dataflow Gen2, and Copy Job in Preview Unlocking seamless and Secure Data Integration for Enterprises In today’s data-driven world, network security and secure data transmission are paramount concerns for enterprises handling sensitive information. At Microsoft, we are committed to empowering businesses with the tools … Continue reading “Revolutionizing Enterprise Network Security: support for VNET Data Gateway in Data pipeline and more (Preview)”  ( 6 min )
    Running Apache Airflow jobs seamlessly in Microsoft Fabric
    Apache Airflow is a powerful platform to programmatically author, schedule, and monitor workflows. It is widely adopted for its flexibility, scalability, and ability to handle complex workflows with ease. With Apache Airflow, you can orchestrate your data pipelines, ensuring they run smoothly and efficiently. In May 2024, we launched the preview of Apache Airflow job … Continue reading “Running Apache Airflow jobs seamlessly in Microsoft Fabric”  ( 6 min )
    High Concurrency mode for notebooks in pipelines (Generally Available)
    High Concurrency mode for notebooks in pipelines is now generally available (GA)! This powerful feature enhances enterprise data ingestion and transformation by optimizing session sharing within one of the most widely used orchestration mechanisms. With this release, we’re also introducing Comprehensive Monitoring for High-Concurrency Spark Applications, bringing deeper visibility and control to your workloads. Key … Continue reading “High Concurrency mode for notebooks in pipelines (Generally Available)”  ( 6 min )
    Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric
    Introducing predefined Spark resource profiles in Microsoft Fabric—making it easier than ever for data engineers to optimize their compute configurations based on workload needs. Whether you’re handling read-heavy, write-heavy, or mixed workloads, Fabric now provides a property bag-based approach that streamlines Spark tuning with just a simple setting. With these new configurations, users can effortlessly … Continue reading “Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric”  ( 6 min )
    Supporting Database Mirroring sources behind a firewall
    Database mirroring is a powerful feature in Microsoft Fabric, enabling seamless data replication and high availability for critical workloads (learn more about mirroring). However, connecting to mirrored databases behind a firewall requires the right integration approach. Database Mirroring now supports firewall connectivity for Azure SQL Database, with Snowflake and Azure SQL Managed Instance coming soon, ensuring … Continue reading “Supporting Database Mirroring sources behind a firewall”  ( 6 min )
    Open Mirroring UI enhancements and CSV support to help you get started today
    Open Mirroring: Mirroring is one of the easiest ways to get data into Fabric, it creates a copy of your data in OneLake and keeps it up to date; no ETL required. Open Mirroring empowers everyone to create their own Mirroring Source using the publicly available API that allows you to replicate data from anywhere. … Continue reading “Open Mirroring UI enhancements and CSV support to help you get started today”  ( 6 min )
    What’s new for SQL database in Fabric?
    Spring 2025 Round up: Performance, Developer Experience, and Data Management! Co-author:  Idris Motiwala This week, at the 2025 Fabric Conference in Las Vegas, we are unveiling a series of features for the SQL database in Fabric, including: performance enhancements, streamlined developer workflows, and improved data pipeline management. Here are the high-level features you can look … Continue reading “What’s new for SQL database in Fabric?”  ( 6 min )
  • Open

    Enabling e2e TLS with Azure Container Apps
    This post will cover how to enable end-to-end TLS on Azure Container Apps.  ( 4 min )
  • Open

    Unlock the Power of Azure Container Apps for Java Developers
    Are you ready to dive into the world of Azure Container Apps and take your Java development skills to the next level? We have an exciting new video series just for you! 🎉 Check out the first video in our series, where we introduce Azure Container Apps for Java developers. This video is packed with valuable insights and practical tips to help you get started with Azure Container Apps.  But that's not all! This is just the beginning. We have more videos lined up to guide you through the journey of mastering Azure Container Apps. Stay tuned for upcoming videos that will cover advanced topics and best practices. Don't miss out on updates! Subscribe to the Java at Microsoft YouTube channel to be notified of each new video as soon as it's published. Click here and hit the subscribe button to be at the forefront of Java at Microsoft. Happy coding! 🚀  ( 19 min )
    Unlock the Power of Azure Container Apps for Java Developers
    Are you ready to dive into the world of Azure Container Apps and take your Java development skills to the next level? We have an exciting new video series just for you! 🎉 Check out the first video in our series, where we introduce Azure Container Apps for Java developers. This video is packed with valuable insights and practical tips to help you get started with Azure Container Apps.  But that's not all! This is just the beginning. We have more videos lined up to guide you through the journey of mastering Azure Container Apps. Stay tuned for upcoming videos that will cover advanced topics and best practices. Don't miss out on updates! Subscribe to the Java at Microsoft YouTube channel to be notified of each new video as soon as it's published. Click here and hit the subscribe button to be at the forefront of Java at Microsoft. Happy coding! 🚀
  • Open

    Actualizaciones en Visual Studio Code 1.98
    La versión 1.98 de Visual Studio Code ya está disponible y llega con una serie de novedades que llevarán tu experiencia de desarrollo al siguiente nivel. Entre los principales destaques se encuentran nuevas integraciones avanzadas con la inteligencia artificial de GitHub Copilot, como el Modo Agente (en vista previa), Copilot Edits para notebooks y el innovador Copilot Vision, que permite interactuar con imágenes directamente en las conversaciones de chat. Si quieres consultar todas las actualizaciones en detalle, visita la página de Novedades en el sitio oficial. Insiders: ¿Te gustaría probar estas funcionalidades antes que nadie? Descarga la versión Insiders y accede a los recursos más recientes en cuanto estén disponibles. Copilot Agent Mode (Vista previa) El Modo Agente de Copilot (v…  ( 34 min )
    Actualizaciones en Visual Studio Code 1.98
    La versión 1.98 de Visual Studio Code ya está disponible y llega con una serie de novedades que llevarán tu experiencia de desarrollo al siguiente nivel. Entre los principales destaques se encuentran nuevas integraciones avanzadas con la inteligencia artificial de GitHub Copilot, como el Modo Agente (en vista previa), Copilot Edits para notebooks y el innovador Copilot Vision, que permite interactuar con imágenes directamente en las conversaciones de chat. Si quieres consultar todas las actualizaciones en detalle, visita la página de Novedades en el sitio oficial. Insiders: ¿Te gustaría probar estas funcionalidades antes que nadie? Descarga la versión Insiders y accede a los recursos más recientes en cuanto estén disponibles. Copilot Agent Mode (Vista previa) El Modo Agente de Copilot (v…
  • Open

    Configure time-based scaling in Azure Container Apps
    Azure Container Apps leverages cron-type KEDA scaling rules to schedule autoscaling actions at specific times. This feature is ideal for applications with predictable workload fluctuations (e.g., batch jobs, reporting systems) that require scaling based on time-of-day or day-of-week patterns. This guide walks you through configuring and optimizing time-based scaling. Prerequisites An active Azure subscription with access to Azure Container Apps. Basic understanding of KEDA (Kubernetes Event-driven Autoscaling) concepts. A deployed application in Azure Container Apps (see Quickstart Guide). How Time-Based Scaling Works Time-based scaling in Azure Container Apps is achieved by defining cron-type scale rules(https://keda.sh/docs/2.15/scalers/cron/). It uses cron expressions to define start …  ( 26 min )
    Configure time-based scaling in Azure Container Apps
    Azure Container Apps leverages cron-type KEDA scaling rules to schedule autoscaling actions at specific times. This feature is ideal for applications with predictable workload fluctuations (e.g., batch jobs, reporting systems) that require scaling based on time-of-day or day-of-week patterns. This guide walks you through configuring and optimizing time-based scaling. Prerequisites An active Azure subscription with access to Azure Container Apps. Basic understanding of KEDA (Kubernetes Event-driven Autoscaling) concepts. A deployed application in Azure Container Apps (see Quickstart Guide). How Time-Based Scaling Works Time-based scaling in Azure Container Apps is achieved by defining cron-type scale rules(https://keda.sh/docs/2.15/scalers/cron/). It uses cron expressions to define start …
    Getting Started with Python WebJobs on App Service Linux
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting one of the Python runtime stacks, create your App Service Web App. The stack must be Python, since we plan on writing our WebJob using Python and a bash startup script. For this example, we’ll use Python 3.13. Next, we’l…  ( 25 min )
    Getting Started with Python WebJobs on App Service Linux
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting one of the Python runtime stacks, create your App Service Web App. The stack must be Python, since we plan on writing our WebJob using Python and a bash startup script. For this example, we’ll use Python 3.13. Next, we’l…
  • Open

    Migrating your Docker Compose applications to the Sidecar feature
    As we continue to enhance the developer experience on Azure App Service, we’re announcing the retirement of the Docker Compose feature on March 31, 2027. If you’re currently using Docker Compose to deploy and manage multi-container applications on App Service, now is the time to start planning your transition to the new Sidecar feature.  ( 8 min )
  • Open

    Kickstarting AI Agent Development with Synthetic Data: A GenAI Approach on Azure
    Introduction When building AI agents—especially for internal enterprise use cases—one of the biggest challenges is data access. Real organizational data may be: Disorganized or poorly labeled Stored across legacy systems Gatekept due to security or compliance Unavailable for a variety of reasons during early PoC stages Instead of waiting months for a data-wrangling project before testing AI Agent feasibility, you can bootstrap your efforts with synthetic data using Azure OpenAI. This lets you validate functionality, test LLM capabilities, and build a working prototype before touching production data. In this post, we’ll walk through how to use Azure OpenAI to generate realistic, structured synthetic data to power early-stage AI agents for internal tools such as CRM bots, HR assistants, a…  ( 31 min )
    Kickstarting AI Agent Development with Synthetic Data: A GenAI Approach on Azure
    Introduction When building AI agents—especially for internal enterprise use cases—one of the biggest challenges is data access. Real organizational data may be: Disorganized or poorly labeled Stored across legacy systems Gatekept due to security or compliance Unavailable for a variety of reasons during early PoC stages Instead of waiting months for a data-wrangling project before testing AI Agent feasibility, you can bootstrap your efforts with synthetic data using Azure OpenAI. This lets you validate functionality, test LLM capabilities, and build a working prototype before touching production data. In this post, we’ll walk through how to use Azure OpenAI to generate realistic, structured synthetic data to power early-stage AI agents for internal tools such as CRM bots, HR assistants, a…
    Best Practices for Kickstarting AI Agents with Azure OpenAI and Synthetic Data
    Introduction When building AI agents—especially for internal enterprise use cases—one of the biggest challenges is data access. Real organizational data may be: Disorganized or poorly labeled Stored across legacy systems Gatekept due to security or compliance Unavailable for a variety of reasons during early PoC stages Instead of waiting months for a data-wrangling project before testing AI Agent feasibility, you can bootstrap your efforts with synthetic data using Azure OpenAI. This lets you validate functionality, test LLM capabilities, and build a working prototype before touching production data. In this post, we’ll walk through how to use Azure OpenAI to generate realistic, structured synthetic data to power early-stage AI agents for internal tools such as CRM bots, HR assistants, a…

  • Open

    Improve the security of Generation 2 VMs via Trusted Launch in Azure DevTest Labs
    We’re thrilled to announce the public preview of the Trusted Launch feature for Generation 2 (Gen2) Virtual machines (VMs) in Azure DevTest Labs! 🌟 This game-changing feature is designed to enhance security of virtual machines (VMs), protecting against advanced and persistent attack techniques. Here are the key benefits: Securely deploy VMs with verified boot loaders, OS kernels, […] The post Improve the security of Generation 2 VMs via Trusted Launch in Azure DevTest Labs appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Expand Azure AI Agent with New Knowledge Tools: Microsoft Fabric and Tripadvisor
    To help AI Agents make well-informed decisions with confidence, knowledge serves as the foundation for generating accurate and grounded responses. By integrating comprehensive and precise data, Azure AI Agent Service enhances accuracy and delivers effective solutions, thereby improving the overall customer experience. Azure AI Agent Service aims to provide a wide range of knowledge tools to address various customer use cases, encompassing unstructured text data, structured data, private data, licensed data, public web data, and more.  Today, we are thrilled to announce the public preview of two new knowledge tools - Microsoft Fabric and Tripadvisor – designed to further empower your AI agents. Alongside existing capabilities such as Azure AI Search, File Search, and Grounding with Bing Se…  ( 29 min )
    Expand Azure AI Agent with New Knowledge Tools: Microsoft Fabric and Tripadvisor
    To help AI Agents make well-informed decisions with confidence, knowledge serves as the foundation for generating accurate and grounded responses. By integrating comprehensive and precise data, Azure AI Agent Service enhances accuracy and delivers effective solutions, thereby improving the overall customer experience. Azure AI Agent Service aims to provide a wide range of knowledge tools to address various customer use cases, encompassing unstructured text data, structured data, private data, licensed data, public web data, and more.  Today, we are thrilled to announce the public preview of two new knowledge tools - Microsoft Fabric and Tripadvisor – designed to further empower your AI agents. Alongside existing capabilities such as Azure AI Search, File Search, and Grounding with Bing Se…
    Best Practices for Using Generative AI in Automated Response Generation for Complex Decision Making
    Real-world AI Solutions: Lessons from the Field Overview Generative AI offers significant potential to streamline processes in domains with complex regulatory or clinical documentation. For example, in the context of prior authorization for surgical procedures, automated response generation can help parse detailed guidelines—such as eligibility criteria based on patient age, BMI thresholds, comorbid conditions, and documented behavioral interventions—to produce accurate and consistent outputs. The following document outlines best practices along with recommended architecture and process breakdown approaches to ensure that GenAI-powered responses are accurate, compliant, and reliable. 1. Understanding the Use Case Recognize the complexity of policy and clinical documents. Use cases like pr…  ( 34 min )
    Best Practices for Using Generative AI in Automated Response Generation for Complex Decision Making
    Real-world AI Solutions: Lessons from the Field Overview Generative AI offers significant potential to streamline processes in domains with complex regulatory or clinical documentation. For example, in the context of prior authorization for surgical procedures, automated response generation can help parse detailed guidelines—such as eligibility criteria based on patient age, BMI thresholds, comorbid conditions, and documented behavioral interventions—to produce accurate and consistent outputs. The following document outlines best practices along with recommended architecture and process breakdown approaches to ensure that GenAI-powered responses are accurate, compliant, and reliable. 1. Understanding the Use Case Recognize the complexity of policy and clinical documents. Use cases like pr…
    March 2025: Azure AI Speech’s HD voices are generally available and more
    Authors: Yufei Liu, Lihui Wang, Yao Qian, Yang Zheng, Jiajun Zhang, Bing Liu, Yang Cui, Peter Pan, Yan Deng, Songrui Wu, Gang Wang, Xi Wang, Shaofei Zhang, Sheng Zhao   We are pleased to announce that our Azure AI Speech’s Dragon HD neural text to speech (language model-based TTS, similar to model design for text LLM) voices, which have been available to users for some time, are now moving to general availability (GA). These voices have gained significant traction across various scenarios and have received valuable feedback from our users. This milestone is a testament to the extensive feedback and growing popularity of Azure AI Speech’s HD voices. As we continue to enhance the user experience, we remain committed to exploring and experimenting with new voices and advanced models to push t…  ( 32 min )
    March 2025: Azure AI Speech’s HD voices are generally available and more
    Authors: Yufei Liu, Lihui Wang, Yao Qian, Yang Zheng, Jiajun Zhang, Bing Liu, Yang Cui, Peter Pan, Yan Deng, Songrui Wu, Gang Wang, Xi Wang, Shaofei Zhang, Sheng Zhao   We are pleased to announce that our Azure AI Speech’s Dragon HD neural text to speech (language model-based TTS, similar to model design for text LLM) voices, which have been available to users for some time, are now moving to general availability (GA). These voices have gained significant traction across various scenarios and have received valuable feedback from our users. This milestone is a testament to the extensive feedback and growing popularity of Azure AI Speech’s HD voices. As we continue to enhance the user experience, we remain committed to exploring and experimenting with new voices and advanced models to push t…
  • Open

    Terraform Provider for Microsoft Fabric (Generally Available)
    Unlocking the full potential of Microsoft Fabric with Terraform Provider Terraform Provider for Microsoft Fabric is now generally available (GA)! The first version of the Terraform Provider for Fabric was released six months ago, enabling engineers to automate key aspects of their Fabric Data Platform. Since then, adoption has grown significantly, now even more customers … Continue reading “Terraform Provider for Microsoft Fabric (Generally Available)”  ( 7 min )
    Simplify Your Data Ingestion with Copy Job (Generally Available)
    Copy Job is now generally available, bringing you a simpler, faster, and more intuitive way to move data! Whether you’re handling batch transfers or need the efficiency of incremental data movement, Copy Job gives you the flexibility and reliability to get the job done.  Since it’s preview last September, we’ve received incredible feedback from you. … Continue reading “Simplify Your Data Ingestion with Copy Job (Generally Available)”  ( 6 min )
    Simplify your Warehouse ALM with DacFx integration in Git and Deployment pipelines for Fabric Warehouse
    Managing data warehouse changes and automating deployments is now simpler than ever with the integration of DacFx with Git and Deployment Pipelines for Fabric Warehouse. This integration enables seamless export and import of your data warehouses, efficient schema change management, and deployment through Git-connected workflows. Whether you’re collaborating with a team using tools like VS … Continue reading “Simplify your Warehouse ALM with DacFx integration in Git and Deployment pipelines for Fabric Warehouse”  ( 7 min )
    Easily load Fabric OneLake data into Excel — OneLake catalog and Get Data are integrated into Excel for Windows (Preview)
    We are excited to announce that the Get Data experience, along with the OneLake catalog, is now integrated into Excel for Windows (Preview). Like OneDrive, OneLake is a single, unified, logical data lake for your whole organization analytics data. This makes it crucial to have a streamlined method for loading OneLake data into Excel, enabling … Continue reading “Easily load Fabric OneLake data into Excel — OneLake catalog and Get Data are integrated into Excel for Windows (Preview)”  ( 5 min )
    New Solace PubSub+ Connector: seamlessly connect Fabric Eventstream with Solace PubSub+ (Preview)
    Real-time data is crucial for enterprises to stay competitive, enabling instant decision-making, enhanced customer experiences, and operational efficiency. It helps detect fraud, optimize supply chains, and personalize interactions. By leveraging continuous data streams, businesses can unlock new opportunities, improve resilience, and drive smarter automation in a data-driven world. What are Fabric Event Streams and Solace … Continue reading “New Solace PubSub+ Connector: seamlessly connect Fabric Eventstream with Solace PubSub+ (Preview)”  ( 7 min )
    AI Ready Apps: build RAG Data pipeline from Azure Blob Storage to SQL Database in Microsoft Fabric within minutes
    Microsoft Fabric is a unified, secure, and user-friendly data platform equipped with features necessary for developing enterprise-grade applications with minimal or no coding required. Last year, the platform was enhanced by introducing SQL Database in Fabric, facilitating AI application development within Microsoft Fabric. In a previous blog post, we discussed how to build a chatbot … Continue reading “AI Ready Apps: build RAG Data pipeline from Azure Blob Storage to SQL Database in Microsoft Fabric within minutes”  ( 13 min )
    Mirroring in Fabric – What’s new
    Mirroring is a powerful feature in Microsoft Fabric, enabling you to seamlessly reflect your existing data estate continuously from any database or data warehouse into OneLake in Fabric. Once Mirroring starts the replication process, the mirrored data is automatically kept up to date at near real-time in Fabric OneLake. With your data estates landed into … Continue reading “Mirroring in Fabric – What’s new”  ( 9 min )
  • Open

    AI Agents: Mastering Agentic RAG - Part 5
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into Agentic RAG (Retrieval-Augmented Generation). In previous posts (links at the end!), we've explored the foundations of AI agents. Now, we'll explore how Agentic RAG elevates traditional RAG by empowering LLMs to autonomously plan, retrieve information, and refine their reasoning process. I've even created some code samples demonstrating Agentic RAG with different tools and frameworks, which we'll explore below. What is Agentic RAG? Agentic RAG represents a significant evolution in how LLMs interact with external data. Unlike traditional RAG, which follows a linear "retrieve-then-read" approach, Agentic RAG empowers the LLM to act as an agent, autonomous…  ( 27 min )
    AI Agents: Mastering Agentic RAG - Part 5
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into Agentic RAG (Retrieval-Augmented Generation). In previous posts (links at the end!), we've explored the foundations of AI agents. Now, we'll explore how Agentic RAG elevates traditional RAG by empowering LLMs to autonomously plan, retrieve information, and refine their reasoning process. I've even created some code samples demonstrating Agentic RAG with different tools and frameworks, which we'll explore below. What is Agentic RAG? Agentic RAG represents a significant evolution in how LLMs interact with external data. Unlike traditional RAG, which follows a linear "retrieve-then-read" approach, Agentic RAG empowers the LLM to act as an agent, autonomous…
  • Open

    Lifecycle Management of Azure storage blobs using Azure Data Factory (ADF)
    Background:  Many times, we have a requirement to delete the page blobs automatically after certain period of times from the Storage account as currently Lifecyle management does not support Page blob deletion  Note:  we can delete All blobs (Page/Block/Append blob) from the ADF   Deletion of page blobs (or any blob type) from the storage account can be achieved using Azure Storage explorer, REST API, SDK’s, PowerShell, Azure Data Factory, Azure logic App, Azure Function app, Azure storage actions (Preview) etc. This blog shows how to use ADF to delete blobs.  Step 1:  Create an azure data factory resource from azure portal.  If you are new to ADF, please refer this link on how to create one:  https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal   Step …  ( 22 min )
    Lifecycle Management of Azure storage blobs using Azure Data Factory (ADF)
    Background:  Many times, we have a requirement to delete the page blobs automatically after certain period of times from the Storage account as currently Lifecyle management does not support Page blob deletion  Note:  we can delete All blobs (Page/Block/Append blob) from the ADF   Deletion of page blobs (or any blob type) from the storage account can be achieved using Azure Storage explorer, REST API, SDK’s, PowerShell, Azure Data Factory, Azure logic App, Azure Function app, Azure storage actions (Preview) etc. This blog shows how to use ADF to delete blobs.  Step 1:  Create an azure data factory resource from azure portal.  If you are new to ADF, please refer this link on how to create one:  https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal   Step …

  • Open

    Superfast Installing Code Push Server in a Windows Web App
    TOC Introduction Setup Debugging References 1. Introduction CodePush Server is a self-hosted backend for Microsoft CodePush, allowing you to manage and deploy over-the-air updates for React Native and Cordova apps. It provides update versioning, deployment history, and authentication controls. It is typically designed to run on Linux-based Node environments. If you want to deploy it on Azure Windows Web App, you can follow this tutorial to apply the necessary modifications. 2. Setup 1. Create a Windows Node.js Web App. In this example, we use Node.js 20 LTS.     2. After the Web App is created, go to the Overview tab and copy its FQDN. You'll need this in later steps.     3. Create a standard Storage Account.     4. Once created, go to Access keys and copy the Storage Account’s name a…  ( 29 min )
    Superfast Installing Code Push Server in a Windows Web App
    TOC Introduction Setup Debugging References 1. Introduction CodePush Server is a self-hosted backend for Microsoft CodePush, allowing you to manage and deploy over-the-air updates for React Native and Cordova apps. It provides update versioning, deployment history, and authentication controls. It is typically designed to run on Linux-based Node environments. If you want to deploy it on Azure Windows Web App, you can follow this tutorial to apply the necessary modifications. 2. Setup 1. Create a Windows Node.js Web App. In this example, we use Node.js 20 LTS.     2. After the Web App is created, go to the Overview tab and copy its FQDN. You'll need this in later steps.     3. Create a standard Storage Account.     4. Once created, go to Access keys and copy the Storage Account’s name a…
  • Open

    Model Mondays: Lights, Prompts, Action!
    In the world of visual generative AI—whether you're crafting marketing visuals, building creative tools, or enhancing user experiences with rich media—imagination is the new interface. And while traditional content creation tools require time, talent, and iterations, there’s now a faster, smarter way to bring ideas to life: visual generative models. From turning plain text into photorealistic imagery to transforming rough sketches into refined art, these models are redefining what’s possible in content creation. They don’t just generate visuals—they unlock creativity at scale. On Monday March 31, grab a front-row seat to the future of creative AI and get ready for a visually mind-blowing episode, where we’ll dive into the dazzling world of Visual Generative AI. We’re talking next-gen text…  ( 22 min )
    Model Mondays: Lights, Prompts, Action!
    In the world of visual generative AI—whether you're crafting marketing visuals, building creative tools, or enhancing user experiences with rich media—imagination is the new interface. And while traditional content creation tools require time, talent, and iterations, there’s now a faster, smarter way to bring ideas to life: visual generative models. From turning plain text into photorealistic imagery to transforming rough sketches into refined art, these models are redefining what’s possible in content creation. They don’t just generate visuals—they unlock creativity at scale. On Monday March 31, grab a front-row seat to the future of creative AI and get ready for a visually mind-blowing episode, where we’ll dive into the dazzling world of Visual Generative AI. We’re talking next-gen text…
  • Open

    Essentials of Azure and AI project performance and security | New training!
    Are you ready to elevate your cloud skills and master the essentials of reliability, security, and performance of Azure and AI project? Join us for comprehensive training in Microsoft Azure Virtual Training Day events, where you'll gain the knowledge and tools to adopt the cloud at scale and optimize your cloud spend. Event Highlights: Two-Day Agenda: Dive deep into how-to learning on cloud and AI adoption, financial best practices, workload design, environment management, and more. Expert Guidance: Learn from industry experts and gain insights into designing with optimization in mind with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure. Hands-On Learning: Participate in interactive sessions and case studies to apply essentials of Azure and AI best practices in real-world scenarios, like reviewing and remediating workloads. FinOps in the Era of AI: Discover how to build a culture of cost efficiency and maximize the business value of the cloud with the FinOps Framework, including principles, phases, domains, and capabilities. Why Attend? Build Reliable and Secure Systems: Understand the shared responsibility between Microsoft and its customers to build resilient and secure systems. Optimize Cloud Spend: Learn best practices for cloud spend optimization and drive market differentiation through savings. Enhance Productivity: Improve productivity, customer experience, and competitive advantage by elevating the resiliency and security of your critical workloads. Don't miss the opportunity to transform your cloud strategy and take your skills to the next level. Register now and join us for an insightful and engaging virtual training experience! Register today!  Aka.ms/AzureEssentialsVTD  Eager to learn before the next event? Dive into our free self-paced training modules: Cost efficiency of Azure and AI Projects | on Microsoft Learn Resiliency and security of Azure and AI Projects | on Microsoft Learn   Overview of essential skilling for Azure and AI workloads | on Microsoft Learn  ( 21 min )
    Essentials of Azure and AI project performance and security | New training!
    Are you ready to elevate your cloud skills and master the essentials of reliability, security, and performance of Azure and AI project? Join us for comprehensive training in Microsoft Azure Virtual Training Day events, where you'll gain the knowledge and tools to adopt the cloud at scale and optimize your cloud spend. Event Highlights: Two-Day Agenda: Dive deep into how-to learning on cloud and AI adoption, financial best practices, workload design, environment management, and more. Expert Guidance: Learn from industry experts and gain insights into designing with optimization in mind with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure. Hands-On Learning: Participate in interactive sessions and case studies to apply essentials of Azure and AI best practices in real-world scenarios, like reviewing and remediating workloads. FinOps in the Era of AI: Discover how to build a culture of cost efficiency and maximize the business value of the cloud with the FinOps Framework, including principles, phases, domains, and capabilities. Why Attend? Build Reliable and Secure Systems: Understand the shared responsibility between Microsoft and its customers to build resilient and secure systems. Optimize Cloud Spend: Learn best practices for cloud spend optimization and drive market differentiation through savings. Enhance Productivity: Improve productivity, customer experience, and competitive advantage by elevating the resiliency and security of your critical workloads. Don't miss the opportunity to transform your cloud strategy and take your skills to the next level. Register now and join us for an insightful and engaging virtual training experience! Register today!  Aka.ms/AzureEssentialsVTD  Eager to learn before the next event? Dive into our free self-paced training modules: Cost efficiency of Azure and AI Projects | on Microsoft Learn Resiliency and security of Azure and AI Projects | on Microsoft Learn   Overview of essential skilling for Azure and AI workloads | on Microsoft Learn
    Monitoring Azure VMware Solution Basics
    The focus is on what to monitor with guidance based on Microsoft and VMware native tools; however, third-party tools can maintain the environment's infrastructure and health maintain the infrastructure and health of the environment. The focus is on what to monitor with guidance based on Microsoft and VMware native tools, however 3rd party tools can of course be used as alternatives. Solution components that write to the VMware syslog include VMware ESXi, VMware vSAN, VMware NSX-T Data Center, and VMware vCenter Server.   When Diagnostics is enabled, those logs are written to the designated Log Analytics Workspace. Basic health Impact: Operational Excellence Host Operations: Ensure you are aware of pending host operations by setting up the notifications for host remediation activities (chan…  ( 52 min )
    Monitoring Azure VMware Solution Basics
    The focus is on what to monitor with guidance based on Microsoft and VMware native tools; however, third-party tools can maintain the environment's infrastructure and health maintain the infrastructure and health of the environment. The focus is on what to monitor with guidance based on Microsoft and VMware native tools, however 3rd party tools can of course be used as alternatives. Solution components that write to the VMware syslog include VMware ESXi, VMware vSAN, VMware NSX-T Data Center, and VMware vCenter Server.   When Diagnostics is enabled, those logs are written to the designated Log Analytics Workspace. Basic health Impact: Operational Excellence Host Operations: Ensure you are aware of pending host operations by setting up the notifications for host remediation activities (chan…

  • Open

    Building a Model Context Protocol Server with Semantic Kernel
    This is second MCP related blog post that is part of a series of blog posts that will cover how to use Semantic Kernel (SK) with the Model Context Protocol (MCP). This blog post demonstrates how to build an MCP server using MCP C# SDK and SK, expose SK plugins as MCP tools and call […] The post Building a Model Context Protocol Server with Semantic Kernel appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Scaling Cloud ETL: Optimizing Performance and Resolving Azure Data Factory Copy Bottlenecks
    Optimizing ETL Data Pipelines When building ETL data pipelines using Azure Data Factory (ADF) to process huge amounts of data from different sources, you may often run into performance and design-related challenges. This article will serve as a guide in building high-performance ETL pipelines that are both efficient and scalable. Below are the major guidelines to consider when building optimized ETL data pipelines in ADF: Linked Services and Datasets A linked service is a connection to a data source that can be created once and reused across multiple pipelines within the same ADF. It is efficient to create one linked service per source for easy maintenance. Similarly, datasets are derived from the linked services to fetch the data from the source. These should ideally be a single dataset f…  ( 35 min )
    Scaling Cloud ETL: Optimizing Performance and Resolving Azure Data Factory Copy Bottlenecks
    Optimizing ETL Data Pipelines When building ETL data pipelines using Azure Data Factory (ADF) to process huge amounts of data from different sources, you may often run into performance and design-related challenges. This article will serve as a guide in building high-performance ETL pipelines that are both efficient and scalable. Below are the major guidelines to consider when building optimized ETL data pipelines in ADF: Linked Services and Datasets A linked service is a connection to a data source that can be created once and reused across multiple pipelines within the same ADF. It is efficient to create one linked service per source for easy maintenance. Similarly, datasets are derived from the linked services to fetch the data from the source. These should ideally be a single dataset f…
  • Open

    Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions
    Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While Azure Functions Team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we’ve provided Azure CLI scripts to help you:✅ Identify all Function Apps using a specific runtime version✅ Find apps running on unsupported or soon-to-be-retired versions✅ Take proactive steps to upgrade and maintain a secure, supported environment Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today! Why Upgrading Your Azure Functions Matters Azure Functions supports six different programming language…  ( 32 min )
    Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions
    Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While Azure Functions Team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we’ve provided Azure CLI scripts to help you:✅ Identify all Function Apps using a specific runtime version✅ Find apps running on unsupported or soon-to-be-retired versions✅ Take proactive steps to upgrade and maintain a secure, supported environment Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today! Why Upgrading Your Azure Functions Matters Azure Functions supports six different programming language…

  • Open

    Azure DevTest Labs’ feedback forum has a new home!
    We’re thrilled to announce that Azure DevTest Labs has joined the Visual Studio Developer Community to collect valuable feedback and suggestions from our customers. 🌟 This fantastic feedback portal is here to make your experience smoother and more engaging. It’s your go-to place to connect with the Azure DevTest Labs product and engineering teams to […] The post Azure DevTest Labs’ feedback forum has a new home! appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Microsoft 365 Certification control spotlight: Data retention, back-up, and disposal
    Learn how Microsoft 365 Certification validates data retention, back-up, and disposal controls for Microsoft 365 apps. The post Microsoft 365 Certification control spotlight: Data retention, back-up, and disposal appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Recovering Large Number of Soft-deleted Blobs Using Storage Actions
    Blob soft delete is a crucial feature that protects your data from accidental deletions or overwrites. By preserving deleted data for a defined period, it helps maintain data integrity and availability, even in cases of human error. However, restoring soft-deleted data can be time-consuming, as each deleted blob must be individually restored using the undelete API. At present, there is no option to bulk restore all soft-deleted blobs.   In this blog, we present a no-code solution for efficiently restoring soft-deleted data using Azure Storage tasks. This approach is especially useful when dealing with a large number of blobs, eliminating the need for custom scripts. Additionally, it allows you to apply multiple filters to restore only the necessary blobs.   Note: This feature is currently …  ( 24 min )
    Recovering Large Number of Soft-deleted Blobs Using Storage Actions
    Blob soft delete is a crucial feature that protects your data from accidental deletions or overwrites. By preserving deleted data for a defined period, it helps maintain data integrity and availability, even in cases of human error. However, restoring soft-deleted data can be time-consuming, as each deleted blob must be individually restored using the undelete API. At present, there is no option to bulk restore all soft-deleted blobs.   In this blog, we present a no-code solution for efficiently restoring soft-deleted data using Azure Storage tasks. This approach is especially useful when dealing with a large number of blobs, eliminating the need for custom scripts. Additionally, it allows you to apply multiple filters to restore only the necessary blobs.   Note: This feature is currently …
    [AI Search] Troubleshooting OneLake Files Connection via Wizard
    Are unable to connect to your OneLakeFiles? In this documentation you will be able to troubleshoot and find the solution for each issue.  *This function is at PREVIEW, therefore, please be informed on it (as of 2025.03.26) Are you are looking for how to integrate I Search with Fabric OneLake, please look at this article which explains an overview of the objective and the instructions. (article link) Also make sure that you have all the prerequisites checked before moving down to the article. You can find the prerequisites here. If you are still reading this article, that means you are having an issue with Connect to your data section as below . [Error 1 - The workspace or the lakehouse specified cannot be found]   This error can occur if AI Search and Lakehouse are not in the same tenant.  Please make sure that both services are in the same tenant.  You can check this article to find out more about how to find the Tenant ID.   [Error 2 - Unable to list items within the lakehouse using the specified identity as access to the workspace was denied] This error occurs for two reasons. In this article, we will demonstrate a situation with system-assigned. Make sure your AI Search is in system-assigned identity or user-assigned. You can find your configuration under AI Search Services > Settings > Identity Make sure that Status is ON – If using system-assigned.   Go to your Fabric OneLake Workspace and provide your AI Search service contributor role to the workspace. The process may take 5-15 minutes, please allow some time for it to be processed. If you would like to use user-assigned, here is an example that you can refer to as an example. However if you are unable to find your AI Search in your OneLake workspace, make sure to enable the below configuration from Fabric One Lake   Go to "app.powerbi.com" and click the configuration > Governance and Insights > Admin Portal From the Admin Portal > Tenant Settings > Search “API”   Make sure to enable Service Principals can use Fabric APIs.  ( 22 min )
    [AI Search] Troubleshooting OneLake Files Connection via Wizard
    Are unable to connect to your OneLakeFiles? In this documentation you will be able to troubleshoot and find the solution for each issue.  *This function is at PREVIEW, therefore, please be informed on it (as of 2025.03.26) Are you are looking for how to integrate I Search with Fabric OneLake, please look at this article which explains an overview of the objective and the instructions. (article link) Also make sure that you have all the prerequisites checked before moving down to the article. You can find the prerequisites here. If you are still reading this article, that means you are having an issue with Connect to your data section as below . [Error 1 - The workspace or the lakehouse specified cannot be found]   This error can occur if AI Search and Lakehouse are not in the same tenant.  Please make sure that both services are in the same tenant.  You can check this article to find out more about how to find the Tenant ID.   [Error 2 - Unable to list items within the lakehouse using the specified identity as access to the workspace was denied] This error occurs for two reasons. In this article, we will demonstrate a situation with system-assigned. Make sure your AI Search is in system-assigned identity or user-assigned. You can find your configuration under AI Search Services > Settings > Identity Make sure that Status is ON – If using system-assigned.   Go to your Fabric OneLake Workspace and provide your AI Search service contributor role to the workspace. The process may take 5-15 minutes, please allow some time for it to be processed. If you would like to use user-assigned, here is an example that you can refer to as an example. However if you are unable to find your AI Search in your OneLake workspace, make sure to enable the below configuration from Fabric One Lake   Go to "app.powerbi.com" and click the configuration > Governance and Insights > Admin Portal From the Admin Portal > Tenant Settings > Search “API”   Make sure to enable Service Principals can use Fabric APIs.
  • Open

    Observe Quarkus Apps with Azure Application Insights using OpenTelemetry
    Overview This blog shows you how to observe Red Hat Quarkus applications with Azure Application Insights using OpenTelemetry. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app utilizes OpenTelemetry to instrument, generate, collect, and export telemetry data for observability. The blog guides you to test your app locally, deploy it to Azure Container Apps and observe its telemetry data with Azure Application Insights. Prerequisites An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Prepare a local machine with Unix-like operating system installed - for example, Ubuntu, macOS, or Windows Subsystem for Linux. …  ( 47 min )
    Observe Quarkus Apps with Azure Application Insights using OpenTelemetry
    Overview This blog shows you how to observe Red Hat Quarkus applications with Azure Application Insights using OpenTelemetry. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app utilizes OpenTelemetry to instrument, generate, collect, and export telemetry data for observability. The blog guides you to test your app locally, deploy it to Azure Container Apps and observe its telemetry data with Azure Application Insights. Prerequisites An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Prepare a local machine with Unix-like operating system installed - for example, Ubuntu, macOS, or Windows Subsystem for Linux. …
    Getting Started with Java WebJobs on Azure App Service
    Getting Started with Linux WebJobs on App Service - Java   WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting one of the Java runtime stacks, create your App Service Web App. The stack must be Java, since we plan on writing our WebJob using Java and a bash startup script…  ( 26 min )
    Getting Started with Java WebJobs on Azure App Service
    Getting Started with Linux WebJobs on App Service - Java   WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting one of the Java runtime stacks, create your App Service Web App. The stack must be Java, since we plan on writing our WebJob using Java and a bash startup script…
  • Open

    Announcing backup storage billing for SQL database in Microsoft Fabric: what you need to know
    Ensuring data protection with automated backups SQL database in Microsoft Fabric provides automatic backups from the moment a database is created, ensuring seamless data protection and recovery. The system follows a robust backup strategy: This approach lets users restore their database to any point in the past seven days, making data recovery simple and efficient. … Continue reading “Announcing backup storage billing for SQL database in Microsoft Fabric: what you need to know”  ( 5 min )
  • Open

    Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
    Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities. What is Model Context Protocol (MCP)? MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.  Key Features of MCP Standardized Comm…  ( 32 min )
    Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
    Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities. What is Model Context Protocol (MCP)? MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.  Key Features of MCP Standardized Comm…
  • Open

    New Overlapping Secrets on Azure DevOps OAuth
    As you may have read, Azure DevOps OAuth apps are due for deprecation in 2026. All developers are encouraged to migrate their applications to use Microsoft Entra ID OAuth, which can access all Azure DevOps APIs and has the added benefit of enhanced security features and long-term investment. Although we are nearing Azure DevOps OAuth’s […] The post New Overlapping Secrets on Azure DevOps OAuth appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    RAG Time Journey 4: Advanced Multimodal Indexing
    Introduction Welcome to RAG Time Journey 4, the next step in our deep dive into Retrieval-Augmented Generation (RAG). If you’ve been following along, you might remember that in Journey 2 , we explored data ingestion, hybrid search, and semantic reranking—key concepts that laid the groundwork for effective search and retrieval. Now, we’re moving beyond text and into the multimodal world, where text, images, audio, and video coexist in search environments that demand more sophisticated retrieval capabilities. Modern AI-powered applications require more than just keyword matching. They need to understand relationships across multiple data types, extract meaning from diverse formats, and provide accurate, context-rich results. That’s where multimodal indexing in Azure AI Search comes into play…  ( 37 min )
    RAG Time Journey 4: Advanced Multimodal Indexing
    Introduction Welcome to RAG Time Journey 4, the next step in our deep dive into Retrieval-Augmented Generation (RAG). If you’ve been following along, you might remember that in Journey 2 , we explored data ingestion, hybrid search, and semantic reranking—key concepts that laid the groundwork for effective search and retrieval. Now, we’re moving beyond text and into the multimodal world, where text, images, audio, and video coexist in search environments that demand more sophisticated retrieval capabilities. Modern AI-powered applications require more than just keyword matching. They need to understand relationships across multiple data types, extract meaning from diverse formats, and provide accurate, context-rich results. That’s where multimodal indexing in Azure AI Search comes into play…

  • Open

    Improve performance and security using Standard Load Balancer and Standard SKU public IP addresses in Azure DevTest Labs
    We are excited to announce preview of enhancements in Azure DevTest Labs designed to accommodate two upcoming retirements in Azure: Retirement Date Details Azure Basic Load Balancer September 30, 2025 The Azure Basic Load Balancer will be retired. You can continue using your existing Basic Load Balancers until this date, but you will not be […] The post Improve performance and security using Standard Load Balancer and Standard SKU public IP addresses in Azure DevTest Labs appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Hyperlight Wasm: Fast, secure, and OS-free
    We're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine “micro-guest” that can run wasm component workloads written in many programming languages. The post Hyperlight Wasm: Fast, secure, and OS-free appeared first on Microsoft Open Source Blog.  ( 16 min )
  • Open

    Best Practices for Requesting Quota Increase for Azure OpenAI Models
    Introduction This document outlines a set of best practices to guide users in submitting quota increase requests for Azure OpenAI models. Following these recommendations will help streamline the process, ensure proper documentation, and improve the likelihood of a successful request.  Understand the Quota and Limitations Before submitting a quota increase request, make sure you have a clear understanding of: The current quota and limits for your Azure OpenAI instance. Your specific use case requirements, including estimated daily/weekly/monthly usage. The rate limits for API calls and how they affect your solution's performance. Use the Azure portal or CLI to monitor your current usage and identify patterns that justify the need for a quota increase. Provide a Clear and Detailed Justific…  ( 26 min )
    Best Practices for Requesting Quota Increase for Azure OpenAI Models
    Introduction This document outlines a set of best practices to guide users in submitting quota increase requests for Azure OpenAI models. Following these recommendations will help streamline the process, ensure proper documentation, and improve the likelihood of a successful request.  Understand the Quota and Limitations Before submitting a quota increase request, make sure you have a clear understanding of: The current quota and limits for your Azure OpenAI instance. Your specific use case requirements, including estimated daily/weekly/monthly usage. The rate limits for API calls and how they affect your solution's performance. Use the Azure portal or CLI to monitor your current usage and identify patterns that justify the need for a quota increase. Provide a Clear and Detailed Justific…
    Best Practices for Structured Extraction from Documents Using Azure OpenAI
    Introduction In a recent project, a customer needed to extract structured data from legal documents to populate a standardized form. The legal documents varied in length and structure, and the customer required consistent and accurate outputs that mapped directly to the expected form schema. The implemented solution leveraged Azure OpenAI to iteratively process document chunks and update the form output dynamically. A key component to successfully extract the correct output was using Structured Outputs to enforce the desired output fields to populate the form. This article outlines best practices derived from this project, with a focus on reliable structure enforcement and iterative processing of unstructured legal data. These lessons learned can be leveraged for additional scenarios and d…  ( 27 min )
    Best Practices for Structured Extraction from Documents Using Azure OpenAI
    Introduction In a recent project, a customer needed to extract structured data from legal documents to populate a standardized form. The legal documents varied in length and structure, and the customer required consistent and accurate outputs that mapped directly to the expected form schema. The implemented solution leveraged Azure OpenAI to iteratively process document chunks and update the form output dynamically. A key component to successfully extract the correct output was using Structured Outputs to enforce the desired output fields to populate the form. This article outlines best practices derived from this project, with a focus on reliable structure enforcement and iterative processing of unstructured legal data. These lessons learned can be leveraged for additional scenarios and d…
  • Open

    Elevate Your AI Expertise with Microsoft Azure: Learn Live Series for Developers
    Unlock the power of Azure AI and master the art of creating advanced AI agents. Starting from April 15th, embark on a comprehensive learning journey designed specifically for professional developers like you. This series will guide you through the official Microsoft Learn Plan, focused on the latest agentic AI technologies and innovations. Generative AI has evolved to become an essential tool for crafting intelligent applications, and AI agents are leading the charge. Here's your opportunity to deepen your expertise in building powerful, scalable agent-based solutions using the Azure AI Foundry, Azure AI Agent Service, and the Semantic Kernel Framework. Why Attend? This Learn Live series will provide you with: In-depth Knowledge: Understand when to use AI agents, how they function, and th…  ( 24 min )
    Elevate Your AI Expertise with Microsoft Azure: Learn Live Series for Developers
    Unlock the power of Azure AI and master the art of creating advanced AI agents. Starting from April 15th, embark on a comprehensive learning journey designed specifically for professional developers like you. This series will guide you through the official Microsoft Learn Plan, focused on the latest agentic AI technologies and innovations. Generative AI has evolved to become an essential tool for crafting intelligent applications, and AI agents are leading the charge. Here's your opportunity to deepen your expertise in building powerful, scalable agent-based solutions using the Azure AI Foundry, Azure AI Agent Service, and the Semantic Kernel Framework. Why Attend? This Learn Live series will provide you with: In-depth Knowledge: Understand when to use AI agents, how they function, and th…
  • Open

    Kickstart Your Web Development: VS Code Basics & GitHub Integration
    As students we get access to an amazing set of developer resources for FREE! Microsoft offers all registered students world wide $100 of Azure Credit and over 25 FREE services with Microsoft Azure for Student. GitHub offers all students FREE Codespaces and GitHub Copilot with the GitHub Education Pack so let walk through how you can get started with these resources. 1. Setting Up VS Code Installing VS Code Download VS Code from official site. Install it on your system (Windows, macOS, or Linux). Open VS Code and customize your settings. Essential Extensions Extensions enhance productivity by adding new functionalities. Some must-have extensions are: GitHub Copilot – AI-powered code suggestions. Prettier – Code formatter for clean and consistent code. Live Server – For real-time web dev…  ( 27 min )
    Kickstart Your Web Development: VS Code Basics & GitHub Integration
    As students we get access to an amazing set of developer resources for FREE! Microsoft offers all registered students world wide $100 of Azure Credit and over 25 FREE services with Microsoft Azure for Student. GitHub offers all students FREE Codespaces and GitHub Copilot with the GitHub Education Pack so let walk through how you can get started with these resources. 1. Setting Up VS Code Installing VS Code Download VS Code from official site. Install it on your system (Windows, macOS, or Linux). Open VS Code and customize your settings. Essential Extensions Extensions enhance productivity by adding new functionalities. Some must-have extensions are: GitHub Copilot – AI-powered code suggestions. Prettier – Code formatter for clean and consistent code. Live Server – For real-time web dev…

  • Open

    Announcing the Extension of Some QnA Maker Functionality
    In 2022, we announced the deprecation of QnA Maker by March 31, 2025 with a recommendation to migrate to Custom Question Answering (CQA). In response to feedback from our valued customers, we have decided to extend the availability of certain functionalities in QnA Maker until October 31, 2025. This extension aims to support our customers in their smooth migration to Custom Question Answering (CQA), ensuring minimal disruption to their operations.  Extension Details  Here are some details on how the QnA Maker functionality will change:  Inference Support: You will be able to continue using your existing QnA Maker bots for query inferencing. This ensures that the QnA Maker bots remain operational and can be used as they are currently configured until October 31, 2025.   Portal Shutdown: The QnA Maker portal will no longer be available after March 31, 2025. You will not be able to make any edits or changes to your QnA Maker bots through the online QnA Maker portal.   Programmatic Bot Changes: You will be able to make changes to your QnA Maker bots programmatically via the QnA Maker API.  In preparation for this change, we recommend that you migrate all of your knowledge bases to your offline storage before the portal is shutdown on March 31, 2025.   Looking Ahead  After October 31, 2025, the QnA Maker service will be fully deprecated, and any query inferencing requests will return an error message. We encourage all our customers to complete their migration to CQA as soon as possible to avoid any disruptions.  We appreciate your understanding and cooperation as we work together to ensure a smooth migration.   Thank you for your continued support and trust in our services.  ( 21 min )
    Announcing the Extension of Some QnA Maker Functionality
    In 2022, we announced the deprecation of QnA Maker by March 31, 2025 with a recommendation to migrate to Custom Question Answering (CQA). In response to feedback from our valued customers, we have decided to extend the availability of certain functionalities in QnA Maker until October 31, 2025. This extension aims to support our customers in their smooth migration to Custom Question Answering (CQA), ensuring minimal disruption to their operations.  Extension Details  Here are some details on how the QnA Maker functionality will change:  Inference Support: You will be able to continue using your existing QnA Maker bots for query inferencing. This ensures that the QnA Maker bots remain operational and can be used as they are currently configured until October 31, 2025.   Portal Shutdown: The QnA Maker portal will no longer be available after March 31, 2025. You will not be able to make any edits or changes to your QnA Maker bots through the online QnA Maker portal.   Programmatic Bot Changes: You will be able to make changes to your QnA Maker bots programmatically via the QnA Maker API.  In preparation for this change, we recommend that you migrate all of your knowledge bases to your offline storage before the portal is shutdown on March 31, 2025.   Looking Ahead  After October 31, 2025, the QnA Maker service will be fully deprecated, and any query inferencing requests will return an error message. We encourage all our customers to complete their migration to CQA as soon as possible to avoid any disruptions.  We appreciate your understanding and cooperation as we work together to ensure a smooth migration.   Thank you for your continued support and trust in our services.
    Agentic P2P Automation: Harnessing the Power of OpenAI's Responses API
    The Procure-to-Pay (P2P) process is traditionally error-prone and labor-intensive, requiring someone to manually open each purchase invoice, look up contract details in a separate system, and painstakingly compare the two to identify anomalies—a task prone to oversight and inconsistency. About the sample Application The 'Agentic' characteristics demonstrated here using the Responses API are: The client application makes a single call to the Responses API that internally handles all the actions autonomously, processes the information and returns the response. In other words, the client application does not have to perform those actions itself. These actions that the Responses API uses, are Hosted tools like (file search, vision-based reasoning). Function calling is used to invoke custom ac…  ( 40 min )
    Agentic P2P Automation: Harnessing the Power of OpenAI's Responses API
    The Procure-to-Pay (P2P) process is traditionally error-prone and labor-intensive, requiring someone to manually open each purchase invoice, look up contract details in a separate system, and painstakingly compare the two to identify anomalies—a task prone to oversight and inconsistency. About the sample Application The 'Agentic' characteristics demonstrated here using the Responses API are: The client application makes a single call to the Responses API that internally handles all the actions autonomously, processes the information and returns the response. In other words, the client application does not have to perform those actions itself. These actions that the Responses API uses, are Hosted tools like (file search, vision-based reasoning). Function calling is used to invoke custom ac…
  • Open

    AI Toolkit for Visual Studio Code Now Supports NVIDIA NIM Microservices for RTX AI PCs
    AI Toolkit now supports NVIDIA NIM™ microservice-based foundation models for inference testing in the model playground and advanced features like bulk run, evaluation and building prompts.  This collaboration helps AI Engineers streamline development processes with foundational AI models.  About AI Toolkit AI Toolkit is a VS Code extension for AI engineers to build, deploy, and manage AI solutions. It includes model and prompt-centric features that allow users to explore and test different AI models, create and evaluate prompts, and perform model finetuning, all from within VS Code. Since its preview launch in 2024, AI Toolkit has helped developers worldwide learn about generative AI models and start building AI solutions.  NVIDIA NIM Microservices This January, NVIDIA announced that state…  ( 23 min )
    AI Toolkit for Visual Studio Code Now Supports NVIDIA NIM Microservices for RTX AI PCs
    AI Toolkit now supports NVIDIA NIM™ microservice-based foundation models for inference testing in the model playground and advanced features like bulk run, evaluation and building prompts.  This collaboration helps AI Engineers streamline development processes with foundational AI models.  About AI Toolkit AI Toolkit is a VS Code extension for AI engineers to build, deploy, and manage AI solutions. It includes model and prompt-centric features that allow users to explore and test different AI models, create and evaluate prompts, and perform model finetuning, all from within VS Code. Since its preview launch in 2024, AI Toolkit has helped developers worldwide learn about generative AI models and start building AI solutions.  NVIDIA NIM Microservices This January, NVIDIA announced that state…
    Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
    Did you attend a Microsoft AI Tour stop? Did you enjoy the content delivered? Did you know that the same technical presentations, open-source curriculum, and hands-on workshops you experienced are now available for you to redeliver and share with your community? Whether you're a Microsoft MVP, Developer, or IT Professional, these resources make it easier than ever to equip fellow professionals with the skills to successfully adopt Microsoft AI services. From expert-led skilling sessions to interactive networking opportunities, the Microsoft AI Tour offers more than just knowledge—it fosters collaboration and real-world impact. By delivering these sessions, you can help audiences simplify AI adoption, accelerate digital transformation, and drive innovation within their organizations. Whethe…  ( 45 min )
    Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
    Did you attend a Microsoft AI Tour stop? Did you enjoy the content delivered? Did you know that the same technical presentations, open-source curriculum, and hands-on workshops you experienced are now available for you to redeliver and share with your community? Whether you're a Microsoft MVP, Developer, or IT Professional, these resources make it easier than ever to equip fellow professionals with the skills to successfully adopt Microsoft AI services. From expert-led skilling sessions to interactive networking opportunities, the Microsoft AI Tour offers more than just knowledge—it fosters collaboration and real-world impact. By delivering these sessions, you can help audiences simplify AI adoption, accelerate digital transformation, and drive innovation within their organizations. Whethe…
    Global AI Bootcamp Bari – in presenza e online
    Dopo il successo del Global AI Bootcamp Milan dello scorso 12 Marzo, siamo entusiasti di annunciare un altro imperdibile evento nel Sud della penisola, organizzato nell’ambito dell’iniziativa Global AI Bootcamp 2025 dalla community Data Masters. Data Masters è l’AI Academy italiana che offre percorsi di formazione nei settori della Data Science, del Machine Learning e dell’Intelligenza Artificiale, guidando aziende e professionisti in percorsi di upskilling e reskilling. 🗓️ Quando? 11 Aprile, h17:00 CET📍Dove? In presenza c/o Data Masters e onlineUnisciti a noi, registrandoti sul sito ufficiale 👉🏼 Global AI Bootcamp - Italy - Bari - Global AI Community   Rivedi l'evento on-demand: Cos’ è il Global AI BootcampIl Global AI Bootcamp è un evento annuale e globale organizzato dalla più gra…  ( 23 min )
    Global AI Bootcamp Bari – in presenza e online
    Dopo il successo del Global AI Bootcamp Milan dello scorso 12 Marzo, siamo entusiasti di annunciare un altro imperdibile evento nel Sud della penisola, organizzato nell’ambito dell’iniziativa Global AI Bootcamp 2025 dalla community Data Masters. Data Masters è l’AI Academy italiana che offre percorsi di formazione nei settori della Data Science, del Machine Learning e dell’Intelligenza Artificiale, guidando aziende e professionisti in percorsi di upskilling e reskilling. 🗓️ Quando? 11 Aprile, h17:00 CET📍Dove? In presenza c/o Data Masters e onlineUnisciti a noi, registrandoti sul sito ufficiale 👉🏼 Global AI Bootcamp - Italy - Bari - Global AI Community    Cos’ è il Global AI BootcampIl Global AI Bootcamp è un evento annuale e globale organizzato dalla più grande community di appassiona…
  • Open

    Announcing: Azure API Center Hands-on Workshop 🚀
    What is the Azure API Center Workshop? The Azure API Center flash workshop is a resource designed to expand your understanding of how organizations can enhance and streamline their API management and governance strategies using Azure API Center. With this practical knowledge and insights, you will be able to streamline secure API integration and enforce security and compliance with tools that evolve to meet your growing business needs. GIF showing Contoso Airlines API Center Azure API Center is a centralized inventory designed to track all your APIs, regardless of their type, lifecycle stage, or deployment location. It enhances discoverability, development, and reuse of APIs. While Azure API Management focuses on API deployment and runtime governance, Azure API Center complements it by cen…  ( 26 min )
    Announcing: Azure API Center Hands-on Workshop 🚀
    What is the Azure API Center Workshop? The Azure API Center flash workshop is a resource designed to expand your understanding of how organizations can enhance and streamline their API management and governance strategies using Azure API Center. With this practical knowledge and insights, you will be able to streamline secure API integration and enforce security and compliance with tools that evolve to meet your growing business needs. GIF showing Contoso Airlines API Center Azure API Center is a centralized inventory designed to track all your APIs, regardless of their type, lifecycle stage, or deployment location. It enhances discoverability, development, and reuse of APIs. While Azure API Management focuses on API deployment and runtime governance, Azure API Center complements it by cen…
  • Open

    Fabric Espresso – Episodes about Data Warehousing & Storage Solutions in Microsoft Fabric
    For the past 1.5 years, the Microsoft Fabric Product Group Product Managers have been publishing a YouTube series featuring deep dives into Microsoft Fabric’s features. These episodes cover both technical functionalities and real-world scenarios, providing insights into the product roadmap and the people driving innovation.  ( 5 min )
  • Open

    Announcing: Azure API Center Hands-on Workshop 🚀
    What is the Azure API Center Workshop? The Azure API Center flash workshop is a resource designed to expand your understanding of how organizations can enhance and streamline their API management and governance strategies using Azure API Center. With this practical knowledge and insights, you will be able to streamline secure API integration and enforce security and compliance with tools that evolve to meet your growing business needs. GIF showing Contoso Airlines API Center Azure API Center is a centralized inventory designed to track all your APIs, regardless of their type, lifecycle stage, or deployment location. It enhances discoverability, development, and reuse of APIs. While Azure API Management focuses on API deployment and runtime governance, Azure API Center complements it by cen…  ( 26 min )
    Announcing: Azure API Center Hands-on Workshop 🚀
    What is the Azure API Center Workshop? The Azure API Center flash workshop is a resource designed to expand your understanding of how organizations can enhance and streamline their API management and governance strategies using Azure API Center. With this practical knowledge and insights, you will be able to streamline secure API integration and enforce security and compliance with tools that evolve to meet your growing business needs. GIF showing Contoso Airlines API Center Azure API Center is a centralized inventory designed to track all your APIs, regardless of their type, lifecycle stage, or deployment location. It enhances discoverability, development, and reuse of APIs. While Azure API Management focuses on API deployment and runtime governance, Azure API Center complements it by cen…
    Microsoft AI Agents Learn Live Starting 15th April
    Join us for an exciting Learn Live webinar where we dive into the fundamentals of using Azure AI Foundry and AI Agents. The series is to help you build powerful Agent applications.This learn live series will help you understand the AI agents, including when to use them and how to build them, using Azure AI Agent Service and Semantic Kernel Agent Framework. By the end of this learning series, you will have the skills needed to develop AI agents on Azure.This sessions will introduce you to AI agents, the next frontier in intelligent applications and explore how they can be developed and deployed on Microsoft Azure. Through this webinar, you'll gain essential skills to begin creating agents with the Azure AI Agent Service. We'll also discuss how to take your agents to the next level by integr…  ( 27 min )
    Microsoft AI Agents Learn Live Starting 15th April
    Join us for an exciting Learn Live webinar where we dive into the fundamentals of using Azure AI Foundry and AI Agents. The series is to help you build powerful Agent applications.This learn live series will help you understand the AI agents, including when to use them and how to build them, using Azure AI Agent Service and Semantic Kernel Agent Framework. By the end of this learning series, you will have the skills needed to develop AI agents on Azure.This sessions will introduce you to AI agents, the next frontier in intelligent applications and explore how they can be developed and deployed on Microsoft Azure. Through this webinar, you'll gain essential skills to begin creating agents with the Azure AI Agent Service. We'll also discuss how to take your agents to the next level by integr…
  • Open

    Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
    Did you attend a Microsoft AI Tour stop? Did you enjoy the content delivered? Did you know that the same technical presentations, open-source curriculum, and hands-on workshops you experienced are now available for you to redeliver and share with your community? Whether you're a Microsoft MVP, Developer, or IT Professional, these resources make it easier than ever to equip fellow professionals with the skills to successfully adopt Microsoft AI services. From expert-led skilling sessions to interactive networking opportunities, the Microsoft AI Tour offers more than just knowledge—it fosters collaboration and real-world impact. By delivering these sessions, you can help audiences simplify AI adoption, accelerate digital transformation, and drive innovation within their organizations. Whethe…  ( 45 min )
    Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
    Did you attend a Microsoft AI Tour stop? Did you enjoy the content delivered? Did you know that the same technical presentations, open-source curriculum, and hands-on workshops you experienced are now available for you to redeliver and share with your community? Whether you're a Microsoft MVP, Developer, or IT Professional, these resources make it easier than ever to equip fellow professionals with the skills to successfully adopt Microsoft AI services. From expert-led skilling sessions to interactive networking opportunities, the Microsoft AI Tour offers more than just knowledge—it fosters collaboration and real-world impact. By delivering these sessions, you can help audiences simplify AI adoption, accelerate digital transformation, and drive innovation within their organizations. Whethe…
  • Open

    Skill your team to increase performance efficiency of Azure and AI projects
    The cost and performance benefits of moving your workload to the cloud are clear — reduced latency, improved elasticity, and great agility of resources — but it’s also critical to learn to manage ongoing performance efficiency beyond initial migrating to see optimal results. Best practices in performance efficiency go beyond designing your workloads so you only pay for what you need; it’s building the best of cloud computing into every design choice. Ideally, a workload should meet performance targets without overprovisioning, which makes the resources, skilling and how-to guidance offered by Azure Essentials crucial considerations for any team looking to scale efficiently. Built to provide help at your point of need, the resources available in Azure Essentials have helped clients complete…  ( 31 min )
    Skill your team to increase performance efficiency of Azure and AI projects
    The cost and performance benefits of moving your workload to the cloud are clear — reduced latency, improved elasticity, and great agility of resources — but it’s also critical to learn to manage ongoing performance efficiency beyond initial migrating to see optimal results. Best practices in performance efficiency go beyond designing your workloads so you only pay for what you need; it’s building the best of cloud computing into every design choice. Ideally, a workload should meet performance targets without overprovisioning, which makes the resources, skilling and how-to guidance offered by Azure Essentials crucial considerations for any team looking to scale efficiently. Built to provide help at your point of need, the resources available in Azure Essentials have helped clients complete…
    Cross-Region Resiliency for Ecommerce Reference Application
    Authors: Radu Dilirici (radudilirici@microsoft.com) Ioan Dragan (ioan.dragan@microsoft.com) Ciprian Amzuloiu (camzuloiu@microsoft.com) Introduction The initial Resilient Ecommerce Reference Application demonstrated the best practices to achieve regional resiliency using Azure’s availability zones. Expanding on this foundation, in the current article we aim to achieve cross-region resiliency, ensuring high availability and disaster recovery capabilities across multiple geographic regions. This article outlines the enhancements made to extend the application into a cross-region resilient architecture. The app is publicly available on GitHub and can be used for educational purposes or as a starting point for developing cross-region resilient applications.  Overview of Cross-Region Enhanceme…  ( 25 min )
    Cross-Region Resiliency for Ecommerce Reference Application
    Authors: Radu Dilirici (radudilirici@microsoft.com) Ioan Dragan (ioan.dragan@microsoft.com) Ciprian Amzuloiu (camzuloiu@microsoft.com) Introduction The initial Resilient Ecommerce Reference Application demonstrated the best practices to achieve regional resiliency using Azure’s availability zones. Expanding on this foundation, in the current article we aim to achieve cross-region resiliency, ensuring high availability and disaster recovery capabilities across multiple geographic regions. This article outlines the enhancements made to extend the application into a cross-region resilient architecture. The app is publicly available on GitHub and can be used for educational purposes or as a starting point for developing cross-region resilient applications.  Overview of Cross-Region Enhanceme…
  • Open

    Semantic Kernel Agent Framework RC2
    Three weeks ago we released the Release the Agents! SK Agents Framework RC1 | Semantic Kernel and we’ve been thrilled to see the momentum grow. Thank you to everyone who has shared feedback, filed issues, and started building with agents in Semantic Kernel—we’re seeing more developers try agents than ever before. Today, we’re declaring build […] The post Semantic Kernel Agent Framework RC2 appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing Microsoft Purview Data Security Investigations
    Investigate data security, risk and leak cases faster by leveraging AI-driven insights with Microsoft Purview Data Security Investigations. This goes beyond the superficial metadata and activity-only signals found in incident management and SIEM tools, by analyzing the content itself within compromised files, emails, messages, and Microsoft Copilot interactions. Data Security Investigations allows you to pinpoint sensitive data and assess risks at a deeper level — quickly understanding the value of what’s been exposed.  Then by mapping connections between compromised data and activities, you can easily find the source of the security risk or exposure. And using real-time risk insights, you can also apply the right protections to minimize future vulnerabilities. Data Security Investigations is also integrated with Microsoft Defender incident management as part your broader SOC toolset. Nick Robinson, Microsoft Purview Principal Product Manager, joins Jeremy Chapman to share how to enhance your ability to safeguard critical information. Find the source of a data leak fast.  ( 19 min )
    Introducing Microsoft Purview Data Security Investigations
    Investigate data security, risk and leak cases faster by leveraging AI-driven insights with Microsoft Purview Data Security Investigations. This goes beyond the superficial metadata and activity-only signals found in incident management and SIEM tools, by analyzing the content itself within compromised files, emails, messages, and Microsoft Copilot interactions. Data Security Investigations allows you to pinpoint sensitive data and assess risks at a deeper level — quickly understanding the value of what’s been exposed.  Then by mapping connections between compromised data and activities, you can easily find the source of the security risk or exposure. And using real-time risk insights, you can also apply the right protections to minimize future vulnerabilities. Data Security Investigations is also integrated with Microsoft Defender incident management as part your broader SOC toolset. Nick Robinson, Microsoft Purview Principal Product Manager, joins Jeremy Chapman to share how to enhance your ability to safeguard critical information. Find the source of a data leak fast.

  • Open

    Teams Toolkit for Visual Studio Code update – March 2025
    We’re excited to announce the latest updates for Teams Toolkit for Visual Studio Code featuring tenant switching, new capabilities for declarative agents, and more. The post Teams Toolkit for Visual Studio Code update – March 2025 appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    Simplify data transformation and management with Copilot for Data Factory
    The process of extracting, transforming, and loading (ETL) data is important for turning raw data into actionable insights. ETL allows data to be collected from various sources, cleansed and formatted into a standard structure, and then loaded into a data warehouse for analysis. This process ensures that data is accurate, consistent, and ready for business … Continue reading “Simplify data transformation and management with Copilot for Data Factory”  ( 7 min )
  • Open

    Comment l’IA générative impacte-t-elle l’expérience développeur?
    Adlene Sifi explore l’impact de l’IA générative sur l’expérience des développeurs. Dans cet article, nous allons tenter de déterminer s’il existe un lien entre l’utilisation de l’IA générative (ex. : GitHub Copilot) et l’expérience développeur (DevEx). Nous souhaitons vérifier plus précisément si l’utilisation de l’IA générative a un impact positif sur l’expérience développeur. Nous tenterons même […] The post Comment l’IA générative impacte-t-elle l’expérience développeur? appeared first on Developer Support.  ( 30 min )
    How does generative AI impact Developer Experience?
    Adlene Sifi explores the impact of generative AI on developer experience. In this article, we will try to determine if there is a link between the use of generative AI (e.g., GitHub Copilot) and developer experience (DevEx). Specifically, we aim to verify whether the use of generative AI has a positive impact on developer experience. […] The post How does generative AI impact Developer Experience? appeared first on Developer Support.  ( 29 min )
  • Open

    AI Agents: Mastering the Tool Use Design Pattern - Part 4
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into the Tool Use Design Pattern. In previous posts (links at the end!), we covered agent fundamentals, frameworks, and design principles. Now, we'll explore how tools empower agents to interact with the world, expanding their capabilities and enabling them to perform a wider range of tasks. What is the Tool Use Design Pattern? The Tool Use Design Pattern enables Large Language Models (LLMs) within AI agents to leverage external tools. These tools are essentially code blocks, ranging from simple functions like calculators to complex API calls, that agents execute to perform actions, access information, and achieve goals. Crucially, these tools are invoked th…  ( 26 min )
    AI Agents: Mastering the Tool Use Design Pattern - Part 4
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into the Tool Use Design Pattern. In previous posts (links at the end!), we covered agent fundamentals, frameworks, and design principles. Now, we'll explore how tools empower agents to interact with the world, expanding their capabilities and enabling them to perform a wider range of tasks. What is the Tool Use Design Pattern? The Tool Use Design Pattern enables Large Language Models (LLMs) within AI agents to leverage external tools. These tools are essentially code blocks, ranging from simple functions like calculators to complex API calls, that agents execute to perform actions, access information, and achieve goals. Crucially, these tools are invoked th…
  • Open

    GitHub Copilot for Azure: Deploy an AI RAG App to ACA using AZD
    Recently, I had to develop a Retrieval-Augmented Generation (RAG) prototype for an internal project. Since I enjoy working with LlamaIndex, I decided to use GitHub Copilot for Azure to quickly find an existing sample that I could use as a starting point and deploy it to Azure Container Apps. Getting Started with GitHub Copilot for Azure To begin, I installed the GitHub Copilot for Azure extension in VS Code. This extension allows me to interact with Azure directly using the azure command. I used this feature to ask my Copilot to help me locate a relevant sample to use as a foundation for my project. After querying available Azure resources, the extension found a LlamaIndex JavaScript sample, which was ideal for my needs. I then copied the Azure Developer CLI (azd) command to initialize my…  ( 23 min )
    GitHub Copilot for Azure: Deploy an AI RAG App to ACA using AZD
    Recently, I had to develop a Retrieval-Augmented Generation (RAG) prototype for an internal project. Since I enjoy working with LlamaIndex, I decided to use GitHub Copilot for Azure to quickly find an existing sample that I could use as a starting point and deploy it to Azure Container Apps. Getting Started with GitHub Copilot for Azure To begin, I installed the GitHub Copilot for Azure extension in VS Code. This extension allows me to interact with Azure directly using the azure command. I used this feature to ask my Copilot to help me locate a relevant sample to use as a foundation for my project. After querying available Azure resources, the extension found a LlamaIndex JavaScript sample, which was ideal for my needs. I then copied the Azure Developer CLI (azd) command to initialize my…
  • Open

    Rehost mainframe applications by using NTT DATA UniKix
    UniKix is a mainframe-rehosting software suite from NTT DATA. This suite provides a way to run migrated legacy assets on Azure. Example assets include IBM CICS transactions, IBM IMS applications, batch workloads, and JCL workloads. This article outlines a solution for rehosting mainframe applications on Azure. Besides UniKix, the solution's core components include Azure ExpressRoute, Azure Site Recovery, and Azure storage and database services. Mainframe architecture The following diagram shows a legacy mainframe system before it's rehosted to the cloud:     Workflow On-premises users interact with the mainframe by using TCP/IP (A): Admin users interact through a TN3270 terminal emulator. Web interface users interact via a web browser over TLS 1.3 port 443. Mainframes use communication…  ( 37 min )
    Rehost mainframe applications by using NTT DATA UniKix
    UniKix is a mainframe-rehosting software suite from NTT DATA. This suite provides a way to run migrated legacy assets on Azure. Example assets include IBM CICS transactions, IBM IMS applications, batch workloads, and JCL workloads. This article outlines a solution for rehosting mainframe applications on Azure. Besides UniKix, the solution's core components include Azure ExpressRoute, Azure Site Recovery, and Azure storage and database services. Mainframe architecture The following diagram shows a legacy mainframe system before it's rehosted to the cloud:     Workflow On-premises users interact with the mainframe by using TCP/IP (A): Admin users interact through a TN3270 terminal emulator. Web interface users interact via a web browser over TLS 1.3 port 443. Mainframes use communication…
    Refactor mainframe applications with Amdocs
    Amdocs automated COBOL refactoring solution delivers cloud-enabled applications and databases that do the same things as their legacy counterparts. The refactored applications run as Azure applications in virtual machines provided by Azure Virtual Machines. Azure ExpressRoute makes them available to users, and Azure Load Balancer distributes the load. Mainframe architecture Here's a mainframe architecture that represents the kind of system that's suitable for the Amdocs refactoring solution.         Dataflow TN3270 and HTTP(S) user input arrives over TCP/IP. Mainframe input uses standard mainframe protocols. There are batch and online applications. Applications written in COBOL, PL/I, Assembler, and other languages run in an enabled environment. Data is held in files and in hierarchical,…  ( 43 min )
    Refactor mainframe applications with Amdocs
    Amdocs automated COBOL refactoring solution delivers cloud-enabled applications and databases that do the same things as their legacy counterparts. The refactored applications run as Azure applications in virtual machines provided by Azure Virtual Machines. Azure ExpressRoute makes them available to users, and Azure Load Balancer distributes the load. Mainframe architecture Here's a mainframe architecture that represents the kind of system that's suitable for the Amdocs refactoring solution.         Dataflow TN3270 and HTTP(S) user input arrives over TCP/IP. Mainframe input uses standard mainframe protocols. There are batch and online applications. Applications written in COBOL, PL/I, Assembler, and other languages run in an enabled environment. Data is held in files and in hierarchical,…
    Migrate IBM mainframe applications to Azure with TmaxSoft OpenFrame
    Migrate IBM mainframe applications to Azure with TmaxSoft OpenFrame  Lift and shift, also known as rehosting, is the process of mainframe migration to produce an exact copy of an application, workload, and all associated data from one environment to another. Mainframe applications can be migrated from on-premises to public or private cloud.  TmaxSoft OpenFrame is a rehosting solution that makes it easy to lift-and-shift existing IBM zSeries mainframe applications to Microsoft Azure, using a no-code approach. TmaxSoft quickly migrates an existing application, as is, to a zSeries mainframe emulation environment on Azure.  This article illustrates how the TmaxSoft OpenFrame solution runs on Azure. The approach consists of two virtual machines (VMs) running Linux in an active-active configurat…  ( 36 min )
    Migrate IBM mainframe applications to Azure with TmaxSoft OpenFrame
    Migrate IBM mainframe applications to Azure with TmaxSoft OpenFrame  Lift and shift, also known as rehosting, is the process of mainframe migration to produce an exact copy of an application, workload, and all associated data from one environment to another. Mainframe applications can be migrated from on-premises to public or private cloud.  TmaxSoft OpenFrame is a rehosting solution that makes it easy to lift-and-shift existing IBM zSeries mainframe applications to Microsoft Azure, using a no-code approach. TmaxSoft quickly migrates an existing application, as is, to a zSeries mainframe emulation environment on Azure.  This article illustrates how the TmaxSoft OpenFrame solution runs on Azure. The approach consists of two virtual machines (VMs) running Linux in an active-active configurat…

  • Open

    Building a multimodal, multi-agent system using Azure AI Agent Service and OpenAI Agent SDK
    In the rapidly evolving landscape of artificial intelligence (AI), the development of systems that can autonomously interact, learn, and make decisions has become a focal point. A pivotal aspect of this advancement is the architecture of these systems, specifically the distinction between single-agent and multi-agent frameworks. Single-Agent Systems A single-agent system consists of one autonomous entity operating within an environment to achieve specific goals. This agent perceives its surroundings, processes information, and acts accordingly, all in isolation. For example, a standalone chatbot designed to handle customer inquiries functions as a single-agent system, managing interactions without collaborating with other agents. Multi-Agent Systems In contrast, a multi-agent system (MAS) …  ( 61 min )
    Building a multimodal, multi-agent system using Azure AI Agent Service and OpenAI Agent SDK
    In the rapidly evolving landscape of artificial intelligence (AI), the development of systems that can autonomously interact, learn, and make decisions has become a focal point. A pivotal aspect of this advancement is the architecture of these systems, specifically the distinction between single-agent and multi-agent frameworks. Single-Agent Systems A single-agent system consists of one autonomous entity operating within an environment to achieve specific goals. This agent perceives its surroundings, processes information, and acts accordingly, all in isolation. For example, a standalone chatbot designed to handle customer inquiries functions as a single-agent system, managing interactions without collaborating with other agents. Multi-Agent Systems In contrast, a multi-agent system (MAS) …
    Deploy Your First Azure AI Agent Service-Powered App on Azure App Service
    1. Introduction Azure AI Agent Service is a fully managed service designed to empower developers to securely build, deploy, and scale high-quality, extensible AI agents without needing to manage the underlying compute and storage resources 1. These AI agents act as “smart” microservices that can answer questions, perform actions, or automate workflows by combining generative AI models with tools that allow them to interact with real-world data sources 1. Deploying Azure AI Agent Service on Azure App Service offers several benefits: Scalability: Azure App Service provides automatic scaling options to handle varying loads. Security: Built-in security features ensure that your AI agents are protected. Ease of Deployment: Simplified deployment processes allow developers to focus on building a…  ( 37 min )
    Deploy Your First Azure AI Agent Service-Powered App on Azure App Service
    1. Introduction Azure AI Agent Service is a fully managed service designed to empower developers to securely build, deploy, and scale high-quality, extensible AI agents without needing to manage the underlying compute and storage resources 1. These AI agents act as “smart” microservices that can answer questions, perform actions, or automate workflows by combining generative AI models with tools that allow them to interact with real-world data sources 1. Deploying Azure AI Agent Service on Azure App Service offers several benefits: Scalability: Azure App Service provides automatic scaling options to handle varying loads. Security: Built-in security features ensure that your AI agents are protected. Ease of Deployment: Simplified deployment processes allow developers to focus on building a…
    Deploy Your First Azure AI Agent Service on Azure App Service
    1. Introduction Azure AI Agent Service is a fully managed service designed to empower developers to securely build, deploy, and scale high-quality, extensible AI agents without needing to manage the underlying compute and storage resources 1. These AI agents act as “smart” microservices that can answer questions, perform actions, or automate workflows by combining generative AI models with tools that allow them to interact with real-world data sources 1. Deploying Azure AI Agent Service on Azure App Service offers several benefits: Scalability: Azure App Service provides automatic scaling options to handle varying loads. Security: Built-in security features ensure that your AI agents are protected. Ease of Deployment: Simplified deployment processes allow developers to focus on building a…

  • Open

    Model Mondays: Why Rerank Models Are the Secret Sauce of High-Quality Search
    In the world of search and retrieval—whether you're powering a chatbot, building a product recommendation engine, or designing a retrieval-augmented generation (RAG) pipeline—relevance is everything. And while your first-pass retrieval system might do a decent job surfacing content, there's a not-so-secret tool that can take your results from pretty good to exceptional: rerank models. Join us on Monday March 24 on Model Mondays when we are going live with Episode 3 of our 8-week power series with a spotlight on Search and Retrieval models at 1:30 pm ET on Microsoft Reactor Register Here: https://aka.ms/model-mondays/rsvp  What’s in it for you? Every Monday, we will do 30 min live podcast that will help you Stay updated – A 5-min highlight on the latest AI model breakthroughs Get hands-on – A 15-min deep dive into a must-know model each week Ask the experts – A live Q&A to answer your burning questions Then you can join the community on the Azure AI Discord #model-mondays channel to continue these conversations.  We'll wrap up the week with a Model Mondays watercooler chat every Friday at 1:30 pm EST , 10 :30 am PST where we'll revisit the news and demos - and give you a chance to ask questions or show-and-tell us your model-driven experiences. In case you missed our previous episodes: Watch it now Episode 1 : Github Models Episode 2 : Reasoning Models. Be a part of the conversation!  Watch Live on Microsoft Reactor – RSVP Now Join the AI community – Discord Office Hours every Friday – Join Here Get exclusive resources – Explore the GitHub Repo  Don’t fall behind—jump in and level up your AI game with Model Mondays! #ModelMondays  ( 21 min )
    Model Mondays: Why Rerank Models Are the Secret Sauce of High-Quality Search
    In the world of search and retrieval—whether you're powering a chatbot, building a product recommendation engine, or designing a retrieval-augmented generation (RAG) pipeline—relevance is everything. And while your first-pass retrieval system might do a decent job surfacing content, there's a not-so-secret tool that can take your results from pretty good to exceptional: rerank models. Join us on Monday March 24 on Model Mondays when we are going live with Episode 3 of our 8-week power series with a spotlight on Search and Retrieval models at 1:30 pm ET on Microsoft Reactor Register Here: https://aka.ms/model-mondays/rsvp  What’s in it for you? Every Monday, we will do 30 min live podcast that will help you Stay updated – A 5-min highlight on the latest AI model breakthroughs Get hands-on – A 15-min deep dive into a must-know model each week Ask the experts – A live Q&A to answer your burning questions Then you can join the community on the Azure AI Discord #model-mondays channel to continue these conversations.  We'll wrap up the week with a Model Mondays watercooler chat every Friday at 1:30 pm EST , 10 :30 am PST where we'll revisit the news and demos - and give you a chance to ask questions or show-and-tell us your model-driven experiences. In case you missed our previous episodes: Watch it now Episode 1 : Github Models Episode 2 : Reasoning Models. Be a part of the conversation!  Watch Live on Microsoft Reactor – RSVP Now Join the AI community – Discord Office Hours every Friday – Join Here Get exclusive resources – Explore the GitHub Repo  Don’t fall behind—jump in and level up your AI game with Model Mondays! #ModelMondays
  • Open

    Best Practices for Leveraging Azure OpenAI in Code Conversion Scenarios
    Introduction Code conversion is the process of translating code from one programming language to another. This is a critical step for organizations modernizing legacy systems, migrating to new technologies, or improving maintainability. However, manual code conversion is labor-intensive, error-prone, and often requires deep expertise in both source and target languages. This document outlines best practices for leveraging Azure OpenAI GPT-4o for code conversion, addressing challenges, and providing solutions for efficient and accurate outcomes. Problem Statement In the world of programming, the ability to convert code from one language to another is vital for modernizing legacy systems and adapting to evolving technology. However: Manual conversion is time-consuming and prone to errors. C…  ( 30 min )
    Best Practices for Leveraging Azure OpenAI in Code Conversion Scenarios
    Introduction Code conversion is the process of translating code from one programming language to another. This is a critical step for organizations modernizing legacy systems, migrating to new technologies, or improving maintainability. However, manual code conversion is labor-intensive, error-prone, and often requires deep expertise in both source and target languages. This document outlines best practices for leveraging Azure OpenAI GPT-4o for code conversion, addressing challenges, and providing solutions for efficient and accurate outcomes. Problem Statement In the world of programming, the ability to convert code from one language to another is vital for modernizing legacy systems and adapting to evolving technology. However: Manual conversion is time-consuming and prone to errors. C…
    Build your own conversational AI agent and share $50K in prizes with Microsoft AI Skills Fest
    What if your AI could do more than just respond? With Azure AI Agent Service, developers are building conversational AI agents that not only understand natural language but also take meaningful actions to drive business success. And with customized, self-paced skilling Plans and an upcoming, dedicated Hackathon event (with $50,000 in prizes!), Microsoft Learn is your one-stop resource for developing your own AI agent and further exploring the capabilities of Azure AI Agent Service.   Build and deploy your own AI agent with Azure AI Agent Service  Introduced at Microsoft Ignite 2024, Azure AI Agent Service is a fully managed platform designed to help you build, deploy, and scale high-quality conversational AI agents with minimal complexity. By leveraging Microsoft's advanced AI capabilities…  ( 30 min )
    Build your own conversational AI agent and share $50K in prizes with Microsoft AI Skills Fest
    What if your AI could do more than just respond? With Azure AI Agent Service, developers are building conversational AI agents that not only understand natural language but also take meaningful actions to drive business success. And with customized, self-paced skilling Plans and an upcoming, dedicated Hackathon event (with $50,000 in prizes!), Microsoft Learn is your one-stop resource for developing your own AI agent and further exploring the capabilities of Azure AI Agent Service.   Build and deploy your own AI agent with Azure AI Agent Service  Introduced at Microsoft Ignite 2024, Azure AI Agent Service is a fully managed platform designed to help you build, deploy, and scale high-quality conversational AI agents with minimal complexity. By leveraging Microsoft's advanced AI capabilities…
    RAG Time Journey 3: Optimize your vector index for scale
    Introduction Journey 3 in our 5-part RAG Time developer series covers how to optimize your vector index for large-scale AI applications. I’m Mike, a program manager on the Azure AI Search team. Read the second post of this series and access all videos and resources in our Github repo. Journey 3 covers various methods to optimize your vector index for large-scale RAG including: Vector storage and storage optimization Vector Compression (Scalar/Binary Quantization) Vector Truncation (MRL) Quality improvements for optimized vectors Vector storage and storage optimization A vector is an array of numbers generated by an embedding model where the number of items corresponds to the number of dimensions. Azure AI Search supports a range of data types from a 32-bit single precision floating poin…  ( 30 min )
    RAG Time Journey 3: Optimize your vector index for scale
    Introduction Journey 3 in our 5-part RAG Time developer series covers how to optimize your vector index for large-scale AI applications. I’m Mike, a program manager on the Azure AI Search team. Read the second post of this series and access all videos and resources in our Github repo. Journey 3 covers various methods to optimize your vector index for large-scale RAG including: Vector storage and storage optimization Vector Compression (Scalar/Binary Quantization) Vector Truncation (MRL) Quality improvements for optimized vectors Vector storage and storage optimization A vector is an array of numbers generated by an embedding model where the number of items corresponds to the number of dimensions. Azure AI Search supports a range of data types from a 32-bit single precision floating poin…
    Best Practices for Leveraging Azure OpenAI in Constrained Optimization Scenarios
    Introduction Constrained optimization problems come up in various domains and range from scheduling and logistics to financial planning and resource allocation. By leveraging Generative AI (GenAI) to solve these complex decision-making tasks, organizations can become more efficient and productive in their operations. This article outlines best practices for using GenAI in constrained optimization, using a real-world example: an AI-powered college course scheduling solution. These lessons have been gathered by implementing solutions with Microsoft partners and customers. Understanding Constrained Optimization in AI Constrained optimization involves finding the best solution to a problem while satisfying a set of predefined constraints. These constraints could be based on rules, resources, o…  ( 30 min )
    Best Practices for Leveraging Azure OpenAI in Constrained Optimization Scenarios
    Introduction Constrained optimization problems come up in various fields and range from scheduling and logistics to financial planning and resource allocation. By leveraging Generative AI (GenAI) to solve these complex decision-making tasks, organizations can become more efficient and productive in their operations. This article outlines best practices for using GenAI in constrained optimization, using a real-world example: an AI-powered college course scheduling solution. These lessons have been gathered by implementing solutions with Microsoft partners and customers. Understanding Constrained Optimization in AI Constrained optimization involves finding the best solution to a problem while satisfying a set of predefined constraints. These constraints could be based on rules, resources, or…

  • Open

    Removal of Deprecated SharePoint & OneDrive Permission Resource Properties
    We are announcing the phased rollout of an update to remove the grantedTo and grantedToIdentities properties from the Permission resource type in Microsoft Graph. The post Removal of Deprecated SharePoint & OneDrive Permission Resource Properties appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    How to Automate Cross-OS File Fixes with Azure Automation and PowerShell
    Start your free Azure account here to follow along.   Table of Contents Introduction Prerequisites The Problem: Why File Compatibility Matters The Solution: Build a Serverless File Fixer in Azure Step-by-Step Tutorial Conclusion Introduction  Hi, I'm Raunak Dev, a Microsoft Learn Student Ambassador from India, majoring in Electrical and Computer Engineering at Amrita School of Engineering – Amritapuri.  Have you ever wondered why your scripts suddenly break when switching between operating systems? When collaborating across Windows and Linux, subtle OS differences can cause major headaches. For example, mismatched line endings and file permissions can break your scripts, delay projects, and frustrate teams.  In this tutorial, I'll show you how to build a serverless file fixer in Azure …  ( 45 min )
    How to Automate Cross-OS File Fixes with Azure Automation and PowerShell
    Start your free Azure account here to follow along.   Table of Contents Introduction Prerequisites The Problem: Why File Compatibility Matters The Solution: Build a Serverless File Fixer in Azure Step-by-Step Tutorial Conclusion Introduction  Hi, I'm Raunak Dev, a Microsoft Learn Student Ambassador from India, majoring in Electrical and Computer Engineering at Amrita School of Engineering – Amritapuri.  Have you ever wondered why your scripts suddenly break when switching between operating systems? When collaborating across Windows and Linux, subtle OS differences can cause major headaches. For example, mismatched line endings and file permissions can break your scripts, delay projects, and frustrate teams.  In this tutorial, I'll show you how to build a serverless file fixer in Azure …
    Interacting with historical characters using Generative AI
    DISCLAIMER: this article showcase an interesting way to interact with historical characters using Generative AI. The responses from the characters are based on their training data and does not represent their actual thoughts or opinions. This article describes a companion app used in the Generative AI curriculum for JavaScript that was recently released. The app allows users to interact with historical characters by setting its system message to a historical character plus some added context and instructions. How it works For Generative AI response use GitHub Models, a great way to test models for free, all you need is a GitHub account. Below code works when run inside of a GitHub Codespace but can be made to work with a personal access token, a so called PAT. Here's a link to a GitHub Mod…  ( 32 min )
    Interacting with historical characters using Generative AI
    DISCLAIMER: this article showcase an interesting way to interact with historical characters using Generative AI. The responses from the characters are based on their training data and does not represent their actual thoughts or opinions. This article describes a companion app used in the Generative AI curriculum for JavaScript that was recently released. The app allows users to interact with historical characters by setting its system message to a historical character plus some added context and instructions. How it works For Generative AI response use GitHub Models, a great way to test models for free, all you need is a GitHub account. Below code works when run inside of a GitHub Codespace but can be made to work with a personal access token, a so called PAT. Here's a link to a GitHub Mod…
  • Open

    Medallion Architecture in Microsoft Fabric: Leveraging OneLake for Scalable Data Management
    Medallion Architecture Overview Medallion Architecture, a systematic data management approach, offers a three-tier structure for data processing: Bronze, Silver, and Gold. Bronze Layer This raw layer stores different data types, including unstructured, semi-structured, and structured. The aim here is to ensure that the data is secured in its raw form during the storage processes so that it can be processed in the future. Silver Layer This is an intermediate layer where the data is cleaned and transformed to maintain uniformity and fitness. Each data source is suitable for analysis by employing data cleaning, joining, and filtering methods that permit data unification from different sources. Therefore, the layer takes care of errors or a non-standardized data structure and improves its qual…  ( 31 min )
    Medallion Architecture in Microsoft Fabric: Leveraging OneLake for Scalable Data Management
    Medallion Architecture Overview Medallion Architecture, a systematic data management approach, offers a three-tier structure for data processing: Bronze, Silver, and Gold. Bronze Layer This raw layer stores different data types, including unstructured, semi-structured, and structured. The aim here is to ensure that the data is secured in its raw form during the storage processes so that it can be processed in the future. Silver Layer This is an intermediate layer where the data is cleaned and transformed to maintain uniformity and fitness. Each data source is suitable for analysis by employing data cleaning, joining, and filtering methods that permit data unification from different sources. Therefore, the layer takes care of errors or a non-standardized data structure and improves its qual…
  • Open

    Introducing Java, JS and Python support in Test Plans
    We are excited to announce new capabilities in Azure Test Plans that will enhance your testing workflows. This feature is currently available in Public Preview. You can find instructions at the end of the article on how to request it for your Azure DevOps project. With this latest release, we are introducing the ability to […] The post Introducing Java, JS and Python support in Test Plans appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    Azure Monitor Private Link Scope (AMPLS) Scale Limits Increased by 10x!
    What is Azure Monitor Private Link Scope (AMPLS)? Azure Monitor Private Link Scope (AMPLS) is a feature that allows you to securely connect Azure Monitor resources to your virtual network using private endpoints. This ensures that your monitoring data is accessed only through authorized private networks, preventing data exfiltration and keeping all traffic inside the Azure backbone network.  AMPLS – Scale Limits Increased by 10x in Public Cloud - Public Preview In a groundbreaking development, we are excited to share that the scale limits for Azure Monitor Private Link Scope (AMPLS) have been significantly increased by tenfold (10x) in Public Cloud regions as part of the Public Preview! This substantial enhancement empowers our customers to manage their resources more efficiently and secur…  ( 24 min )
    Azure Monitor Private Link Scope (AMPLS) Scale Limits Increased by 10x!
    What is Azure Monitor Private Link Scope (AMPLS)? Azure Monitor Private Link Scope (AMPLS) is a feature that allows you to securely connect Azure Monitor resources to your virtual network using private endpoints. This ensures that your monitoring data is accessed only through authorized private networks, preventing data exfiltration and keeping all traffic inside the Azure backbone network.  AMPLS – Scale Limits Increased by 10x in Public Cloud - Public Preview In a groundbreaking development, we are excited to share that the scale limits for Azure Monitor Private Link Scope (AMPLS) have been significantly increased by tenfold (10x) in Public Cloud regions as part of the Public Preview! This substantial enhancement empowers our customers to manage their resources more efficiently and secur…
  • Open

    Managing Traffic Jams with Azure OpenAI PTU Spillover
    In my previous blog, (Azure OpenAI offering models - Explain it Like I'm 5 | Microsoft Community Hub), I drew a comparison of our Azure OpenAI service to a highway. In the standard offering, all users (cars) share common capacity (lanes) as they use the service. In times of congestion or high usage, traffic can slow down due to all the cars on the road at a single time. Provisioned Throughput (PTU) deployments can be seen as private express lanes designed for consistency and low variability on the highway of our service. These private lanes guarantee a certain amount of traffic can move through the service at a steady speed. Also, by having your own lane, you control the amount of traffic and are not impeded by other cars (or other users) using the service. But what happens when too many c…  ( 31 min )
    Managing Traffic Jams with Azure OpenAI PTU Spillover
    In my previous blog, (Azure OpenAI offering models - Explain it Like I'm 5 | Microsoft Community Hub), I drew a comparison of our Azure OpenAI service to a highway. In the standard offering, all users (cars) share common capacity (lanes) as they use the service. In times of congestion or high usage, traffic can slow down due to all the cars on the road at a single time. Provisioned Throughput (PTU) deployments can be seen as private express lanes designed for consistency and low variability on the highway of our service. These private lanes guarantee a certain amount of traffic can move through the service at a steady speed. Also, by having your own lane, you control the amount of traffic and are not impeded by other cars (or other users) using the service. But what happens when too many c…
  • Open

    Microsoft Dev Box roadmap update
    What features and enhancements are planned for Microsoft Dev Box Microsoft Dev Box continues to evolve, providing a secure, scalable, and ready-to-code cloud-based development environment. Over the next few months, significant improvements will focus on three key areas: Ready-to-code environments: Faster setup, enhanced debugging, and GitHub Copilot integration to streamline developer workflows. Developers will benefit […] The post Microsoft Dev Box roadmap update appeared first on Develop from the cloud.  ( 21 min )

  • Open

    Microsoft 365 Certification control spotlight: Data at rest
    Read how Microsoft 365 Certification ensures compliance for data at rest. The post Microsoft 365 Certification control spotlight: Data at rest appeared first on Microsoft 365 Developer Blog.  ( 23 min )
    Enforcement of license checks for PSTN Bot calls
    As part of Microsoft’s feature parity with Teams Phone extensibility, we’re announcing the enforcement of Phone System license checks for Bot-initiated transfers to Teams users. This current gap in our systems will be addressed in June 2025.  Microsoft Teams requires that Teams users behind applications such as queue applications require a phone system license and […] The post Enforcement of license checks for PSTN Bot calls appeared first on Microsoft 365 Developer Blog.  ( 21 min )
  • Open

    Announcing the public preview launch of Azure Functions durable task scheduler
    We are excited to roll out the public preview of the Azure Functions durable task scheduler. This new Azure-managed backend is designed to provide high performance, improve reliability, reduce operational overhead, and simplify the monitoring of your stateful orchestrations. If you missed the initial announcement of the private preview, see this blog post. Durable Task Scheduler Durable functions simplifies the development of complex, stateful, and long-running apps in a serverless environment. It allows developers to orchestrate multiple function calls without having to handle fault tolerance. It's great for scenarios like orchestrating multiple agents, distributed transactions, big data processing, batch processing like ETL (extract, transform, load), asynchronous APIs, and essentially …  ( 41 min )
    Announcing the public preview launch of Azure Functions durable task scheduler
    We are excited to roll out the public preview of the Azure Functions durable task scheduler. This new Azure-managed backend is designed to provide high performance, improve reliability, reduce operational overhead, and simplify the monitoring of your stateful orchestrations. If you missed the initial announcement of the private preview, see this blog post. Durable Task Scheduler Durable functions simplifies the development of complex, stateful, and long-running apps in a serverless environment. It allows developers to orchestrate multiple function calls without having to handle fault tolerance. It's great for scenarios like orchestrating multiple agents, distributed transactions, big data processing, batch processing like ETL (extract, transform, load), asynchronous APIs, and essentially …
    Deploy Dynatrace OneAgent on your Container Apps
    TOC Introduction Setup References   1. Introduction Dynatrace OneAgent is an advanced monitoring tool that automatically collects performance data across your entire IT environment. It provides deep visibility into applications, infrastructure, and cloud services, enabling real-time observability. OneAgent supports multiple platforms, including containers, VMs, and serverless architectures, ensuring seamless monitoring with minimal configuration. It captures detailed metrics, traces, and logs, helping teams diagnose performance issues, optimize resources, and enhance user experiences. With AI-driven insights, OneAgent proactively detects anomalies and automates root cause analysis, making it an essential component for modern DevOps, SRE, and cloud-native monitoring strategies. 2. Setup 1…  ( 23 min )
    Deploy Dynatrace OneAgent on your Container Apps
    TOC Introduction Setup References   1. Introduction Dynatrace OneAgent is an advanced monitoring tool that automatically collects performance data across your entire IT environment. It provides deep visibility into applications, infrastructure, and cloud services, enabling real-time observability. OneAgent supports multiple platforms, including containers, VMs, and serverless architectures, ensuring seamless monitoring with minimal configuration. It captures detailed metrics, traces, and logs, helping teams diagnose performance issues, optimize resources, and enhance user experiences. With AI-driven insights, OneAgent proactively detects anomalies and automates root cause analysis, making it an essential component for modern DevOps, SRE, and cloud-native monitoring strategies. 2. Setup 1…
    Running PowerShell Scripts on Azure VMs with Domain User Authentication using Azure Functions
    Azure Functions provides a powerful platform for automating various tasks within your Azure environment. In this specific scenario, we’ll use Azure Functions to run commands remotely on a VM while authenticating as a domain user, not as a system-assigned or user-assigned managed identity. Prerequisites Before we dive into the implementation, ensure you have the following prerequisites: Azure Subscription: You'll need an Azure subscription. Azure VM: A Windows VM is required where the PowerShell script will be executed. Azure Function: You will need an Azure Function with the appropriate configurations to run the PowerShell script. Domain User Account: A domain user account for authentication. Azure PowerShell Modules: The Azure PowerShell modules (Az) installed to work with Azure resource…  ( 37 min )
    Running PowerShell Scripts on Azure VMs with Domain User Authentication using Azure Functions
    Azure Functions provides a powerful platform for automating various tasks within your Azure environment. In this specific scenario, we’ll use Azure Functions to run commands remotely on a VM while authenticating as a domain user, not as a system-assigned or user-assigned managed identity. Prerequisites Before we dive into the implementation, ensure you have the following prerequisites: Azure Subscription: You'll need an Azure subscription. Azure VM: A Windows VM is required where the PowerShell script will be executed. Azure Function: You will need an Azure Function with the appropriate configurations to run the PowerShell script. Domain User Account: A domain user account for authentication. Azure PowerShell Modules: The Azure PowerShell modules (Az) installed to work with Azure resource…
  • Open

    Virtualize your Cloudera/Hadoop data estate into Fabric OneLake with Apache Ozone
    OneLake Shortcut Microsoft Fabric OneLake shortcuts facilitate the virtualization of data from various cloud object stores and on-premises environments. For on-premises sources like Cloudera/Apache Ozone, the OneLake S3 Compatible Shortcut can be utilized to connect to these data sources. With OneLake Shortcuts, users can create a virtual reference to their Cloudera cluster data without moving … Continue reading “Virtualize your Cloudera/Hadoop data estate into Fabric OneLake with Apache Ozone”  ( 8 min )
    Virtualize your Cloudera/Hadoop data estate into Fabric OneLake with Apache Ozone
    OneLake Shortcut Microsoft Fabric OneLake shortcuts facilitate the virtualization of data from various cloud object stores and on-premises environments. For on-premises sources like Cloudera/Apache Ozone, the OneLake S3 Compatible Shortcut can be utilized to connect to these data sources. With OneLake Shortcuts, users can create a virtual reference to their Cloudera cluster data without moving … Continue reading “Virtualize your Cloudera/Hadoop data estate into Fabric OneLake with Apache Ozone”  ( 8 min )
  • Open

    Building Trustworthy AI Agents
    Building AI agents should be safe applications. Here, safety means that AI agent performs as designed. As builders of agentic applications, we have methods and tools to maximize safety:  Building a Meta Prompting System If you have ever built an AI application using Large Language Models (LLMs), you know the importance of designing a robust system prompt or system message. These prompts establish meta rules, instructions and guidelines for how the LLM will interact with the user and data. The system prompt in AI agents will need highly specific instructions to complete the tasks designed for them.   Microsoft has developed a FREE course on the AI Agents for Beginners in this blog I am going to focus on the Meta Prompting System.To create scalable system prompts, we can use a meta prompting…  ( 29 min )
    Building Trustworthy AI Agents
    Building AI agents should be safe applications. Here, safety means that AI agent performs as designed. As builders of agentic applications, we have methods and tools to maximize safety:  Building a Meta Prompting System If you have ever built an AI application using Large Language Models (LLMs), you know the importance of designing a robust system prompt or system message. These prompts establish meta rules, instructions and guidelines for how the LLM will interact with the user and data. The system prompt in AI agents will need highly specific instructions to complete the tasks designed for them.   Microsoft has developed a FREE course on the AI Agents for Beginners in this blog I am going to focus on the Meta Prompting System.To create scalable system prompts, we can use a meta prompting…
  • Open

    Improve LLM backend resiliency with load balancer and circuit breaker rules in Azure API Management
    This article is part of a series of articles on Azure API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. We previously covered the hidden risks of AI APIs in today's AI-driven technological landscape. In this article, we dive deeper into one of the supported Gen AI policies in API Management, which allows your applications to change the effective Gen AI backend based on unexpected and specified events. In Azure API Management, you can set up your different LLMs as backends and define structures to route requests to prioritized backends and add automatic circuit breaker rules to protect backends from too many requests. Under normal conditions, if your Azure OpenAI s…  ( 27 min )
    Improve LLM backend resiliency with load balancer and circuit breaker rules in Azure API Management
    This article is part of a series of articles on Azure API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. We previously covered the hidden risks of AI APIs in today's AI-driven technological landscape. In this article, we dive deeper into one of the supported Gen AI policies in API Management, which allows your applications to change the effective Gen AI backend based on unexpected and specified events. In Azure API Management, you can set up your different LLMs as backends and define structures to route requests to prioritized backends and add automatic circuit breaker rules to protect backends from too many requests. Under normal conditions, if your Azure OpenAI se…
  • Open

    Announcing new Phi pricing, Empowering Your Business with Small Language Models
    Last month, Microsoft announced two new models to the Phi family. With that announcement comes new pricing for designed to provide our customers with even more value and flexibility. Why small language models matter Small language models are revolutionizing the way businesses operate by offering powerful capabilities without the need for extensive computational resources. These models are not only cost-effective but also highly efficient, making them ideal for a wide range of applications, from customer service to data analysis. Phi pricing Language and vison models ModelsContext lengthInput (per 1,000 tokens)Output (per 1,000 tokens) Phi-3-mini 4K $0.00013 $0.00052   Phi-3-mini 128K $0.00013 $0.00052 Phi-3.5-mini 128K $0.00013  $0.00052 Phi-3-small 8K $0.00015 $0.0006…  ( 21 min )
    Announcing new Phi pricing, Empowering Your Business with Small Language Models
    Last month, Microsoft announced two new models to the Phi family. With that announcement comes new pricing for designed to provide our customers with even more value and flexibility. Why small language models matter Small language models are revolutionizing the way businesses operate by offering powerful capabilities without the need for extensive computational resources. These models are not only cost-effective but also highly efficient, making them ideal for a wide range of applications, from customer service to data analysis. Phi pricing Language and vison models ModelsContext lengthInput (per 1,000 tokens)Output (per 1,000 tokens) Phi-3-mini 4K $0.00013 $0.00052   Phi-3-mini 128K $0.00013 $0.00052 Phi-3.5-mini 128K $0.00013  $0.00052 Phi-3-small 8K $0.00015 $0.0006…
  • Open

    Advanced RAG Solution Accelerator
    Overview What is RAG and Why Advanced RAG? Retrieval-Augmented Generation (RAG) is a natural language processing technique that combines the strengths of retrieval-based and generation-based models. It uses search algorithms to retrieve relevant data from external sources such as databases, knowledge bases, document corpora, and web pages. This retrieved data, known as "grounding information," is then input into a large language model (LLM) to generate more accurate, relevant, and up-to-date outputs.   Figure1: High level Retrieval Augmented Flow Usage Patterns Here are some horizontal use cases where customers have used Retrieval Augmented Generation based systems: Conversational Search and Insights: Summarize large volumes of information for easier consumption and communication. Conten…  ( 76 min )
    Advanced RAG Solution Accelerator
    Overview What is RAG and Why Advanced RAG? Retrieval-Augmented Generation (RAG) is a natural language processing technique that combines the strengths of retrieval-based and generation-based models. It uses search algorithms to retrieve relevant data from external sources such as databases, knowledge bases, document corpora, and web pages. This retrieved data, known as "grounding information," is then input into a large language model (LLM) to generate more accurate, relevant, and up-to-date outputs.   Figure1: High level Retrieval Augmented Flow Usage Patterns Here are some horizontal use cases where customers have used Retrieval Augmented Generation based systems: Conversational Search and Insights: Summarize large volumes of information for easier consumption and communication. Conten…
  • Open

    Introducing Copilot in the Microsoft 365 admin centers
    Streamline daily admin tasks with AI-powered insights, natural language queries, and automation using Copilot in Microsoft 365 admin centers. Quickly recap key updates, monitor service health, and track important changes — all in one place. No more digging through multiple pages — just ask Copilot for the answers you need, grounded in real-time data from your tenant. From finding users and managing licenses to generating visual insights and automating tasks with PowerShell, use Copilot to simplify complex admin workflows and save valuable time. For Copilot in the admin center to light up, all you need is one active Microsoft 365 Copilot license for any user in your tenant and from the Microsoft 365 admin center, you can get started right away. Jeremy Chapman, Director of Microsoft 365, de…  ( 42 min )
    Introducing Copilot in the Microsoft 365 admin centers
    Streamline daily admin tasks with AI-powered insights, natural language queries, and automation using Copilot in Microsoft 365 admin centers. Quickly recap key updates, monitor service health, and track important changes — all in one place. No more digging through multiple pages — just ask Copilot for the answers you need, grounded in real-time data from your tenant. From finding users and managing licenses to generating visual insights and automating tasks with PowerShell, use Copilot to simplify complex admin workflows and save valuable time. For Copilot in the admin center to light up, all you need is one active Microsoft 365 Copilot license for any user in your tenant and from the Microsoft 365 admin center, you can get started right away. Jeremy Chapman, Director of Microsoft 365, de…

  • Open

    The Startup Stage: Powered by Microsoft for Startups at European AI & Cloud Summit
    🚀 The Startup Stage: Powered by Microsoft for Startups Take center stage in the AI and Cloud Startup Program, designed to showcase groundbreaking solutions and foster collaboration between ambitious startups and influential industry leaders. Whether you're looking to engage with potential investors, connect with clients, or share your boldest ideas, this is the platform to shine. Why Join the Startup Stage? Pitch to Top Investors: Present your ideas and products to key decision-makers in the tech world. Gain Visibility: Showcase your startup in a vibrant space dedicated to innovation, and prove that you are the next game-changer. Learn from the Best: Hear from visionary thought leaders and Microsoft AI experts about the latest trends and opportunities in AI and cloud. AI Competition: Prop…  ( 22 min )
    The Startup Stage: Powered by Microsoft for Startups at European AI & Cloud Summit
    🚀 The Startup Stage: Powered by Microsoft for Startups Take center stage in the AI and Cloud Startup Program, designed to showcase groundbreaking solutions and foster collaboration between ambitious startups and influential industry leaders. Whether you're looking to engage with potential investors, connect with clients, or share your boldest ideas, this is the platform to shine. Why Join the Startup Stage? Pitch to Top Investors: Present your ideas and products to key decision-makers in the tech world. Gain Visibility: Showcase your startup in a vibrant space dedicated to innovation, and prove that you are the next game-changer. Learn from the Best: Hear from visionary thought leaders and Microsoft AI experts about the latest trends and opportunities in AI and cloud. AI Competition: Prop…
  • Open

    Deploy Your First App Using GitHub Copilot for Azure: A Beginner’s Guide
    Deploying an app for the first time can feel overwhelming. You may find yourself switching between tutorials, scanning documentation, and wondering if you missed a step. But what if you could do it all in one place? Now you can! With GitHub Copilot for Azure, you can receive real time deployment guidance without leaving the Visual Studio Code. While it won’t fully automate deployments, it serves as a step-by-step AI powered assistant, helping you navigate the process with clear, actionable instructions. No more endless tab switching or searching for the right tutorial—simply type, deploy, and learn, all within your IDE i.e. Visual Studio Code. If you are a student, you have access to exclusive opportunities! Whether you are exploring new technologies or experimenting with them, platforms l…  ( 31 min )
    Deploy Your First App Using GitHub Copilot for Azure: A Beginner’s Guide
    Deploying an app for the first time can feel overwhelming. You may find yourself switching between tutorials, scanning documentation, and wondering if you missed a step. But what if you could do it all in one place? Now you can! With GitHub Copilot for Azure, you can receive real time deployment guidance without leaving the Visual Studio Code. While it won’t fully automate deployments, it serves as a step-by-step AI powered assistant, helping you navigate the process with clear, actionable instructions. No more endless tab switching or searching for the right tutorial—simply type, deploy, and learn, all within your IDE i.e. Visual Studio Code. If you are a student, you have access to exclusive opportunities! Whether you are exploring new technologies or experimenting with them, platforms l…
  • Open

    Accelerating Agentic Workflows with NVIDIA AgentIQ, Azure AI Foundry and Semantic Kernel
    Today, we’re excited to announce our collaboration with NVIDIA. In Azure AI Foundry, we’ve integrated NVIDIA NIM microservices and the NVIDIA AgentIQ toolkit into Azure AI Foundry—unlocking unprecedented efficiency, performance, and cost optimization for your AI projects. Read more on the announcement here. Optimizing performance with NVIDIA AgentIQ and Semantic Kernel Once your NVIDIA NIM […] The post Accelerating Agentic Workflows with NVIDIA AgentIQ, Azure AI Foundry and Semantic Kernel appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing Sidecar Extensions for Azure App Service on Linux
    In November 2024, we announced the General Availability (GA) of the Sidecar feature for Azure App Service for Linux, enabling developers to run sidecar containers alongside their applications. Today, we’re excited to take this capability even further with the introduction of Sidecar Extensions—pre-packaged integrations that simplify common use cases and accelerate development.  ( 2 min )
    Using Datadog as a Sidecar Extension for Azure App Service on Linux
    Monitoring your applications is crucial for performance and reliability. With Datadog as a sidecar extension, you can seamlessly collect logs, metrics, and traces from your application—without modifying your app code.  ( 5 min )
    Running SLMs as Sidecar extensions on App Service for Linux
    Introduction:  ( 5 min )
    Using the Redis Sidecar Extension with Azure App Service for Linux
    Azure App Service now supports running Redis as a sidecar extension, allowing you to easily add Redis caching to your applications. This blog will walk you through deploying an application to Azure App Service, adding the Redis sidecar extension, and verifying that it works.  ( 4 min )
  • Open

    Unlock Performance Gains with NVIDIA Inference Optimizations on Azure AI Foundry
    Microsoft has been working closely with NVIDIA to optimize the most popular models lke Meta Llama models using NVIDIA TensorRT-LLM (TRT-LLM). This ongoing effort ensures that Azure AI Foundry customers benefit from state-of-the-art inference performance improvements, and increased cost efficiency while maintaining response quality.  Optimized Llama Models Now Available  The following Llama models have been optimized delivering significant throughput and latency improvements:  Llama 3.3 70B   Llama 3.1 70B  Llama 3.1 8B   Llama 3.1 405B   These enhancements are automatically applied, so customers using Llama models from the model catalog in Azure AI Foundry will experience improved performance seamlessly—no additional steps or actions required.    Real World Performance Gains  Synop…  ( 25 min )
    Unlock Performance Gains with NVIDIA Inference Optimizations on Azure AI Foundry
    Microsoft has been working closely with NVIDIA to optimize the most popular models lke Meta Llama models using NVIDIA TensorRT-LLM (TRT-LLM). This ongoing effort ensures that Azure AI Foundry customers benefit from state-of-the-art inference performance improvements, and increased cost efficiency while maintaining response quality.  Optimized Llama Models Now Available  The following Llama models have been optimized delivering significant throughput and latency improvements:  Llama 3.3 70B   Llama 3.1 70B  Llama 3.1 8B   Llama 3.1 405B   These enhancements are automatically applied, so customers using Llama models from the model catalog in Azure AI Foundry will experience improved performance seamlessly—no additional steps or actions required.    Real World Performance Gains  Synop…
  • Open

    Announcing GA for Azure Container Apps Serverless GPUs
    Azure Container Apps Serverless GPUs accelerated by NVIDIA are now generally available. Serverless GPUs enable you to seamlessly run AI workloads with per-second billing and scale down to zero when not in use. Thus, reducing operational overhead to support easy real-time custom model inferencing and other GPU-accelerated workloads. Serverless GPUs accelerate the speed of AI development teams by allowing customers to focus on core AI code and less on managing infrastructure when using GPUs. This provides an excellent middle layer option between Azure AI Model Catalog's serverless APIs and hosting custom models on managed compute. Now customers can build their own serverless API endpoints for inferencing AI models including custom models. Customers can also provision on-demand GPU-powered Ju…  ( 28 min )
    Announcing GA for Azure Container Apps Serverless GPUs
    Azure Container Apps Serverless GPUs accelerated by NVIDIA are now generally available. Serverless GPUs enable you to seamlessly run billing with scale down to zero when not in use. Thus, reducing operational overhead to support easy real-time custom model inferencing and other GPU-accelerated workloads. Serverless GPUs accelerate the speed of AI development teams by allowing customers to focus on core AI code and less on managing infrastructure when using GPUs. This provides an excellent middle layer option between Azure AI Model Catalog's serverless APIs and hosting custom models on managed compute. Now customers can build their own serverless API endpoints for inferencing AI models including custom models. Customers can also provision on-demand GPU-powered Jupyter Notebooks or run other…
    Azure at KubeCon Europe 2025 | London, UK - April 1-4
    Are you ready for KubeCon + CloudNativeCon Europe 2025? We are thrilled to join the community in London, UK, from April 1-4, 2025, for an exciting lineup of events and activities. As a Diamond Sponsor, Microsoft Azure is set to showcase the latest innovations in Kubernetes, AI, and open-source. Read on for the many ways to connect with our team during the event: Azure Day with Kubernetes (April 1st):   (This is not a prank!) We are kicking things off with our Azure Day with Kubernetes pre-day event, focused on all things cloud-native and intelligent apps with Kubernetes on Azure.   Morning - Presentations: Start the day by learning about the latest on AKS. Learn how AKS streamlines the deployment and management intelligent applications, and gain insights into the best practices across key…  ( 39 min )
    Azure at KubeCon Europe 2025 | London, UK - April 1-4
    Are you ready for KubeCon + CloudNativeCon Europe 2025? We are thrilled to join the community in London, UK, from April 1-4, 2025, for an exciting lineup of events and activities. As a Diamond Sponsor, Microsoft Azure is set to showcase the latest innovations in Kubernetes, AI, and open-source. Read on for the many ways to connect with our team during the event: Azure Day with Kubernetes (April 1st):   (This is not a prank!) We are kicking things off with our Azure Day with Kubernetes pre-day event, focused on all things cloud-native and intelligent apps with Kubernetes on Azure.   Morning - Presentations: Start the day by learning about the latest on AKS. Learn how AKS streamlines the deployment and management intelligent applications, and gain insights into the best practices across key…
  • Open

    Upcoming Updates for Azure Pipelines Agents Images
    To ensure our hosted agents in Azure Pipelines are operating in the most secure and up-to-date environments, we continuously update the supported images and phase out older ones. In October 2024, we announced support for Ubuntu-24.04. Soon, we plan to update the ubuntu-latest image to map to Ubuntu-24.04. Additionally, MacOS 15 Sequoia and Windows 2025 […] The post Upcoming Updates for Azure Pipelines Agents Images appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Build, Innovate, and #Hacktogether!
    Learn from 20+ expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, Autogen, the new Azure AI Agents SDK and the Microsoft 365 Agents SDK. Get hands-on experience, unleash your creativity, and build powerful AI agents—then submit your hack for a chance to win amazing prizes! 💸 Key Dates Expert sessions: April 8th 2025 – April 30th 2025 Hack submission deadline: April 30th 2025, 11:59 PM PST Don't miss out — join us and start building the future of AI! 🔥 Registration  Register now! That form will register you for the hackathon. Afterwards, browse through the live stream schedule below and register for the sessions you're interested in. Once you're registered, introduce yourself and look for teammates! Project Submission  Once your hack is ready, fo…  ( 29 min )
    Build, Innovate, and #Hacktogether!
    Learn from 20+ expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, Autogen, the new Azure AI Agents SDK and the Microsoft 365 Agents SDK. Get hands-on experience, unleash your creativity, and build powerful AI agents—then submit your hack for a chance to win amazing prizes! 💸 Key Dates Expert sessions: April 8th 2025 – April 30th 2025 Hack submission deadline: April 30th 2025, 11:59 PM PST Don't miss out — join us and start building the future of AI! 🔥 Registration  Register now! That form will register you for the hackathon. Afterwards, browse through the live stream schedule below and register for the sessions you're interested in. Once you're registered, introduce yourself and look for teammates! Project Submission  Once your hack is ready, fo…
    Cloud Security Made Easy: Protect Your Apps with Microsoft Azure
    Hi everyone, I'm Rajat Rajput, a Microsoft Learn Student Ambassador, constantly exploring Azure and the opportunities it offers. I recently earned my Azure AI Fundamentals (AI-900) and Azure Fundamentals (AZ-900) certifications and realized how important cloud security is. In this post, we'll dive into the concept of cloud security and explore how it can be implemented using Microsoft Azure.   As businesses increasingly migrate to cloud platforms like Microsoft Azure, understanding and implementing security measures is no longer optional rather it's essential. Whether you're a new developer deploying your first cloud app or an IT professional managing enterprise level infrastructure, Azure's comprehensive security services are your shield against cyber attacks.   Why Cloud Security is Non …  ( 27 min )
    Cloud Security Made Easy: Protect Your Apps with Microsoft Azure
    Hi everyone, I'm Rajat Rajput, a Microsoft Learn Student Ambassador, constantly exploring Azure and the opportunities it offers. I recently earned my Azure AI Fundamentals (AI-900) and Azure Fundamentals (AZ-900) certifications and realized how important cloud security is. In this post, we'll dive into the concept of cloud security and explore how it can be implemented using Microsoft Azure.   As businesses increasingly migrate to cloud platforms like Microsoft Azure, understanding and implementing security measures is no longer optional rather it's essential. Whether you're a new developer deploying your first cloud app or an IT professional managing enterprise level infrastructure, Azure's comprehensive security services are your shield against cyber attacks.   Why Cloud Security is Non …
    Take Your Startup from Campus to the Cloud at the European AI and Cloud Summit
    Why University Startups Should Attend This is more than just a conference—it’s a launchpad for student-led startups eager to make their mark in AI and cloud computing. Here’s how the summit can accelerate your journey: - Pitch to Industry Leaders: Present your solution to a live audience of investors, potential clients, and mentors. - Build Connections: Meet industry pioneers, Microsoft experts, and other startups—expand your network and unlock collaboration opportunities. - Gain Expertise: Learn from thought leaders in AI and cloud through insightful talks and workshops.     🏆 AI Competition: A Game-Changing Opportunity The AI & Cloud Startup Stage competition is specifically designed for emerging startups, like those in university accelerator programs, that are building AI with Microso…  ( 22 min )
    Take Your Startup from Campus to the Cloud at the European AI and Cloud Summit
    Why University Startups Should Attend This is more than just a conference—it’s a launchpad for student-led startups eager to make their mark in AI and cloud computing. Here’s how the summit can accelerate your journey: - Pitch to Industry Leaders: Present your solution to a live audience of investors, potential clients, and mentors. - Build Connections: Meet industry pioneers, Microsoft experts, and other startups—expand your network and unlock collaboration opportunities. - Gain Expertise: Learn from thought leaders in AI and cloud through insightful talks and workshops.     🏆 AI Competition: A Game-Changing Opportunity The AI & Cloud Startup Stage competition is specifically designed for emerging startups, like those in university accelerator programs, that are building AI with Microso…
  • Open

    Fabric Espresso – Episodes about Data Science & Machine Learning in Microsoft Fabric
    For the past 1.5 years, the Microsoft Fabric Product Group Product Managers have been publishing a YouTube series featuring deep dives into Microsoft Fabric’s features. These episodes cover both technical functionalities and real-world scenarios, providing insights into the product roadmap and the people driving innovation. With over 80+ episodes, the series serves as a valuable resource for anyone looking to understand and optimize their use of Microsoft Fabric  ( 5 min )
    Powerful improvements for Copy Job
    Copy Job has been a go-to tool for simplified data ingestion in Microsoft Fabric, offering a seamless data movement experience from any source to any destination. Whether you need batch or incremental copying, it provides the flexibility to meet diverse data needs while maintaining a simple and intuitive workflow. We continuously refine Copy Job based … Continue reading “Powerful improvements for Copy Job”  ( 6 min )
  • Open

    Unlocking Java and AI Potential: Learning Plan, JavaOne, and JDConf 2025
    Java remains a cornerstone in today’s AI-driven world, offering scalability, high performance, and seamless integration for enterprise applications. Microsoft continues its commitment to Java developers with two exciting events: JavaOne 2025 and Microsoft JDConf 2025. Additionally, we’ve launched an official Microsoft Learn plan, "From Enthusiasts to Enterprise Java: A Practitioner’s Guide," providing expert-curated resources to help teams build Java-focused, AI-driven solutions. Why Choose Java and Azure for AI Development? Azure offers a comprehensive ecosystem to support Java-based AI development, providing tools, frameworks, and cloud-native solutions to enhance performance and scalability. 1. Scalability and Performance Azure Container Apps: Enables scalable microservices and serverl…  ( 28 min )
    Unlocking Java and AI Potential: Learning Plan, JavaOne, and JDConf 2025
    Java remains a cornerstone in today’s AI-driven world, offering scalability, high performance, and seamless integration for enterprise applications. Microsoft continues its commitment to Java developers with two exciting events: JavaOne 2025 and Microsoft JDConf 2025. Additionally, we’ve launched an official Microsoft Learn plan, "From Enthusiasts to Enterprise Java: A Practitioner’s Guide," providing expert-curated resources to help teams build Java-focused, AI-driven solutions. Why Choose Java and Azure for AI Development? Azure offers a comprehensive ecosystem to support Java-based AI development, providing tools, frameworks, and cloud-native solutions to enhance performance and scalability. 1. Scalability and Performance Azure Container Apps: Enables scalable microservices and serverl…
    Power AI App Development with Java and Azure at Microsoft JDConf 2025
    Java remains a cornerstone in today’s AI-driven world, offering scalability, high performance, and seamless integration for enterprise applications. Microsoft continues its commitment to Java developers with two exciting events: JavaOne 2025 and Microsoft JDConf 2025. Additionally, we’ve launched an official Microsoft Learn plan, "Let’s Explore Java Together on Azure," providing expert-curated resources to help teams build AI-driven solutions effectively. Why Choose Java and Azure for AI Development? Azure offers a comprehensive ecosystem to support Java-based AI development, providing tools, frameworks, and cloud-native solutions to enhance performance and scalability. 1. Scalability and Performance Azure Container Apps: Enables scalable microservices and serverless containers. Azure App…
  • Open

    Self-Serve Restoration in Microsoft Dev Box
    Developers grapple with the anxiety of potential data loss due to unforeseen issues, such as accidental deletions, system failures, or even corrupt files. The Dev Box team is addressing this concern with the self-service restoration feature for the Dev Box, designed to put you back in control. This functionality streamlines recovery by allowing direct restorations […] The post Self-Serve Restoration in Microsoft Dev Box appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Introducing ABAP Support in GitHub Copilot for Eclipse
    The latest release of GitHub Copilot for Eclipse now includes support for ABAP! This update builds on the recent release of code completion and chat integration, offering a robust toolset for developers working within the SAP environment. ABAP remains a critical language in the enterprise space, powering a wide range of business applications, and its […] The post Introducing ABAP Support in GitHub Copilot for Eclipse appeared first on Microsoft for Java Developers.  ( 22 min )
  • Open

    AI-Powered Load Testing in VS Code with Azure Load Testing & GitHub Copilot
    There's a better way than writing load test scripts by hand. The new Azure Load Testing extension for Visual Studio Code (Preview), now integrated with GitHub Copilot, automatically generates realistic, Locust-based load tests. It seamlessly handles authentication, API request sequencing, response validation, and test data—helping you save time and ensure realistic performance testing.  With this AI-driven tool, you can: Instantly generate Locust test scripts from Postman collections, Insomnia collections, or .http files. Easily enhance tests with GitHub Copilot, like adding random data or dynamic user flows. Quickly iterate by running tests locally before scaling up. Easily execute large-scale tests in Azure Load Testing to uncover performance bottlenecks. Spend less time wrestling with…  ( 25 min )
    AI-Powered Load Testing in VS Code with Azure Load Testing & GitHub Copilot
    There's a better way than writing load test scripts by hand. The new Azure Load Testing extension for Visual Studio Code (Preview), now integrated with GitHub Copilot, automatically generates realistic, Locust-based load tests. It seamlessly handles authentication, API request sequencing, response validation, and test data—helping you save time and ensure realistic performance testing.  With this AI-driven tool, you can: Instantly generate Locust test scripts from Postman collections, Insomnia collections, or .http files. Easily enhance tests with GitHub Copilot, like adding random data or dynamic user flows. Quickly iterate by running tests locally before scaling up. Easily execute large-scale tests in Azure Load Testing to uncover performance bottlenecks. Spend less time wrestling with…
    API teams and Platform teams for better API management
    This article is written partly as a conversation. As developers, we usually have questions and the idea is to lay out these questions and answer them in a conversational manner thereby making it easier to understand. API management Let's provide some context what API management is all about. API management is the process of creating and publishing web APIs, enforcing their usage policies, controlling access, nurturing the subscriber community, collecting and analyzing usage statistics, and reporting on performance. API management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. All that sounds like a nice pitch, but what does it really mean? Let's break it down. You're a developer, you build an API and you ma…  ( 35 min )
    API teams and Platform teams for better API management
    This article is written partly as a conversation. As developers, we usually have questions and the idea is to lay out these questions and answer them in a conversational manner thereby making it easier to understand. API management Let's provide some context what API management is all about. API management is the process of creating and publishing web APIs, enforcing their usage policies, controlling access, nurturing the subscriber community, collecting and analyzing usage statistics, and reporting on performance. API management helps organizations publish APIs to external, partner, and internal developers to unlock the potential of their data and services. All that sounds like a nice pitch, but what does it really mean? Let's break it down. You're a developer, you build an API and you ma…
  • Open

    Guest Blog: Build a Multi-Agent System Using Microsoft Azure AI Agent Service and Semantic Kernel in 3 Simple Steps!
    Build a Multi-Agent System Using Microsoft Azure AI Agent Service and Semantic Kernel in 3 Simple Steps! Today we’re thrilled to welcome back guest author, Akshay Kokane to share his recent Medium article on Build a Multi-Agent System Using Microsoft Azure AI Agent Service and Semantic Kernel in 3 Simple Steps. We’ll turn it over to […] The post Guest Blog: Build a Multi-Agent System Using Microsoft Azure AI Agent Service and Semantic Kernel in 3 Simple Steps! appeared first on Semantic Kernel.  ( 24 min )

  • Open

    Enterprise Application Development with Azure Responses API and Agents SDK
    Overview Artificial Intelligence (AI) has become essential in modern enterprise applications, significantly enhancing automation and intelligent decision-making processes. Microsoft's Azure Responses API, when combined with OpenAI's Agents SDK, AutoGen, Swarm, LangGraph, and LangMem, creates a robust ecosystem that enables developers to build, orchestrate, and deploy intelligent, action-oriented AI agents within enterprise environments.    These technologies collectively address the growing demand for AI systems that can:  Understand and respond to complex business queries  Execute multi-step operations across different systems  Maintain context and memory across interactions  Scale securely within enterprise infrastructure Installation and Setup Prerequisites  Before beginning dev…  ( 39 min )
    Enterprise Application Development with Azure Responses API and Agents SDK
    Overview Artificial Intelligence (AI) has become essential in modern enterprise applications, significantly enhancing automation and intelligent decision-making processes. Microsoft's Azure Responses API, when combined with OpenAI's Agents SDK, AutoGen, Swarm, LangGraph, and LangMem, creates a robust ecosystem that enables developers to build, orchestrate, and deploy intelligent, action-oriented AI agents within enterprise environments.    These technologies collectively address the growing demand for AI systems that can:  Understand and respond to complex business queries  Execute multi-step operations across different systems  Maintain context and memory across interactions  Scale securely within enterprise infrastructure Installation and Setup Prerequisites  Before beginning dev…
    Monitor OpenAI Agents SDK with Application Insights
    As AI agents become more prevalent in applications, monitoring their behavior and performance becomes crucial. In this blog post, we'll explore how to monitor the OpenAI Agents SDK using Azure Application Insights through OpenTelemetry integration. Enhancing OpenAI Agents with OpenTelemetry The OpenAI Agents SDK provides powerful capabilities for building agent-based applications. By default, the SDK doesn't emit OpenTelemetry data, as noted in GitHub issue #18. This presents an opportunity to extend the SDK's functionality with robust observability features. Adding OpenTelemetry integration enables you to: Track agent interactions across distributed systems Monitor performance metrics in production Gain insights into agent behaviour Seamlessly integrate with existing observability plat…  ( 35 min )
    Monitor OpenAI Agents SDK with Application Insights
    As AI agents become more prevalent in applications, monitoring their behavior and performance becomes crucial. In this blog post, we'll explore how to monitor the OpenAI Agents SDK using Azure Application Insights through OpenTelemetry integration. Enhancing OpenAI Agents with OpenTelemetry The OpenAI Agents SDK provides powerful capabilities for building agent-based applications. By default, the SDK doesn't emit OpenTelemetry data, as noted in GitHub issue #18. This presents an opportunity to extend the SDK's functionality with robust observability features. Adding OpenTelemetry integration enables you to: Track agent interactions across distributed systems Monitor performance metrics in production Gain insights into agent behaviour Seamlessly integrate with existing observability plat…
  • Open

    Building a DeepSeek Extension for GitHub Copilot in VS Code
    DeepSeek has been getting a lot of buzz lately, and with a little setup, you can start using it today in GitHub Copilot within VS Code. In this post, I’ll walk you through how to install and run a VS Code extension I built, so you can take advantage of DeepSeek right on your machine. With this extension, you can use “@deepseek” to explore the deepseek-coder model. It’s powered by Ollama, enabling seamless, fully offline interactions with DeepSeek models—giving you a local coding assistant that prioritizes privacy and performance.   In a future post I'll walk you through the extension code and explain how to call models hosted locally using Ollama. Feel free to subscribe to get notified. Features and Benefits Open-Source and Extendable As an open-source project, the DeepSeek for GitHub Copi…  ( 26 min )
    Building a DeepSeek Extension for GitHub Copilot in VS Code
    DeepSeek has been getting a lot of buzz lately, and with a little setup, you can start using it today in GitHub Copilot within VS Code. In this post, I’ll walk you through how to install and run a VS Code extension I built, so you can take advantage of DeepSeek right on your machine. With this extension, you can use “@deepseek” to explore the deepseek-coder model. It’s powered by Ollama, enabling seamless, fully offline interactions with DeepSeek models—giving you a local coding assistant that prioritizes privacy and performance.   In a future post I'll walk you through the extension code and explain how to call models hosted locally using Ollama. Feel free to subscribe to get notified. Features and Benefits Open-Source and Extendable As an open-source project, the DeepSeek for GitHub Copi…
  • Open

    AI Agents: Key Principles and Guidelines - Part 3
    Hi everyone, Shivam Goyal here! In the previous posts (Part 1, Part 2—links provided at the end), we covered the basics of AI agents and explored available frameworks. This week, we'll focus on a crucial aspect: designing user-centric agentic systems, drawing specifically from the repository's section on agentic design patterns. The Challenge of Agentic Design Building effective AI agents isn't just about technical prowess but understanding and catering to the user experience. We need agents that empower users without being intrusive or confusing. This post introduces a set of UX design principles to guide this process. Agentic Design Principles: Space, Time, and Core These principles are categorized into three key areas: Agent (Space): Defining the Agent's Environment Connecting, Not Col…  ( 24 min )
    AI Agents: Key Principles and Guidelines - Part 3
    Hi everyone, Shivam Goyal here! In the previous posts (Part 1, Part 2—links provided at the end), we covered the basics of AI agents and explored available frameworks. This week, we'll focus on a crucial aspect: designing user-centric agentic systems, drawing specifically from the repository's section on agentic design patterns. The Challenge of Agentic Design Building effective AI agents isn't just about technical prowess but understanding and catering to the user experience. We need agents that empower users without being intrusive or confusing. This post introduces a set of UX design principles to guide this process. Agentic Design Principles: Space, Time, and Core These principles are categorized into three key areas: Agent (Space): Defining the Agent's Environment Connecting, Not Col…
  • Open

    Unlock the power of your Iceberg data in OneLake
    Microsoft OneLake is the single, unified, logical data lake that allows your entire organization to store, manage, and analyze data in one place. It provides seamless integration with various data sources and engines, making it easier to derive insights and drive innovation. At the most recent Microsoft Build conference, we announced the integration effort between … Continue reading “Unlock the power of your Iceberg data in OneLake”  ( 6 min )

  • Open

    Model Context Protocol (MCP): Integrating Azure OpenAI for Enhanced Tool Integration and Prompting
    Model Context Protocol serves as a critical communication bridge between AI models and external systems, enabling AI assistants to interact directly with various services through a standardized interface. This protocol was designed to address the inherent limitations of standalone AI models by providing them with pathways to access real-time data, perform actions in external systems, and leverage specialized tools beyond their built-in capabilities. The fundamental architecture of MCP consists of client-server communication where the AI model (client) can send requests to specialized servers that handle specific service integrations, process these requests, and return formatted results that the AI can incorporate into its responses. This design pattern enables AI systems to maintain their …  ( 37 min )
    Model Context Protocol (MCP): Integrating Azure OpenAI for Enhanced Tool Integration and Prompting
    Model Context Protocol serves as a critical communication bridge between AI models and external systems, enabling AI assistants to interact directly with various services through a standardized interface. This protocol was designed to address the inherent limitations of standalone AI models by providing them with pathways to access real-time data, perform actions in external systems, and leverage specialized tools beyond their built-in capabilities. The fundamental architecture of MCP consists of client-server communication where the AI model (client) can send requests to specialized servers that handle specific service integrations, process these requests, and return formatted results that the AI can incorporate into its responses. This design pattern enables AI systems to maintain their …

  • Open

    Customer Case Study: Announcing the Microsoft Semantic Kernel Couchbase Connector
    We’re thrilled to announce the launch of the Semantic Kernel Couchbase Vector Store Connector for .NET developers, created through our strategic partnership with Microsoft’s Semantic Kernel team. This powerful out-of-the-box connector transforms how developers integrate vector search capabilities into their AI applications. What sets this connector apart is how it harnesses Couchbase’s distributed NoSQL platform […] The post Customer Case Study: Announcing the Microsoft Semantic Kernel Couchbase Connector appeared first on Semantic Kernel.  ( 25 min )

  • Open

    How to build Tool-calling Agents with Azure OpenAI and Lang Graph
    Introducing  MyTreat Our demo is a fictional website that shows customers their total bill in dollars, but they have the option of getting the total bill in their local currencies. The button sends a request to the Node.js service and a response is simply returned from our Agent given the tool it chooses. Let’s dive in and understand how this works from a broader perspective.     Prerequisites An active Azure subscription. You can sign up for a free trial here or get $100 worth of credits on Azure every year if you are a student. A GitHub account (not necessarily) Node.js LTS 18 + VS Code installed (or your favorite IDE) Basic knowledge of HTML, CSS, JS   Creating an Azure OpenAI Resource Go over to your browser and key in portal.azure.com to access the Microsoft Azure Portal. Over ther…  ( 33 min )
    How to build Tool-calling Agents with Azure OpenAI and Lang Graph
    Introducing  MyTreat Our demo is a fictional website that shows customers their total bill in dollars, but they have the option of getting the total bill in their local currencies. The button sends a request to the Node.js service and a response is simply returned from our Agent given the tool it chooses. Let’s dive in and understand how this works from a broader perspective.     Prerequisites An active Azure subscription. You can sign up for a free trial here or get $100 worth of credits on Azure every year if you are a student. A GitHub account (not necessarily) Node.js LTS 18 + VS Code installed (or your favorite IDE) Basic knowledge of HTML, CSS, JS   Creating an Azure OpenAI Resource Go over to your browser and key in portal.azure.com to access the Microsoft Azure Portal. Over ther…
  • Open

    Enhancing API Security: Implementing OAuth 2.0 with PKCE in API Management
    In the modern digital era, securing APIs is essential. OAuth 2.0 is a trusted method for managing access, and the Proof Key for Code Exchange (PKCE) adds an extra layer of security, especially for mobile and single-page applications.    This blog will walk you through implementing OAuth 2.0 with PKCE in Azure API Management (APIM) to enhance security and prevent code interception attacks.     Why PKCE:   PKCE mitigates the risk of authorization code interception by using a dynamically generated secret instead of a static client secret. It works by introducing:   Code Verifier: A randomly generated string by the client.   Code Challenge: A hashed version of the code verifier, sent during authorization.   Code Exchange: The client sends the original code verifier to validate the request…  ( 33 min )
    Enhancing API Security: Implementing OAuth 2.0 with PKCE in API Management
    In the modern digital era, securing APIs is essential. OAuth 2.0 is a trusted method for managing access, and the Proof Key for Code Exchange (PKCE) adds an extra layer of security, especially for mobile and single-page applications.    This blog will walk you through implementing OAuth 2.0 with PKCE in Azure API Management (APIM) to enhance security and prevent code interception attacks.     Why PKCE:   PKCE mitigates the risk of authorization code interception by using a dynamically generated secret instead of a static client secret. It works by introducing:   Code Verifier: A randomly generated string by the client.   Code Challenge: A hashed version of the code verifier, sent during authorization.   Code Exchange: The client sends the original code verifier to validate the request…
  • Open

    The Future of AI: Customizing AI Agents with the Semantic Kernel Agent Framework
    Today we’re excited to promote a recent blog from AI Platform focused on customizing AI agents with the Semantic Kernel agent framework. Read the entire blog post: here The Future of AI blog series is an evolving collection of posts from the AI Futures team in collaboration with subject matter experts across Microsoft. In this […] The post The Future of AI: Customizing AI Agents with the Semantic Kernel Agent Framework appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Markdown for large text fields (private preview)
    Adding Markdown capabilities to the work item is a long-standing request. We introduced Markdown for comments in early 2024, but due to the rollout of the New Boards Hub, we put the feature on hold. Today, we’re excited to announce a private preview for Markdown support in large text fields! 🎉 🦄 How it works […] The post Markdown for large text fields (private preview) appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Microsoft 365 Certification control spotlight: Data in transit
    Ensure your Microsoft Teams app, Microsoft 365 add-in, or agent meets the latest compliance standards with Microsoft 365 Certification. The post Microsoft 365 Certification control spotlight: Data in transit appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Introducing Azure AI Foundry — Everything you need for AI development
    Create agentic solutions quickly and efficiently with Azure AI Foundry. Choose the right models, ground your agents with knowledge, and seamlessly integrate AI into your development workflow — from early experimentation to production. Test, optimize, and deploy with built-in evaluation and management tools. See how to leverage the Azure AI Foundry SDK to code and orchestrate intelligent agents, monitor performance with tracing and assessments, and streamline DevOps with production-ready management. Yina Arenas, from the Azure AI Foundry team, shares its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle. Access models to power your agents. The model catalog in Azure AI Foundry gives you access to thousands of AI models, including…  ( 50 min )
    Introducing Azure AI Foundry — Everything you need for AI development
    Create agentic solutions quickly and efficiently with Azure AI Foundry. Choose the right models, ground your agents with knowledge, and seamlessly integrate AI into your development workflow — from early experimentation to production. Test, optimize, and deploy with built-in evaluation and management tools. See how to leverage the Azure AI Foundry SDK to code and orchestrate intelligent agents, monitor performance with tracing and assessments, and streamline DevOps with production-ready management. Yina Arenas, from the Azure AI Foundry team, shares its extensive capabilities as a unified platform that supports you throughout the entire AI development lifecycle. Access models to power your agents. The model catalog in Azure AI Foundry gives you access to thousands of AI models, including…

  • Open

    Using Azure AI Agents with Semantic Kernel in .NET and Python
    Today we’re excited to dive into Semantic Kernel and Azure AI Agents. There are additional details about using an AzureAIAgent within Semantic Kernel covered in our documentation here. Azure AI Agents are powerful tools for developers seeking to integrate AI capabilities into their applications. In this blog post, we’ll explore how to utilize Azure AI […] The post Using Azure AI Agents with Semantic Kernel in .NET and Python appeared first on Semantic Kernel.  ( 24 min )
    Customer Case Story: Creating a Semantic Kernel Agent for Automated GitHub Code Reviews
    Today I want to welcome a guest author to our Semantic Kernel blog, Rasmus Wulff Jensen, to cover how he’s created a Semantic Kernel agent for automated GitHub code review. We’ll turn it over to Rasmus to dive in. Introduction If you work in software development, you know that Code reviews are an essential part […] The post Customer Case Story: Creating a Semantic Kernel Agent for Automated GitHub Code Reviews appeared first on Semantic Kernel.  ( 26 min )
  • Open

    External data sharing enhancements out now
    It’s been almost a year since we announced the external data sharing in preview at Fabric Conference 2024, and later that year it was made generally available at Ignite 2024. We’ve listened to customer feedback and have continued to improve the functionality of external data sharing. Many new feature enhancements have been delivered since the … Continue reading “External data sharing enhancements out now”  ( 6 min )
    Spatial queries in Fabric Data Warehouse
    Spatial data has become increasingly important in various fields, from urban planning and environmental monitoring to transportation and logistics. Fabric Data Warehouse offers spatial functionalities that enable you to query and analyze spatial data efficiently. In this blog post, we will delve into the spatial capabilities in the Fabric Data Warehouse and demonstrate how to … Continue reading “Spatial queries in Fabric Data Warehouse”  ( 7 min )
  • Open

    Capture a JVM heap dump for Java apps running on Azure Container Apps
    Overview Capturing a JVM heap dump is one of the most widely used debugging techniques for troubleshooting memory issues in Java applications. It might be not that straight-forward when it comes to cloud services, but in Azure Container Apps, you can do a JVM heap dump easily with the help of the debug console. Connect to the debug console of a running Azure Container Apps instance With the Azure CLI and the latest Azure Container Apps extension installed, run the following command to connect to the debug console. az containerapp debug \ --resource-group \ --name To use JDK built-in tools to capture a JVM heap dump, run the following command to install the JDK after connected to the debug console: root [ / ]# setup-jdk Then, use arrow up (↑) and down (…  ( 25 min )
    Capture a JVM heap dump for Java apps running on Azure Container Apps
    Overview Capturing a JVM heap dump is one of the most widely used debugging techniques for troubleshooting memory issues in Java applications. It might be not that straight-forward when it comes to cloud services, but in Azure Container Apps, you can do a JVM heap dump easily with the help of the debug console. Connect to the debug console of a running Azure Container Apps instance With the Azure CLI and the latest Azure Container Apps extension installed, run the following command to connect to the debug console. az containerapp debug \ --resource-group \ --name To use JDK built-in tools to capture a JVM heap dump, run the following command to install the JDK after connected to the debug console: root [ / ]# setup-jdk Then, use arrow up (↑) an…
  • Open

    What’s new in Microsoft Dev Box
    This post covers the latest features and enhancements in Microsoft Dev Box. These updates introduce advanced customization capabilities, improved security, performance optimizations, and better developer experience to make Dev Box even more efficient. Last update: 03/12/2025 In this post: Config-as-workflow improvements Enhanced user provided customizations Developer onboarding & experience Streamlined and flexible onboarding for enterprises […] The post What’s new in Microsoft Dev Box appeared first on Develop from the cloud.  ( 24 min )
  • Open

    Use Azure OpenAI and APIM with the OpenAI Agents SDK
    The OpenAI Agents SDK provides a powerful framework for building intelligent AI assistants with specialised capabilities. In this blog post, I'll demonstrate how to integrate Azure OpenAI Service and Azure API Management (APIM) with the OpenAI Agents SDK to create a banking assistant system with specialised agents. Key Takeaways: Learn how to connect the OpenAI Agents SDK to Azure OpenAI Service Understand the differences between direct Azure OpenAI integration and using Azure API Management Implement tracing with the OpenAI Agents SDK for monitoring and debugging Create a practical banking application with specialized agents and handoff capabilities The OpenAI Agents SDK The OpenAI Agents SDK is a powerful toolkit that enables developers to create AI agents with specialised capabilitie…  ( 35 min )
    Use Azure OpenAI and APIM with the OpenAI Agents SDK
    The OpenAI Agents SDK provides a powerful framework for building intelligent AI assistants with specialised capabilities. In this blog post, I'll demonstrate how to integrate Azure OpenAI Service and Azure API Management (APIM) with the OpenAI Agents SDK to create a banking assistant system with specialised agents. Key Takeaways: Learn how to connect the OpenAI Agents SDK to Azure OpenAI Service Understand the differences between direct Azure OpenAI integration and using Azure API Management Implement tracing with the OpenAI Agents SDK for monitoring and debugging Create a practical banking application with specialized agents and handoff capabilities The OpenAI Agents SDK The OpenAI Agents SDK is a powerful toolkit that enables developers to create AI agents with specialised capabilitie…
    RAG Time Journey 2: Data ingestion and search practices for the ultimate RAG retrieval system
    Introduction This is the second post for RAG Time, a 7-part educational series on retrieval-augmented generation (RAG). Read the first post of this series and access all videos and resources in our Github repo. Journey 2 covers indexing and retrieval techniques for RAG: Data ingestion approaches: use Azure AI Search to upload, extract, and process documents using Azure Blob Storage, Document Intelligence, and integrated vectorization. Keyword and vector search: compare traditional keyword matching with vector search Hybrid search: how to apply keyword and vector search techniques with Reciprocal Rank Fusion (RRF) for better quality results across more use cases. Semantic ranker and query rewriting: See how reordering results using semantic scoring and enhancing queries through rewriting c…  ( 38 min )
    RAG Time Journey 2: Data ingestion and search practices for the ultimate RAG retrieval system
    Introduction This is the second post for RAG Time, a 7-part educational series on retrieval-augmented generation (RAG). Read the first post of this series and access all videos and resources in our Github repo. Journey 2 covers indexing and retrieval techniques for RAG: Data ingestion approaches: use Azure AI Search to upload, extract, and process documents using Azure Blob Storage, Document Intelligence, and integrated vectorization. Keyword and vector search: compare traditional keyword matching with vector search Hybrid search: how to apply keyword and vector search techniques with Reciprocal Rank Fusion (RRF) for better quality results across more use cases. Semantic ranker and query rewriting: See how reordering results using semantic scoring and enhancing queries through rewriting c…
  • Open

    New Python Driver for SQL Server and Azure SQL!
    We’re thrilled to announce the alpha release of our new open-source Python driver for Microsoft SQL Server and the Azure SQL family, now available on GitHub at mssql-python. The post New Python Driver for SQL Server and Azure SQL! appeared first on Microsoft for Python Developers Blog.  ( 21 min )

  • Open

    Azure Monitor Network Security Perimeter - Features available in 56 Public Cloud Regions
    What is Network Security Perimeter? The Network Security Perimeter is a feature designed to enhance the security of Azure PaaS resources by creating a logical network isolation boundary. This allows Azure PaaS resources to communicate within an explicit trusted boundary, ensuring that external access is limited based on network controls defined across all Private Link Resources within the perimeter. Azure Monitor - Network Security Perimeter - Public Cloud Regions - Update We are pleased to announce the expansion of Network Security Perimeter features in Azure Monitor services from 6 to 56 Azure regions. This significant milestone enables us to reach a broader audience and serve a larger customer base. It underscores our continuous growth and dedication to meeting the security needs of our…  ( 23 min )
    Azure Monitor Network Security Perimeter - Features available in 56 Public Cloud Regions
    What is Network Security Perimeter? The Network Security Perimeter is a feature designed to enhance the security of Azure PaaS resources by creating a logical network isolation boundary. This allows Azure PaaS resources to communicate within an explicit trusted boundary, ensuring that external access is limited based on network controls defined across all Private Link Resources within the perimeter. Azure Monitor - Network Security Perimeter - Public Cloud Regions - Update We are pleased to announce the expansion of Network Security Perimeter features in Azure Monitor services from 6 to 56 Azure regions. This significant milestone enables us to reach a broader audience and serve a larger customer base. It underscores our continuous growth and dedication to meeting the security needs of our…
  • Open

    Azure Platform Metrics for AKS Control Plane Monitoring
    Azure Kubernetes Service (AKS) now offers free platform metrics for monitoring your control plane components. This enhancement provides essential insights into the availability and performance of managed control plane components, such as the API server and etcd. In this blog post, we'll explore these new metrics and demonstrate how to leverage them to ensure the health and performance of your AKS clusters. What's New? Previously, detailed control plane metrics were only available through the paid Azure Managed Prometheus feature. Now, these metrics are automatically collected for free for all AKS clusters and are available for creating metric alerts. This democratizes access to critical monitoring data and helps all AKS users maintain more reliable Kubernetes environments. Available Contro…  ( 29 min )
    Azure Platform Metrics for AKS Control Plane Monitoring
    Azure Kubernetes Service (AKS) now offers free platform metrics for monitoring your control plane components. This enhancement provides essential insights into the availability and performance of managed control plane components, such as the API server and etcd. In this blog post, we'll explore these new metrics and demonstrate how to leverage them to ensure the health and performance of your AKS clusters. What's New? Previously, detailed control plane metrics were only available through the paid Azure Managed Prometheus feature. Now, these metrics are automatically collected for free for all AKS clusters and are available for creating metric alerts. This democratizes access to critical monitoring data and helps all AKS users maintain more reliable Kubernetes environments. Available Contro…
    Superfast using Web App and Managed Identity to invoke Function App triggers
    TOC Introduction Setup References   1. Introduction Many enterprises prefer not to use App Keys to invoke Function App triggers, as they are concerned that these fixed strings might be exposed. This method allows you to invoke Function App triggers using Managed Identity for enhanced security. I will provide examples in both Bash and Node.js.   2. Setup 1. Create a Linux Python 3.11 Function App   1.1. Configure Authentication to block unauthenticated callers while allowing the Web App’s Managed Identity to authenticate. Identity Provider Microsoft Choose a tenant for your application and it's users Workforce Configuration App registration type Create Name [automatically generated] Client Secret expiration [fit-in your business purpose] Supported Account Type Any Micr…  ( 25 min )
    Superfast using Web App and Managed Identity to invoke Function App triggers
    TOC Introduction Setup References   1. Introduction Many enterprises prefer not to use App Keys to invoke Function App triggers, as they are concerned that these fixed strings might be exposed. This method allows you to invoke Function App triggers using Managed Identity for enhanced security. I will provide examples in both Bash and Node.js.   2. Setup 1. Create a Linux Python 3.11 Function App   1.1. Configure Authentication to block unauthenticated callers while allowing the Web App’s Managed Identity to authenticate. Identity Provider Microsoft Choose a tenant for your application and it's users Workforce Configuration App registration type Create Name [automatically generated] Client Secret expiration [fit-in your business purpose] Supported Account Type Any Micr…
    Getting Started with Linux WebJobs on App Service
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting a runtime stack (does not matter for this example…we’ll do a more in-depth sample later), create your App Service Web App. Next, we’ll add a basic WebJob to our app. Create WebJob Script Before we do anything else, let’s…  ( 23 min )
    Getting Started with Linux WebJobs on App Service
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All app service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service webapp. If you already have an App Service Web App, skip to the next step Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting a runtime stack (does not matter for this example…we’ll do a more in-depth sample later), create your App Service Web App. Next, we’ll add a basic WebJob to our app. Create WebJob Script Before we do anything else, let’s…
  • Open

    Create your own QA RAG Chatbot with LangChain.js + Azure OpenAI Service
    Demo: Mpesa for Business Setup QA RAG Application In this tutorial we are going to build a Question-Answering RAG Chat Web App. We utilize Node.js and HTML, CSS, JS. We also incorporate Langchain.js + Azure OpenAI + MongoDB Vector Store (MongoDB Search Index). Get a quick look below. Note: Documents and illustrations shared here are for demo purposes only and Microsoft or its products are not part of Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine. What you will need: An active Azure subscription, get Azure for Student for free or get started with Azure for 12 months free. VS Code Basic knowledge in JavaScript (not a must) Access to Azure OpenAI, click here if you don't have access. Create a MongoDB ac…  ( 31 min )
    Create your own QA RAG Chatbot with LangChain.js + Azure OpenAI Service
    Demo: Mpesa for Business Setup QA RAG Application In this tutorial we are going to build a Question-Answering RAG Chat Web App. We utilize Node.js and HTML, CSS, JS. We also incorporate Langchain.js + Azure OpenAI + MongoDB Vector Store (MongoDB Search Index). Get a quick look below. Note: Documents and illustrations shared here are for demo purposes only and Microsoft or its products are not part of Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine. What you will need: An active Azure subscription, get Azure for Student for free or get started with Azure for 12 months free. VS Code Basic knowledge in JavaScript (not a must) Access to Azure OpenAI, click here if you don't have access. Create a MongoDB ac…
  • Open

    Extending flexibility: default checkbox changes on tenant settings for SQL database in Fabric
    In our ongoing effort to enhance the visibility, accessibility, and efficiency of SQL database in Fabric, we are making a change that ensures organizations can make an informed decision before default enablement takes effect. We have changed the timeline for when SQL database will be enabled by default. Initially, we planned to roll out the … Continue reading “Extending flexibility: default checkbox changes on tenant settings for SQL database in Fabric”  ( 5 min )
  • Open

    GitHub Copilot Chat now available in public preview for Eclipse
    Today, GitHub Copilot Chat is available in public preview for Eclipse!  This release follows the initial public preview of GitHub Copilot in Eclipse, which only supported code completions, and is available for all Eclipse users with access to GitHub Copilot.   After installing or updating GitHub Copilot for Eclipse, click on  in the bottom right corner […] The post GitHub Copilot Chat now available in public preview for Eclipse appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Customer Case Study: INCM transforms legal accessibility with an AI Search Assistant
    Customer Case Study: INCM transforms legal accessibility with an AI Search Assistant The Imprensa Nacional-Casa da Moeda (INCM) is responsible for managing and publishing Portugal’s Diário da República (Official Gazette of the Republic of Portugal), which includes essential information for understanding laws, regulations, and legal processes. The quantity of information and the complex language used […] The post Customer Case Study: INCM transforms legal accessibility with an AI Search Assistant appeared first on Semantic Kernel.  ( 25 min )
  • Open

    March Patches for Azure DevOps Server
    Today we are releasing patches that impact our self-hosted product, Azure DevOps Server. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps Server download page. The following versions of […] The post March Patches for Azure DevOps Server appeared first on Azure DevOps Blog.  ( 23 min )
    New Boards Hub Update
    We’ve reached a major milestone in the rollout of New Boards Hub this week by making it the default experience for all organizations and users. While many users can still temporarily switch back if they encounter a blocking issue, our telemetry shows that 97% of users are staying on New Boards without reverting. This is […] The post New Boards Hub Update appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Fabric Espresso – Episodes about Data Integration and Data Engineering in Microsoft Fabric
    For the past 1.5 years, the Microsoft Fabric Product Group Product Managers have been publishing a YouTube series featuring deep dives into Microsoft Fabric’s features. These episodes cover both technical functionalities and real-world scenarios, providing insights into the product roadmap and the people driving innovation. With over 80+ episodes, the series serves as a valuable … Continue reading “Fabric Espresso – Episodes about Data Integration and Data Engineering in Microsoft Fabric “  ( 6 min )
  • Open

    Building AI Agents on edge devices using Ollama + Phi-4-mini Function Calling
    The new Phi-4-mini and Phi-4-multimodal now support Function Calling. This feature enables the models to connect with external tools and APIs. By deploying Phi-4-mini and Phi-4-multimodal with Function Calling capabilities on edge devices, we can achieve local expansion of knowledge capabilities and enhance their task execution efficiency. This blog will focus on how to use Phi-4-mini's Function Calling capabilities to build efficient AI Agents on edge devices.   What‘s Function Calling How it works First we need to learn how Function Calling works Tool Integration: Function Calling allows LLM/SLM to interact with external tools and APIs, such as weather APIs, databases, or other services. Function Definition: Defines a function (tool) that LLM/SLM can call, specifying its name, paramete…  ( 34 min )
    Building AI Agents on edge devices using Ollama + Phi-4-mini Function Calling
    The new Phi-4-mini and Phi-4-multimodal now support Function Calling. This feature enables the models to connect with external tools and APIs. By deploying Phi-4-mini and Phi-4-multimodal with Function Calling capabilities on edge devices, we can achieve local expansion of knowledge capabilities and enhance their task execution efficiency. This blog will focus on how to use Phi-4-mini's Function Calling capabilities to build efficient AI Agents on edge devices.   What‘s Function Calling How it works First we need to learn how Function Calling works Tool Integration: Function Calling allows LLM/SLM to interact with external tools and APIs, such as weather APIs, databases, or other services. Function Definition: Defines a function (tool) that LLM/SLM can call, specifying its name, paramete…
    AI Agents: Exploring Agentic Frameworks - Part 2
    Hi everyone,  Shivam Goyal here again! In the first post, we introduced the fundamental concepts of AI agents. Now, we'll get practical and explore Microsoft's agentic frameworks: AutoGen, Semantic Kernel, and Azure AI Agent Service. This post will equip you to choose the right tool for your AI agent projects. Why AI Agent Frameworks? AI agent frameworks take AI further than traditional frameworks by enabling dynamic interactions between agents and their environment. They offer: Agent Collaboration and Coordination: Build sophisticated multi-agent systems where agents can work together seamlessly, sharing information and coordinating actions to achieve complex goals. Task Automation and Management: Streamline and automate intricate workflows, distributing tasks efficiently among multiple …  ( 27 min )
    AI Agents: Exploring Agentic Frameworks - Part 2
    Hi everyone,  Shivam Goyal here again! In the first post, we introduced the fundamental concepts of AI agents. Now, we'll get practical and explore Microsoft's agentic frameworks: AutoGen, Semantic Kernel, and Azure AI Agent Service. This post will equip you to choose the right tool for your AI agent projects. Why AI Agent Frameworks? AI agent frameworks take AI further than traditional frameworks by enabling dynamic interactions between agents and their environment. They offer: Agent Collaboration and Coordination: Build sophisticated multi-agent systems where agents can work together seamlessly, sharing information and coordinating actions to achieve complex goals. Task Automation and Management: Streamline and automate intricate workflows, distributing tasks efficiently among multiple …
  • Open

    Use AI for Free with GitHub Models and TypeScript! 💸💸💸
    Artificial Intelligence is becoming more accessible to developers. However, one of the biggest challenges remains the cost of advanced model APIs, such as GPT-4o and many others. Fortunately, GitHub Models is here to change the game! Now, you can experiment with AI for free, without needing a paid API key or downloading large models on your local machine. In this article, we will explain in detail what GitHub Models is and how to use it for free with TypeScript in a practical project. We chose the Microblog AI Remix project as an example, an open-source microblog with AI-powered features. We will explore the structure of this project and provide a step-by-step guide on how to integrate GitHub Models, replacing the need for paid LLMs. This includes before and after code comparisons. We will…  ( 35 min )
    Use AI for Free with GitHub Models and TypeScript! 💸💸💸
    Artificial Intelligence is becoming more accessible to developers. However, one of the biggest challenges remains the cost of advanced model APIs, such as GPT-4o and many others. Fortunately, GitHub Models is here to change the game! Now, you can experiment with AI for free, without needing a paid API key or downloading large models on your local machine. In this article, we will explain in detail what GitHub Models is and how to use it for free with TypeScript in a practical project. We chose the Microblog AI Remix project as an example, an open-source microblog with AI-powered features. We will explore the structure of this project and provide a step-by-step guide on how to integrate GitHub Models, replacing the need for paid LLMs. This includes before and after code comparisons. We will…
  • Open

    Keeping the Conversation Flowing: Managing Context with Semantic Kernel Python
    In the dynamic field of conversational AI, managing coherent and contextually meaningful interactions between humans and digital assistants poses increasingly complex challenges. As dialogue lengths extend, maintaining full conversational context becomes problematic due to token constraints and memory limitations inherent to large language models (LLMs). These constraints not only degrade conversational clarity but also compromise […] The post Keeping the Conversation Flowing: Managing Context with Semantic Kernel Python appeared first on Semantic Kernel.  ( 25 min )

  • Open

    G3J Learn Semantic Kernel Show – A Deep Dive in Korean! | 세계로 뻗어갑니다: “G3J Learn Semantic Kernel” 쇼 – 한국어로 배우는 Semantic Kernel!
    Global Expansion – “G3J Learn Semantic Kernel” Show – A Deep Dive in Korean! Localization Increases Demand Following the success of this multi-language delivery, we quickly noticed a surge in demand for localized content. Developers from different parts of the world have expressed interest in diving deeper into Semantic Kernel, and we couldn’t be more […] The post G3J Learn Semantic Kernel Show – A Deep Dive in Korean! | 세계로 뻗어갑니다: “G3J Learn Semantic Kernel” 쇼 – 한국어로 배우는 Semantic Kernel! appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Speed Up OpenAI Embedding By 4x With This Simple Trick!
    In today’s fast-paced world of AI applications, optimizing performance should be one of your top priorities. This guide walks you through a simple yet powerful way to reduce OpenAI embedding response sizes by 75%—cutting them from 32 KB to just 8 KB per request. By switching from float32 to base64 encoding in your Retrieval-Augmented Generation (RAG) system, you can achieve a 4x efficiency boost, minimizing network overhead, saving costs and dramatically improving responsiveness.  Let's consider the following scenario.  Use Case: RAG Application Processing a 10-Page PDF  A user interacts with a RAG-powered application that processes a 10-page PDF and uses OpenAI embedding models to make the document searchable from an LLM. The goal is to show how optimizing embedding response size impacts …  ( 39 min )
    Speed Up OpenAI Embedding By 4x With This Simple Trick!
    In today’s fast-paced world of AI applications, optimizing performance should be one of your top priorities. This guide walks you through a simple yet powerful way to reduce OpenAI embedding response sizes by 75%—cutting them from 32 KB to just 8 KB per request. By switching from float32 to base64 encoding in your Retrieval-Augmented Generation (RAG) system, you can achieve a 4x efficiency boost, minimizing network overhead, saving costs and dramatically improving responsiveness.  Let's consider the following scenario.  Use Case: RAG Application Processing a 10-Page PDF  A user interacts with a RAG-powered application that processes a 10-page PDF and uses OpenAI embedding models to make the document searchable from an LLM. The goal is to show how optimizing embedding response size impacts …
  • Open

    Unlocking the Power of Azure Container Apps in 1 Minute Video
    Azure Container Apps provides a seamless way to build, deploy, and scale cloud-native applications without the complexity of managing infrastructure. Whether you’re developing microservices, APIs, or AI-powered applications, this fully managed service enables you to focus on writing code while Azure handles scalability, networking, and deployments.   In this blog post, we explore five essential aspects of Azure Container Apps—each highlighted in a one-minute video. From intelligent applications and secure networking to effortless deployments and rollbacks, these insights will help you maximize the capabilities of serverless containers on Azure. Azure Container Apps - in 1 Minute Azure Container Apps is a fully managed platform designed for cloud-native applications, providing effortless de…  ( 27 min )
    Unlocking the Power of Azure Container Apps in 1 Minute Video
    Azure Container Apps provides a seamless way to build, deploy, and scale cloud-native applications without the complexity of managing infrastructure. Whether you’re developing microservices, APIs, or AI-powered applications, this fully managed service enables you to focus on writing code while Azure handles scalability, networking, and deployments.   In this blog post, we explore five essential aspects of Azure Container Apps—each highlighted in a one-minute video. From intelligent applications and secure networking to effortless deployments and rollbacks, these insights will help you maximize the capabilities of serverless containers on Azure. Azure Container Apps - in 1 Minute Azure Container Apps is a fully managed platform designed for cloud-native applications, providing effortless de…
  • Open

    Build a Hyperlight C guest to securely execute JavaScript
    This article will show you how to create a “guest” application that uses the Hyperlight library and have fun with some JavaScript. The post Build a Hyperlight C guest to securely execute JavaScript appeared first on Microsoft Open Source Blog.  ( 14 min )
  • Open

    Changes on SharePoint Framework (SPFx) permission grants in Microsoft Entra ID
    Changes on the SPFx permission management model in the Microsoft Entra ID. The post Changes on SharePoint Framework (SPFx) permission grants in Microsoft Entra ID appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Transitioning from Non-managed to Managed WordPress on App Service Linux
    Introduction We've received numerous queries about WordPress on App Service, and we love it! Your feedback helps us improve our offerings. A common theme is the challenges faced with non-managed WordPress setups. Our managed WordPress offering on App Service is designed to be highly performant, secure, and seamlessly integrated with Azure services like MySQL flexible server, CDN/Front Door, Blob Storage, VNET, and Azure Communication Services. While some specific cases might require a custom WordPress setup, most users benefit significantly from our managed service, enjoying better performance, security, easier management, and cost savings. If you're experiencing performance issues or problems with stack updates, you might be using a non-managed WordPress setup. This could happen if you di…  ( 32 min )
    Transitioning from Non-managed to Managed WordPress on App Service Linux
    Introduction We've received numerous queries about WordPress on App Service, and we love it! Your feedback helps us improve our offerings. A common theme is the challenges faced with non-managed WordPress setups. Our managed WordPress offering on App Service is designed to be highly performant, secure, and seamlessly integrated with Azure services like MySQL flexible server, CDN/Front Door, Blob Storage, VNET, and Azure Communication Services. While some specific cases might require a custom WordPress setup, most users benefit significantly from our managed service, enjoying better performance, security, easier management, and cost savings. In this article, we'll explore how to identify if you're using the managed offering and how to transition if you're not.In this article, we will learn …
  • Open

    Announcing AI functions for seamless data engineering with GenAI
    With AI functions, you can harness the power of Fabric’s native large-language model endpoint for seamless summarization, classification, text generation, and much more—all with a single line of code.  ( 6 min )
  • Open

    Introducing Model Mondays - Build Your AI Model IQ With This Weekly Hands-on Series
    Model Mondays is an intiative to help you build your knowledge of generative AI models through 5-minute news recaps and 15-minute model spotlights each week. Register and watch the livestream each Monday at 1:30pm ET https://aka.ms/model-mondays/rsvp Join the conversation on Discord each Friday at 1:30pm EThttps://aka.ms/model-mondays/chat  Catch up with replays, resources and more at any time on GitHubhttps://aka.ms/model-mondays  The generative AI model landscape is getting increasingly crowded. It feels like there are new models being released daily, even before we've had time to understand what the existing set of models can do for us. There are over 1800 models today on the Azure AI Foundry model catalog - and  over 1 million community-created model variants on the Hugging Face mod…  ( 27 min )
    Introducing Model Mondays - Build Your AI Model IQ With This Weekly Hands-on Series
    Model Mondays is an intiative to help you build your knowledge of generative AI models through 5-minute news recaps and 15-minute model spotlights each week. Register and watch the livestream each Monday at 1:30pm ET https://aka.ms/model-mondays/rsvp Join the conversation on Discord each Friday at 1:30pm EThttps://aka.ms/model-mondays/chat  Catch up with replays, resources and more at any time on GitHubhttps://aka.ms/model-mondays  The generative AI model landscape is getting increasingly crowded. It feels like there are new models being released daily, even before we've had time to understand what the existing set of models can do for us. There are over 1800 models today on the Azure AI Foundry model catalog - and  over 1 million community-created model variants on the Hugging Face mod…

  • Open

    Integration of AWS Bedrock Agents in Semantic Kernel
    Overview of AWS Bedrock Agents AWS Bedrock Agents provide a managed service that facilitates the experimentation and rapid deployment of AI agents. Users can leverage proprietary AWS models as well as a diverse selection of models from various providers available on AWS Bedrock. Semantic Kernel’s Integration with AWS Bedrock Semantic Kernel now integrates with AWS […] The post Integration of AWS Bedrock Agents in Semantic Kernel appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Introducing Model Mondays – Your AI Model Power-Up!
    Build your Model IQ! Join us for an 8-week power series where we cut through the noise and bring you the most relevant models, insights, and hands-on demos. Every Monday, we’ll round up the latest AI model news, do a deep dive into a key model, and help you stay ahead of the curve.  Then join the community on the Azure AI Discord #model-mondays channel to continue these conversations. We'll wrap up the week with a Model Mondays watercooler chat every Friday where we'll revisit the news and demos - and give you a chance to ask questions or show-and-tell us your model-driven experiences. What’s in it for you? Stay updated – A 5-min news blast on the latest AI model breakthroughs Get hands-on – A 15-min deep dive into a must-know model each week Ask the experts – A live Q&A to answer your b…  ( 22 min )
    Introducing Model Mondays – Your AI Model Power-Up!
    Build your Model IQ! Join us for an 8-week power series where we cut through the noise and bring you the most relevant models, insights, and hands-on demos. Every Monday, we’ll round up the latest AI model news, do a deep dive into a key model, and help you stay ahead of the curve.  Then join the community on the Azure AI Discord #model-mondays channel to continue these conversations. We'll wrap up the week with a Model Mondays watercooler chat every Friday where we'll revisit the news and demos - and give you a chance to ask questions or show-and-tell us your model-driven experiences. What’s in it for you? Stay updated – A 5-min news blast on the latest AI model breakthroughs Get hands-on – A 15-min deep dive into a must-know model each week Ask the experts – A live Q&A to answer your burning questions  Season Kickoff: March 10 We'll kick off Season 1 this Monday, March 10. Join us to get the 5-minute roundup of news from the past week, and a 15-minute deep-dive where we go Hands-on With GitHub Models! Why put the spotlight on GitHub Models? Because the future of AI is open, accessible, and in your hands! We’ll show you how to explore, experiment, and leverage GitHub models with just a GitHub account. By the end of this episode you should have an intuitive sense for: What GitHub Models are, and why they matter. How to get started with your first GitHub Model. How to compare models for evaluating responses. How to go from catalog (explore) to code (develop) How to use Azure Inference API for easy model swap Register Here: https://aka.ms/model-mondays/rsvp    Be a part of the conversation!  Watch Live on Microsoft Reactor – RSVP Now  Join the AI community – Discord Office Hours every Friday – Join Here  Get exclusive resources – Explore the GitHub Repo  Don’t fall behind—jump in and level up your AI game with Model Mondays! #ModelMondays
  • Open

    Join the Migrate to Innovate Summit to build your AI-ready cloud foundation
    At the Migrate to Innovate Summit, you’ll learn how Azure provides an optimized platform to fully embrace AI while addressing your most pressing business priorities by maximizing ROI, performance, and resilience. This event will focus on how to migrate and modernize your infrastructure, data and applications to Azure to help position your organization for innovation, efficiency, growth, and long-term success.  For organizations looking to build competitive advantage through AI, building the cloud foundation needed to support the innovation is critical. However, we also hear that organizations are trying to balance the need to embrace the latest innovations with the need to meet current business challenges. Whether it’s optimizing costs, safeguarding against security threats, or controlling…  ( 28 min )
    Join the Migrate to Innovate Summit to build your AI-ready cloud foundation
    At the Migrate to Innovate Summit, you’ll learn how Azure provides an optimized platform to fully embrace AI while addressing your most pressing business priorities by maximizing ROI, performance, and resilience. This event will focus on how to migrate and modernize your infrastructure, data and applications to Azure to help position your organization for innovation, efficiency, growth, and long-term success.  For organizations looking to build competitive advantage through AI, building the cloud foundation needed to support the innovation is critical. However, we also hear that organizations are trying to balance the need to embrace the latest innovations with the need to meet current business challenges. Whether it’s optimizing costs, safeguarding against security threats, or controlling…

  • Open

    Take full control of your AI APIs with Azure API Management Gateway
    This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. In this article, we will shed some light on capabilities in API Management, which are designed to help you govern and manage Generative AI APIs, ensuring that you are building resilient and secure intelligent applications.   But why exactly do I need API Management for my AI APIs? Common challenges when implementing Gen AI-Powered solutions include: - Quota, (calculated in tokens-per-minute (TPM)), allocation across multiple client apps, How to control and track token consumption for all users, Mechanisms to attribute costs to specific client apps, activities, or user…  ( 27 min )
    Take full control of your AI APIs with Azure API Management Gateway
    This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. In this article, we will shed some light on capabilities in API Management, which are designed to help you govern and manage Generative AI APIs, ensuring that you are building resilient and secure intelligent applications.   But why exactly do I need API Management for my AI APIs? Common challenges when implementing Gen AI-Powered solutions include: - Quota, (calculated in tokens-per-minute (TPM)), allocation across multiple client apps, How to control and track token consumption for all users, Mechanisms to attribute costs to specific client apps, activities, or user…
  • Open

    Construyendo una Aplicación Web con Inteligencia Artificial usando Python
    En la segunda sesión del GitHub Copilot Bootcamp LATAM, organizado por Microsoft Reactor, el ingeniero Manuel Ortiz, Embajador de Microsoft Learn y líder comunitario en GitHub, guió a desarrolladores en la creación de una aplicación web con capacidades de inteligencia artificial. Este taller práctico combinó fundamentos de desarrollo backend en Python con técnicas avanzadas de integración de modelos de lenguaje de Azure OpenAI. Introducción a Azure Open AI Azure Open AI es una colaboración entre Microsoft y OpenAI que permite a los desarrolladores integrar modelos avanzados de inteligencia artificial en sus aplicaciones utilizando la infraestructura de Azure. Esto ofrece acceso a modelos poderosos como GPT-4, que pueden ser utilizados para una variedad de tareas, desde procesamiento de le…  ( 25 min )
    Construyendo una Aplicación Web con Inteligencia Artificial usando Python
    En la segunda sesión del GitHub Copilot Bootcamp LATAM, organizado por Microsoft Reactor, el ingeniero Manuel Ortiz, Embajador de Microsoft Learn y líder comunitario en GitHub, guió a desarrolladores en la creación de una aplicación web con capacidades de inteligencia artificial. Este taller práctico combinó fundamentos de desarrollo backend en Python con técnicas avanzadas de integración de modelos de lenguaje de Azure OpenAI. Introducción a Azure Open AI Azure Open AI es una colaboración entre Microsoft y OpenAI que permite a los desarrolladores integrar modelos avanzados de inteligencia artificial en sus aplicaciones utilizando la infraestructura de Azure. Esto ofrece acceso a modelos poderosos como GPT-4, que pueden ser utilizados para una variedad de tareas, desde procesamiento de le…
  • Open

    Announcing DeepSeek-V3 on Azure AI Foundry and GitHub
    Building on the interest of DeepSeek-R1, launched one month ago, we are pleased to announce the availability of DeepSeek-V3 on Azure AI Foundry model catalog with token-based billing and GitHub Models free experience. This latest iteration is part of our commitment to enable powerful, efficient, and accessible AI solutions through the breadth and diversity of choice in the model catalog. DeepSeek-V3 is set to empower organizations across industries to unlock value from their data. What is DeepSeek-V3? As DeepSeek mentions, ​DeepSeek-V3 is an advanced large language model (LLM), that has gained significant attention for its performance and cost-effectiveness. DeepSeek's innovations highlight the potential for achieving high-level AI performance with fewer resources, challenging existing ind…  ( 24 min )
    Announcing DeepSeek-V3 on Azure AI Foundry and GitHub
    Building on the interest of DeepSeek-R1, launched one month ago, we are pleased to announce the availability of DeepSeek-V3 on Azure AI Foundry model catalog with token-based billing and GitHub Models free experience. This latest iteration is part of our commitment to enable powerful, efficient, and accessible AI solutions through the breadth and diversity of choice in the model catalog. DeepSeek-V3 is set to empower organizations across industries to unlock value from their data. What is DeepSeek-V3? As DeepSeek mentions, ​DeepSeek-V3 is an advanced large language model (LLM), that has gained significant attention for its performance and cost-effectiveness. DeepSeek's innovations highlight the potential for achieving high-level AI performance with fewer resources, challenging existing ind…
  • Open

    Talk to your agents! Introducing the Realtime API’s in Semantic Kernel!
    Introducing Realtime Agents in Semantic Kernel for Python! With release 1.23.0 of the Python version of Semantic Kernel we are introducing a new set of clients for interacting with the realtime multi-modal API’s of OpenAI and Azure OpenAI. They provide a abstracted approach to connecting to those services, adding your tools and running apps that […] The post Talk to your agents! Introducing the Realtime API’s in Semantic Kernel! appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Azure App Service Auto-Heal: Capturing Relevant Data During Performance Issues
    Introduction Azure App Service is a powerful platform that simplifies the deployment and management of web applications. However, maintaining application performance and availability is crucial. When performance issues arise, identifying the root cause can be challenging. This is where Auto-Heal in Azure App Service becomes a game-changer. Auto-Heal is a diagnostic and recovery feature that allows you to proactively detect and mitigate issues affecting your application’s performance. It enables automatic corrective actions and helps capture vital diagnostic data to troubleshoot problems efficiently. In this blog, we’ll explore how Auto-Heal works, its configuration, and how it assists in diagnosing performance bottlenecks. What is Auto-Heal in Azure App Service? Auto-Heal is a self-healing…  ( 39 min )
    Azure App Service Auto-Heal: Capturing Relevant Data During Performance Issues
    Introduction Azure App Service is a powerful platform that simplifies the deployment and management of web applications. However, maintaining application performance and availability is crucial. When performance issues arise, identifying the root cause can be challenging. This is where Auto-Heal in Azure App Service becomes a game-changer. Auto-Heal is a diagnostic and recovery feature that allows you to proactively detect and mitigate issues affecting your application’s performance. It enables automatic corrective actions and helps capture vital diagnostic data to troubleshoot problems efficiently. In this blog, we’ll explore how Auto-Heal works, its configuration, and how it assists in diagnosing performance bottlenecks. What is Auto-Heal in Azure App Service? Auto-Heal is a self-healing…
  • Open

    RAG Time Journey 1: RAG and knowledge retrieval fundamentals
    Introduction Farzad here! Welcome to the first post in RAG Time, a multi-part, multi-format educational series covering all things Retrieval-Augmented Generation (RAG). This series consists of five distinct journeys, each comprising a blog post and a video exploring a key RAG concept, including practical guidance on leveraging Azure AI Search. Visit our RAG Time repo to access the complete series and supporting resources. Series Overview: RAG Time This series consists of 5 journeys, that cover various aspects of a RAG system: RAG fundamentals Building the ultimate retrieval system Optimize your vector index at scale RAG for all your data Hero use cases   Journey 1 Overview: RAG Fundamentals In Journey 1, we'll introduce core RAG concepts and explore Azure AI Search's role: What is RAG …  ( 29 min )
    RAG Time Journey 1: RAG and knowledge retrieval fundamentals
    Introduction Farzad here! Welcome to the first post in RAG Time, a multi-part, multi-format educational series covering all things Retrieval-Augmented Generation (RAG). This series consists of five distinct journeys, each comprising a blog post and a video exploring a key RAG concept, including practical guidance on leveraging Azure AI Search. Visit our RAG Time site to access the complete series and supporting resources. Series Overview: RAG Time This series consists of 5 journeys, that cover various aspects of a RAG system: RAG fundamentals Retrieval: Building the ultimate retrieval system System performance: Optimize your vector index at scale Data pipeline and indexing: RAG for all your data Hero use cases Journey 1 Overview: RAG Fundamentals In Journey 1, we'll introduce core RAG co…

  • Open

    Effortlessly Integrate xAI’s Grok with Semantic Kernel
    For Semantic Kernel users, integrating xAI’s Grok API using the OpenAI connector is a breeze thanks to its compatibility with OpenAI’s API format. This tutorial focuses on setting up Grok in your Semantic Kernel projects with minimal fuss, using C# and Python examples. Why Grok? Grok, built by xAI, is a powerful AI model, offers […] The post Effortlessly Integrate xAI’s Grok with Semantic Kernel appeared first on Semantic Kernel.  ( 24 min )
    AutoGen and Semantic Kernel, Part 2
    Following on from our blog post a couple months ago: Microsoft’s Agentic AI Frameworks: AutoGen and Semantic Kernel, Microsoft’s agentic AI story is evolving at a steady pace. Both Azure AI Foundry’s Semantic Kernel and AI Frontier’s AutoGen are designed to empower developers to build advanced multi-agent systems. The AI Frontier’s team is charging ahead […] The post AutoGen and Semantic Kernel, Part 2 appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy
    Introductions Ollama WebUI  is a streamlined interface for deploying and interacting with open-source large language models (LLMs) like Llama 3 and Mistral, enabling users to manage models, test them via a ChatGPT-like chat environment, and integrate them into applications through Ollama’s local API. While it excels for self-hosted models on platforms like Azure VMs, it does not natively support Azure OpenAI API endpoints—OpenAI’s proprietary models (e.g., GPT-4) remain accessible only through OpenAI’s managed API. However, tools like LiteLLM bridge this gap, allowing developers to combine Ollama-hosted models with OpenAI’s API in hybrid workflows, while maintaining compliance and cost-efficiency. This setup empowers users to leverage both self-managed open-source models and cloud-based AI…  ( 30 min )
    Deploy Open Web UI on Azure VM via Docker: A Step-by-Step Guide with Custom Domain Setup.
    Introductions Open Web UI (often referred to as "Ollama Web UI" in the context of LLM frameworks like Ollama) is an open-source, self-hostable interface designed to simplify interactions with large language models (LLMs) such as GPT-4, Llama 3, Mistral, and others. It provides a user-friendly, browser-based environment for deploying, managing, and experimenting with AI models, making advanced language model capabilities accessible to developers, researchers, and enthusiasts without requiring deep technical expertise. This article will delve into the step-by-step configurations on hosting OpenWeb UI on Azure. Requirements: Azure Portal Account - For students you can claim $USD100 Azure Cloud credits from this URL. Azure Virtual Machine - with a Linux of any distributions installed. Domain …  ( 29 min )
    Step-by-step: Integrate Ollama Web UI to use Azure Open AI API with LiteLLM Proxy
    Introductions Ollama WebUI  is a streamlined interface for deploying and interacting with open-source large language models (LLMs) like Llama 3 and Mistral, enabling users to manage models, test them via a ChatGPT-like chat environment, and integrate them into applications through Ollama’s local API. While it excels for self-hosted models on platforms like Azure VMs, it does not natively support Azure OpenAI API endpoints—OpenAI’s proprietary models (e.g., GPT-4) remain accessible only through OpenAI’s managed API. However, tools like LiteLLM bridge this gap, allowing developers to combine Ollama-hosted models with OpenAI’s API in hybrid workflows, while maintaining compliance and cost-efficiency. This setup empowers users to leverage both self-managed open-source models and cloud-based AI…
    Deploy Open Web UI on Azure VM via Docker: A Step-by-Step Guide with Custom Domain Setup.
    Introductions Open Web UI (often referred to as "Ollama Web UI" in the context of LLM frameworks like Ollama) is an open-source, self-hostable interface designed to simplify interactions with large language models (LLMs) such as GPT-4, Llama 3, Mistral, and others. It provides a user-friendly, browser-based environment for deploying, managing, and experimenting with AI models, making advanced language model capabilities accessible to developers, researchers, and enthusiasts without requiring deep technical expertise. This article will delve into the step-by-step configurations on hosting OpenWeb UI on Azure. Requirements: Azure Portal Account - For students you can claim $USD100 Azure Cloud credits from this URL. Azure Virtual Machine - with a Linux of any distributions installed. Domain …
  • Open

    Azure Load Testing Celebrates Two Years with Two Exciting Announcements!
    [Update on March 18, 2025: AI-powered load test generation, referred to in the third section below, is in preview now!] Azure Load Testing (ALT) has been an essential tool for performance testing, enabling customers across industries to run thousands of tests every month. We are thrilled to celebrate its second anniversary with two major announcements. In this blog post, we will delve into the remarkable capabilities of ALT and reveal the exciting developments that will redefine load testing for you. Why do customers love ALT? ALT is a powerful service designed to ensure that your applications can handle high traffic and perform optimally under peak load. Here are some key features of ALT: Large-scale tests: Simulate over 100,000 concurrent users. Long-duration tests: Run tests for up to …  ( 25 min )
    Azure Load Testing Celebrates Two Years with Two Exciting Announcements!
    Azure Load Testing (ALT) has been an essential tool for performance testing, enabling customers across industries to run thousands of tests every month. We are thrilled to celebrate its second anniversary with two major announcements. In this blog post, we will delve into the remarkable capabilities of ALT and reveal the exciting developments that will redefine load testing for you. Why do customers love ALT? ALT is a powerful service designed to ensure that your applications can handle high traffic and perform optimally under peak load. Here are some key features of ALT: Large-scale tests: Simulate over 100,000 concurrent users. Long-duration tests: Run tests for up to 24 hours. Multi-region tests: Simultaneously simulate users from any of the 20 supported regions. Continuous tests: Catc…
    Using NVIDIA Triton Inference Server on Azure Container Apps
    TOC Introduction to Triton System Architecture Architecture Focus of This Tutorial Setup Azure Resources File and Directory Structure ARM Template ARM Template From Azure Portal Testing Azure Container Apps Conclusion References   1. Introduction to Triton Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability.   A key feature of Triton is its support for multiple m…  ( 31 min )
    Using NVIDIA Triton Inference Server on Azure Container Apps
    TOC Introduction to Triton System Architecture Architecture Focus of This Tutorial Setup Azure Resources File and Directory Structure ARM Template ARM Template From Azure Portal Testing Azure Container Apps Conclusion References   1. Introduction to Triton Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability.   A key feature of Triton is its support for multiple m…
  • Open

    Sidecars in Azure App Service: A Deep Dive
    Sidecars in Azure App Service: A Deep Dive  ( 5 min )
  • Open

    Unleashing Innovation: AI Agent Development with Azure AI Foundry
    Creating AI agents using Azure AI Foundry is a game-changer for businesses and developers looking to harness the power of artificial intelligence. These AI agents can automate complex tasks, provide insightful data analysis, and enhance customer interactions, leading to increased efficiency and productivity. By leveraging Azure AI Foundry, organizations can build, deploy, and manage AI solutions with ease, ensuring they stay competitive in an ever-evolving technological landscape. The importance of creating AI agents lies in their ability to transform operations, drive innovation, and deliver personalized experiences, making them an invaluable asset in today's digital age. Let's take a look at how to create an agent on Azure AI Foundry. We'll explore some of the features and experiment wit…  ( 34 min )
    Unleashing Innovation: AI Agent Development with Azure AI Foundry
    Creating AI agents using Azure AI Foundry is a game-changer for businesses and developers looking to harness the power of artificial intelligence. These AI agents can automate complex tasks, provide insightful data analysis, and enhance customer interactions, leading to increased efficiency and productivity. By leveraging Azure AI Foundry, organizations can build, deploy, and manage AI solutions with ease, ensuring they stay competitive in an ever-evolving technological landscape. The importance of creating AI agents lies in their ability to transform operations, drive innovation, and deliver personalized experiences, making them an invaluable asset in today's digital age. Let's take a look at how to create an agent on Azure AI Foundry. We'll explore some of the features and experiment wit…
  • Open

    Python in Visual Studio Code – March 2025 Release
    The March 2025 release of the Python and Jupyter extensions for Visual Studio Code are now available. This month's updates include improvements to shell integration, a new setting to change auto test discovery file patterns, inline values shown on hover, and more! The post Python in Visual Studio Code – March 2025 Release appeared first on Python.  ( 24 min )

  • Open

    Integrating Model Context Protocol Tools with Semantic Kernel: A Step-by-Step Guide
    This post describes how to use Model Context Protocol tools with Semantic Kernel. Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. MCP standardizes the connection between AI models and various data sources and tools. The Model Context Protocol is significant because it enhances the way AI models […] The post Integrating Model Context Protocol Tools with Semantic Kernel: A Step-by-Step Guide appeared first on Semantic Kernel.  ( 26 min )
  • Open

    Azure APIM Cost Rate Limiting with Cosmos & Flex Functions
    Azure API Management (APIM) provides built-in rate limiting policies, but implementing sophisticated Dollar cost quota management for Azure OpenAI services requires a more tailored approach. This solution combines Azure Functions, Cosmos DB, and stored procedures to implement cost-based quota management with automatic renewal periods. Architecture Client → APIM (with RateLimitConfig) → Azure Function Proxy → Azure OpenAI                                                                                  ↓                                                              Cosmos DB (quota tracking) Technical Implementation 1. Rate Limit Configuration in APIM The rate limiting configuration is injected into the request body by APIM using a policy fragment. Here's an example for a basic $5 quota: <set…  ( 28 min )
    Azure APIM Cost Rate Limiting with Cosmos & Flex Functions
    Implementing Rate Limiting for Azure OpenAI with Cosmos DB Azure API Management (APIM) provides built-in rate limiting policies, but implementing sophisticated quota management for Azure OpenAI services requires a more tailored approach. This solution combines Azure Functions, Cosmos DB, and stored procedures to implement cost-based quota management with automatic renewal periods. Architecture Client → APIM (with rate limit config) → Azure Function Proxy → Azure OpenAI ↓ Cosmos DB (quota tracking) Technical Implementation 1. Rate Limit Configuration in APIM The rate limiting configuration is injected into the request body by APIM using a policy fragment. Here's an example for a basic $5 daily quota: <set-variable name="rateLimitConfig" value="@{ var productId = context.Product.Id; …
  • Open

    Run Locust-based Tests in Azure Load Testing
    We are excited to announce support for Locust, a Python-based open-source performance testing framework, in Azure Load Testing. As a cloud-based and fully managed service for performance testing, Azure Load Testing helps you easily achieve high scale loads and quickly identify performance bottlenecks. We now support two load testing frameworks – Apache JMeter and Locust. You can use your existing Locust scripts and seamlessly leverage all the capabilities of Azure Load Testing. Locust is a developer friendly framework that lets you write code to create load test scripts as opposed to using GUI based test creation. You can check-in the scripts into your repos, seek peer feedback, and better maintain the scripts as they evolve – just like you would do for your product code. As for extensibil…  ( 29 min )
    Run Locust-based Tests in Azure Load Testing
    We are excited to announce support for Locust, a Python-based open-source performance testing framework, in Azure Load Testing. As a cloud-based and fully managed service for performance testing, Azure Load Testing helps you easily achieve high scale loads and quickly identify performance bottlenecks. We now support two load testing frameworks – Apache JMeter and Locust. You can use your existing Locust scripts and seamlessly leverage all the capabilities of Azure Load Testing. Locust is a developer friendly framework that lets you write code to create load test scripts as opposed to using GUI based test creation. You can check-in the scripts into your repos, seek peer feedback, and better maintain the scripts as they evolve – just like you would do for your product code. As for extensibil…
    Capture .NET Profiler Trace on the Azure App Service platform
    Summary The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" or "Collect Data only" and must select the instance to perform the trace on. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using Perf View, a tool available on GitHub, to id…  ( 53 min )
    Capture .NET Profiler Trace on the Azure App Service platform
    Summary The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" or "Collect Data only" and must select the instance to perform the trace on. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using Perf View, a tool available on GitHub, to id…
    Azure Load Testing: Price Drop and Usage Limits to Supercharge Your Testing
    Azure Load Testing is a fully managed service that allows you to simulate high traffic loads on your applications and websites to ensure they can handle real-world conditions. It helps you identify potential performance bottlenecks, improve scalability, and guarantee a seamless user experience. Whether you're scaling up your tests or simply looking for a way to manage costs, we've got you covered with two exciting updates. 🎉 Price Drop for Azure Load Testing We know how important it is to manage costs while scaling your load testing efforts, and we're here to make it easier. We are excited to announce a price reduction for Azure Load Testing effective March 1, 2025. This new pricing is designed to provide you with the most value for your investment, enabling you to load test at a more aff…  ( 26 min )
    Azure Load Testing: Price Drop and Usage Limits to Supercharge Your Testing
    Azure Load Testing is a fully managed service that allows you to simulate high traffic loads on your applications and websites to ensure they can handle real-world conditions. It helps you identify potential performance bottlenecks, improve scalability, and guarantee a seamless user experience. Whether you're scaling up your tests or simply looking for a way to manage costs, we've got you covered with two exciting updates. 🎉 Price Drop for Azure Load Testing We know how important it is to manage costs while scaling your load testing efforts, and we're here to make it easier. We are excited to announce a price reduction for Azure Load Testing effective March 1, 2025. This new pricing is designed to provide you with the most value for your investment, enabling you to load test at a more aff…
  • Open

    Prompt Engineering Simplified: AI Toolkit's Prompt Builder
    In the age of generative AI, crafting effective prompts is no longer a nice-to-have, it's a must-have.  Understanding how to communicate with these underlying models is the key to unlocking their true potential and getting the results we need. What are Prompts? Every time we want to communicate to the language model, we give set of instructions to these models, we refer to these inputs as Prompts. Prompts play a very crucial role while working with the GenAI models. The quality of a prompt directly impacts the output of GenAI models. Precise and well-crafted prompts are crucial for achieving desired results. What factors crafts an optimal Prompt? Crafting an optimal requires balancing clarity, specificity and context. Besides these, constraints are a critical factor in crafting effective p…  ( 50 min )
    Prompt Engineering Simplified: AI Toolkit's Prompt Builder
    In the age of generative AI, crafting effective prompts is no longer a nice-to-have, it's a must-have.  Understanding how to communicate with these underlying models is the key to unlocking their true potential and getting the results we need. What are Prompts? Every time we want to communicate to the language model, we give set of instructions to these models, we refer to these inputs as Prompts. Prompts play a very crucial role while working with the GenAI models. The quality of a prompt directly impacts the output of GenAI models. Precise and well-crafted prompts are crucial for achieving desired results. What factors crafts an optimal Prompt? Crafting an optimal requires balancing clarity, specificity and context. Besides these, constraints are a critical factor in crafting effective p…
  • Open

    Learn how to develop innovative AI solutions with updated Azure skilling paths
    The rapid evolution of generative AI is reshaping how organizations operate, innovate, and deliver value. Professionals who develop expertise in generative AI development, prompt engineering, and AI lifecycle management are increasingly valuable to organizations looking to harness these powerful capabilities while ensuring responsible and effective implementation. In this blog, we’re excited to share our newly refreshed series of Plans on Microsoft Learn that aim to supply your team with the tools and knowledge to leverage the latest AI technologies, including: Find the best model for your generative AI solution with Azure AI Foundry Create agentic AI solutions by using Azure AI Foundry Build secure and responsible AI solutions and manage generative AI lifecycles From sophisticated AI ag…  ( 30 min )
    Learn how to develop innovative AI solutions with updated Azure skilling paths
    The rapid evolution of generative AI is reshaping how organizations operate, innovate, and deliver value. Professionals who develop expertise in generative AI development, prompt engineering, and AI lifecycle management are increasingly valuable to organizations looking to harness these powerful capabilities while ensuring responsible and effective implementation. In this blog, we’re excited to share our newly refreshed series of Plans on Microsoft Learn that aim to supply your team with the tools and knowledge to leverage the latest AI technologies, including: Find the best model for your generative AI solution with Azure AI Foundry Create agentic AI solutions by using Azure AI Foundry Build secure and responsible AI solutions and manage generative AI lifecycles From sophisticated AI ag…
  • Open

    Learn how to develop innovative AI solutions with updated Azure skilling paths
    The rapid evolution of generative AI is reshaping how organizations operate, innovate, and deliver value. Professionals who develop expertise in generative AI development, prompt engineering, and AI lifecycle management are increasingly valuable to organizations looking to harness these powerful capabilities while ensuring responsible and effective implementation. In this blog, we’re excited to share our newly refreshed series of Plans on Microsoft Learn that aim to supply your team with the tools and knowledge to leverage the latest AI technologies, including: Find the best model for your generative AI solution with Azure AI Foundry Create agentic AI solutions by using Azure AI Foundry Build secure and responsible AI solutions and manage generative AI lifecycles From sophisticated AI ag…  ( 30 min )
    Learn how to develop innovative AI solutions with updated Azure skilling paths
    The rapid evolution of generative AI is reshaping how organizations operate, innovate, and deliver value. Professionals who develop expertise in generative AI development, prompt engineering, and AI lifecycle management are increasingly valuable to organizations looking to harness these powerful capabilities while ensuring responsible and effective implementation. In this blog, we’re excited to share our newly refreshed series of Plans on Microsoft Learn that aim to supply your team with the tools and knowledge to leverage the latest AI technologies, including: Find the best model for your generative AI solution with Azure AI Foundry Create agentic AI solutions by using Azure AI Foundry Build secure and responsible AI solutions and manage generative AI lifecycles From sophisticated AI ag…
  • Open

    Operational Reporting with Microsoft Fabric Real-Time Intelligence
    Operational reporting and historical reporting serve distinct purposes in organizations. Historically, data teams have heavily leaned on providing historical reporting, as being able to report on the operational business processes has proved elusive.   As a result, organizations have created reports directly against the operational database for operational needs or spend significant effort trying to get … Continue reading “Operational Reporting with Microsoft Fabric Real-Time Intelligence”  ( 10 min )
  • Open

    Connect to any data with Shortcuts, Mirroring and Data Factory using Microsoft Fabric
    Easily access and unify your data for analytics and AI — no matter where it lives. With OneLake in Microsoft Fabric, you can connect to data across multiple clouds, databases, and formats without duplication. Use the OneLake catalog to quickly find and interact with your data, and let Copilot in Fabric help you transform and analyze it effortlessly. Eliminate barriers to working with your data using Shortcuts to virtualize external sources and Mirroring to keep databases and warehouses in sync — all without ETL. For deeper integration, leverage Data Factory’s 180+ connectors to bring in structured, unstructured, and real-time streaming data at scale. Maraki Ketema from the Microsoft Fabric team shows how to combine these methods, ensuring fast, reliable access to quality data for analytics…  ( 60 min )
    Connect to any data with Shortcuts, Mirroring and Data Factory using Microsoft Fabric
    Easily access and unify your data for analytics and AI — no matter where it lives. With OneLake in Microsoft Fabric, you can connect to data across multiple clouds, databases, and formats without duplication. Use the OneLake catalog to quickly find and interact with your data, and let Copilot in Fabric help you transform and analyze it effortlessly. Eliminate barriers to working with your data using Shortcuts to virtualize external sources and Mirroring to keep databases and warehouses in sync — all without ETL. For deeper integration, leverage Data Factory’s 180+ connectors to bring in structured, unstructured, and real-time streaming data at scale. Maraki Ketema from the Microsoft Fabric team shows how to combine these methods, ensuring fast, reliable access to quality data for analytics…

  • Open

    ‘no space left on device’ with Azure Container Apps
    This post will cover the message ‘no space on device’ that you may end up seeing when deploying Container Apps, or in other runtime scenarios.  ( 7 min )
  • Open

    Azure VMware Solution Broadcom VMSA-2025-0004 Remediation
    With continuous monitoring and security intelligence gathering, Microsoft ensures proactive identification and mitigation of security threats. By leveraging advanced analytics, Microsoft is able to detect vulnerabilities early, empowering organizations to stay ahead of potential risks and safeguard their digital assets effectively. Recently, Microsoft discovered a critical ESXi vulnerability and has been collaborating with Broadcom to develop and qualify a secure patch to address this issue.  With Microsoft’s commitment to the security of our platform and our improved lifecycle management process, we were able to quickly assemble a global team to work on the acceleration and validation of the ESXi 8.0 U2d Build 24585300 security patch. We have successfully qualified the security patch that…  ( 23 min )
    Azure VMware Solution Broadcom VMSA-2025-0004 Remediation
    With continuous monitoring and security intelligence gathering, Microsoft ensures proactive identification and mitigation of security threats. By leveraging advanced analytics, Microsoft is able to detect vulnerabilities early, empowering organizations to stay ahead of potential risks and safeguard their digital assets effectively. Recently, Microsoft discovered a critical ESXi vulnerability and has been collaborating with Broadcom to develop and qualify a secure patch to address this issue.  With Microsoft’s commitment to the security of our platform and our improved lifecycle management process, we were able to quickly assemble a global team to work on the acceleration and validation of the ESXi 8.0 U2d Build 24585300 security patch. We have successfully qualified the security patch that…
  • Open

    Introducing a new curriculum: Generative AI with JavaScript
    Introducing a new curriculum: Generative AI with JavaScript TLDR; This course on Generative AI with JavaScript aims to take you through a series of 5 lessons so that you can integrate Generative AI in your JavaScript apps.   You can use eithe GitHub Models or Azure Open AI, see setup section in below repo/curriculum To the curriculum     Read more here, Introducing a new curriculum: Generative AI with JavaScript | Microsoft Community Hub  ( 18 min )
    Introducing a new curriculum: Generative AI with JavaScript
    Introducing a new curriculum: Generative AI with JavaScript TLDR; This course on Generative AI with JavaScript aims to take you through a series of 5 lessons so that you can integrate Generative AI in your JavaScript apps.   You can use eithe GitHub Models or Azure Open AI, see setup section in below repo/curriculum To the curriculum     Read more here, Introducing a new curriculum: Generative AI with JavaScript | Microsoft Community Hub
  • Open

    Introducing a new curriculum: Generative AI with JavaScript
    Introducing the Generative AI with JavaScript Curriculum In just 5 lessons, this course equips you with the knowledge and skills to seamlessly integrate Generative AI into your JavaScript applications. Perfect for students and educators alike, it's time to dive in and bring your ideas to life with cutting-edge AI technology! To the curriculum Other resources Gen AI JavaScript videos: Watch the video series Gen AI JavaScript GitHub curriculum: Generative AI with JavaScript Why should I take this course? AI is top of mind for many developers, and Generative AI is a fascinating field that allows you to integrate AI models into your applications. This integration can help improve how your apps interact with users as the user can use natural language to interact with the app. Heres' some grea…  ( 24 min )
    Introducing a new curriculum: Generative AI with JavaScript
    Introducing the Generative AI with JavaScript Curriculum In just 5 lessons, this course equips you with the knowledge and skills to seamlessly integrate Generative AI into your JavaScript applications. Perfect for students and educators alike, it's time to dive in and bring your ideas to life with cutting-edge AI technology! To the curriculum Other resources Gen AI JavaScript videos: Watch the video series Gen AI JavaScript GitHub curriculum: Generative AI with JavaScript Why should I take this course? AI is top of mind for many developers, and Generative AI is a fascinating field that allows you to integrate AI models into your applications. This integration can help improve how your apps interact with users as the user can use natural language to interact with the app. Heres' some grea…
  • Open

    Guest Blog: LLMAgentOps Toolkit for Semantic Kernel
    Today the Semantic Kernel team is excited to welcome a guest author, Prabal Deb to share his work. LLMAgentOps Toolkit is repository that contains basic structure of LLM Agent based application built on top of the Semantic Kernel Python version. The toolkit is designed to be a starting point for data scientists and developers for experimentation […] The post Guest Blog: LLMAgentOps Toolkit for Semantic Kernel appeared first on Semantic Kernel.  ( 25 min )
  • Open

    AI in Action — How to build scalable RAG-enabled AI apps
    For developers, the emphasis on building intelligence into apps has never been clearer. Over the next three years, 92% of companies plan on investing in AI to achieve business outcomes like enhancing productivity and delivering better customer service. At Microsoft, developers and engineers are pushing the boundaries of AI at scale, crafting applications that harness […] The post AI in Action — How to build scalable RAG-enabled AI apps appeared first on Engineering@Microsoft.  ( 23 min )

  • Open

    Unlocking the Power of AI Agents: An Introductory Guide - Part 1
    Hi everyone! I'm Shivam Goyal, excited to start this blog series on AI agents. We'll explore the fundamentals and practical applications of this exciting field, using Microsoft's excellent AI Agents for Beginners GitHub repository as our guide. This first post lays the groundwork, focusing on core concepts and a simple implementation. The AI Agent System: More Than Just LLMs While Large Language Models (LLMs) are powerful, AI agents are complete systems that leverage LLMs to interact with and change their environment. This systemic view is crucial. An AI agent comprises: Environment: The context of the agent's operation (a simulated world, a database, a robot's surroundings). Sensors: How the agent perceives its environment (database queries, sensor readings). Actuators: The actions the a…  ( 25 min )
    Unlocking the Power of AI Agents: An Introductory Guide - Part 1
    Hi everyone! I'm Shivam Goyal, excited to start this blog series on AI agents. We'll explore the fundamentals and practical applications of this exciting field, using Microsoft's excellent AI Agents for Beginners GitHub repository as our guide. This first post lays the groundwork, focusing on core concepts and a simple implementation. The AI Agent System: More Than Just LLMs While Large Language Models (LLMs) are powerful, AI agents are complete systems that leverage LLMs to interact with and change their environment. This systemic view is crucial. An AI agent comprises: Environment: The context of the agent's operation (a simulated world, a database, a robot's surroundings). Sensors: How the agent perceives its environment (database queries, sensor readings). Actuators: The actions the a…
  • Open

    Public preview of SharePoint Framework 1.21 – First release of upcoming features
    We are excited to announce the first preview of the upcoming SharePoint Framework 1.21. This release will have updates for SharePoint, Viva Connections and Microsoft Teams experiences. The post Public preview of SharePoint Framework 1.21 – First release of upcoming features appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Queue-Based Load Leveling Pattern (Starring Phi4 and Azure Service Bus)
    In this blog, we’re exploring the Queue-Based Load Leveling pattern—a powerful method for smoothing out workload spikes Why Choose Queue-Based Load Leveling? In today’s fast-paced software environment, dynamic workloads are common. The Queue-Based Load Leveling pattern decouples the production and consumption of tasks by introducing a queue between them. This allows producers and consumers to work independently, […] The post Queue-Based Load Leveling Pattern (Starring Phi4 and Azure Service Bus) appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Get certified as an Azure AI Engineer (AI-102) this summer?
    For developers, the accreditation as an Azure AI Engineer—certified through the rigorous AI-102 exam—has become a golden ticket to career acceleration. It isn’t just about coding chatbots or fine-tuning machine learning models; it’s about gaining the confidence (for you and for your business) that you can wield Azure’s toolkits to configure AI solutions that augment human capability.   Before we dive in, if you’re planning to become certified as an Azure AI Engineer, you may find this Starter Learning Plan (AI 102) valuable—recently curated by a group of Microsoft experts, purposed for your success. We recommend adding it to your existing learning portfolio. It’s a light introduction that should take less than four hours, but it offers a solid glimpse into what to expect on your journey a…  ( 27 min )
    Get certified as an Azure AI Engineer (AI-102) this summer?
    For developers, the accreditation as an Azure AI Engineer—certified through the rigorous AI-102 exam—has become a golden ticket to career acceleration. It isn’t just about coding chatbots or fine-tuning machine learning models; it’s about gaining the confidence (for you and for your business) that you can wield Azure’s toolkits to configure AI solutions that augment human capability.   Before we dive in, if you’re planning to become certified as an Azure AI Engineer, you may find this Starter Learning Plan (AI 102) valuable—recently curated by a group of Microsoft experts, purposed for your success. We recommend adding it to your existing learning portfolio. It’s a light introduction that should take less than four hours, but it offers a solid glimpse into what to expect on your journey a…
  • Open

    Get certified as an Azure AI Engineer (AI-102) this summer?
    For developers, the accreditation as an Azure AI Engineer—certified through the rigorous AI-102 exam—has become a golden ticket to career acceleration. It isn’t just about coding chatbots or fine-tuning machine learning models; it’s about gaining the confidence (for you and for your business) that you can wield Azure’s toolkits to configure AI solutions that augment human capability.   Before we dive in, if you’re planning to become certified as an Azure AI Engineer, you may find this Starter Learning Plan (AI 102) valuable—recently curated by a group of Microsoft experts, purposed for your success. We recommend adding it to your existing learning portfolio. It’s a light introduction that should take less than four hours, but it offers a solid glimpse into what to expect on your journey a…  ( 27 min )
    Get certified as an Azure AI Engineer (AI-102) this summer?
    For developers, the accreditation as an Azure AI Engineer—certified through the rigorous AI-102 exam—has become a golden ticket to career acceleration. It isn’t just about coding chatbots or fine-tuning machine learning models; it’s about gaining the confidence (for you and for your business) that you can wield Azure’s toolkits to configure AI solutions that augment human capability.   Before we dive in, if you’re planning to become certified as an Azure AI Engineer, you may find this Starter Learning Plan (AI 102) valuable—recently curated by a group of Microsoft experts, purposed for your success. We recommend adding it to your existing learning portfolio. It’s a light introduction that should take less than four hours, but it offers a solid glimpse into what to expect on your journey a…
  • Open

    Automating Database Migration with Autogen 0.4
    Introduction Modern multi-agent AI systems use frameworks that allow for the orchestration of complex tasks through specialized agents.  Autogen is a Microsoft Research project designed to enhance scalability and efficiency of AI systems by leveraging layered and event-driven architecture. The framework relies on a three-layer architecture: The Core Layer acts as the fundamental infrastructure, handling basic communication and resource management. The AgentChat Layer provides high-level interaction capabilities. The Extensions Layer enables specialized functionalities. The Autogen project has seen significant interest given its potential to revolutionize AI development by moving beyond single-model systems to collaborative AI, where multiple specialized agents work together to solve comp…  ( 42 min )
    Automating Database Migration with Autogen 0.4
    Introduction Modern multi-agent AI systems use frameworks that allow for the orchestration of complex tasks through specialized agents.  Autogen is a Microsoft Research project designed to enhance scalability and efficiency of AI systems by leveraging layered and event-driven architecture. The framework relies on a three-layer architecture: The Core Layer acts as the fundamental infrastructure, handling basic communication and resource management. The AgentChat Layer provides high-level interaction capabilities. The Extensions Layer enables specialized functionalities. The Autogen project has seen significant interest given its potential to revolutionize AI development by moving beyond single-model systems to collaborative AI, where multiple specialized agents work together to solve comp…

  • Open

    Global AI Bootcamp Milan: in presenza a Reggio Emilia e online
    Nell’ambito dell’iniziativa Global AI Bootcamp 2025, siamo entusiasti di annunciare l’evento tenuto dall’unico chapter Italiano della Global AI Community  - Global AI Milan. Un evento imperdibile per sviluppatori, data scientist e appassionati di intelligenza artificiale.  🗓️ Quando? 12 marzo, dalle 14.00 alle 18.00 CET📍Dove? In presenza c/o Impresoft 4ward - Reggio Emilia & onlineUnisciti a noi, registrandoti sul sito ufficiale 👉🏼 Global AI Bootcamp 2025 - Milan | Global AI Milan - Global AI Community Guarda l'evento on-demand: Come funzionaL’evento si terrà in presenza e in streaming. La location è la sede Impresoft di Reggio Emilia, i posti sono limitati e quindi l’iscrizione in presenza è soggetta a conferma. Durante questa mezza giornata intensiva, i partecipanti avranno l'opport…  ( 22 min )
    Global AI Bootcamp Milan: in presenza a Reggio Emilia e online
    Nell’ambito dell’iniziativa Global AI Bootcamp 2025, siamo entusiasti di annunciare l’evento tenuto dall’unico chapter Italiano della Global AI Community  - Global AI Milan. Un evento imperdibile per sviluppatori, data scientist e appassionati di intelligenza artificiale.  🗓️ Quando? 12 marzo, dalle 14.00 alle 18.00 CET📍Dove? In presenza c/o Impresoft 4ward - Reggio Emilia & onlineUnisciti a noi, registrandoti sul sito ufficiale 👉🏼 Global AI Bootcamp 2025 - Milan | Global AI Milan - Global AI Community Come funzionaL’evento si terrà in presenza e in streaming. La location è la sede Impresoft di Reggio Emilia, i posti sono limitati e quindi l’iscrizione in presenza è soggetta a conferma. Durante questa mezza giornata intensiva, i partecipanti avranno l'opportunità di:•    Ascoltare keyn…

  • Open

    Azure DevOps Basic usage included with GitHub Enterprise
    Many customers want to use both GitHub and Azure DevOps together. Until now, unless you purchased Visual Studio subscriptions with GitHub Enterprise, you had to pay separately for both products. With the Sprint 252 release, Azure DevOps Basic usage rights are included with GitHub Enterprise Cloud. Your users access this benefit automatically when they login […] The post Azure DevOps Basic usage included with GitHub Enterprise appeared first on Azure DevOps Blog.  ( 21 min )
  • Open

    Building new AI skills for developers
    For this post, we’re focusing on learning new AI skills. We explore resources that will help developers take their AI skills (and their applications) to the next level. Whether you’re new to AI and don’t know where to get started, or you’re experienced but want to advance your skillset with some new tools and capabilities, we have resources that will get you there. Join a challenge, find a Microsoft Learn path, get info on the latest tools and updates, watch in-depth videos, join a live event for hands-on learning, and more.      How to develop AI Apps and Agents in Azure: A Visual Guide Move beyond proof-of-concept and build production-ready AI apps and agents in Azure. This visual guide walks you through the essential steps and decisions for building intelligent apps in Azure.    Join th…  ( 34 min )
    Building new AI skills for developers
    For this post, we’re focusing on learning new AI skills. We explore resources that will help developers take their AI skills (and their applications) to the next level. Whether you’re new to AI and don’t know where to get started, or you’re experienced but want to advance your skillset with some new tools and capabilities, we have resources that will get you there. Join a challenge, find a Microsoft Learn path, get info on the latest tools and updates, watch in-depth videos, join a live event for hands-on learning, and more.      How to develop AI Apps and Agents in Azure: A Visual Guide Move beyond proof-of-concept and build production-ready AI apps and agents in Azure. This visual guide walks you through the essential steps and decisions for building intelligent apps in Azure.    Join th…
  • Open

    Release the Agents! SK Agents Framework RC1
    Semantic Kernel Agent Framework Reaches Release Candidate 1 We’re excited to announce that with the release of Semantic Kernel 1.40 (.NET) and 1.22.0 (Python), we’re elevating the Semantic Kernel Agent Framework to Release Candidate 1. This marks a significant milestone in our journey toward providing a robust, versatile framework for building AI agents for enterprise […] The post Release the Agents! SK Agents Framework RC1 appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing Face Check with Microsoft Entra Verified ID
    Protect your organization from account takeover and hiring fraud as deepfake impersonation threats grow. With Microsoft Entra Verified ID, you can use Face Check to verify identities in real time against government-issued IDs like driver’s licenses and passports. Use Face Check with integrated solutions for new employee, guest or admin onboarding step-up authentication to access sensitive information securing common helpdesk-driven tasks, like user account recovery Setup is simple and has been designed so that both the enterprise and the person verifying their identity maintain control — without storing or passing biometric information like other face matching solutions. Join Ankur Patel, from the Microsoft Entra team, as he demonstrates how Face Check with Verified ID works and how to s…  ( 42 min )
    Introducing Face Check with Microsoft Entra Verified ID
    Protect your organization from account takeover and hiring fraud as deepfake impersonation threats grow. With Microsoft Entra Verified ID, you can use Face Check to verify identities in real time against government-issued IDs like driver’s licenses and passports. Use Face Check with integrated solutions for new employee, guest or admin onboarding step-up authentication to access sensitive information securing common helpdesk-driven tasks, like user account recovery Setup is simple and has been designed so that both the enterprise and the person verifying their identity maintain control — without storing or passing biometric information like other face matching solutions. Join Ankur Patel, from the Microsoft Entra team, as he demonstrates how Face Check with Verified ID works and how to s…

  • Open

    Avoiding hardcoding Java or Tomcat versions on Windows App Service
    This post covers information on why to avoid hardcoding Java or Tomcat versions in web.config’s with Java on Windows App Service.  ( 5 min )
  • Open

    Azure AI Speech text to speech Feb 2025 updates: New HD voices and more
    By Garfield He   Our dedication to enhancing Azure AI Speech voices remains steadfast, as we continually strive to make them more expressive and engaging. We are pleased to announce an upgraded HD version of our neural text-to-speech service for selected voices. This latest iteration further enhances expressiveness by incorporating emotion detection based on the input context. Our advanced technology employs acoustic and linguistic features to produce speech with rich, natural variations. It effectively detects emotional cues within the text and autonomously adjusts the voice's tone and style. With this enhancement, users can expect a more human-like speech pattern characterized by improved intonation, rhythm, and emotional expression. What is new? Public preview: updated 13 HD voices to …  ( 53 min )
    Azure AI Speech text to speech Feb 2025 updates: New HD voices and more
    By Garfield He   Our dedication to enhancing Azure AI Speech voices remains steadfast, as we continually strive to make them more expressive and engaging. We are pleased to announce an upgraded HD version of our neural text-to-speech service for selected voices. This latest iteration further enhances expressiveness by incorporating emotion detection based on the input context. Our advanced technology employs acoustic and linguistic features to produce speech with rich, natural variations. It effectively detects emotional cues within the text and autonomously adjusts the voice's tone and style. With this enhancement, users can expect a more human-like speech pattern characterized by improved intonation, rhythm, and emotional expression. What is new? Public preview: updated 13 HD voices to …
    Learn about Azure AI during the Global AI Bootcamp 2025
    The Global AI Bootcamp is here to connect you with a global community of developers, data scientists, and AI experts. This annual event brings together like-minded individuals to learn, share, and collaborate on the latest advancements in AI technology. Attendees can expect immersive workshops, insightful sessions, and networking opportunities that cater to all skill levels. This upcoming week, from Saturday, March 1st to Friday, March 7th, we have an exciting lineup of 29 bootcamps happening across 19 countries. With the rapid evolution of AI and its impact on various industries, there's no better time than now to enhance your skills and contribute to this transformative field.   Why You Should Attend With 135 bootcamps happening in 44 countries this year, the Global AI Bootcamp is the p…  ( 26 min )
    Learn about Azure AI during the Global AI Bootcamp 2025
    The Global AI Bootcamp is here to connect you with a global community of developers, data scientists, and AI experts. This annual event brings together like-minded individuals to learn, share, and collaborate on the latest advancements in AI technology. Attendees can expect immersive workshops, insightful sessions, and networking opportunities that cater to all skill levels. This upcoming week, from Saturday, March 1st to Friday, March 7th, we have an exciting lineup of 29 bootcamps happening across 19 countries. With the rapid evolution of AI and its impact on various industries, there's no better time than now to enhance your skills and contribute to this transformative field.   Why You Should Attend With 135 bootcamps happening in 44 countries this year, the Global AI Bootcamp is the p…
    Built-in Enterprise Readiness with Azure AI Agent Service
    Introduction  In today's rapidly evolving AI landscape, enterprises are increasingly seeking greater control and flexibility over their data and resources. A key barrier to enterprise adoption of AI technology is the concern over data protection. A Principal Technology Specialist supporting leading financial services companies at Microsoft explains,   “It doesn't matter how useful any particular technology is. If it doesn’t meet their stringent security requirements, financial services customers cannot and will not adopt it. It's critical that services are designed from the ground up with these requirements in mind.”  This post explores how Azure AI Agent Service addresses these concerns by offering:  Two distinct Standard Agent Setup configurations tailored to your organization's network…  ( 30 min )
    Announcing Provisioned Deployment for Azure OpenAI Service Fine-tuning
    You've fine-tuned your models to make your agents behave and speak how you'd like. You've scaled up your RAG application to meet customer demand. You've now got a good problem: users love the service but want it snappier and more responsive. Azure OpenAI Service now offers provisioned deployments for fine-tuned models, giving your applications predictable performance with predictable costs! 💡 What is Provisioned Throughput? If you're unfamiliar with Provisioned Throughput, it allows Azure OpenAI Service customers to purchase capacity in terms of performance needs instead of per-token. With fine-tuned deployments, it replaces both the hosting fee and the token-based billing of Standard and Global Standard (now in Public Preview) with a throughput-based capacity unit called provisioned thro…  ( 23 min )
    Built-in Enterprise Readiness with Azure AI Agent Service
    Introduction  In today's rapidly evolving AI landscape, enterprises are increasingly seeking greater control and flexibility over their data and resources. A key barrier to enterprise adoption of AI technology is the concern over data protection. A Principal Technology Specialist supporting leading financial services companies at Microsoft explains,   “It doesn't matter how useful any particular technology is. If it doesn’t meet their stringent security requirements, financial services customers cannot and will not adopt it. It's critical that services are designed from the ground up with these requirements in mind.”  This post explores how Azure AI Agent Service addresses these concerns by offering:  Two distinct Standard Agent Setup configurations tailored to your organization's network…
    Announcing Provisioned Deployment for Azure OpenAI Service Fine-tuning
    You've fine-tuned your models to make your agents behave and speak how you'd like. You've scaled up your RAG application to meet customer demand. You've now got a good problem: users love the service but want it snappier and more responsive. Azure OpenAI Service now offers provisioned deployments for fine-tuned models, giving your applications predictable performance with predictable costs! 💡 What is Provisioned Throughput? If you're unfamiliar with Provisioned Throughput, it allows Azure OpenAI Service customers to purchase capacity in terms of performance needs instead of per-token. With fine-tuned deployments, it replaces both the hosting fee and the token-based billing of Standard and Global Standard (now in Public Preview) with a throughput-based capacity unit called provisioned thro…
  • Open

    Global AI Bootcamp
    Are you ready to embark on an exhilarating journey into the world of Artificial Intelligence? The Global AI Bootcamp invites tech students and AI developers to join a vibrant global community of innovators, data scientists, and AI experts. This annual event is your gateway to cutting-edge advancements, where you can learn, share, and collaborate on the latest AI technologies. From Saturday, March 1st to Friday, March 7th, we have an action-packed schedule featuring 29 bootcamps across 19 countries. With the rapid evolution of AI shaping various industries, there's no better time to elevate your skills and make a meaningful impact in this dynamic field. Attendees can expect hands-on workshops, insightful sessions, and numerous networking opportunities designed for all skill levels. Don't mi…  ( 26 min )
    Global AI Bootcamp
    Are you ready to embark on an exhilarating journey into the world of Artificial Intelligence? The Global AI Bootcamp invites tech students and AI developers to join a vibrant global community of innovators, data scientists, and AI experts. This annual event is your gateway to cutting-edge advancements, where you can learn, share, and collaborate on the latest AI technologies. From Saturday, March 1st to Friday, March 7th, we have an action-packed schedule featuring 29 bootcamps across 19 countries. With the rapid evolution of AI shaping various industries, there's no better time to elevate your skills and make a meaningful impact in this dynamic field. Attendees can expect hands-on workshops, insightful sessions, and numerous networking opportunities designed for all skill levels. Don't mi…
    RAG Time: Ultimate Guide to Mastering RAG!
    Get ready to dive into the future of AI with RAG Time—a new, super engaging learning series designed just for you! 🎉 Join us every Wednesday at 9 AM PT from March 5 through April 2 on Microsoft Developer YouTube. RAG Time will unlock the full potential of Retrieval-Augmented Generation (RAG), helping you build smarter, more efficient AI systems. Best part? All the learning resources are available on GitHub. So don’t miss out! 👉 Mark your calendars for RAG Time, starting March 5th. See you there!  Every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best p…  ( 23 min )
    RAG Time: Ultimate Guide to Mastering RAG!
    Get ready to dive into the future of AI with RAG Time—a new, super engaging learning series designed just for you! 🎉 Join us every Wednesday at 9 AM PT from March 5 through April 2 on Microsoft Developer YouTube. RAG Time will unlock the full potential of Retrieval-Augmented Generation (RAG), helping you build smarter, more efficient AI systems. Best part? All the learning resources are available on GitHub. So don’t miss out! 👉 Mark your calendars for RAG Time, starting March 5th. See you there!  Every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best p…
  • Open

    Contributor Stories: Deepanshu Katara
    If you’ve ever engaged with the content on the Microsoft Learn platform, the material you utilized was likely written or co-authored by dedicated contributors. These contributors, often volunteers, generously offer their time and expertise to fill knowledge gaps within our content portfolio by suggesting valuable updates to our material, sharing their knowledge within the Microsoft community, and/or answering questions on the Q&A area of the Microsoft Learn platform. In this interview series, we aim to acquaint ourselves with some of these valuable contributors. Through these conversations, we seek to understand their motivations for sharing their knowledge on Microsoft Learn and gain insights into their experiences.   Welcome and congratulations on being our spotlighted contributor this m…  ( 40 min )
    Contributor Stories: Deepanshu Katara
    If you’ve ever engaged with the content on the Microsoft Learn platform, the material you utilized was likely written or co-authored by dedicated contributors. These contributors, often volunteers, generously offer their time and expertise to fill knowledge gaps within our content portfolio by suggesting valuable updates to our material, sharing their knowledge within the Microsoft community, and/or answering questions on the Q&A area of the Microsoft Learn platform. In this interview series, we aim to acquaint ourselves with some of these valuable contributors. Through these conversations, we seek to understand their motivations for sharing their knowledge on Microsoft Learn and gain insights into their experiences.   Welcome and congratulations on being our spotlighted contributor this m…
  • Open

    Jakarta EE and MicroProfile on Azure – February 2025
    Hi everyone, welcome to the February 2025 update for Jakarta EE and MicroProfile on Azure. It covers topics such as the latest updates to Azure extensions for Quarkus, and the recent refresh to Jakarta EE Solutions for supporting multiple deployments within the same resource group. If you’re interested in providing feedback or collaborating on migrating […] The post Jakarta EE and MicroProfile on Azure – February 2025 appeared first on Microsoft for Java Developers.  ( 25 min )
  • Open

    Supporting Managed Identity based authentication flows in Azure Load Testing
    At Microsoft, we prioritize security above anything else. As part of the Secure Future Initiative (SFI), one of the key guidelines from an identity and access security perspective is to replace secrets, credentials, certificates, and keys with more secure authentication, such as managed identities for Azure resources. With managed identities (MIs), credentials are fully managed, and they can’t be accidentally leaked. As you move towards using managed identities, our job at Azure Load Testing is to ensure that you can seamlessly run load tests on flows using MI based authentication. Introducing the support for MI based authentication scenarios in Azure Load Testing. Managed identity based authentication is typically used in communication between services. The target Azure resource authentic…  ( 27 min )
    Supporting Managed Identity based authentication flows in Azure Load Testing
    At Microsoft, we prioritize security above anything else. As part of the Secure Future Initiative (SFI), one of the key guidelines from an identity and access security perspective is to replace secrets, credentials, certificates, and keys with more secure authentication, such as managed identities for Azure resources. With managed identities (MIs), credentials are fully managed, and they can’t be accidentally leaked. As you move towards using managed identities, our job at Azure Load Testing is to ensure that you can seamlessly run load tests on flows using MI based authentication. Introducing the support for MI based authentication scenarios in Azure Load Testing. Managed identity based authentication is typically used in communication between services. The target Azure resource authentic…
  • Open

    Introducing the Microsoft Purview Unified Catalog
    Locate, access, and trust the data you need using Microsoft Purview’s Unified Catalog. By leveraging AI-powered search and automated quality checks, you can use data across your organization while staying compliant and meeting privacy standards. With streamlined approval workflows, request and gain access to data quickly, collaborate with stakeholders, and ensure data quality across projects. Daniel Hidalgo, Microsoft Purview Senior Product Manager, joins Jeremy Chapman to share how to manage data governance, drive better decisions, and support meaningful AI outcomes with Microsoft Purview. Simplify collaboration. Deliver trusted data as a business user, data steward, or data owner. Check out the Microsoft Purview Unified Catalog. Automate data quality checks. Ensure consistent, quality …  ( 51 min )
    Introducing the Microsoft Purview Unified Catalog
    Locate, access, and trust the data you need using Microsoft Purview’s Unified Catalog. By leveraging AI-powered search and automated quality checks, you can use data across your organization while staying compliant and meeting privacy standards. With streamlined approval workflows, request and gain access to data quickly, collaborate with stakeholders, and ensure data quality across projects. Daniel Hidalgo, Microsoft Purview Senior Product Manager, joins Jeremy Chapman to share how to manage data governance, drive better decisions, and support meaningful AI outcomes with Microsoft Purview. Simplify collaboration. Deliver trusted data as a business user, data steward, or data owner. Check out the Microsoft Purview Unified Catalog. Automate data quality checks. Ensure consistent, quality …
    Copilot Control System explained
    With the Copilot Control System, you can control Copilot experiences spanning IT administrator tools used every day across the Microsoft 365, Microsoft Purview, and Power Platform admin centers. As an IT administrator, you’re in control of Microsoft 365 Copilot and agent experiences, ensuring security and compliance while optimizing productivity. Copilot Control System focuses on three core areas, where Microsoft Copilot services and the agents you create and use have unmatched manageability and visibility, compared to other AI options. — Protect your data by enabling intelligent grounding on enterprise data that respects your organization’s controls. — Govern access and usage by setting who can use Copilot and agents, while monitoring agent status and lifecycle. — Measure impact effectiv…  ( 29 min )
    Copilot Control System explained
    With the Copilot Control System, you can control Copilot experiences spanning IT administrator tools used every day across the Microsoft 365, Microsoft Purview, and Power Platform admin centers. As an IT administrator, you’re in control of Microsoft 365 Copilot and agent experiences, ensuring security and compliance while optimizing productivity. Copilot Control System focuses on three core areas, where Microsoft Copilot services and the agents you create and use have unmatched manageability and visibility, compared to other AI options. — Protect your data by enabling intelligent grounding on enterprise data that respects your organization’s controls. — Govern access and usage by setting who can use Copilot and agents, while monitoring agent status and lifecycle. — Measure impact effectiv…

  • Open

    Spark Connector for Fabric Data Warehouse (DW) – Preview
    We are pleased to announce the availability of the Fabric Spark connector for Fabric Data Warehouse (DW) in the Fabric Spark runtime. This connector enables Spark developers and data scientists to access and work with data from Fabric DW and the SQL analytics endpoint of the lakehouse, either within the same workspace or across different … Continue reading “Spark Connector for Fabric Data Warehouse (DW) – Preview”  ( 5 min )
    Admin API updates and upcoming definition changes
    We are pleased to announce the release of several new Admin APIs, which offer tenant administrators advanced tools for managing tenant settings, as well as tenant setting overrides at the capacity, workspace, and domain levels.   These new APIs, along with enhancements to existing APIs, represent our continued commitment to expanding Microsoft Fabric’s administrative capabilities, thereby … Continue reading “Admin API updates and upcoming definition changes “  ( 8 min )
    TLS deprecation for Fabric
    Fabric Platform support for Transport Layer Security (TLS) 1.1 and earlier versions will end on July 31, 2025    To align with Azure and the rest of Microsoft, Fabric platform will soon require TLS 1.2 or later versions for all outbound connections from Fabric to customer data sources. All incoming connections to Fabric Platform already required … Continue reading “TLS deprecation for Fabric”  ( 5 min )
    Announcing Eventhouse OneLake availability migration
    Beginning April 9th, we will be migrating the data you have enabled for OneLake Availability in Eventhouse. During this migration, we will be backfilling all of your data along with moving it to an area of OneLake that is designated not to be charged. This allows us to simplify the billing process and provide you with the … Continue reading “Announcing Eventhouse OneLake availability migration”  ( 5 min )
  • Open

    Dev Proxy v0.25, now available, with automatic shut down and simplified configuration management
    Read more about the latest Dev Proxy release featuring significant improvements to configuration management, plugin support, and usability. The post Dev Proxy v0.25, now available, with automatic shut down and simplified configuration management appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    Step-by-Step Tutorial: Building an AI Agent Using Azure AI Foundry
    I'm Shivam Goyal, at Microsoft Learn Student Ambassador, I am constantly exploring new ways to leverage the power of Azure's AI services. In this tutorial, I'll share my experience building a flight booking agent using Azure AI Agent service tools within the Azure AI Foundry portal. I was inspired by the new Microsoft AI Agents for Beginners course and took on the challenge on developing this agent. The agent solution is capable of interacting with users and providing information about flights, demonstrating the potential of Azure's conversational AI capabilities. Follow along as we create this intelligent agent from scratch.   Prerequisites To complete this tutorial, you'll need: Azure Account: An Azure account with an active subscription is required. If you don't have one, you can creat…  ( 28 min )
    Step-by-Step Tutorial: Building an AI Agent Using Azure AI Foundry
    I'm Shivam Goyal, at Microsoft Learn Student Ambassador, I am constantly exploring new ways to leverage the power of Azure's AI services. In this tutorial, I'll share my experience building a flight booking agent using Azure AI Agent service tools within the Azure AI Foundry portal. I was inspired by the new Microsoft AI Agents for Beginners course and took on the challenge on developing this agent. The agent solution is capable of interacting with users and providing information about flights, demonstrating the potential of Azure's conversational AI capabilities. Follow along as we create this intelligent agent from scratch.   Prerequisites To complete this tutorial, you'll need: Azure Account: An Azure account with an active subscription is required. If you don't have one, you can creat…
    Welcome to the new Phi-4 models - Microsoft Phi-4-mini & Phi-4-multimodal
    These new Phi-4 mini and multimodal models are now available on Hugging Face, Azure AI Foundry Model Catalog, GitHub Models, and Ollama. Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and now, the long-awaited function calling feature is finally supported. As for Phi-4-multimodal, it is a fully multimodal model capable of vision, audio, text, multilingual understanding, strong reasoning, encoding, and more. These models can also be deployed on edge devices, enabling IoT applications to integrate generative AI even in environments with limited computing power and network access. Now, let’s dive into the new Phi-4-mini and Phi-4-multimodal together! Function calling This is a highly anticipated feature within the community. With function call…  ( 24 min )
    Welcome to the new Phi-4 models - Microsoft Phi-4-mini & Phi-4-multimodal
    These new Phi-4 mini and multimodal models are now available on Hugging Face, Azure AI Foundry Model Catalog, GitHub Models, and Ollama. Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and now, the long-awaited function calling feature is finally supported. As for Phi-4-multimodal, it is a fully multimodal model capable of vision, audio, text, multilingual understanding, strong reasoning, encoding, and more. These models can also be deployed on edge devices, enabling IoT applications to integrate generative AI even in environments with limited computing power and network access. Now, let’s dive into the new Phi-4-mini and Phi-4-multimodal together! Function calling This is a highly anticipated feature within the community. With function call…
  • Open

    RAG Time: Ultimate Guide to Mastering RAG!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …  ( 22 min )
    RAG Time: Ultimate Guide to Mastering RAG!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …
    Measure and Mitigate Risks for a generative AI app in Azure AI Foundry
    Join us on Tuesday 4th March at 9AM PST (6 pm CET) as we wrap up the Generative AI Level Up Tuesdays series with a hands-on session on how to measure and mitigate risks in your generative AI solution with the Microsoft Responsible AI toolchain. In the previous episodes of the series we introduced generative AI capabilities and use-cases and we took a close look at the practices and tools within Azure AI Foundry to support your GenAIOps workflow. We presented practical applications of LLMs and RAG-based solutions as well as agentic AI samples. If you missed any sessions, you can always watch them on-demand here. Also, the accompanying Microsoft Learn Challenge is still on-going till 11th March. Join the challenge now! There's more! Next week, you'll have the unique opportunity to interact w…  ( 22 min )
    Measure and Mitigate Risks for a generative AI app in Azure AI Foundry
    Join us on Tuesday 4th March at 9AM PST (6 pm CET) as we wrap up the Generative AI Level Up Tuesdays series with a hands-on session on how to measure and mitigate risks in your generative AI solution with the Microsoft Responsible AI toolchain. In the previous episodes of the series we introduced generative AI capabilities and use-cases and we took a close look at the practices and tools within Azure AI Foundry to support your GenAIOps workflow. We presented practical applications of LLMs and RAG-based solutions as well as agentic AI samples. If you missed any sessions, you can always watch them on-demand here. Also, the accompanying Microsoft Learn Challenge is still on-going till 11th March. Join the challenge now! There's more! Next week, you'll have the unique opportunity to interact w…
  • Open

    RAG Time: Ultimate Guide to Mastering RAG!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …  ( 22 min )
    RAG Time: Ultimate Guide to Mastering RAG!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …
  • Open

    Case Study: Data Driven DevOps with ADO
    Customer Scenario: One of Microsoft's manufacturing partners provides services to a large number of customers that require recurring deployments of cloud-based artifacts. This can be in service of onboarding new customers, as well as updates and new releases of existing applications. The customer approached Microsoft to assist them in developing a new system with the following attributes: Automate releases Provide a templatized approach to deploying and configuring new releases Allow for reuse of existing releases Provide for meta-data tagging of deployed artifacts Utilize a data driven approach to artifact generation Be easy to use for non-IT based employees After researching numerous commercially available deployment management products, the partner desired to develop a custom system i…  ( 38 min )
    Case Study: Data Driven DevOps with ADO
    Customer Scenario: One of Microsoft's manufacturing partners provides services to a large number of customers that require recurring deployments of cloud-based artifacts. This can be in service of onboarding new customers, as well as updates and new releases of existing applications. The customer approached Microsoft to assist them in developing a new system with the following attributes: Automate releases Provide a templatized approach to deploying and configuring new releases Allow for reuse of existing releases Provide for meta-data tagging of deployed artifacts Utilize a data driven approach to artifact generation Be easy to use for non-IT based employees After researching numerous commercially available deployment management products, the partner desired to develop a custom system i…
    JDConf 2025: Announcing Keynote Speaker and Exciting Sessions on Java, Cloud, and AI
    Microsoft JDConf 2025 is rapidly approaching and promises to be the must-attend event for Java developers, particularly those interested in the latest advancements in Java, Cloud and AI. This year, the conference will feature over 22 sessions and more than 10 hours of live streaming content for global audience, along with additional on-demand sessions available from April 9 to 10. The spotlight this year is on integrating AI into your development workflow with tools like Copilot, showcasing how these advancements are revolutionizing the coding landscape. Whether you are exploring application modernization, leveraging AI for intelligent apps, or optimizing Java deployments, JDConf has sessions for every interest. Code the future with AI Explore AI-driven Java innovation: Uncover the role o…  ( 36 min )
    JDConf 2025: Announcing Keynote Speaker and Exciting Sessions on Java, Cloud, and AI
    Microsoft JDConf 2025 is rapidly approaching and promises to be the must-attend event for Java developers, particularly those interested in the latest advancements in Java, Cloud and AI. This year, the conference will feature over 22 sessions and more than 10 hours of live streaming content for global audience, along with additional on-demand sessions available from April 9 to 10. The spotlight this year is on integrating AI into your development workflow with tools like Copilot, showcasing how these advancements are revolutionizing the coding landscape. Whether you are exploring application modernization, leveraging AI for intelligent apps, or optimizing Java deployments, JDConf has sessions for every interest. Code the future with AI Explore AI-driven Java innovation: Uncover the role o…
  • Open

    Capacity's AI Answer Engine® leveraged Phi to deliver better results for their customers, faster
    Capacity an all-in-one Support Automation Platform, provides organizations with the ultimate Answer Engine®. They needed a way to help unify diverse datasets across tens of millions of search results and billions of interactions and make information more easily accessible and understandable for their customers. By leveraging Phi—Microsoft’s family of powerful small language models offering groundbreaking performance at low cost and low latency—Capacity provides the enterprise with an effective AI knowledge management solution that democratizes knowledge on large teams securely and in a way that maximizes value to the customer. With Phi, Capacity’s Answer Engine® improved results quality and scale, so customers save both time and money by more quickly finding the rich information they inves…  ( 30 min )
    Capacity's AI Answer Engine® leveraged Phi to deliver better results for their customers, faster
    Capacity an all-in-one Support Automation Platform, provides organizations with the ultimate Answer Engine®. They needed a way to help unify diverse datasets across tens of millions of search results and billions of interactions and make information more easily accessible and understandable for their customers. By leveraging Phi—Microsoft’s family of powerful small language models offering groundbreaking performance at low cost and low latency—Capacity provides the enterprise with an effective AI knowledge management solution that democratizes knowledge on large teams securely and in a way that maximizes value to the customer. With Phi, Capacity’s Answer Engine® improved results quality and scale, so customers save both time and money by more quickly finding the rich information they inves…
  • Open

    Better Search, Smarter AI: Cohere Rerank v3.5 Launches on Azure AI Foundry
    What is Cohere Rerank v3.5?  Cohere Rerank v3.5 is the latest iteration in Cohere's series of reranking models, designed to reorder search results based on a deep semantic understanding of user queries and document content. With a context window of 4,096 tokens, Rerank v3.5 excels in processing complex queries and large documents, delivering precise and relevant search outcomes. Notably, it offers multilingual support for over 100 languages, making it an invaluable tool for global enterprises.    How Does a Reranker Work?  In search systems, initial retrieval methods often rely on keyword matching or vector search. The initial retrieval returns a mix of both relevant and irrelevant information, in no particular order. While there is some semblance of relevance in descending search results,…  ( 28 min )
    Better Search, Smarter AI: Cohere Rerank v3.5 Launches on Azure AI Foundry
    What is Cohere Rerank v3.5?  Cohere Rerank v3.5 is the latest iteration in Cohere's series of reranking models, designed to reorder search results based on a deep semantic understanding of user queries and document content. With a context window of 4,096 tokens, Rerank v3.5 excels in processing complex queries and large documents, delivering precise and relevant search outcomes. Notably, it offers multilingual support for over 100 languages, making it an invaluable tool for global enterprises.    How Does a Reranker Work?  In search systems, initial retrieval methods often rely on keyword matching or vector search. The initial retrieval returns a mix of both relevant and irrelevant information, in no particular order. While there is some semblance of relevance in descending search results,…
    DeepSeek R1: Improved Performance, Higher Limits, and Transparent Pricing
    On Jan 29, 2025, we introduced DeepSeek R1 in the model catalog in Azure AI Foundry, bringing one of the popular open-weight models to developers and enterprises looking for high-performance AI capabilities. At launch, we made DeepSeek R1 available without pricing as we gathered insights on real-world usage and performance.  Now, we’re excited to share that the model has better latency and throughput along with competitive pricing, making it easier to integrate DeepSeek R1 into your applications while keeping costs predictable.  Scaling to Meet Demand: Performance Optimizations in Action  The high adoption brought a few challenges—early users experienced capacity constraints and performance fluctuations due to the surge in demand. Our product and engineering teams moved quickly, optimizing…  ( 23 min )
    DeepSeek R1: Improved Performance, Higher Limits, and Transparent Pricing
    On Jan 29, 2025, we introduced DeepSeek R1 in the model catalog in Azure AI Foundry, bringing one of the popular open-weight models to developers and enterprises looking for high-performance AI capabilities. At launch, we made DeepSeek R1 available without pricing as we gathered insights on real-world usage and performance.  Now, we’re excited to share that the model has better latency and throughput along with competitive pricing, making it easier to integrate DeepSeek R1 into your applications while keeping costs predictable.  Scaling to Meet Demand: Performance Optimizations in Action  The high adoption brought a few challenges—early users experienced capacity constraints and performance fluctuations due to the surge in demand. Our product and engineering teams moved quickly, optimizing infrastructure and fine-tuning system performance.  You can expect higher rate limits and improved response times starting from Feb 26, 2025. We continue rolling out further improvements to meet customers’ expectations. You can learn more about rate limits in the Azure AI model inference quotas and limits documentation page.  Thanks to these improvements, we’ve significantly increased model efficiency, reduced latency, and improved throughput, ensuring a smoother experience for all users.  DeepSeek R1 Pricing  With these optimizations, DeepSeek R1 now delivers a good price-to-performance ratio. Whether you’re building chatbots, document summarization tools, or AI-driven search experiences, you get a high-quality model at a competitive cost, making it easier to scale AI workloads without breaking the bank.    What’s Next?  We’re committed to continuously improving DeepSeek R1’s availability as we scale. If you haven’t tried it yet, now is the perfect time to explore how DeepSeek R1 on Azure AI Foundry can power your AI applications with state-of-the-art capabilities.  Start using DeepSeek R1 today in https://ai.azure.com/

  • Open

    Introducing Notifications in Azure Load Testing: Stay Updated in Real-Time
    We are thrilled to introduce Notifications in Azure Load Testing to ensure you never miss critical updates in your performance testing workflows. With notifications, you can stay informed about test runs, schedules, and results effortlessly, enabling smoother collaboration and faster response times. Why Notifications Matter? In the world of performance testing, staying on top of test progress and results is crucial. The new Notifications feature in Azure Load Testing simplifies this process by: Keeping You Informed: Receive real-time updates when test runs start, complete, or fail. Enabling Automation: Use webhooks to trigger automated actions or integrate with tools like Azure Logic Apps. Fostering Collaboration: Ensure team members stay aligned with email alerts for test and schedule up…  ( 23 min )
    Introducing Notifications in Azure Load Testing: Stay Updated in Real-Time
    We are thrilled to introduce Notifications in Azure Load Testing to ensure you never miss critical updates in your performance testing workflows. With notifications, you can stay informed about test runs, schedules, and results effortlessly, enabling smoother collaboration and faster response times. Why Notifications Matter? In the world of performance testing, staying on top of test progress and results is crucial. The new Notifications feature in Azure Load Testing simplifies this process by: Keeping You Informed: Receive real-time updates when test runs start, complete, or fail. Enabling Automation: Use webhooks to trigger automated actions or integrate with tools like Azure Logic Apps. Fostering Collaboration: Ensure team members stay aligned with email alerts for test and schedule up…
  • Open

    RAG Time: Your Guide to Mastering Retrieval-Augmented Generation!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …  ( 23 min )
    RAG Time: Your Guide to Mastering Retrieval-Augmented Generation!
    RAG Time is a brand-new AI learning series designed to help developers unlock the full potential of Retrieval-Augmented Generation (RAG). If you’ve been looking for a way to build smarter, more efficient AI systems—join us in RAG Time, every Wednesday 9AM PT from March 5 through April 2 on Microsoft Developer YouTube. What's in RAG Time? RAG Time is a five-part learning journey, with new videos and blog posts releasing every week in March. The series features: 🔥 Expert-led discussions breaking down RAG fundamentals and best practices 🎤 Exclusive leadership interviews with AI leaders ⚡ Hands-on demos & real-world case studies showing RAG in action 🎨 Creative doodle summaries making complex concepts easier to grasp and remember 🛠 Samples & resources in the RAG Time repository so you can …
    Enabling SharePoint RAG with LogicApps Workflows
    SharePoint Online is quite popular for storing organizational documents. Many organizations use it due to its robust features for document management, collaboration, and integration with other Microsoft 365 services. SharePoint Online provides a secure, centralized location for storing documents, making it easier for everyone from organization to access and collaborate on files from the device of their choice. Retrieve-Augment-Generate (RAG) is a process used to infuse the large language model with organizational knowledge without explicitly fine tuning it which is a laborious process. RAG enhances the capabilities of language models by integrating them with external data sources, such as SharePoint documents. In this approach, documents stored in SharePoint are first converted into smalle…  ( 33 min )
    Enabling SharePoint RAG with LogicApps Workflows
    SharePoint Online is quite popular for storing organizational documents. Many organizations use it due to its robust features for document management, collaboration, and integration with other Microsoft 365 services. SharePoint Online provides a secure, centralized location for storing documents, making it easier for everyone from organization to access and collaborate on files from the device of their choice. Retrieve-Augment-Generate (RAG) is a process used to infuse the large language model with organizational knowledge without explicitly fine tuning it which is a laborious process. RAG enhances the capabilities of language models by integrating them with external data sources, such as SharePoint documents. In this approach, documents stored in SharePoint are first converted into smalle…
    Code First Distillation with Stored Completions in Azure OpenAI Service
    We are thrilled to announce the Public Preview release of the Stored Completions API and SDK in Azure OpenAI Service! Following our recent announcement on the enhanced Azure OpenAI Service distillation and Fine-Tuning Capabilities, we are excited to introduce a set of new API capabilities and an SDK experience that will empower our customers to interact with Stored Completions through code. What were we supporting before? Model distillation empowers developers to use the outputs of large, complex models to fine-tune smaller, more efficient ones. This technique allows the smaller models to perform just as well on specific tasks, all while significantly cutting down on both cost and latency. Azure OpenAI Service distillation involves three main components: Stored Completions: Capture and st…  ( 27 min )
    Code First Distillation with Stored Completions in Azure OpenAI Service
    We are thrilled to announce the Public Preview release of the Stored Completions API and SDK in Azure OpenAI Service! Following our recent announcement on the enhanced Azure OpenAI Service distillation and Fine-Tuning Capabilities, we are excited to introduce a set of new API capabilities and an SDK experience that will empower our customers to interact with Stored Completions through code. What were we supporting before? Model distillation empowers developers to use the outputs of large, complex models to fine-tune smaller, more efficient ones. This technique allows the smaller models to perform just as well on specific tasks, all while significantly cutting down on both cost and latency. Azure OpenAI Service distillation involves three main components: Stored Completions: Capture and st…
    Code First Distillation with Stored Completions in Azure OpenAI Service
    We are thrilled to announce the Public Preview release of the Stored Completions API and SDK in Azure OpenAI Service! Following our recent announcement on the enhanced Azure OpenAI Service distillation and Fine-Tuning Capabilities, we are excited to introduce a set of new API capabilities and an SDK experience that will empower our customers to interact with Stored Completions through code. What were we supporting before? Model distillation empowers developers to use the outputs of large, complex models to fine-tune smaller, more efficient ones. This technique allows the smaller models to perform just as well on specific tasks, all while significantly cutting down on both cost and latency. Azure OpenAI Service distillation involves three main components: Stored Completions: Capture and st…
  • Open

    Exploring Azure OpenAI Assistants and Azure AI Agent Services: Benefits and Opportunities
    In the rapidly evolving landscape of artificial intelligence, businesses are increasingly turning to cloud-based solutions to harness the power of AI. Microsoft Azure offers two prominent services in this domain: Azure OpenAI Assistants and Azure AI Agent Services. While both services aim to enhance user experiences and streamline operations, they cater to different needs and use cases. This blog post will delve into the details of each service, their benefits, and the opportunities they present for businesses.     Understanding Azure OpenAI Assistants What Are Azure OpenAI Assistants? Azure OpenAI Assistants are designed to leverage the capabilities of OpenAI's models, such as GPT-3 and its successors. These assistants are tailored for applications that require advanced natural language…  ( 32 min )
    Exploring Azure OpenAI Assistants and Azure AI Agent Services: Benefits and Opportunities
    In the rapidly evolving landscape of artificial intelligence, businesses are increasingly turning to cloud-based solutions to harness the power of AI. Microsoft Azure offers two prominent services in this domain: Azure OpenAI Assistants and Azure AI Agent Services. While both services aim to enhance user experiences and streamline operations, they cater to different needs and use cases. This blog post will delve into the details of each service, their benefits, and the opportunities they present for businesses.     Understanding Azure OpenAI Assistants What Are Azure OpenAI Assistants? Azure OpenAI Assistants are designed to leverage the capabilities of OpenAI's models, such as GPT-3 and its successors. These assistants are tailored for applications that require advanced natural language…
  • Open

    Fabric February 2025 Feature Summary
    Welcome to the Fabric 2025 update There are a lot of exciting features for you this month! Here are some highlights: In Power BI, Explore from Copilot visual answers which lets you do easy ad-hoc exploration. In Data Warehouse, Browse files with OPENROWSET (Preview) and Copilot for Data Warehouse Chat (Preview). For Data Science, AI Skill is now conversational.  ( 33 min )
    Enhancing SQL database in Fabric: share your feedback and shape the future
    SQL database in Fabric is currently in preview, and we are actively rolling out improvements as we progress towards general availability (GA). Your insights play a crucial role in this journey. Whether it’s challenges you’ve encountered, features you adore, or any suggestions for enhancement, we want to hear it all—the good, the bad, and the … Continue reading “Enhancing SQL database in Fabric: share your feedback and shape the future”  ( 7 min )

  • Open

    CPU usage metrics for Container Apps and single threaded applications
    This post will cover looking at CPU metrics for single threaded applications experiencing high CPU and why there may be differences depending on the CPU core count  ( 4 min )
  • Open

    New improvements coming to the AI Skill
    Additional Authors: Amir Jafari, Shreyas Radhakrishna, Nellie Gustafsson We’re excited to unveil significant enhancements to the AI Skill in Fabric! These updates make it easier than ever to build a powerful generative AI data agent, helping you to unlock deeper insights and streamline decision-making. With improved flexibility, intelligence, and user-friendliness, the updated AI Skill is … Continue reading “New improvements coming to the AI Skill”  ( 7 min )
    ArcGIS GeoAnalytics for Microsoft Fabric Spark – Preview
    Additional authors – Ashit Gosalia, Aniket Adnaik, Mahesh Prakriya, Madhu Bhowal, Sarah Battersby, and Michael Park Esri is recognized as the global market leader in geographic information system (GIS) technology, location intelligence, and mapping, primarily through its flagship software, ArcGIS. Esri empowers businesses, governments, and communities to tackle the world’s most pressing challenges through spatial … Continue reading “ArcGIS GeoAnalytics for Microsoft Fabric Spark – Preview”  ( 6 min )
  • Open

    Unlock the Future of Secure Authentication: Moving to Keyless Authentication with Managed Identity
    Why Managed Identity? Traditional authentication methods often rely on keys, secrets, and passwords that can be easily compromised. Managed identity, on the other hand, provides a secure and seamless way to authenticate without the need for managing credentials. By leveraging managed identity, you can: Reduce the Risk of Compromise: As most security breaches start from identity-related issues, moving to a keyless authentication system significantly reduces the chances of such compromises. Simplify Credential Management: Managed identity eliminates the need for managing keys and secrets, making the authentication process more straightforward and less error-prone. Enhance Security: With managed identity, your applications are granted access to resources securely, without the risk of exposin…  ( 23 min )
    Unlock the Future of Secure Authentication: Moving to Keyless Authentication with Managed Identity
    Why Managed Identity? Traditional authentication methods often rely on keys, secrets, and passwords that can be easily compromised. Managed identity, on the other hand, provides a secure and seamless way to authenticate without the need for managing credentials. By leveraging managed identity, you can: Reduce the Risk of Compromise: As most security breaches start from identity-related issues, moving to a keyless authentication system significantly reduces the chances of such compromises. Simplify Credential Management: Managed identity eliminates the need for managing keys and secrets, making the authentication process more straightforward and less error-prone. Enhance Security: With managed identity, your applications are granted access to resources securely, without the risk of exposin…
    Unlock the Future of Secure Authentication: Moving to Keyless Authentication with Managed Identity
    Why Managed Identity? Traditional authentication methods often rely on keys, secrets, and passwords that can be easily compromised. Managed identity, on the other hand, provides a secure and seamless way to authenticate without the need for managing credentials. By leveraging managed identity, you can: Reduce the Risk of Compromise: As most security breaches start from identity-related issues, moving to a keyless authentication system significantly reduces the chances of such compromises. Simplify Credential Management: Managed identity eliminates the need for managing keys and secrets, making the authentication process more straightforward and less error-prone. Enhance Security: With managed identity, your applications are granted access to resources securely, without the risk of exposin…
  • Open

    Securely Integrating Azure API Management with Azure OpenAI via Application Gateway
    Introduction As organizations increasingly integrate AI into their applications, securing access to Azure OpenAI services becomes a critical priority. By default, Azure OpenAI can be exposed over the public internet, posing potential security risks. To mitigate these risks, enterprises often restrict OpenAI access using Private Endpoints, ensuring that traffic remains within their Azure Virtual Network (VNET) and preventing direct internet exposure. However, restricting OpenAI to a private endpoint introduces challenges when external applications, such as those hosted in AWS or on-premises environments, need to securely interact with OpenAI APIs. This is where Azure API Management (APIM) plays a crucial role. By deploying APIM within an internal VNET, it acts as a secure proxy between exte…  ( 41 min )
    Securely Integrating Azure API Management with Azure OpenAI via Application Gateway
    Introduction As organizations increasingly integrate AI into their applications, securing access to Azure OpenAI services becomes a critical priority. By default, Azure OpenAI can be exposed over the public internet, posing potential security risks. To mitigate these risks, enterprises often restrict OpenAI access using Private Endpoints, ensuring that traffic remains within their Azure Virtual Network (VNET) and preventing direct internet exposure. However, restricting OpenAI to a private endpoint introduces challenges when external applications, such as those hosted in AWS or on-premises environments, need to securely interact with OpenAI APIs. This is where Azure API Management (APIM) plays a crucial role. By deploying APIM within an internal VNET, it acts as a secure proxy between exte…
    Securely Integrating Azure API Management with Azure OpenAI via Application Gateway
    Introduction As organizations increasingly integrate AI into their applications, securing access to Azure OpenAI services becomes a critical priority. By default, Azure OpenAI can be exposed over the public internet, posing potential security risks. To mitigate these risks, enterprises often restrict OpenAI access using Private Endpoints, ensuring that traffic remains within their Azure Virtual Network (VNET) and preventing direct internet exposure. However, restricting OpenAI to a private endpoint introduces challenges when external applications, such as those hosted in AWS or on-premises environments, need to securely interact with OpenAI APIs. This is where Azure API Management (APIM) plays a crucial role. By deploying APIM within an internal VNET, it acts as a secure proxy between exte…
    Need inspirations? Real AI Apps stories by Azure customers to help you get started
    In this blog, we present a tapestry of authentic stories from real Azure customers. You will read about how AI-empowered applications are revolutionizing enterprises and the myriad ways organizations choose to modernize their software, craft innovative experiences, and unveil new revenue streams. We hope that these stories inspire you to embark upon your own Azure AI journey.  Before we begin, be sure to bookmark the newly unveiled Plan on Microsoft Learn—meticulously designed for developers and technical managers—to enhance your expertise on this subject. Inspiration 1: Transform customer service Intelligent apps today can offer a self-service natural language chat interface for customers to resolve service issues faster. They can route and divert calls, allowing agents to focus on the mo…  ( 51 min )
    Need inspirations? Real AI Apps stories by Azure customers to help you get started
    In this blog, we present a tapestry of authentic stories from real Azure customers. You will read about how AI-empowered applications are revolutionizing enterprises and the myriad ways organizations choose to modernize their software, craft innovative experiences, and unveil new revenue streams. We hope that these stories inspire you to embark upon your own Azure AI journey.  Before we begin, be sure to bookmark the newly unveiled Plan on Microsoft Learn—meticulously designed for developers and technical managers—to enhance your expertise on this subject. Inspiration 1: Transform customer service Intelligent apps today can offer a self-service natural language chat interface for customers to resolve service issues faster. They can route and divert calls, allowing agents to focus on the mo…
    Need inspirations? Real AI Apps stories by Azure customers to help you get started
    In this blog, we present a tapestry of authentic stories from real Azure customers. You will read about how AI-empowered applications are revolutionizing enterprises and the myriad ways organizations choose to modernize their software, craft innovative experiences, and unveil new revenue streams. We hope that these stories inspire you to embark upon your own Azure AI journey.  Before we begin, be sure to bookmark the newly unveiled Plan on Microsoft Learn—meticulously designed for developers and technical managers—to enhance your expertise on this subject. Inspiration 1: Transform customer service Intelligent apps today can offer a self-service natural language chat interface for customers to resolve service issues faster. They can route and divert calls, allowing agents to focus on the mo…
  • Open

    Compatibility of PostgreSQL Connector with AWS and GCP
    As AI-driven applications continue to evolve, the need for efficient vector-based search capabilities is greater than ever. Microsoft Semantic Kernel makes it easy to integrate these capabilities with PostgreSQL databases using the Postgres connector. Whether you’re leveraging cloud-hosted PostgreSQL instances on Amazon Web Services or Google Cloud, this connector enables seamless interaction, allowing you to […] The post Compatibility of PostgreSQL Connector with AWS and GCP appeared first on Semantic Kernel.  ( 23 min )
    Hybrid Model Orchestration
    Hybrid model orchestration is a powerful technique that AI applications can use to intelligently select and switch between multiple models based on various criteria, all while being transparent to the calling code. This technique not only allows for model selection based on factors such as the prompt’s input token size and each model’s min/max token […] The post Hybrid Model Orchestration appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!  ( 22 min )
    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!
    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!
  • Open

    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!  ( 22 min )
    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!
    Mastering API Management - Demos and best practices (presentation highlight)
    I was fortunate to present to the Wellington .Net User Group about my experience using the Azure API Management (APIM) service. In the presentation I cover some of the aspects that I often see teams re-invent solutions, and in my opinion, misuse Azure APIM.  Mastering Azure API Management - Demos and Best practices Why am I highlighting this again? All aspects of a solution should be under source control to ensure consistency, collaboration, and reliability. Azure APIM is not an exception. APIOps, or Azure API Management (APIM) with DevOps, is a DevOps approach for managing APIs in Azure API Management (APIM). The focus of APIOps is on automation, CI/CD, and version control for APIs.  The guidance here is adopt a strategy for APIM as early as possible as well as don't reinvent. APIOps is a flexible, open source project with an active and engaged community. Most of the organizations I work with have either wrote their own solution or have gone without putting APIM under versioned source control with automated CI/CD. The main reason for this is the team have not allowed themselves the time to learn and adopt APIOps. So take the time to fully evaluate APIOps before rolling your own and never go without. Here are some references to help you get started: APIOps - Basic Concepts This page in the wiki gives you an idea of the content and structure of how the resources are saved to files. Spend the time to read and watch the video on the GitHub readme as a good starting point And, like always, let me know if you agree or disagree by commenting below.  Cheers!

  • Open

    Announcing the launch of Microsoft Fabric Quotas
    On February 24, 2025, we launched Microsoft Fabric Quotas, a new feature designed to control resource governance for the acquisition of your Microsoft Fabric capacities. Fabric quotas aimed at helping customers ensure that Fabric resources are used efficiently and help manage the overall performance and reliability of the Azure platform while preventing misuse. What is … Continue reading “Announcing the launch of Microsoft Fabric Quotas”  ( 6 min )
    Build a python app with Fabric API for GraphQL
    In this blog post, we will cover how to build a Python application using Flask framework to connect to Fabric SQL Database and query data.  ( 7 min )
  • Open

    List all the web apps runtime and stack version under a subscription
    In some common scenarios, it needed to list all the web apps runtime and stack versions for a subscriptions. While the stack version for Windows web apps and Linux Web apps need to be retrieved from different places. In this blog, it shows how to retrieve the runtime and stack version both for Windows web apps and Linux web app in batch with Powershell script. For Linux web apps, the runtime and stack version can be retrieved from az webapp show linuxFxVersion property. For Windows web apps, the runtime and stack version can be retrieved by List Metadata REST API. With the above information, we can use the script below to list all the web apps runtime and stack version under a subscription: $runtimes = [System.Collections.Generic.List[object]]@() Write-Progress "Fetching subscription id"…  ( 23 min )
    List all the web apps runtime and stack version under a subscription
    In some common scenarios, it needed to list all the web apps runtime and stack versions for a subscriptions. While the stack version for Windows web apps and Linux Web apps need to be retrieved from different places. In this blog, it shows how to retrieve the runtime and stack version both for Windows web apps and Linux web app in batch with Powershell script. For Linux web apps, the runtime and stack version can be retrieved from az webapp show linuxFxVersion property. For Windows web apps, the runtime and stack version can be retrieved by List Metadata REST API. With the above information, we can use the script below to list all the web apps runtime and stack version under a subscription: $runtimes = [System.Collections.Generic.List[object]]@() Write-Progress "Fetching subscription id"…
    List all the web apps runtime and stack version under a subscription
    In some common scenarios, it needed to list all the web apps runtime and stack versions for a subscriptions. While the stack version for Windows web apps and Linux Web apps need to be retrieved from different places. In this blog, it shows how to retrieve the runtime and stack version both for Windows web apps and Linux web app in batch with Powershell script. For Linux web apps, the runtime and stack version can be retrieved from az webapp show linuxFxVersion property. For Windows web apps, the runtime and stack version can be retrieved by List Metadata REST API. With the above information, we can use the script below to list all the web apps runtime and stack version under a subscription: $runtimes = [System.Collections.Generic.List[object]]@() Write-Progress "Fetching subscription id"…
  • Open

    Transforming Static Learning into Interactive AI Experiences with Azure Prompt Flow & Flask
    Large Language Models (LLMs) like GPT-4 have revolutionized AI, but their static knowledge cutoff limits their utility in dynamic fields like education. Retrieval-augmented generation (RAG) bridges this gap by grounding AI responses in real-time data.  Why RAG? RAG combines the power of LLMs with domain-specific data retrieval, enabling: Dynamic Knowledge Updates: Pulling from sources like PDF textbooks or research papers. Context-Aware Responses: Using hybrid search (vector + keyword) to fetch relevant content Reduced Hallucinations: Cross-referencing facts against indexed data. For educators, this means students get responses derived from trusted sources rather than generic online content. In the realm of GRE prep, this marks the first copilot preview powered by RAG.   I.  Architectur…  ( 68 min )
    Transforming Static Learning into Interactive AI Experiences with Azure Prompt Flow & Flask
    Large Language Models (LLMs) like GPT-4 have revolutionized AI, but their static knowledge cutoff limits their utility in dynamic fields like education. Retrieval-augmented generation (RAG) bridges this gap by grounding AI responses in real-time data.  Why RAG? RAG combines the power of LLMs with domain-specific data retrieval, enabling: Dynamic Knowledge Updates: Pulling from sources like PDF textbooks or research papers. Context-Aware Responses: Using hybrid search (vector + keyword) to fetch relevant content Reduced Hallucinations: Cross-referencing facts against indexed data. For educators, this means students get responses derived from trusted sources rather than generic online content. In the realm of GRE prep, this marks the first copilot preview powered by RAG.   I.  Architectur…
    Transforming Static Learning into Interactive AI Experiences with Azure Prompt Flow & Flask
    Large Language Models (LLMs) like GPT-4 have revolutionized AI, but their static knowledge cutoff limits their utility in dynamic fields like education. Retrieval-augmented generation (RAG) bridges this gap by grounding AI responses in real-time data.  Why RAG? RAG combines the power of LLMs with domain-specific data retrieval, enabling: Dynamic Knowledge Updates: Pulling from sources like PDF textbooks or research papers. Context-Aware Responses: Using hybrid search (vector + keyword) to fetch relevant content Reduced Hallucinations: Cross-referencing facts against indexed data. For educators, this means students get responses derived from trusted sources rather than generic online content. In the realm of GRE prep, this marks the first copilot preview powered by RAG.   I.  Architectur…

  • Open

    Deploying Azure ND H100 v5 Instances in AKS with NVIDIA MIG GPU Slicing
    In this article we will cover: AKS Cluster Deployment (Latest Version) – creating an AKS cluster using the latest Kubernetes version. GPU Node Pool Provisioning – adding an ND H100 v5 node pool on Ubuntu, with --skip-gpu-driver-install to disable automatic driver installation. NVIDIA H100 MIG Slicing Configurations – available MIG partition profiles on the H100 GPU and how to enable them. Workload Recommendations for MIG Profiles – choosing optimal MIG slice sizes for different AI/ML and HPC scenarios. Best Practices for MIG Management and Scheduling – managing MIG in AKS, scheduling pods, and operational tips. AKS Cluster Deployment (Using the Latest Version) Install/Update Azure CLI: Ensure you have Azure CLI 2.0.64+ (or Azure CLI 1.0.0b2 for preview features)​. This is required for …  ( 77 min )
    Deploying Azure ND H100 v5 Instances in AKS with NVIDIA MIG GPU Slicing
    In this article we will cover: AKS Cluster Deployment (Latest Version) – creating an AKS cluster using the latest Kubernetes version. GPU Node Pool Provisioning – adding an ND H100 v5 node pool on Ubuntu, with --skip-gpu-driver-install to disable automatic driver installation. NVIDIA H100 MIG Slicing Configurations – available MIG partition profiles on the H100 GPU and how to enable them. Workload Recommendations for MIG Profiles – choosing optimal MIG slice sizes for different AI/ML and HPC scenarios. Best Practices for MIG Management and Scheduling – managing MIG in AKS, scheduling pods, and operational tips. AKS Cluster Deployment (Using the Latest Version) Install/Update Azure CLI: Ensure you have Azure CLI 2.0.64+ (or Azure CLI 1.0.0b2 for preview features)​. This is required for …
    Deploying Azure ND H100 v5 Instances in AKS with NVIDIA MIG GPU Slicing
    In this article we will cover: AKS Cluster Deployment (Latest Version) – creating an AKS cluster using the latest Kubernetes version. GPU Node Pool Provisioning – adding an ND H100 v5 node pool on Ubuntu, with --skip-gpu-driver-install to disable automatic driver installation. NVIDIA H100 MIG Slicing Configurations – available MIG partition profiles on the H100 GPU and how to enable them. Workload Recommendations for MIG Profiles – choosing optimal MIG slice sizes for different AI/ML and HPC scenarios. Best Practices for MIG Management and Scheduling – managing MIG in AKS, scheduling pods, and operational tips. AKS Cluster Deployment (Using the Latest Version) Install/Update Azure CLI: Ensure you have Azure CLI 2.0.64+ (or Azure CLI 1.0.0b2 for preview features)​. This is required for …

  • Open

    Igniting AI Innovation: The VS Code Toolkit -First of AI Sparks Episode
    Our first session covered a range of topics, from model hosting and performance optimization to different execution options using Microsoft AI Toolkit extension for VSCode.  Attendees learned how to leverage AI models via GitHub, local execution, or Azure, each offering varying levels of performance, flexibility, and rate limits.  A key focus was data privacy, with the assurance that user inputs are never used for model training or improvement, regardless of the chosen execution method. This  blog post highlights upcoming sessions of AI Sparks series that will delve further into AI development, including building AI-powered applications and sharing repositories for practical learning. The demand for AI-powered applications is exploding, but integrating  Generative AI models into your workf…  ( 29 min )
    Igniting AI Innovation: The VS Code Toolkit -First of AI Sparks Episode
    Our first session covered a range of topics, from model hosting and performance optimization to different execution options using Microsoft AI Toolkit extension for VSCode.  Attendees learned how to leverage AI models via GitHub, local execution, or Azure, each offering varying levels of performance, flexibility, and rate limits.  A key focus was data privacy, with the assurance that user inputs are never used for model training or improvement, regardless of the chosen execution method. This  blog post highlights upcoming sessions of AI Sparks series that will delve further into AI development, including building AI-powered applications and sharing repositories for practical learning. The demand for AI-powered applications is exploding, but integrating  Generative AI models into your workf…
    Igniting AI Innovation: The VS Code Toolkit -First of AI Sparks Episode
    Our first session covered a range of topics, from model hosting and performance optimization to different execution options using Microsoft AI Toolkit extension for VSCode.  Attendees learned how to leverage AI models via GitHub, local execution, or Azure, each offering varying levels of performance, flexibility, and rate limits.  A key focus was data privacy, with the assurance that user inputs are never used for model training or improvement, regardless of the chosen execution method. This  blog post highlights upcoming sessions of AI Sparks series that will delve further into AI development, including building AI-powered applications and sharing repositories for practical learning. The demand for AI-powered applications is exploding, but integrating  Generative AI models into your workf…
  • Open

    Supercharge Developer Workflows with GitHub Copilot Workspace Extensions
    David Minkovski takes you on a hands-on journey to extend GitHub Copilot beyond just being a code assistant — turning it into a true AI-powered teammate inside VS Code. Motivation It is 2025 and Software Developers are faced with an incredibly fast-paced high-tech landscape — full of innovation and automation around every corner. Even with the consideration of AI taking on many roles (and […] The post Supercharge Developer Workflows with GitHub Copilot Workspace Extensions appeared first on Developer Support.  ( 23 min )
    AI-Powered Customer Support: The Ultimate Multi-Agent System
    David Minkovski explores using Azure OpenAI and Rust to Build Intelligent and Scalable AI Systems Motivation For the past few months, I’ve had the pleasure of taking the front seat to some really fascinating and exciting AI projects, thanks to my amazing customers and colleagues at Microsoft. During these sessions, I noticed a common challenge: […] The post AI-Powered Customer Support: The Ultimate Multi-Agent System appeared first on Developer Support.  ( 21 min )
  • Open

    Way to minimize the impact of Allocation Failure issue in Cloud Service Extended Support
    Allocation Failure is a common issue for Cloud Service Extended Support. The cause of this issue is explained in our official document. The best solutions are as documented: Redeploy to a new Cloud Service or Delete Swappable Cloud Services. But for both, the downtime is unavoidable. For real user scenario, it’s almost impossible and very harmful to completely delete a Production environment deployment and recreate it as it will cause a huge impact.   This blog will mainly talk about the way to mitigate the Allocation Failure issue by switch the request to newly created Cloud Service in order to minimize the impact. P.S. The blog will use a CSES with public IP address as example to explain. If internal Load Balancer is used which means CSES doesn’t have any public IP address, this blog can…  ( 28 min )
    Way to minimize the impact of Allocation Failure issue in Cloud Service Extended Support
    Allocation Failure is a common issue for Cloud Service Extended Support. The cause of this issue is explained in our official document. The best solutions are as documented: Redeploy to a new Cloud Service or Delete Swappable Cloud Services. But for both, the downtime is unavoidable. For real user scenario, it’s almost impossible and very harmful to completely delete a Production environment deployment and recreate it as it will cause a huge impact.   This blog will mainly talk about the way to mitigate the Allocation Failure issue by switch the request to newly created Cloud Service in order to minimize the impact. P.S. The blog will use a CSES with public IP address as example to explain. If internal Load Balancer is used which means CSES doesn’t have any public IP address, this blog can…
    Way to minimize the impact of Allocation Failure issue in Cloud Service Extended Support
    Allocation Failure is a common issue for Cloud Service Extended Support. The cause of this issue is explained in our official document. The best solutions are as documented: Redeploy to a new Cloud Service or Delete Swappable Cloud Services. But for both, the downtime is unavoidable. For real user scenario, it’s almost impossible and very harmful to completely delete a Production environment deployment and recreate it as it will cause a huge impact.   This blog will mainly talk about the way to mitigate the Allocation Failure issue by switch the request to newly created Cloud Service in order to minimize the impact. P.S. The blog will use a CSES with public IP address as example to explain. If internal Load Balancer is used which means CSES doesn’t have any public IP address, this blog can…
  • Open

    Automating Developer Environments with Microsoft Dev Box and Teams Customizations Part 2
    Developers today typically work on multiple projects simultaneously, with each project requiring different tools and configuration requirements. Being able to have a dedicated environment explicitly configured to that project unlocks enhanced developer productivity and reduces outside noise, allowing developers to focus just on the work they are doing. This also extends to developers who may […] The post Automating Developer Environments with Microsoft Dev Box and Teams Customizations Part 2 appeared first on Develop from the cloud.  ( 24 min )
  • Open

    Guest Blog: Revolutionizing AI Workflows: Multi-Agent Group Chat with Copilot Agent Plugins in Microsoft Semantic Kernel
    Revolutionizing AI Workflows: Multi-Agent Group Chat with Copilot Agent Plugins in Microsoft Semantic Kernel  Copilot Agent Plugins (CAPs) are revolutionizing how developers interact with Microsoft 365 data. By transforming natural language into seamless CRUD actions using Microsoft Graph and Semantic Kernel, CAPs enable the creation of intelligent, AI-driven solutions. This sample demonstrates a multi-agent group […] The post Guest Blog: Revolutionizing AI Workflows: Multi-Agent Group Chat with Copilot Agent Plugins in Microsoft Semantic Kernel appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Implement App Service Best Practices into your Azure ARM/Bicep Templates with GitHub Copilot
    If you’re using VS code and you’re not using GitHub Copilot yet, you should definitely check it out. GitHub Copilot is your AI pair programmer tool in Visual Studio Code. Get code suggestions as you type or use Inline Chat in the editor to write code faster. Add new functionality or resolve bugs across your project with Copilot Edits or use natural language in chat to explore your codebase. You can definitely do what we’ll be doing in this blog post without GitHub Copilot and instead use a different Copilot implementation or other AI tool altogether, but using GitHub Copilot directly where you’re writing your code and building your ARM/Bicep templates is a great way to make your process quicker and more efficient. In this blog post, we’ll be discussing how to implement best practices, when…  ( 34 min )
    Implement App Service Best Practices into your Azure ARM/Bicep Templates with GitHub Copilot
    If you’re using VS code and you’re not using GitHub Copilot yet, you should definitely check it out. GitHub Copilot is your AI pair programmer tool in Visual Studio Code. Get code suggestions as you type or use Inline Chat in the editor to write code faster. Add new functionality or resolve bugs across your project with Copilot Edits or use natural language in chat to explore your codebase. You can definitely do what we’ll be doing in this blog post without GitHub Copilot and instead use a different Copilot implementation or other AI tool altogether, but using GitHub Copilot directly where you’re writing your code and building your ARM/Bicep templates is a great way to make your process quicker and more efficient. In this blog post, we’ll be discussing how to implement best practices, when…
    Implement App Service Best Practices into your Azure ARM/Bicep Templates with GitHub Copilot
    If you’re using VS code and you’re not using GitHub Copilot yet, you should definitely check it out. GitHub Copilot is your AI pair programmer tool in Visual Studio Code. Get code suggestions as you type or use Inline Chat in the editor to write code faster. Add new functionality or resolve bugs across your project with Copilot Edits or use natural language in chat to explore your codebase. You can definitely do what we’ll be doing in this blog post without GitHub Copilot and instead use a different Copilot implementation or other AI tool altogether, but using GitHub Copilot directly where you’re writing your code and building your ARM/Bicep templates is a great way to make your process quicker and more efficient. In this blog post, we’ll be discussing how to implement best practices, when…

  • Open

    Announcing the launch of Fabric Catalyst: empowering partners with scalable knowledge and best practices
    Partners, come join the Microsoft Fabric Catalyst program. A collaborative knowledge hub aimed at scaling technical expertise and providing partners with the right resources to drive high-impact solutions in Fabric.  ( 5 min )
    Streamline Data Engineering & Data Science with Copilot in Fabric
    In today’s data-driven world, it’s important to be able to read & understand your data to enable you to gain insights on which patterns/trends you want to monitor. After a data driven decision is taken, you can understand which visualizations you want to build and which machine learning models you want to train to give … Continue reading “Streamline Data Engineering & Data Science with Copilot in Fabric”  ( 7 min )
  • Open

    Unleashing the Power of AI Agents: Transforming Business Operations
    Let "Get Started with AI Agents," in this short blog I want explore the evolution, capabilities, and applications of AI agents, highlighting their potential to enhance productivity and efficiency. We take a peak into the challenges of developing AI agents and introduce powerful tools like Azure AI Foundry and Azure AI Agent Service that empower developers to build, deploy, and scale AI agents securely and efficiently.In today's rapidly evolving technological landscape, the integration of AI agents into business processes is becoming increasingly essential.  Lets delve into the transformative potential of AI agents and how they can revolutionize various aspects of our operations. We begin by exploring the evolution of LLM-based solutions, tracing the journey from no agents to sophisticated …  ( 24 min )
    Unleashing the Power of AI Agents: Transforming Business Operations
    Let "Get Started with AI Agents," in this short blog I want explore the evolution, capabilities, and applications of AI agents, highlighting their potential to enhance productivity and efficiency. We take a peak into the challenges of developing AI agents and introduce powerful tools like Azure AI Foundry and Azure AI Agent Service that empower developers to build, deploy, and scale AI agents securely and efficiently.In today's rapidly evolving technological landscape, the integration of AI agents into business processes is becoming increasingly essential.  Lets delve into the transformative potential of AI agents and how they can revolutionize various aspects of our operations. We begin by exploring the evolution of LLM-based solutions, tracing the journey from no agents to sophisticated …
    Unleashing the Power of AI Agents: Transforming Business Operations
    Let "Get Started with AI Agents," in this short blog I want explore the evolution, capabilities, and applications of AI agents, highlighting their potential to enhance productivity and efficiency. We take a peak into the challenges of developing AI agents and introduce powerful tools like Azure AI Foundry and Azure AI Agent Service that empower developers to build, deploy, and scale AI agents securely and efficiently.In today's rapidly evolving technological landscape, the integration of AI agents into business processes is becoming increasingly essential.  Lets delve into the transformative potential of AI agents and how they can revolutionize various aspects of our operations. We begin by exploring the evolution of LLM-based solutions, tracing the journey from no agents to sophisticated …
  • Open

    Build your code-first app with Azure AI Agent Service
    Welcome to the 3rd week of learning with Generative AI Level up Tuesdays series. Last week we introduced you to the world of agentic AI and we announced the release of a free AI Agents for beginners course on GitHub. Did you miss last session? Catch it up here. What's next? Next Tuesday 25th February at 9AM PST (6PM CET) we are going to host the 4th episode of the series, where we are going to further deep dive into agents, by showcasing an e-2-e sample for a real-world scenario, built with Azure AI Agents service. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 27th February at 8AM PST (5PM CET) to get all your questions answered. About thi…  ( 24 min )
    Build your code-first app with Azure AI Agent Service
    Welcome to the 3rd week of learning with Generative AI Level up Tuesdays series. Last week we introduced you to the world of agentic AI and we announced the release of a free AI Agents for beginners course on GitHub. Did you miss last session? Catch it up here. What's next? Next Tuesday 25th February at 9AM PST (6PM CET) we are going to host the 4th episode of the series, where we are going to further deep dive into agents, by showcasing an e-2-e sample for a real-world scenario, built with Azure AI Agents service. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 27th February at 8AM PST (5PM CET) to get all your questions answered. About thi…
    Build your code-first app with Azure AI Agent Service
    Welcome to the 3rd week of learning with Generative AI Level up Tuesdays series. Last week we introduced you to the world of agentic AI and we announced the release of a free AI Agents for beginners course on GitHub. Did you miss last session? Catch it up here. What's next? Next Tuesday 25th February at 9AM PST (6PM CET) we are going to host the 4th episode of the series, where we are going to further deep dive into agents, by showcasing an e-2-e sample for a real-world scenario, built with Azure AI Agents service. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 27th February at 8AM PST (5PM CET) to get all your questions answered. About thi…
  • Open

    Deploying WebJobs to Azure Container Apps
    Debjyoti Ganguly walks through the process of migrating existing WebJobs to Azure Container App Jobs, highlighting the benefits and providing practical steps to implement this migration effectively Introduction As businesses scale, the need for a robust and scalable environment for running background jobs becomes essential. Azure Container App Jobs offer a modern, scalable, and containerized […] The post Deploying WebJobs to Azure Container Apps appeared first on Developer Support.  ( 24 min )
  • Open

    AI Agents for Beginners Course: 10 Lessons teaching you how to start building AI Agents
    10 Lessons teaching everything you need to know to start building AI Agents Today we want to highlight the AI Agents For Beginners course that was released. 🔗https://github.com/microsoft/ai-agents-for-beginners/tree/main 🗃️There are 10 Lessons available today teaching you the basics of building AI Agents, as shown below Lesson Link Intro to AI Agents and Use Cases Link […] The post AI Agents for Beginners Course: 10 Lessons teaching you how to start building AI Agents appeared first on Semantic Kernel.  ( 22 min )
  • Open

    GitHub Copilot for Azure DevOps users
    Azure DevOps customers frequently ask us when GitHub Copilot will be available to them. What many don’t realize is that GitHub Copilot for Business is already accessible to all customers, including those using Azure DevOps. Even better, much of its powerful functionality is integrated into tools you already use, like Visual Studio and VS Code. […] The post GitHub Copilot for Azure DevOps users appeared first on Azure DevOps Blog.  ( 24 min )
  • Open

    Using Semantic Kernel to control a BBC Microbit
    Background The BBC Micro:bit has proved very popular in UK schools as a cheap and simple device that can be used to demonstrate basic coding, either using a building block approach or by using Python code in an editor. There are browser-based tools to allow editing of the code in either mode. The BBC Micro:bit is then connected via USB and the browser is then able to upload the code to the BBC Micro:bit. Thus making it a very interactive experience. What if the power of Azure OpenAI prompts could be combined with the simplicity of the BBC Micro:bit to allow prompts to directly program the BBC Micro:bit? This is what this blog explains. Semantic Kernel This blog assumes the reader has a basic understanding of Azure OpenAI and that there is an API that allows you to send requests and get re…  ( 38 min )
    Using Semantic Kernel to control a BBC Microbit
    Background The BBC Micro:bit has proved very popular in UK schools as a cheap and simple device that can be used to demonstrate basic coding, either using a building block approach or by using Python code in an editor. There are browser-based tools to allow editing of the code in either mode. The BBC Micro:bit is then connected via USB and the browser is then able to upload the code to the BBC Micro:bit. Thus making it a very interactive experience. What if the power of Azure OpenAI prompts could be combined with the simplicity of the BBC Micro:bit to allow prompts to directly program the BBC Micro:bit? This is what this blog explains. Semantic Kernel This blog assumes the reader has a basic understanding of Azure OpenAI and that there is an API that allows you to send requests and get re…
    Using Semantic Kernel to control a BBC Microbit
    Background The BBC Micro:bit has proved very popular in UK schools as a cheap and simple device that can be used to demonstrate basic coding, either using a building block approach or by using Python code in an editor. There are browser-based tools to allow editing of the code in either mode. The BBC Micro:bit is then connected via USB and the browser is then able to upload the code to the BBC Micro:bit. Thus making it a very interactive experience. What if the power of Azure OpenAI prompts could be combined with the simplicity of the BBC Micro:bit to allow prompts to directly program the BBC Micro:bit? This is what this blog explains. Semantic Kernel This blog assumes the reader has a basic understanding of Azure OpenAI and that there is an API that allows you to send requests and get re…
    Reproduce Deepseek R1-zero Aha Moment
    The Code We used the method of `accelerate` + `deepspeed` + `transformers` + `trl.GRPOTrainer`. Do not use `unsloth` acceleration because it does not yet support distributed training.   For faster experimental speed, I used the 0.5B and 1.5B versions of Qwen2.5.   The code to config the `deepspeed` backend in `accelerate` is in `train/r1/phi4grpo_zero2.yaml` (yes, the `deepspeed` configuration file is the same as when using `phi4`). Note that the number of `num_processes` should be set to the number of GPUs specified in the `CUDA_VISIBLE_DEVICES` environment variable minus one, because one GPU needs to be reserved for `vllm` to generate the `group relative policy`. Some configuration is as follows:   ...... distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_funct…  ( 30 min )
    Reproduce Deepseek R1-zero Aha Moment
    The Code We used the method of `accelerate` + `deepspeed` + `transformers` + `trl.GRPOTrainer`. Do not use `unsloth` acceleration because it does not yet support distributed training.   For faster experimental speed, I used the 0.5B and 1.5B versions of Qwen2.5.   The code to config the `deepspeed` backend in `accelerate` is in `train/r1/phi4grpo_zero2.yaml` (yes, the `deepspeed` configuration file is the same as when using `phi4`). Note that the number of `num_processes` should be set to the number of GPUs specified in the `CUDA_VISIBLE_DEVICES` environment variable minus one, because one GPU needs to be reserved for `vllm` to generate the `group relative policy`. Some configuration is as follows:   ...... distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_funct…
    Reproduce Deepseek R1-zero Aha Moment
    The Code We used the method of `accelerate` + `deepspeed` + `transformers` + `trl.GRPOTrainer`. Do not use `unsloth` acceleration because it does not yet support distributed training.   For faster experimental speed, I used the 0.5B and 1.5B versions of Qwen2.5.   The code to config the `deepspeed` backend in `accelerate` is in `train/r1/phi4grpo_zero2.yaml` (yes, the `deepspeed` configuration file is the same as when using `phi4`). Note that the number of `num_processes` should be set to the number of GPUs specified in the `CUDA_VISIBLE_DEVICES` environment variable minus one, because one GPU needs to be reserved for `vllm` to generate the `group relative policy`. Some configuration is as follows:   ...... distributed_type: DEEPSPEED downcast_bf16: 'no' machine_rank: 0 main_training_funct…
  • Open

    Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
    Overview With the introduction of Azure AI Foundry, integrating various AI services to streamline AI solution development and deployment of Agentic AI Workflow solutions like multi-modal, multi-model, dynamic & interactive Agents etc. has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing and robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and e…  ( 39 min )
    Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
    Overview With the introduction of Azure AI Foundry, integrating various AI services to streamline AI solution development and deployment of Agentic AI Workflow solutions like multi-modal, multi-model, dynamic & interactive Agents etc. has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing and robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and e…
    Azure AI Foundry, GitHub Copilot, Fabric and more to Analyze usage stats from Utility Invoices
    Overview With the introduction of Azure AI Foundry, integrating various AI services to streamline AI solution development and deployment of Agentic AI Workflow solutions like multi-modal, multi-model, dynamic & interactive Agents etc. has become more efficient. The platform offers a range of AI services, including Document Intelligence for extracting data from documents, natural language processing and robust machine learning capabilities, and more. Microsoft Fabric further enhances this ecosystem by providing robust data storage, analytics, and data science tools, enabling seamless data management and analysis. Additionally, Copilot and GitHub Copilot assist developers by offering AI-powered code suggestions and automating repetitive coding tasks, significantly boosting productivity and e…
  • Open

    Introducing the Adaptive Cards documentation hub and new Adaptive Cards updates
    Discover how Adaptive Cards can transform your app with rich, interactive experiences that boost productivity and streamline workflows, in our new Adaptive Cards documentation hub. The post Introducing the Adaptive Cards documentation hub and new Adaptive Cards updates appeared first on Microsoft 365 Developer Blog.  ( 23 min )

  • Open

    Reference Architecture for a High Scale Moodle Environment on Azure
    Introduction  Moodle is an open-source learning platform that was developed in 1999 by Martin Dougiamas, a computer scientist and educator from Australia. Moodle stands for Modular Object-Oriented Dynamic Learning Environment, and it is written in PHP, a popular web programming language. Moodle aims to provide educators and learners with a flexible and customizable online environment for teaching and learning, where they can create and access courses, activities, resources, and assessments. Moodle also supports collaboration, communication, and feedback among users, as well as various plugins and integrations with other systems and tools.  Moodle is widely used around the world by schools, universities, businesses, and other organizations, with over 100 million registered users and 250,000…  ( 29 min )
    Reference Architecture for a High Scale Moodle Environment on Azure
    Introduction  Moodle is an open-source learning platform that was developed in 1999 by Martin Dougiamas, a computer scientist and educator from Australia. Moodle stands for Modular Object-Oriented Dynamic Learning Environment, and it is written in PHP, a popular web programming language. Moodle aims to provide educators and learners with a flexible and customizable online environment for teaching and learning, where they can create and access courses, activities, resources, and assessments. Moodle also supports collaboration, communication, and feedback among users, as well as various plugins and integrations with other systems and tools.  Moodle is widely used around the world by schools, universities, businesses, and other organizations, with over 100 million registered users and 250,000…
    Reference Architecture for a High Scale Moodle Environment on Azure
    Introduction  Moodle is an open-source learning platform that was developed in 1999 by Martin Dougiamas, a computer scientist and educator from Australia. Moodle stands for Modular Object-Oriented Dynamic Learning Environment, and it is written in PHP, a popular web programming language. Moodle aims to provide educators and learners with a flexible and customizable online environment for teaching and learning, where they can create and access courses, activities, resources, and assessments. Moodle also supports collaboration, communication, and feedback among users, as well as various plugins and integrations with other systems and tools.  Moodle is widely used around the world by schools, universities, businesses, and other organizations, with over 100 million registered users and 250,000…
  • Open

    Introducing fabric-cicd Deployment Tool
    We’re excited to announce the preview of the Fabric CI/CD Python library! Recognizing the importance of CI/CD in our success, we decided to open source our project to share with the community. As part of the Azure Data Insights & Analytics team, an internal data engineering group focused on supporting product analytics for Azure Data, we’ve been using Fabric as the backbone of our platform for the last two years. Our team is committed to maintaining and evolving this library, and we look forward to collaborating with the community to enhance its capabilities!  ( 5 min )
    Why SQL database in Fabric is the best choice for low-code/no-code Developers
    Low-code/no-code empowers developers to create and manage databases in an Intuitive and user-friendly way. In the fast-evolving world of software development, the low-code/no-code movement has garnered substantial momentum. This paradigm shift is enabling a new wave of developers, citizen developers, to create powerful applications with minimal hand-coding. In Fabric, the heart of this revolution lies … Continue reading “Why SQL database in Fabric is the best choice for low-code/no-code Developers”  ( 7 min )
  • Open

    Enterprise Best Practices for Fine-Tuning Azure OpenAI Models
    Fine-tuning large language models (LLMs) has become increasingly practical within enterprise settings. Recent advancements in both the training procedures and serving infrastructure have dramatically lowered the barriers to creating domain-specific AI solutions. Fine-tuning not only boosts model accuracy but also reduces operational costs related to token consumption. Additionally, by customizing smaller LLM variants (for example, moving from a larger GPT‑4 model to a “GPT‑4-mini” model), teams can accelerate inference speed and more efficiently manage compute resources. In this article, we outline a Hub/Spoke architecture strategy for organizations looking to securely orchestrate fine-tuning pipelines, streamline deployment, and maintain critical compliance protocols across multiple envir…  ( 28 min )
    Enterprise Best Practices for Fine-Tuning Azure OpenAI Models
    Fine-tuning large language models (LLMs) has become increasingly practical within enterprise settings. Recent advancements in both the training procedures and serving infrastructure have dramatically lowered the barriers to creating domain-specific AI solutions. Fine-tuning not only boosts model accuracy but also reduces operational costs related to token consumption. Additionally, by customizing smaller LLM variants (for example, moving from a larger GPT‑4 model to a “GPT‑4-mini” model), teams can accelerate inference speed and more efficiently manage compute resources. In this article, we outline a Hub/Spoke architecture strategy for organizations looking to securely orchestrate fine-tuning pipelines, streamline deployment, and maintain critical compliance protocols across multiple envir…
    Enterprise Best Practices for Fine-Tuning Azure OpenAI Models
    Fine-tuning large language models (LLMs) has become increasingly practical within enterprise settings. Recent advancements in both the training procedures and serving infrastructure have dramatically lowered the barriers to creating domain-specific AI solutions. Fine-tuning not only boosts model accuracy but also reduces operational costs related to token consumption. Additionally, by customizing smaller LLM variants (for example, moving from a larger GPT‑4 model to a “GPT‑4-mini” model), teams can accelerate inference speed and more efficiently manage compute resources. In this article, we outline a Hub/Spoke architecture strategy for organizations looking to securely orchestrate fine-tuning pipelines, streamline deployment, and maintain critical compliance protocols across multiple envir…
  • Open

    Using OpenAI’s o3-mini Reasoning Model in Semantic Kernel
    OpenAI’s o3-mini is a newly released small reasoning model (launched January 2025) that delivers advanced problem-solving capabilities at a fraction of the cost of previous models. It excels in STEM domains (science, math, coding) while maintaining low latency and cost similar to the earlier o1-mini model. This model is also available as Azure OpenAI Service, emphasizing its efficiency gains and new features like […] The post Using OpenAI’s o3-mini Reasoning Model in Semantic Kernel appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
    This article describes the end-to-end instructions on how to configure Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor Workspace. Azure Private Link enables you to access Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. Using private endpoints for Managed Prometheus and your Azure Monitor workspace you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a Private Link. Conceptual overview A private endpoint is a special network interface for an Azure service in your Virtual Network…  ( 30 min )
    Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
    This article describes the end-to-end instructions on how to configure Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor Workspace. Azure Private Link enables you to access Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. Using private endpoints for Managed Prometheus and your Azure Monitor workspace you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a Private Link. Conceptual overview A private endpoint is a special network interface for an Azure service in your Virtual Network…
    Ingestion of Managed Prometheus metrics from a private AKS cluster using private link
    This article describes the end-to-end instructions on how to configure Managed Prometheus for data ingestion from your private Azure Kubernetes Service (AKS) cluster to an Azure Monitor Workspace. Azure Private Link enables you to access Azure platform as a service (PaaS) resources to your virtual network by using private endpoints. An Azure Monitor Private Link Scope (AMPLS) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. Using private endpoints for Managed Prometheus and your Azure Monitor workspace you can allow clients on a virtual network (VNet) to securely ingest Prometheus metrics over a Private Link. Conceptual overview A private endpoint is a special network interface for an Azure service in your Virtual Network…

  • Open

    Data Storage in Azure OpenAI Service
    Data Stored at Rest by Default Azure OpenAI does store certain data at rest by default when you use specific features (continue reading)  In general, the base models are stateless and do not retain your prompts or completions from standard API calls (they aren't used to train or improve the base models)​. However, some optional service features will persist data in your Azure OpenAI resource. For example, if you upload files for fine-tuning, use the vector store, or enable stateful features like Assistants API Threads or Stored Completions, that data will be stored at rest by the service​. This means content such as training datasets, embeddings, conversation history, or output logs from those features are saved within your Azure environment. Importantly, this storage is within your own Az…  ( 52 min )
    Data Storage in Azure OpenAI Service
    Data Stored at Rest by Default Azure OpenAI does store certain data at rest by default when you use specific features (continue reading)  In general, the base models are stateless and do not retain your prompts or completions from standard API calls (they aren't used to train or improve the base models)​. However, some optional service features will persist data in your Azure OpenAI resource. For example, if you upload files for fine-tuning, use the vector store, or enable stateful features like Assistants API Threads or Stored Completions, that data will be stored at rest by the service​. This means content such as training datasets, embeddings, conversation history, or output logs from those features are saved within your Azure environment. Importantly, this storage is within your own Az…
    Data Storage in Azure OpenAI Service
    Data Stored at Rest by Default Azure OpenAI does store certain data at rest by default when you use specific features (continue reading)  In general, the base models are stateless and do not retain your prompts or completions from standard API calls (they aren't used to train or improve the base models)​. However, some optional service features will persist data in your Azure OpenAI resource. For example, if you upload files for fine-tuning, use the vector store, or enable stateful features like Assistants API Threads or Stored Completions, that data will be stored at rest by the service​. This means content such as training datasets, embeddings, conversation history, or output logs from those features are saved within your Azure environment. Importantly, this storage is within your own Az…
  • Open

    Container Apps Pull Image using an Azure Service Principal
    This post will go over how to use a service principal to pull an image to your Container App.  ( 3 min )
  • Open

    Hack Together: The Microsoft Data + AI Kenya Hack
    We’re excited to announce, “Hack Together: The Microsoft Data + AI Kenya Hack,” an online hackathon happening from March 12th to April 11th. This event is open to everyone in Kenya who is eager to explore the endless possibilities of leveraging Microsoft Fabric to build innovative AI solutions. Whether you’re just starting out, an experienced … Continue reading “Hack Together: The Microsoft Data + AI Kenya Hack”  ( 7 min )
    Announcing RTI End-to-End Sample
    A typical RTI scenario in Fabric involves the following: As a demonstration of this end-to-end scenario, we’ve created Fabric Notebook that can deploy this in a matter of minutes. This notebook is taking advantage of the existing Microsoft Fabric APIs along with the existing sample datasets in Eventstream. After running the notebook you will have … Continue reading “Announcing RTI End-to-End Sample”  ( 5 min )
    What’s new in OneLake catalog: Data governance and more
    OneLake catalog is the central hub for discovering and managing Fabric content. Whether you’re a business analyst searching for the right datasets, a data engineer managing structured and unstructured data, or a BI consumer looking for curated insights, the OneLake catalog seamlessly connects you to the right content. Enhanced exploration in OneLake Catalog OneLake catalog continues to evolve to deliver richer, … Continue reading “What’s new in OneLake catalog: Data governance and more”  ( 7 min )
  • Open

    The Launch of "AI Agents for Beginners": Your Gateway to Building Intelligent Systems
    🌱 Getting Started Each lesson covers fundamental aspects of building AI Agents. Whether you're a novice or have some experience, you'll find valuable insights and practical knowledge. We also support multiple languages, so you can learn in your preferred language. To see the available languages, click here. If this is your first time working with Generative AI models, we highly recommend our "Generative AI For Beginners" course, which includes 21 lessons on building with GenAI. Remember to star (🌟) this repository and fork it to run the code! 📋 What You Need The course includes code examples that you can find in the code_samples folder. Feel free to fork this repository to create your own copy. The exercises utilize Azure AI Foundry and GitHub Model Catalogs for interacting with Languag…  ( 22 min )
    The Launch of "AI Agents for Beginners": Your Gateway to Building Intelligent Systems
    🌱 Getting Started Each lesson covers fundamental aspects of building AI Agents. Whether you're a novice or have some experience, you'll find valuable insights and practical knowledge. We also support multiple languages, so you can learn in your preferred language. To see the available languages, click here. If this is your first time working with Generative AI models, we highly recommend our "Generative AI For Beginners" course, which includes 21 lessons on building with GenAI. Remember to star (🌟) this repository and fork it to run the code! 📋 What You Need The course includes code examples that you can find in the code_samples folder. Feel free to fork this repository to create your own copy. The exercises utilize Azure AI Foundry and GitHub Model Catalogs for interacting with Languag…
    The Launch of "AI Agents for Beginners": Your Gateway to Building Intelligent Systems
    🌱 Getting Started Each lesson covers fundamental aspects of building AI Agents. Whether you're a novice or have some experience, you'll find valuable insights and practical knowledge. We also support multiple languages, so you can learn in your preferred language. To see the available languages, click here. If this is your first time working with Generative AI models, we highly recommend our "Generative AI For Beginners" course, which includes 21 lessons on building with GenAI. Remember to star (🌟) this repository and fork it to run the code! 📋 What You Need The course includes code examples that you can find in the code_samples folder. Feel free to fork this repository to create your own copy. The exercises utilize Azure AI Foundry and GitHub Model Catalogs for interacting with Languag…
  • Open

    Node 22 now available on Azure App Service
    We are happy to announce that App Service now supports apps targeting Node 22 across all public regions on Linux App Service Plans.  ( 2 min )

  • Open

    Distillation of Phi-4 on DeepSeek R1: SFT and GRPO
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/GRPO-Phi-4-Training    https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/SLM-DeepSeek-R1   Phi-4 thinks as DeepSeek-R1 I tried fine-tuning Microsoft's Phi-4 model using the open-source R1 dataset. Below, I'll share my steps with everyone. Please click below pictures to see my demo video on Youtube: https://www.youtube.com/watch?v=9CVKR0YcdKU Dataset Used Why Choose This Dataset? I used the reasoning-deepseek subset from the cognitivecomputations/dolphin-r1 dataset. This dataset was generated by the large model DeepSeek-R1 and contains 30,00…  ( 65 min )
    Distillation of Phi-4 on DeepSeek R1: SFT and GRPO
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/GRPO-Phi-4-Training    https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/SLM-DeepSeek-R1   Phi-4 thinks as DeepSeek-R1 I tried fine-tuning Microsoft's Phi-4 model using the open-source R1 dataset. Below, I'll share my steps with everyone. Please click below pictures to see my demo video on Youtube: https://www.youtube.com/watch?v=9CVKR0YcdKU Dataset Used Why Choose This Dataset? I used the reasoning-deepseek subset from the cognitivecomputations/dolphin-r1 dataset. This dataset was generated by the large model DeepSeek-R1 and contains 30,00…
    Distillation of Phi-4 on DeepSeek R1: SFT and GRPO
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/GRPO-Phi-4-Training    https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/SLM-DeepSeek-R1   Phi-4 thinks as DeepSeek-R1 I tried fine-tuning Microsoft's Phi-4 model using the open-source R1 dataset. Below, I'll share my steps with everyone. Please click below pictures to see my demo video on Youtube: https://www.youtube.com/watch?v=9CVKR0YcdKU Dataset Used Why Choose This Dataset? I used the reasoning-deepseek subset from the cognitivecomputations/dolphin-r1 dataset. This dataset was generated by the large model DeepSeek-R1 and contains 30,00…

  • Open

    Generative AI for Beginners - .NET
    Introducing a new hands-on course designed for .NET developers who want to explore the world of Generative AI. 👉Generative AI for Beginners - .NET Our focus in this course is code-first, to teach you what you need to know to be confident building .NET GenAI applications today. What is this course about? As generative AI becomes more accessible, it’s essential for developers to understand how to use it responsibly and effectively. To fill this need, we created a course that covers the basics of Generative AI for the .NET ecosystem, including how to set up your .NET environment, core techniques, practical samples, and responsible use of AI. You’ll learn how to create real-world .NET AI-based apps using a variety of libraries and tools including Microsoft Extensions for AI, GitHub Models and…  ( 23 min )
    Generative AI for Beginners - .NET
    Introducing a new hands-on course designed for .NET developers who want to explore the world of Generative AI. 👉Generative AI for Beginners - .NET Our focus in this course is code-first, to teach you what you need to know to be confident building .NET GenAI applications today. What is this course about? As generative AI becomes more accessible, it’s essential for developers to understand how to use it responsibly and effectively. To fill this need, we created a course that covers the basics of Generative AI for the .NET ecosystem, including how to set up your .NET environment, core techniques, practical samples, and responsible use of AI. You’ll learn how to create real-world .NET AI-based apps using a variety of libraries and tools including Microsoft Extensions for AI, GitHub Models and…
    Generative AI for Beginners - .NET
    Introducing a new hands-on course designed for .NET developers who want to explore the world of Generative AI. 👉Generative AI for Beginners - .NET Our focus in this course is code-first, to teach you what you need to know to be confident building .NET GenAI applications today. What is this course about? As generative AI becomes more accessible, it’s essential for developers to understand how to use it responsibly and effectively. To fill this need, we created a course that covers the basics of Generative AI for the .NET ecosystem, including how to set up your .NET environment, core techniques, practical samples, and responsible use of AI. You’ll learn how to create real-world .NET AI-based apps using a variety of libraries and tools including Microsoft Extensions for AI, GitHub Models and…
  • Open

    [Azure Notification Hub] Apple Push Notification service server certificate update
    Customers are enquiring about the Apple's announcement (link below) that the Certification Authority (CA) for Apple Push Notification service (APNs) is changing. Apple Push Notification service server certificate update Content from the link: There are no additional steps/actions required from customers for this change. The certificates will be updated by Notification Hub product team to all backend servers.  ( 18 min )
    [Azure Notification Hub] Apple Push Notification service server certificate update
    Customers are enquiring about the Apple's announcement (link below) that the Certification Authority (CA) for Apple Push Notification service (APNs) is changing. Apple Push Notification service server certificate update Content from the link: There are no additional steps/actions required from customers for this change. The certificates will be updated by Notification Hub product team to all backend servers.
    [Azure Notification Hub] Apple Push Notification service server certificate update
    Customers are enquiring about the Apple's announcement (link below) that the Certification Authority (CA) for Apple Push Notification service (APNs) is changing. Apple Push Notification service server certificate update Content from the link: There are no additional steps/actions required from customers for this change. The certificates will be updated by Notification Hub product team to all backend servers.

  • Open

    Prompt Engineering with GitHub Copilot and JavaScript
    In the ever-evolving world of technology, prompt engineering is becoming a key skill for developers leveraging artificial intelligence tools like GitHub Copilot. In this session, we’ll explore how GitHub Copilot can enhance productivity and efficiency in software development. This article compiles complementary resources from the first session of the bootcamp. Before diving in, here’s a quick reminder of the resources available to all bootcamp participants: Register and receive recordings of the bootcamp classes GitHub Copilot Challenge: participate in the learning challenge and earn a digital badge FREE GitHub Copilot in Visual Studio Code Join the Azure AI Community on Discord Use the session discount coupon to get a GitHub certification In this session, participants were introduced t…  ( 26 min )
    Prompt Engineering with GitHub Copilot and JavaScript
    In the ever-evolving world of technology, prompt engineering is becoming a key skill for developers leveraging artificial intelligence tools like GitHub Copilot. In this session, we’ll explore how GitHub Copilot can enhance productivity and efficiency in software development. This article compiles complementary resources from the first session of the bootcamp. Before diving in, here’s a quick reminder of the resources available to all bootcamp participants: Register and receive recordings of the bootcamp classes GitHub Copilot Challenge: participate in the learning challenge and earn a digital badge FREE GitHub Copilot in Visual Studio Code Join the Azure AI Community on Discord Use the session discount coupon to get a GitHub certification In this session, participants were introduced t…
    Prompt Engineering with GitHub Copilot and JavaScript
    In the ever-evolving world of technology, prompt engineering is becoming a key skill for developers leveraging artificial intelligence tools like GitHub Copilot. In this session, we’ll explore how GitHub Copilot can enhance productivity and efficiency in software development. This article compiles complementary resources from the first session of the bootcamp. Before diving in, here’s a quick reminder of the resources available to all bootcamp participants: Register and receive recordings of the bootcamp classes GitHub Copilot Challenge: participate in the learning challenge and earn a digital badge FREE GitHub Copilot in Visual Studio Code Join the Azure AI Community on Discord Use the session discount coupon to get a GitHub certification In this session, participants were introduced t…
    Ingeniería de prompts con GitHub Copilot: Potenciando el desarrollo de software con IA
    GitHub Copilot es una herramienta de inteligencia artificial que ayuda a los desarrolladores a escribir código de manera más rápida y eficiente. Durante el GitHub Copilot Bootcamp LATAM (accede a las grabaciones), reunimos contenidos prácticos para enseñar cómo dominar la ingeniería de prompts y explorar las funcionalidades de Copilot, aumentando tu productividad en tareas diarias, como crear rutas para APIs, automatizar pruebas e integrar pipelines con GitHub Actions. GitHub Copilot es un asistente de programación basado en IA desarrollado por GitHub en colaboración con OpenAI. Utiliza modelos avanzados de lenguaje, como GPT-4, para ofrecer sugerencias contextuales directamente en el editor del desarrollador. Ya sea completando funciones, generando fragmentos de código o explicando bloqu…  ( 27 min )
    Ingeniería de prompts con GitHub Copilot: Potenciando el desarrollo de software con IA
    GitHub Copilot es una herramienta de inteligencia artificial que ayuda a los desarrolladores a escribir código de manera más rápida y eficiente. Durante el GitHub Copilot Bootcamp LATAM (accede a las grabaciones), reunimos contenidos prácticos para enseñar cómo dominar la ingeniería de prompts y explorar las funcionalidades de Copilot, aumentando tu productividad en tareas diarias, como crear rutas para APIs, automatizar pruebas e integrar pipelines con GitHub Actions. GitHub Copilot es un asistente de programación basado en IA desarrollado por GitHub en colaboración con OpenAI. Utiliza modelos avanzados de lenguaje, como GPT-4, para ofrecer sugerencias contextuales directamente en el editor del desarrollador. Ya sea completando funciones, generando fragmentos de código o explicando bloqu…
    Ingeniería de prompts con GitHub Copilot y Python
    GitHub Copilot está transformando la forma en que los desarrolladores escriben código, optimizando el flujo de trabajo y aumentando la productividad con asistencia basada en inteligencia artificial. En la primera sesión del GitHub Copilot Bootcamp, exploramos cómo aprovechar esta herramienta para mejorar la calidad del código, refactorizar proyectos y automatizar tareas repetitivas. Este artículo reúne recursos complementarios de la sesión, junto con un resumen de los conceptos clave abordados. Antes de explorar el contenido, aquí tienes un recordatorio rápido de los recursos disponibles para todos los participantes del bootcamp: Regístrate y recibe las grabaciones de las clases del bootcamp Desafío de GitHub Copilot: participa en el desafío de aprendizaje y gana una insignia digital GitH…
    Ingeniería de prompts con GitHub Copilot y Python
    GitHub Copilot está transformando la forma en que los desarrolladores escriben código, optimizando el flujo de trabajo y aumentando la productividad con asistencia basada en inteligencia artificial. En la primera sesión del GitHub Copilot Bootcamp, exploramos cómo aprovechar esta herramienta para mejorar la calidad del código, refactorizar proyectos y automatizar tareas repetitivas. Este artículo reúne recursos complementarios de la sesión, junto con un resumen de los conceptos clave abordados. Antes de explorar el contenido, aquí tienes un recordatorio rápido de los recursos disponibles para todos los participantes del bootcamp: Regístrate y recibe las grabaciones de las clases del bootcamp Desafío de GitHub Copilot: participa en el desafío de aprendizaje y gana una insignia digital GitH…
    Getting started with AI Agents in Azure
    This week is the second week of the Generative AI Level Up Tuesdays Reactor series, as well as the accompanying MS Learn challenge! In the session we hosted last Tuesday, we moved from theory to practice, by showcasing with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Did you miss it? Catch it up here. What's next? Next Tuesday 18th February at 9AM PST (6PM CET) we are going to host the 3rd episode of the series, where we are going to demystify the world of AI agents and provide an overview of the agents framework in the Microsoft ecosystem. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 19th Feb…  ( 23 min )
    Getting started with AI Agents in Azure
    This week is the second week of the Generative AI Level Up Tuesdays Reactor series, as well as the accompanying MS Learn challenge! In the session we hosted last Tuesday, we moved from theory to practice, by showcasing with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Did you miss it? Catch it up here. What's next? Next Tuesday 18th February at 9AM PST (6PM CET) we are going to host the 3rd episode of the series, where we are going to demystify the world of AI agents and provide an overview of the agents framework in the Microsoft ecosystem. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 19th Feb…
    Getting started with AI Agents in Azure
    This week is the second week of the Generative AI Level Up Tuesdays Reactor series, as well as the accompanying MS Learn challenge! In the session we hosted last Tuesday, we moved from theory to practice, by showcasing with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Did you miss it? Catch it up here. What's next? Next Tuesday 18th February at 9AM PST (6PM CET) we are going to host the 3rd episode of the series, where we are going to demystify the world of AI agents and provide an overview of the agents framework in the Microsoft ecosystem. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 19th Feb…
  • Open

    Prototyping Agents with visual tools
    Introduction               Agents are gaining wide adoption in the emerging generative AI applications for organizations, transforming the way we interact with technology. Agent development using visual tools provides a low code / no code approach in prototyping agentic behavior. They help in creating preliminary versions of agentic applications, enabling development, testing, refining the functionalities before full-scale deployment. Prototyping tools for agents typically have the below features: Visual tools that allow for rapid creation, management and interaction with agentic applications Enable users to define and modify agents and multi-agent workflows through a point-and-click, drag-and-drop interface The interface should make it easier to set parameters for agents within a user-fr…  ( 41 min )
    Prototyping Agents with visual tools
    Introduction               Agents are gaining wide adoption in the emerging generative AI applications for organizations, transforming the way we interact with technology. Agent development using visual tools provides a low code / no code approach in prototyping agentic behavior. They help in creating preliminary versions of agentic applications, enabling development, testing, refining the functionalities before full-scale deployment. Prototyping tools for agents typically have the below features: Visual tools that allow for rapid creation, management and interaction with agentic applications Enable users to define and modify agents and multi-agent workflows through a point-and-click, drag-and-drop interface The interface should make it easier to set parameters for agents within a user-fr…
    Prototyping Agents with visual tools
    Introduction               Agents are gaining wide adoption in the emerging generative AI applications for organizations, transforming the way we interact with technology. Agent development using visual tools provides a low code / no code approach in prototyping agentic behavior. They help in creating preliminary versions of agentic applications, enabling development, testing, refining the functionalities before full-scale deployment. Prototyping tools for agents typically have the below features: Visual tools that allow for rapid creation, management and interaction with agentic applications Enable users to define and modify agents and multi-agent workflows through a point-and-click, drag-and-drop interface The interface should make it easier to set parameters for agents within a user-fr…
    Numonix supercharges their value to clients with multimodality using Azure AI Content Understanding
    Numonix is a compliance recording company that specializes in capturing, analyzing, and managing customer interactions across various modalities. They’re committed to revolutionizing how businesses extract value from customer interactions, offering solutions that empower businesses to make informed decisions while accelerating revenue growth, enhancing customer experiences, and maintaining regulatory compliance. By leveraging state-of-the-art technology, they provide powerful tools that help organizations ensure compliance, mitigate risk, and discover actionable insights from communications data.  Numonix has many call center clients for whom regulatory compliance is crucial. They needed a way to help their clients monitor calls and customer interactions, solve call-compliance issues, and …  ( 29 min )
    Numonix supercharges their value to clients with multimodality using Azure AI Content Understanding
    Numonix is a compliance recording company that specializes in capturing, analyzing, and managing customer interactions across various modalities. They’re committed to revolutionizing how businesses extract value from customer interactions, offering solutions that empower businesses to make informed decisions while accelerating revenue growth, enhancing customer experiences, and maintaining regulatory compliance. By leveraging state-of-the-art technology, they provide powerful tools that help organizations ensure compliance, mitigate risk, and discover actionable insights from communications data.  Numonix has many call center clients for whom regulatory compliance is crucial. They needed a way to help their clients monitor calls and customer interactions, solve call-compliance issues, and …
    Numonix supercharges their value to clients with multimodality using Azure AI Content Understanding
    Numonix is a compliance recording company that specializes in capturing, analyzing, and managing customer interactions across various modalities. They’re committed to revolutionizing how businesses extract value from customer interactions, offering solutions that empower businesses to make informed decisions while accelerating revenue growth, enhancing customer experiences, and maintaining regulatory compliance. By leveraging state-of-the-art technology, they provide powerful tools that help organizations ensure compliance, mitigate risk, and discover actionable insights from communications data.  Numonix has many call center clients for whom regulatory compliance is crucial. They needed a way to help their clients monitor calls and customer interactions, solve call-compliance issues, and …
  • Open

    Using Advanced Reasoning Model on EdgeAI Part 2 - Evaluate local models using AITK for VSCode
    Quantizing a generative AI model assists in ease of deployment of edge devices. For development purposes we would like to have 1) quantization 2) optimization and 3) evaluation of models all in one tool. We recommend AI Toolkit for Visual Studio Code (AITK) for as an all-in-one tool for developers experimenting with GenAI models. This is a lightweight open-source tool that covers model selection, fine-tuning, deployment, application development, batch data for testing, and evaluation. In Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance, we performed quantization and format conversion for Phi-4 and DeepSeek-R1-Distill-Qwen-1.5B. In This blog will evaluate Phi-4-14B-ONNX-INT4-GPU and DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU using AITK. About model qua…  ( 29 min )
    Using Advanced Reasoning Model on EdgeAI Part 2 - Evaluate local models using AITK for VSCode
    Quantizing a generative AI model assists in ease of deployment of edge devices. For development purposes we would like to have 1) quantization 2) optimization and 3) evaluation of models all in one tool. We recommend AI Toolkit for Visual Studio Code (AITK) for as an all-in-one tool for developers experimenting with GenAI models. This is a lightweight open-source tool that covers model selection, fine-tuning, deployment, application development, batch data for testing, and evaluation. In Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance, we performed quantization and format conversion for Phi-4 and DeepSeek-R1-Distill-Qwen-1.5B. In This blog will evaluate Phi-4-14B-ONNX-INT4-GPU and DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU using AITK. About model qua…
    Using Advanced Reasoning Model on EdgeAI Part 2 - Evaluate local models using AITK for VSCode
    Quantizing a generative AI model assists in ease of deployment of edge devices. For development purposes we would like to have 1) quantization 2) optimization and 3) evaluation of models all in one tool. We recommend AI Toolkit for Visual Studio Code (AITK) for as an all-in-one tool for developers experimenting with GenAI models. This is a lightweight open-source tool that covers model selection, fine-tuning, deployment, application development, batch data for testing, and evaluation. In Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance, we performed quantization and format conversion for Phi-4 and DeepSeek-R1-Distill-Qwen-1.5B. In This blog will evaluate Phi-4-14B-ONNX-INT4-GPU and DeepSeek-R1-Distill-Qwen-14B-ONNX-INT4-GPU using AITK. About model qua…

  • Open

    GenAI Search for Retail
    Integrating generative AI into e-commerce search systems can significantly enhance user experience by refining query understanding and delivering more relevant results. A practical implementation involves deploying a query expansion mechanism that utilizes AI to interpret and broaden user search inputs. Implementation Overview The GenAISearchQueryExpander repository provides a .NET 8.0 application designed for this purpose. It leverages Azure Functions and Azure OpenAI to expand search queries, thereby improving the retrieval of pertinent products. Key Features Azure Functions Integration: Utilizes serverless computing to handle search query processing efficiently. AI-Powered Query Expansion: Employs Azure OpenAI to generate expanded versions of user queries, capturing a broader range of …  ( 22 min )
    GenAI Search for Retail
    Integrating generative AI into e-commerce search systems can significantly enhance user experience by refining query understanding and delivering more relevant results. A practical implementation involves deploying a query expansion mechanism that utilizes AI to interpret and broaden user search inputs. Implementation Overview The GenAISearchQueryExpander repository provides a .NET 8.0 application designed for this purpose. It leverages Azure Functions and Azure OpenAI to expand search queries, thereby improving the retrieval of pertinent products. Key Features Azure Functions Integration: Utilizes serverless computing to handle search query processing efficiently. AI-Powered Query Expansion: Employs Azure OpenAI to generate expanded versions of user queries, capturing a broader range of …
    GenAI Search for Retail
    Integrating generative AI into e-commerce search systems can significantly enhance user experience by refining query understanding and delivering more relevant results. A practical implementation involves deploying a query expansion mechanism that utilizes AI to interpret and broaden user search inputs. Implementation Overview The GenAISearchQueryExpander repository provides a .NET 8.0 application designed for this purpose. It leverages Azure Functions and Azure OpenAI to expand search queries, thereby improving the retrieval of pertinent products. Key Features Azure Functions Integration: Utilizes serverless computing to handle search query processing efficiently. AI-Powered Query Expansion: Employs Azure OpenAI to generate expanded versions of user queries, capturing a broader range of …
  • Open

    How can I decide which protection method to use to protect my sensitive data in Fabric?
    Comparison of methods to enforce access in Fabric, using Microsoft Purview MIP Protection Policies and Data Loss Prevention Policies Restrict Access action  ( 7 min )
    Private ADLS Gen2 access made easy with OneLake Shortcuts: a step-by-step guide
    Microsoft Fabric provides the capability to streamline data access through OneLake Shortcuts. OneLake Shortcuts can significantly reduce data sprawl, enhances data interoperability and accessibility, promotes self-service without the need for ETL/ELT processes, and can improve Power BI semantic model performance with Direct Lake mode. A common question from our customers, particularly those in regulated industries, … Continue reading “Private ADLS Gen2 access made easy with OneLake Shortcuts: a step-by-step guide”  ( 14 min )
  • Open

    Fine-Tuning DeepSeek-R1-Distill-Llama-8B with PyTorch FSDP, QLoRA on Azure Machine Learning
    Large Language Models (LLMs) have demonstrated remarkable capabilities across various industries, revolutionizing how we approach tasks like legal document summarization, creative content generation, and customer sentiment analysis. However, adapting these general-purpose models to excel in specific domains often requires fine-tuning. This is where fine-tuning comes in, allowing us to tailor LLMs to meet unique requirements and improve their performance on targeted tasks. In this blog post, we'll explore the process of fine-tuning the DeepSeek-R1-Distill-Llama-8B model, highlighting the advantages of using PyTorch Fully Sharded Data Parallel (FSDP) and Quantization-Aware Low-Rank Adaptation (QLoRA) techniques in conjunction with the Azure Machine Learning platform. Why Fine-Tuning Matters …  ( 51 min )
    Fine-Tuning DeepSeek-R1-Distill-Llama-8B with PyTorch FSDP, QLoRA on Azure Machine Learning
    Large Language Models (LLMs) have demonstrated remarkable capabilities across various industries, revolutionizing how we approach tasks like legal document summarization, creative content generation, and customer sentiment analysis. However, adapting these general-purpose models to excel in specific domains often requires fine-tuning. This is where fine-tuning comes in, allowing us to tailor LLMs to meet unique requirements and improve their performance on targeted tasks. In this blog post, we'll explore the process of fine-tuning the DeepSeek-R1-Distill-Llama-8B model, highlighting the advantages of using PyTorch Fully Sharded Data Parallel (FSDP) and Quantization-Aware Low-Rank Adaptation (QLoRA) techniques in conjunction with the Azure Machine Learning platform. Why Fine-Tuning Matters …
    Fine-Tuning DeepSeek-R1-Distill-Llama-8B with PyTorch FSDP, QLoRA on Azure Machine Learning
    Large Language Models (LLMs) have demonstrated remarkable capabilities across various industries, revolutionizing how we approach tasks like legal document summarization, creative content generation, and customer sentiment analysis. However, adapting these general-purpose models to excel in specific domains often requires fine-tuning. This is where fine-tuning comes in, allowing us to tailor LLMs to meet unique requirements and improve their performance on targeted tasks. In this blog post, we'll explore the process of fine-tuning the DeepSeek-R1-Distill-Llama-8B model, highlighting the advantages of using PyTorch Fully Sharded Data Parallel (FSDP) and Quantization-Aware Low-Rank Adaptation (QLoRA) techniques in conjunction with the Azure Machine Learning platform. Why Fine-Tuning Matters …
    Introducing Stability AI Generative Visual Models to Azure AI Foundry
    Stable Diffusion 3.5 Large  At 8.1 billion parameters, Stable Diffusion 3.5 is the most powerful model in the Stable Diffusion family. Our collaboration with Stability AI brings this sophisticated model to our catalog, introducing image-to-image capabilities to Azure AI Foundry for the first time. The model delivers exceptional text-to-image generation with superior quality and prompt adherence, making it ideal for professional use cases at 1 megapixel resolution.     Images above were created using Stable Diffusion 3.5 Large Stable Diffusion 3.5 Large produces diverse outputs, creating images that are representative of the world, with different skin tones and features, all without requiring extensive prompting.     Images above were created using Stable Diffusion 3.5 Large According to …  ( 31 min )
    Introducing Stability AI Generative Visual Models to Azure AI Foundry
    Stable Diffusion 3.5 Large  At 8.1 billion parameters, Stable Diffusion 3.5 is the most powerful model in the Stable Diffusion family. Our collaboration with Stability AI brings this sophisticated model to our catalog, introducing image-to-image capabilities to Azure AI Foundry for the first time. The model delivers exceptional text-to-image generation with superior quality and prompt adherence, making it ideal for professional use cases at 1 megapixel resolution.     Images above were created using Stable Diffusion 3.5 Large Stable Diffusion 3.5 Large produces diverse outputs, creating images that are representative of the world, with different skin tones and features, all without requiring extensive prompting.     Images above were created using Stable Diffusion 3.5 Large According to …
    Introducing Stability AI Generative Visual Models to Azure AI Foundry
    Stable Diffusion 3.5 Large  At 8.1 billion parameters, Stable Diffusion 3.5 is the most powerful model in the Stable Diffusion family. Our collaboration with Stability AI brings this sophisticated model to our catalog, introducing image-to-image capabilities to Azure AI Foundry for the first time. The model delivers exceptional text-to-image generation with superior quality and prompt adherence, making it ideal for professional use cases at 1 megapixel resolution.     Images above were created using Stable Diffusion 3.5 Large Stable Diffusion 3.5 Large produces diverse outputs, creating images that are representative of the world, with different skin tones and features, all without requiring extensive prompting.     Images above were created using Stable Diffusion 3.5 Large According to …
  • Open

    GitHub Copilot for Eclipse: Code Completion Now in Public Preview
    We are excited to announce the Public Preview of GitHub Copilot for Eclipse. As part of the broader GitHub Copilot family, which enhances productivity in various IDEs, this latest integration ensures that developers using Eclipse can benefit from AI-assisted coding like never before. GitHub Copilot is an AI-powered code assistant designed to streamline software development […] The post GitHub Copilot for Eclipse: Code Completion Now in Public Preview appeared first on Microsoft for Java Developers.  ( 24 min )

  • Open

    Govern your data in SQL database in Microsoft Fabric with protection policies in Microsoft Purview
    Microsoft Purview’s protection policies help you safeguard sensitive data in Microsoft Fabric items, including SQL databases. In this article, we’ll explain how these policies override Microsoft Fabric item permissions for users, apps, and groups, limiting their actions within the database. If you are not familiar with how item permissions and workspace roles work in SQL … Continue reading “Govern your data in SQL database in Microsoft Fabric with protection policies in Microsoft Purview”  ( 8 min )
    Fabric OPENROWSET function (Preview)
    We are excited to announce the preview of the OPENROWSET function in the Fabric Data Warehouse and SQL endpoint for Lakehouse. This powerful function allows you to read the content of external files stored in Azure Data Lake Storage and Azure Blob Storage without the need to ingest them into the Data Warehouse. With the … Continue reading “Fabric OPENROWSET function (Preview)”  ( 8 min )
    BULK INSERT in Fabric Data Warehouse (Preview)
    We are excited to announce the preview of the BULK INSERT statement in Fabric Data Warehouse, which allows you to load CSV files from Azure Data Lake Storage and Azure Blob Storage. Refer to the example of the traditional BULK INSERT statement commonly used by SQL Server or Azure SQL users to import files from … Continue reading “BULK INSERT in Fabric Data Warehouse (Preview)”  ( 7 min )
  • Open

    Adopting Hybrid Search with Azure Cosmos DB
    In today's data-driven landscape, the ability to efficiently search and retrieve information is paramount. Azure Cosmos DB introduces Hybrid Search, a powerful feature that combines the strengths of both Vector and Full-Text Search. This hybrid approach facilitates the creation of more nuanced and effective search experiences by integrating semantic understanding with traditional keyword-based search. In this blog post, we'll delve into the intricacies of Hybrid Search in Azure Cosmos DB, exploring its features, enabling mechanisms, and practical use cases. Introduction to Hybrid Search  Hybrid Search in Azure Cosmos DB seamlessly combines Vector Search and Full-Text Search to deliver highly relevant search results. By leveraging semantic understanding alongside keyword-based search, Hybri…  ( 40 min )
    Adopting Hybrid Search with Azure Cosmos DB
    In today's data-driven landscape, the ability to efficiently search and retrieve information is paramount. Azure Cosmos DB introduces Hybrid Search, a powerful feature that combines the strengths of both Vector and Full-Text Search. This hybrid approach facilitates the creation of more nuanced and effective search experiences by integrating semantic understanding with traditional keyword-based search. In this blog post, we'll delve into the intricacies of Hybrid Search in Azure Cosmos DB, exploring its features, enabling mechanisms, and practical use cases. Introduction to Hybrid Search  Hybrid Search in Azure Cosmos DB seamlessly combines Vector Search and Full-Text Search to deliver highly relevant search results. By leveraging semantic understanding alongside keyword-based search, Hybri…
    Adopting Hybrid Search with Azure Cosmos DB
    In today's data-driven landscape, the ability to efficiently search and retrieve information is paramount. Azure Cosmos DB introduces Hybrid Search, a powerful feature that combines the strengths of both Vector and Full-Text Search. This hybrid approach facilitates the creation of more nuanced and effective search experiences by integrating semantic understanding with traditional keyword-based search. In this blog post, we'll delve into the intricacies of Hybrid Search in Azure Cosmos DB, exploring its features, enabling mechanisms, and practical use cases. Introduction to Hybrid Search  Hybrid Search in Azure Cosmos DB seamlessly combines Vector Search and Full-Text Search to deliver highly relevant search results. By leveraging semantic understanding alongside keyword-based search, Hybri…
  • Open

    Unified manifest support for Word, Excel, and PowerPoint is available for preview
    The unified manifest is a key component of the Microsoft 365 ecosystem providing a single model for distributing and managing Teams apps, Outlook add-ins, as well as Copilot agents which work across Microsoft 365 apps.  We've released it for Outlook, and now it is available for Word, PowerPoint, and Excel. The post Unified manifest support for Word, Excel, and PowerPoint is available for preview appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Fortify your cloud and AI security with Microsoft Defender and Azure skilling plans
    Securing hybrid and multicloud environments is more complex than ever. While concerns persist, modern cloud providers implement advanced security features often surpass on-premises solutions, making cloud environments increasingly secure and reliable for critical workloads. Microsoft offers a comprehensive suite of security solutions, and Microsoft Defender for Cloud stands out as a cornerstone technology designed to protect cloud and AI workloads, mitigating emerging challenges throughout the migration process, AI application development, and ongoing operations.   To help you get familiar with infrastructure security fundamentals, Microsoft also offers definitive learning pathways for upskilling your entire IT team on cloud and AI migration and security, as well as our new Azure Essential…  ( 32 min )
    Fortify your cloud and AI security with Microsoft Defender and Azure skilling plans
    Securing hybrid and multicloud environments is more complex than ever. While concerns persist, modern cloud providers implement advanced security features often surpass on-premises solutions, making cloud environments increasingly secure and reliable for critical workloads. Microsoft offers a comprehensive suite of security solutions, and Microsoft Defender for Cloud stands out as a cornerstone technology designed to protect cloud and AI workloads, mitigating emerging challenges throughout the migration process, AI application development, and ongoing operations.   To help you get familiar with infrastructure security fundamentals, Microsoft also offers definitive learning pathways for upskilling your entire IT team on cloud and AI migration and security, as well as our new Azure Essential…
    Fortify your cloud and AI security with Microsoft Defender and Azure skilling plans
    Securing hybrid and multicloud environments is more complex than ever. While concerns persist, modern cloud providers implement advanced security features often surpass on-premises solutions, making cloud environments increasingly secure and reliable for critical workloads. Microsoft offers a comprehensive suite of security solutions, and Microsoft Defender for Cloud stands out as a cornerstone technology designed to protect cloud and AI workloads, mitigating emerging challenges throughout the migration process, AI application development, and ongoing operations.   To help you get familiar with infrastructure security fundamentals, Microsoft also offers definitive learning pathways for upskilling your entire IT team on cloud and AI migration and security, as well as our new Azure Essential…
  • Open

    Building an OpenAI powered Recommendation Engine
    Introduction Recommendation engines play a vital role in enhancing user experiences by providing personalized suggestions and have been proved as an effective strategy in turning engagement into valuable business. The technical objective of a Recommendation Engine is to filter and present the most relevant items from a vast datasets considering business constrains. This process includes steps like data collection, preprocessing, model training, and deployment. Advanced techniques such as embeddings and cosine similarity are used to determine most relevant results for recommendations This blog explores the design and implementation of a recommendation engine. It addresses the challenges faced by traditional systems and how modern approaches can overcome them, aiming to build a robust, scala…  ( 40 min )
    Building an OpenAI powered Recommendation Engine
    Introduction Recommendation engines play a vital role in enhancing user experiences by providing personalized suggestions and have been proved as an effective strategy in turning engagement into valuable business. The technical objective of a Recommendation Engine is to filter and present the most relevant items from a vast datasets considering business constrains. This process includes steps like data collection, preprocessing, model training, and deployment. Advanced techniques such as embeddings and cosine similarity are used to determine most relevant results for recommendations This blog explores the design and implementation of a recommendation engine. It addresses the challenges faced by traditional systems and how modern approaches can overcome them, aiming to build a robust, scala…
    Building an OpenAI powered Recommendation Engine
    Introduction Recommendation engines play a vital role in enhancing user experiences by providing personalized suggestions and have been proved as an effective strategy in turning engagement into valuable business. The technical objective of a Recommendation Engine is to filter and present the most relevant items from a vast datasets considering business constrains. This process includes steps like data collection, preprocessing, model training, and deployment. Advanced techniques such as embeddings and cosine similarity are used to determine most relevant results for recommendations This blog explores the design and implementation of a recommendation engine. It addresses the challenges faced by traditional systems and how modern approaches can overcome them, aiming to build a robust, scala…
  • Open

    February Patches for Azure DevOps Server
    Today we are releasing patches that impact our self-hosted product, Azure DevOps Server. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps Server download page. The following version of […] The post February Patches for Azure DevOps Server appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Hyperlight: Achieving 0.0009-second micro-VM execution time
    In this post, we’ll take the demo application and show how it demonstrates one way you can use Hyperlight in your applications.  The post Hyperlight: Achieving 0.0009-second micro-VM execution time appeared first on Microsoft Open Source Blog.  ( 11 min )
  • Open

    How to set up agents and Microsoft 365 Copilot Chat, included with Microsoft 365
    If your organization uses Microsoft 365, you can now get AI-powered assistance using Copilot Chat - without a Microsoft 365 Copilot license. Research faster, analyze files more securely, and automate repetitive tasks. Create and use agents to streamline workflows, with pay-as-you-go billing so you only pay for what you use. See how as a Microsoft 365 and Power Platform admin, you can easily configure access for everyone in your directory by pinning Copilot Chat to make it discoverable for quick use, then manage agent authoring permission in Copilot Studio, and set up consumption-based billing. Jeremy Chapman, Director of Microsoft 365 shows how to optimize productivity while controlling costs. Included with Microsoft 365. Just sign in and boost productivity. Steps to set up Microsoft 365 …  ( 39 min )
    How to set up agents and Microsoft 365 Copilot Chat, included with Microsoft 365
    If your organization uses Microsoft 365, you can now get AI-powered assistance using Copilot Chat - without a Microsoft 365 Copilot license. Research faster, analyze files more securely, and automate repetitive tasks. Create and use agents to streamline workflows, with pay-as-you-go billing so you only pay for what you use. See how as a Microsoft 365 and Power Platform admin, you can easily configure access for everyone in your directory by pinning Copilot Chat to make it discoverable for quick use, then manage agent authoring permission in Copilot Studio, and set up consumption-based billing. Jeremy Chapman, Director of Microsoft 365 shows how to optimize productivity while controlling costs. Included with Microsoft 365. Just sign in and boost productivity. Steps to set up Microsoft 365 …
    How to set up agents and Microsoft 365 Copilot Chat, included with Microsoft 365
    If your organization uses Microsoft 365, you can now get AI-powered assistance using Copilot Chat - without a Microsoft 365 Copilot license. Research faster, analyze files more securely, and automate repetitive tasks. Create and use agents to streamline workflows, with pay-as-you-go billing so you only pay for what you use. See how as a Microsoft 365 and Power Platform admin, you can easily configure access for everyone in your directory by pinning Copilot Chat to make it discoverable for quick use, then manage agent authoring permission in Copilot Studio, and set up consumption-based billing. Jeremy Chapman, Director of Microsoft 365 shows how to optimize productivity while controlling costs. Included with Microsoft 365. Just sign in and boost productivity. Steps to set up Microsoft 365 …
    Using Azure AI Foundry SDK for your AI apps and agents
    Design, customize and manage your own custom applications with Azure AI Foundry right from your code. With Azure AI Foundry, leverage over 1,800 models, seamlessly integrating them into your coding environment to create agents and tailored app experiences. Utilize Retrieval Augmented Generation and vector search to enrich responses with contextual information, as well with built-in services to incorporate cognitive skills such as language, vision, and safety detection. Dan Taylor, Principal Product Architect for Azure AI Foundry SDK, also shares how to streamline your development process with tools for orchestration and monitoring. Use templates to simplify resource deployment and run evaluations against large datasets to optimize performance. With Application Insights, gain visibility in…  ( 48 min )
    Using Azure AI Foundry SDK for your AI apps and agents
    Design, customize and manage your own custom applications with Azure AI Foundry right from your code. With Azure AI Foundry, leverage over 1,800 models, seamlessly integrating them into your coding environment to create agents and tailored app experiences. Utilize Retrieval Augmented Generation and vector search to enrich responses with contextual information, as well with built-in services to incorporate cognitive skills such as language, vision, and safety detection. Dan Taylor, Principal Product Architect for Azure AI Foundry SDK, also shares how to streamline your development process with tools for orchestration and monitoring. Use templates to simplify resource deployment and run evaluations against large datasets to optimize performance. With Application Insights, gain visibility in…
    Using Azure AI Foundry SDK for your AI apps and agents
    Design, customize and manage your own custom applications with Azure AI Foundry right from your code. With Azure AI Foundry, leverage over 1,800 models, seamlessly integrating them into your coding environment to create agents and tailored app experiences. Utilize Retrieval Augmented Generation and vector search to enrich responses with contextual information, as well with built-in services to incorporate cognitive skills such as language, vision, and safety detection. Dan Taylor, Principal Product Architect for Azure AI Foundry SDK, also shares how to streamline your development process with tools for orchestration and monitoring. Use templates to simplify resource deployment and run evaluations against large datasets to optimize performance. With Application Insights, gain visibility in…
  • Open

    Announcing billing for Workspace monitoring
    Workspace Monitoring, currently in preview is an observability feature within Fabric that enables monitoring capabilities across two key areas:  Workspace monitoring allows Fabric developers and admins to access detailed logs and performance metrics for their workspaces. This helps troubleshoot performance issues, investigate errors, optimize queries, and minimize data downtime.  Since the preview announcement, we have … Continue reading “Announcing billing for Workspace monitoring “  ( 6 min )
  • Open

    Supercharge Your TypeScript Workflow: ESLint, Prettier, and Build Tools
    Introduction TypeScript has become the go-to language for modern JavaScript developers, offering static typing, better tooling, and improved maintainability. But writing clean and efficient TypeScript isn’t just about knowing the syntax, it’s about using the right tools to enhance your workflow. In this blog, we’ll explore essential TypeScript tools like ESLint, Prettier, tsconfig settings, and VS Code extensions to help you write better code, catch errors early, and boost productivity. By the end, you’ll have a fully optimized TypeScript development environment with links to quality resources to deepen your knowledge. Why Tooling Matters in TypeScript Development Unlike JavaScript, TypeScript enforces static typing and requires compilation. This means proper tooling can:✅ Catch syntax and…  ( 24 min )
    Supercharge Your TypeScript Workflow: ESLint, Prettier, and Build Tools
    Introduction TypeScript has become the go-to language for modern JavaScript developers, offering static typing, better tooling, and improved maintainability. But writing clean and efficient TypeScript isn’t just about knowing the syntax, it’s about using the right tools to enhance your workflow. In this blog, we’ll explore essential TypeScript tools like ESLint, Prettier, tsconfig settings, and VS Code extensions to help you write better code, catch errors early, and boost productivity. By the end, you’ll have a fully optimized TypeScript development environment with links to quality resources to deepen your knowledge. Why Tooling Matters in TypeScript Development Unlike JavaScript, TypeScript enforces static typing and requires compilation. This means proper tooling can:✅ Catch syntax and…
    Supercharge Your TypeScript Workflow: ESLint, Prettier, and Build Tools
    Introduction TypeScript has become the go-to language for modern JavaScript developers, offering static typing, better tooling, and improved maintainability. But writing clean and efficient TypeScript isn’t just about knowing the syntax, it’s about using the right tools to enhance your workflow. In this blog, we’ll explore essential TypeScript tools like ESLint, Prettier, tsconfig settings, and VS Code extensions to help you write better code, catch errors early, and boost productivity. By the end, you’ll have a fully optimized TypeScript development environment with links to quality resources to deepen your knowledge. Why Tooling Matters in TypeScript Development Unlike JavaScript, TypeScript enforces static typing and requires compilation. This means proper tooling can:✅ Catch syntax and…
  • Open

    Guest Blog: Step-by-Step Guide to Building a Portfolio Manager: A Multi-Agent System with Microsoft Semantic Kernel and Azure OpenAI
    Today the Semantic Kernel team is excited to welcome back a guest author, Akshay Kokane to share his recent Medium article using Semantic Kernel and Azure OpenAI, showcasing a step-by-step guide to building a Portfolio Manager. We’ll turn it over to him to dive into his work below. In my previous blog, we went over […] The post Guest Blog: Step-by-Step Guide to Building a Portfolio Manager: A Multi-Agent System with Microsoft Semantic Kernel and Azure OpenAI appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Deploy Datadog Agent to AKS clusters with Datadog Azure Native ISV Service
    Coming soon -- stay tuned
    Deploy Datadog Agent to AKS clusters with Datadog Azure Native ISV Service
    Datadog Azure Native ISV Service is a powerful integration between Microsoft Azure and Datadog which is available on both Azure Marketplace and Azure portal. This native offering allows you to seamlessly monitor your Azure workloads with Datadog to deliver comprehensive insights about how your tech stack is performing. The experience of setting up, configuring, and deploying a Datadog resource on Azure is similar to creating any other native Azure resources like VMs, App Services etc. Datadog is a cloud-scale monitoring and security platform that aggregates data across your entire stack. With 850+ integrations, customers can monitor the performance of virtually any of the technologies they use. The unified platform centralizes visibility for faster troubleshooting on dynamic architectures.…
    Meet First Round of Speakers for Microsoft JDConf 2025: Code the future with Java and AI
    We are excited to share the initial lineup of speakers and sessions for Microsoft JDConf 2025, taking place on April 9-10. Whether you are an experienced developer or just starting out, JDConf offers valuable opportunities to explore the latest advancements in Java, Cloud and AI technologies, gain practical insights, and connect with Java experts from across the globe. Secure your spot now at jdconf.com. Here are the initial sessions and speakers who will provide valuable insights into Java, Cloud, and AI. Java 25. Explore The Hidden Gems of Java 25 with Mohamed Taman as he uncovers key Java SE features, updates, and fixes that will simplify migration to new Java and enhance your daily development workflow. Virtual Threads. Virtual Threads in Action with Jakarta EE Core Profile by Daniel…
    Meet First Round of Speakers for Microsoft JDConf 2025: Code the future with Java and AI
    We are excited to share the initial lineup of speakers and sessions for Microsoft JDConf 2025, taking place on April 9-10. Whether you are an experienced developer or just starting out, JDConf offers valuable opportunities to explore the latest advancements in Java, Cloud and AI technologies, gain practical insights, and connect with Java experts from across the globe. Secure your spot now at jdconf.com. Here are the initial sessions and speakers who will provide valuable insights into Java, Cloud, and AI. Java 25. Explore The Hidden Gems of Java 25 with Mohamed Taman as he uncovers key Java SE features, updates, and fixes that will simplify migration to new Java and enhance your daily development workflow. Virtual Threads. Virtual Threads in Action with Jakarta EE Core Profile by Daniel…

  • Open

    Azure Files security compatability on Container Apps
    This post will go over compatable security settings when using Azure Storage and Container Apps  ( 3 min )
  • Open

    Use GitHub Copilot Agent Mode to create a Copilot Chat application in 5 minutes
    Last week, GitHub Copilot released a new Agent Mode mode. We can use GitHub Copilot Agent Mode to create new applications or add features to existing projects based on new requirements. You can experience this feature through Visual Studio Code Insiders (1.98). To enable GitHub Copilot Agent Mode, you need to enable GitHub Copilot Agent Mode through Visual Studio Code Insiders. After completion, you can select Edit with GitHub Copilot, select Agent Mode and the corresponding model. Currently, GitHub Copilot Agent Mode supports GPT-4o, Claude 3.5 Sonnet, and Gemini 2.0 Flash. I recommend considering GPT-4o because it supports image uploads. After completing the relevant settings, we can enter the Copilot Chat application development. We want to have a Facebook Messenger-style Copilot Chat…  ( 25 min )
    Use GitHub Copilot Agent Mode to create a Copilot Chat application in 5 minutes
    Last week, GitHub Copilot released a new Agent Mode mode. We can use GitHub Copilot Agent Mode to create new applications or add features to existing projects based on new requirements. You can experience this feature through Visual Studio Code Insiders (1.98). To enable GitHub Copilot Agent Mode, you need to enable GitHub Copilot Agent Mode through Visual Studio Code Insiders. After completion, you can select Edit with GitHub Copilot, select Agent Mode and the corresponding model. Currently, GitHub Copilot Agent Mode supports GPT-4o, Claude 3.5 Sonnet, and Gemini 2.0 Flash. I recommend considering GPT-4o because it supports image uploads. After completing the relevant settings, we can enter the Copilot Chat application development. We want to have a Facebook Messenger-style Copilot Chat…
    Use GitHub Copilot Agent Mode to create a Copilot Chat application in 5 minutes
    Last week, GitHub Copilot released a new Agent Mode mode. We can use GitHub Copilot Agent Mode to create new applications or add features to existing projects based on new requirements. You can experience this feature through Visual Studio Code Insiders (1.98). To enable GitHub Copilot Agent Mode, you need to enable GitHub Copilot Agent Mode through Visual Studio Code Insiders. After completion, you can select Edit with GitHub Copilot, select Agent Mode and the corresponding model. Currently, GitHub Copilot Agent Mode supports GPT-4o, Claude 3.5 Sonnet, and Gemini 2.0 Flash. I recommend considering GPT-4o because it supports image uploads. After completing the relevant settings, we can enter the Copilot Chat application development. We want to have a Facebook Messenger-style Copilot Chat…

  • Open

    Harnessing the Power of Azure AI Foundry with AI agents, Azure AI and OpenAI: SmartWeather AI Agent
    Introduction:  AI is transforming how we interact with data, and one great example of this is the SmartWeather AI Agent. This AI-powered weather reporting system integrates Azure AI and Azure OpenAI's GPT-4O-mini to provide real-time weather data, sentiment analysis, and health alerts based on weather conditions. It combines weather data from OpenWeatherMap with Azure AI’s natural language processing and sentiment analysis to create a seamless, personalized weather experience. Explore the code and get started with this innovative AI solution on GitHub!What is SmartWeather AI Agent? SmartWeather AI Agent is an AI solution that integrates multiple agents for fetching real-time weather data, analyzing sentiment, and generating health and safety alerts. It uses Azure AI and OpenAI models to pr…  ( 23 min )
    Harnessing the Power of Azure AI Foundry with AI agents, Azure AI and OpenAI: SmartWeather AI Agent
    Introduction:  AI is transforming how we interact with data, and one great example of this is the SmartWeather AI Agent. This AI-powered weather reporting system integrates Azure AI and Azure OpenAI's GPT-4O-mini to provide real-time weather data, sentiment analysis, and health alerts based on weather conditions. It combines weather data from OpenWeatherMap with Azure AI’s natural language processing and sentiment analysis to create a seamless, personalized weather experience. Explore the code and get started with this innovative AI solution on GitHub!What is SmartWeather AI Agent? SmartWeather AI Agent is an AI solution that integrates multiple agents for fetching real-time weather data, analyzing sentiment, and generating health and safety alerts. It uses Azure AI and OpenAI models to pr…
    Harnessing the Power of Azure AI Foundry with AI agents, Azure AI and OpenAI: SmartWeather AI Agent
    Introduction:  AI is transforming how we interact with data, and one great example of this is the SmartWeather AI Agent. This AI-powered weather reporting system integrates Azure AI and Azure OpenAI's GPT-4O-mini to provide real-time weather data, sentiment analysis, and health alerts based on weather conditions. It combines weather data from OpenWeatherMap with Azure AI’s natural language processing and sentiment analysis to create a seamless, personalized weather experience. Explore the code and get started with this innovative AI solution on GitHub!What is SmartWeather AI Agent? SmartWeather AI Agent is an AI solution that integrates multiple agents for fetching real-time weather data, analyzing sentiment, and generating health and safety alerts. It uses Azure AI and OpenAI models to pr…
  • Open

    A Framework for Calculating ROI for Agentic AI Apps
    Contributors and Reviewers: Anurag Karuparti (C), Aishwarya Umachandran(C), Tara Webb(R), Bart Czernicki (R), Simon Lacasse (R), Vishnu Pamula (R)   Table of Contents 1. Key Metrics for Measuring ROI in Agentic AI Apps 2. Cost Components of Developing and Deploying Agentic Apps 3. New Revenue Streams from Agentic Apps 4. Framework for Calculating ROI for Agentic Apps 5. Example Scenarios: 6. Risks and Important Considerations 7. ROI will differ from use case to use case Conclusion ROI serves as a critical metric for assessing the financial benefits of any investment, including AI projects. It helps determine whether the investment generates more value than it costs. The fundamental formula for calculating ROI is: ROI = (Net Return from Investment - Cost of Investment) / Cost of Investm…  ( 49 min )
    A Framework for Calculating ROI for Agentic AI Apps
    Contributors and Reviewers: Anurag Karuparti (C), Aishwarya Umachandran(C), Tara Webb(R), Bart Czernicki (R), Simon Lacasse (R), Vishnu Pamula (R)   Table of Contents 1. Key Metrics for Measuring ROI in Agentic AI Apps 2. Cost Components of Developing and Deploying Agentic Apps 3. New Revenue Streams from Agentic Apps 4. Framework for Calculating ROI for Agentic Apps 5. Example Scenarios: 6. Risks and Important Considerations 7. ROI will differ from use case to use case Conclusion ROI serves as a critical metric for assessing the financial benefits of any investment, including AI projects. It helps determine whether the investment generates more value than it costs. The fundamental formula for calculating ROI is: ROI = (Net Return from Investment - Cost of Investment) / Cost of Investm…
    A Framework for Calculating ROI for Agentic AI Apps
    Contributors and Reviewers: Anurag Karuparti (C), Aishwarya Umachandran(C), Tara Webb(R), Bart Czernicki (R), Simon Lacasse (R)   Table of Contents 1. Key Metrics for Measuring ROI in Agentic AI Apps 2. Cost Components of Developing and Deploying Agentic Apps 3. New Revenue Streams from Agentic Apps 4. Framework for Calculating ROI for Agentic Apps 5. Example Scenarios: 6. Risks and Important Considerations 7. ROI will differ from use case to use case Conclusion ROI serves as a critical metric for assessing the financial benefits of any investment, including AI projects. It helps determine whether the investment generates more value than it costs. The fundamental formula for calculating ROI is: ROI = (Net Return from Investment - Cost of Investment) / Cost of Investment * 100    Studie…
  • Open

    How I build NoteBookmark using C# and Azure Container App
    I like to read. I read while commuting, at home, in the morning, or in the evening. I read books of course, but I also a lots of blog posts, and articles to stay up to date with the technologies. And from time to time, a questions or a topic comes out and I remember that I read something about it but I can't remember where. So I decided to build a simple app to help me keep track of the articles I read, with my personal thoughts about it. I called it NoteBookmark. I then use it to write my weekly Reading Notes post and be able to find back any articles I read.   In this post, I will share how I build the NoteBookmark explaining the choices I made and the technologies I used. It's a real application that I use everyday, it's still in progress, and it's open source. You can find the source c…
    How I build NoteBookmark using C# and Azure Container App
    I like to read. I read while commuting, at home, in the morning, or in the evening. I read books of course, but I also a lots of blog posts, and articles to stay up to date with the technologies. And from time to time, a questions or a topic comes out and I remember that I read something about it but I can't remember where. So I decided to build a simple app to help me keep track of the articles I read, with my personal thoughts about it. I called it NoteBookmark. I then use it to write my weekly Reading Notes post and be able to find back any articles I read.   In this post, I will share how I build the NoteBookmark explaining the choices I made and the technologies I used. It's a real application that I use everyday, it's still in progress, and it's open source. You can find the source c…
    How I build NoteBookmark using C# and my Azure Container App
    I like to read. I read while commuting, at home, in the morning, or in the evening. I read books of course, but I also a lots of blog posts, and articles to stay up to date with the technologies. And from time to time, a questions or a topic comes out and I remember that I read something about it but I can't remember where. So I decided to build a simple app to help me keep track of the articles I read, with my personal thoughts about it. I called it NoteBookmark. I then use it to write my weekly Reading Notes post and be able to find back any articles I read.   In this post, I will share how I build the NoteBookmark explaining the choices I made and the technologies I used. It's a real application that I use everyday, it's still in progress, and it's open source. You can find the source c…

  • Open

    General Azure Pipelines and App Service Linux deployment troubleshooting and scenarios
    This post will cover common issues and scenarios when deploying to App Service Linux from Azure DevOps pipelines.  ( 19 min )
  • Open

    [pt2] Choosing the right Data Storage Source (Under Preview) for Azure AI Search
    This guide introduces preview data sources available for integrating with Azure AI Search, specifically focusing on new features currently in preview. In this article, we break down the available preview connectors and categorize them into key use cases: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When integrating Azure AI Search into your applications, choosing the right preview data source is essential for optimizing indexing efficiency, query performance, and scalability. Azure AI Search allows you to pull data from various storage sources using indexers that automate both ingestion and enrichment, allowing for powerful search experiences. This guide explores the preview data sources curren…  ( 36 min )
    [pt2] Choosing the right Data Storage Source (Under Preview) for Azure AI Search
    This guide introduces preview data sources available for integrating with Azure AI Search, specifically focusing on new features currently in preview. In this article, we break down the available preview connectors and categorize them into key use cases: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When integrating Azure AI Search into your applications, choosing the right preview data source is essential for optimizing indexing efficiency, query performance, and scalability. Azure AI Search allows you to pull data from various storage sources using indexers that automate both ingestion and enrichment, allowing for powerful search experiences. This guide explores the preview data sources curren…
    [pt2] Choosing the right Data Storage Source (Under Preview) for Azure AI Search
    This guide introduces preview data sources available for integrating with Azure AI Search, specifically focusing on new features currently in preview. In this article, we break down the available preview connectors and categorize them into key use cases: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When integrating Azure AI Search into your applications, choosing the right preview data source is essential for optimizing indexing efficiency, query performance, and scalability. Azure AI Search allows you to pull data from various storage sources using indexers that automate both ingestion and enrichment, allowing for powerful search experiences. This guide explores the preview data sources curren…
  • Open

    How to configure directory level permission for SFTP local user
    SFTP is a feature which is supported for Azure Blob Storage with hierarchical namespace (ADLS Gen2 Storage Account). As documented, the permission system used by SFTP feature is different from normal permission system in Azure Storage Account. It’s using a form of identity management called local users.   Normally the permission which user can set up on local users while creating them is on container level. But in real user case, it’s usual that user needs to configure multiple local users, and each local user only has permission on one specific directory. In this scenario, using ACLs (Access control lists) for local users will be a great solution.   In this blog, we’ll set up an environment using ACLs for local users and see how it meets the above aim.   Attention! As mentioned in Caution…  ( 30 min )
    How to configure directory level permission for SFTP local user
    SFTP is a feature which is supported for Azure Blob Storage with hierarchical namespace (ADLS Gen2 Storage Account). As documented, the permission system used by SFTP feature is different from normal permission system in Azure Storage Account. It’s using a form of identity management called local users.   Normally the permission which user can set up on local users while creating them is on container level. But in real user case, it’s usual that user needs to configure multiple local users, and each local user only has permission on one specific directory. In this scenario, using ACLs (Access control lists) for local users will be a great solution.   In this blog, we’ll set up an environment using ACLs for local users and see how it meets the above aim.   Attention! As mentioned in Caution…
    How to configure directory level permission for SFTP local user
    SFTP is a feature which is supported for Azure Blob Storage with hierarchical namespace (ADLS Gen2 Storage Account). As documented, the permission system used by SFTP feature is different from normal permission system in Azure Storage Account. It’s using a form of identity management called local users.   Normally the permission which user can set up on local users while creating them is on container level. But in real user case, it’s usual that user needs to configure multiple local users, and each local user only has permission on one specific directory. In this scenario, using ACLs (Access control lists) for local users will be a great solution.   In this blog, we’ll set up an environment using ACLs for local users and see how it meets the above aim.   Attention! As mentioned in Caution…
  • Open

    Giving Granular Controls to Azure EasyAuth
    When working with web applications, implementing authentication and authorization is crucial for security. Traditionally, there have been various ways from simple username/password inputs to OAuth-based approaches. However, implementing authentication and authorization logics can be quite complex and challenging. Fortunately, Azure offers a feature called EasyAuth, which simplifies this process. EasyAuth is built in Azure PaaS services like App Service, Functions, Container Apps, and Static Web Apps. The best part of using EasyAuth is that you don't have to modify existing code for the EasyAuth integration. Having said that, since EasyAuth follows one of the cloud architecture patterns – Sidecar pattern, it secures the entire web application. However, fine-tuned control – such as protectin…
    Giving Granular Controls to Azure EasyAuth
    When working with web applications, implementing authentication and authorization is crucial for security. Traditionally, there have been various ways from simple username/password inputs to OAuth-based approaches. However, implementing authentication and authorization logics can be quite complex and challenging. Fortunately, Azure offers a feature called EasyAuth, which simplifies this process. EasyAuth is built in Azure PaaS services like App Service, Functions, Container Apps, and Static Web Apps. The best part of using EasyAuth is that you don't have to modify existing code for the EasyAuth integration. Having said that, since EasyAuth follows one of the cloud architecture patterns – Sidecar pattern, it secures the entire web application. However, fine-tuned control – such as protectin…
  • Open

    Python in Visual Studio Code – February 2025 Release
    The February 2025 release of the Python and Jupyter extensions for Visual Studio Code are now available. This month's updates include . The post Python in Visual Studio Code – February 2025 Release appeared first on Python.  ( 25 min )
  • Open

    Unlocking AI-Powered Automation with Azure AI Agent Service
    AI agents are transforming the way businesses automate workflows. By providing agents access to the same apps and services your employees have, AI agents can automate manual, time-intensive processes and drive significant productivity gains in the process. Deploying reliable AI agents in real-world environments, however, remains a challenge. Many existing agent services lack 1) the secure, integrated tools necessary for AI agents to perform real work, like updating a database or sending an email; 2) crucial context necessary for an agent to complete tasks is also often missing; and once the AI agent is running, 3) it's difficult to identify and diagnose issues. To address these challenges, we announced Azure AI Agent Service at Microsoft Ignite 2024. This service is purpose-built for desig…  ( 36 min )
    Unlocking AI-Powered Automation with Azure AI Agent Service
    AI agents are transforming the way businesses automate workflows. By providing agents access to the same apps and services your employees have, AI agents can automate manual, time-intensive processes and drive significant productivity gains in the process. Deploying reliable AI agents in real-world environments, however, remains a challenge. Many existing agent services lack 1) the secure, integrated tools necessary for AI agents to perform real work, like updating a database or sending an email; 2) crucial context necessary for an agent to complete tasks is also often missing; and once the AI agent is running, 3) it's difficult to identify and diagnose issues. To address these challenges, we announced Azure AI Agent Service at Microsoft Ignite 2024. This service is purpose-built for desig…
    Unlocking AI-Powered Automation with Azure AI Agent Service
    AI agents are transforming the way businesses automate workflows. By providing agents access to the same apps and services your employees have, AI agents can automate manual, time-intensive processes and drive significant productivity gains in the process. Deploying reliable AI agents in real-world environments, however, remains a challenge. Many existing agent services lack 1) the secure, integrated tools necessary for AI agents to perform real work, like updating a database or sending an email; 2) crucial context necessary for an agent to complete tasks is also often missing; and once the AI agent is running, 3) it's difficult to identify and diagnose issues. To address these challenges, we announced Azure AI Agent Service at Microsoft Ignite 2024. This service is purpose-built for desig…
  • Open

    Customer Case Study: How preezie’s AI shopping assistant is reshaping Blue Bungalow’s online store
    Introduction Blue Bungalow, one of Australia’s leading fashion retailers, faced a common challenge in eCommerce—how to create a more engaging, seamless, and personalised shopping experience for customers online. They wanted to implement AI-powered assistance to provide personalised product recommendations, accurate sizing guidance, product comparisons, and instant answers to customer questions—replicating the ease and support of […] The post Customer Case Study: How preezie’s AI shopping assistant is reshaping Blue Bungalow’s online store appeared first on Semantic Kernel.  ( 24 min )

  • Open

    [pt1] Choosing the right Data Storage Source (Generally available) for Azure AI Search
    This guide provides a comprehensive look at data sources for integrating with Azure AI Search, specifically focusing on generally available options. We break down the available connectors and categorize them into three distinct sections: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When building AI-powered search solutions using Azure AI Search, selecting the right data source is crucial for optimizing efficiency, scalability, and overall search performance. Azure AI Search provides indexers that can pull data from various storage sources, transforming and enriching it for a better search experience. This article explores the key data sources available and offers best practices to help you choo…  ( 32 min )
    [pt1] Choosing the right Data Storage Source (Generally available) for Azure AI Search
    This guide provides a comprehensive look at data sources for integrating with Azure AI Search, specifically focusing on generally available options. We break down the available connectors and categorize them into three distinct sections: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When building AI-powered search solutions using Azure AI Search, selecting the right data source is crucial for optimizing efficiency, scalability, and overall search performance. Azure AI Search provides indexers that can pull data from various storage sources, transforming and enriching it for a better search experience. This article explores the key data sources available and offers best practices to help you choo…
    [pt1] Choosing the right Data Storage Source (Generally available) for Azure AI Search
    This guide provides a comprehensive look at data sources for integrating with Azure AI Search, specifically focusing on generally available options. We break down the available connectors and categorize them into three distinct sections: Generally Available Data Sources by Azure AI Search Preview Data Sources by Azure AI Search Data Sources from Our Partners In This Article: When building AI-powered search solutions using Azure AI Search, selecting the right data source is crucial for optimizing efficiency, scalability, and overall search performance. Azure AI Search provides indexers that can pull data from various storage sources, transforming and enriching it for a better search experience. This article explores the key data sources available and offers best practices to help you choo…
  • Open

    Build AI Solutions with Azure AI Foundry
    This week we kicked off the Generative AI Level Up Tuesdays Microsoft Reactor series with hundreds of you joining live across the globe. Did you miss the first session? Don't worry you can catch it on-demand here. Along with the first episode of the series, we also started a new Microsoft Learn Challenge. It's not too late to join! Join the challenge now and get a curated selection of free AI resources as well as a digital badge of completion!  Next Tuesday 11th February at 9AM PST we host the second episode of the series, where we are going to move from theory to practice with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Di…  ( 22 min )
    Build AI Solutions with Azure AI Foundry
    This week we kicked off the Generative AI Level Up Tuesdays Microsoft Reactor series with hundreds of you joining live across the globe. Did you miss the first session? Don't worry you can catch it on-demand here. Along with the first episode of the series, we also started a new Microsoft Learn Challenge. It's not too late to join! Join the challenge now and get a curated selection of free AI resources as well as a digital badge of completion!  Next Tuesday 11th February at 9AM PST we host the second episode of the series, where we are going to move from theory to practice with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Di…
    Build AI Solutions with Azure AI Foundry
    This week we kicked off the Generative AI Level Up Tuesdays Microsoft Reactor series with hundreds of you joining live across the globe. Did you miss the first session? Don't worry you can catch it on-demand here. Along with the first episode of the series, we also started a new Microsoft Learn Challenge. It's not too late to join! Join the challenge now and get a curated selection of free AI resources as well as a digital badge of completion!  Next Tuesday 11th February at 9AM PST we host the second episode of the series, where we are going to move from theory to practice with an e-2-e RAG sample built with Azure OpenAI Service, Azure AI Search Service and Prompty. Also, you'll have the unique opportunity to interact with the speakers after the session in a community roundtable call on Di…
  • Open

    Introducing Support for Multiple JMeter Files and Fragments in Azure Load Testing
    We are excited to announce a significant update to Azure Load Testing that allows you to use multiple JMeter files and fragments in your test configurations. This feature empowers users to design more modular, flexible, and scalable performance tests, ensuring comprehensive testing for complex applications. What’s New? Previously, Azure Load Testing supported a single JMeter file for defining test scenarios. With the new update, you can now: Include multiple JMeter test files in a single load test configuration. Use JMeter fragments to define reusable components, such as authentication workflows or common request sequences. Seamlessly manage test modularity, enabling easier collaboration and maintenance. This approach improves test execution and maintenance, especially for large-scale an…
    Introducing Support for Multiple JMeter Files and Fragments in Azure Load Testing
    We are excited to announce a significant update to Azure Load Testing that allows you to use multiple JMeter files and fragments in your test configurations. This feature empowers users to design more modular, flexible, and scalable performance tests, ensuring comprehensive testing for complex applications. What’s New? Previously, Azure Load Testing supported a single JMeter file for defining test scenarios. With the new update, you can now: Include multiple JMeter test files in a single load test configuration. Use JMeter fragments to define reusable components, such as authentication workflows or common request sequences. Seamlessly manage test modularity, enabling easier collaboration and maintenance. This approach improves test execution and maintenance, especially for large-scale an…
    Enhancing Security for Azure Container Apps with Aqua Security
    Azure Container Apps (ACA) is a developer-first serverless platform that allows you to run scalable containerized workloads at any scale. Being serverless provides inherent security benefits by reducing the attack surface, but it also presents some unique challenges for any security solution. Hence, we’re happy to announce that our partner, Aqua has just certified Azure Container Apps for their suite of security solutions. Azure Container Apps: Built-In Security Features Due to its purpose-built nature ACA offers several built-in security features that help protect your containerized applications: Isolation: ACA runs your workload without the need for root access to the underlying host. Additionally, it’s trivial and requires minimal overhead to isolate different teams in their own enviro…
    Enhancing Security for Azure Container Apps with Aqua Security
    Azure Container Apps (ACA) is a developer-first serverless platform that allows you to run scalable containerized workloads at any scale. Being serverless provides inherent security benefits by reducing the attack surface, but it also presents some unique challenges for any security solution. Hence, we’re happy to announce that our partner, Aqua has just certified Azure Container Apps for their suite of security solutions. Azure Container Apps: Built-In Security Features Due to its purpose-built nature ACA offers several built-in security features that help protect your containerized applications: Isolation: ACA runs your workload without the need for root access to the underlying host. Additionally, it’s trivial and requires minimal overhead to isolate different teams in their own enviro…
    Modernizing Enterprise Asset Management: The Power of IBM Maximo on Azure Red Hat OpenShift
    In today's rapidly evolving enterprise landscape, organizations face increasing pressure to modernize their asset management systems while maintaining operational efficiency. The collaboration between technology giants IBM, Microsoft, and Red Hat offers a compelling solution: IBM Maximo on Azure Red Hat OpenShift (ARO). For organizations running IBM Maximo 7.6.1.x, this transformation has become increasingly urgent.  The Critical Timeline  For organizations running Maximo 7.6.1.x, End of Life (EOL) is approaching in September 2025. Organizations should begin their modernization journey now to ensure a smooth transition to the IBM Maximo Application Suite with the scalability and reliability of Azure Red Hat OpenShift.  Understanding the Transformation: Maximo v7.6.1x vs Maximo Application…
    Modernizing Enterprise Asset Management: The Power of IBM Maximo on Azure Red Hat OpenShift
    In today's rapidly evolving enterprise landscape, organizations face increasing pressure to modernize their asset management systems while maintaining operational efficiency. The collaboration between technology giants IBM, Microsoft, and Red Hat offers a compelling solution: IBM Maximo on Azure Red Hat OpenShift (ARO). For organizations running IBM Maximo 7.6.1.x, this transformation has become increasingly urgent.  The Critical Timeline  For organizations running Maximo 7.6.1.x, End of Life (EOL) is approaching in September 2025. Organizations should begin their modernization journey now to ensure a smooth transition to the IBM Maximo Application Suite with the scalability and reliability of Azure Red Hat OpenShift.  Understanding the Transformation: Maximo v7.6.1x vs Maximo Application…
  • Open

    From Foundry to Fine-Tuning: Topics you Need to Know in Azure AI Services
    With so many new features from Azure and newer ways of development, especially in generative AI, you must be wondering what all the different things you need to know are and where to start in Azure AI. Whether you're a developer or IT professional, this guide will help you understand the key features, use cases, and documentation links for each service.  Let's explore how Azure AI can transform your projects and drive innovation in your organization.  Stay tuned for more details! Term   Description   Use Case   Azure Resource Azure AI Foundry   A comprehensive platform for building, deploying, and managing AI-driven applications.   Customizing, hosting, running, and managing AI applications.   Azure AI Foundry AI Agent   Within Azure AI Foundry, an AI Agent acts…  ( 27 min )
    From Foundry to Fine-Tuning: Topics you Need to Know in Azure AI Services
    With so many new features from Azure and newer ways of development, especially in generative AI, you must be wondering what all the different things you need to know are and where to start in Azure AI. Whether you're a developer or IT professional, this guide will help you understand the key features, use cases, and documentation links for each service.  Let's explore how Azure AI can transform your projects and drive innovation in your organization.  Stay tuned for more details! Term   Description   Use Case   Azure Resource Azure AI Foundry   A comprehensive platform for building, deploying, and managing AI-driven applications.   Customizing, hosting, running, and managing AI applications.   Azure AI Foundry AI Agent   Within Azure AI Foundry, an AI Agent acts…
    From Foundry to Fine-Tuning: Topics you Need to Know in Azure AI Services
    With so many new features from Azure and newer ways of development, especially in generative AI, you must be wondering what all the different things you need to know are and where to start in Azure AI. This guide covers the high-level basics and provides a detailed comparison of various Azure AI services, resources, and models. Whether you're a developer, data scientist, or IT professional, this guide will help you understand the key features, use cases, and documentation links for each service.  Let's explore how Azure AI can transform your projects and drive innovation in your organization.  Stay tuned for more details! Term   Description   Use Case   Azure Resource Azure AI Foundry   A comprehensive platform for building, deploying, and managing AI-driven applications.…
    Introducing the GPT-4o-Mini Audio Models: Adding More Choice to Audio-Enhanced AI Interaction
    We are thrilled to announce the release of the new GPT-4o-Mini-Realtime-Preview and GPT-4o-Mini-Audio-Preview models, both now available in preview. These new models introduce advanced audio capabilities at just 25% of the cost of GPT-4o audio models. Adding on to the existing GPT-4o audio models, this expansion enhances the potential for AI applications in text and voice-based interactions. Starting today, developers can unlock immersive, voice-driven experiences by harnessing the advanced capabilities of all Azure OpenAI Service advanced audio models, now in public preview. Key Benefits Advanced Audio Capabilities: Enjoy high-quality audio interactions at a fraction of the cost of GPT-4o audio models. Seamless Compatibility: Our new models are compatible with existing Realtime API and C…  ( 21 min )
    Introducing the GPT-4o-Mini Audio Models: Adding More Choice to Audio-Enhanced AI Interaction
    We are thrilled to announce the release of the new GPT-4o-Mini-Realtime-Preview and GPT-4o-Mini-Audio-Preview models, both now available in preview. These new models introduce advanced audio capabilities at just 25% of the cost of GPT-4o audio models. Adding on to the existing GPT-4o audio models, this expansion enhances the potential for AI applications in text and voice-based interactions. Starting today, developers can unlock immersive, voice-driven experiences by harnessing the advanced capabilities of all Azure OpenAI Service advanced audio models, now in public preview. Key Benefits Advanced Audio Capabilities: Enjoy high-quality audio interactions at a fraction of the cost of GPT-4o audio models. Seamless Compatibility: Our new models are compatible with existing Realtime API and C…
    Introducing the GPT-4o-Mini Audio Models: Adding More Choice to Audio-Enhanced AI Interaction
    We are thrilled to announce the release of the new GPT-4o-Mini-Realtime-Preview and GPT-4o-Mini-Audio-Preview models, both now available in preview. These new models introduce advanced audio capabilities at just 25% of the cost of GPT-4o audio models. Adding on to the existing GPT-4o audio models, this expansion enhances the potential for AI applications in text and voice-based interactions. Starting today, developers can unlock immersive, voice-driven experiences by harnessing the advanced capabilities of all Azure OpenAI Service advanced audio models, now in public preview. Key Benefits Advanced Audio Capabilities: Enjoy high-quality audio interactions at a fraction of the cost of GPT-4o audio models. Seamless Compatibility: Our new models are compatible with existing Realtime API and C…
  • Open

    Guest Blog: Let your Copilot Declarative Agent think deep with DeepSeek-R1
    Today we’d like to feature a guest author on our Semantic Kernel blog, Mahmoud Hassan, a Microsoft Valuable Professional (MVP) focused on AI. We’ll turn it over to him to dive into his work below.   In recent days, there has been significant attention in the AI community regarding DeepSeek-R1 and its capabilities. Many people […] The post Guest Blog: Let your Copilot Declarative Agent think deep with DeepSeek-R1 appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Automate Your Load Tests: Introducing Scheduled Load Tests in Azure Load Testing
    Why Schedule Load Tests? Automating your load tests through scheduling enhances performance validation in multiple ways: Consistent performance monitoring – Set up tests to run daily, weekly, or monthly, ensuring continuous assessment of your application's stability and scalability. Reduced development disruptions – Schedule tests to run during off-peak hours, minimizing interference with active development and live production environments. Key Features of Scheduled Load Testing Azure Load Testing’s scheduling functionality offers flexible options to meet different testing needs: Customizable scheduling – Set up tests to run once, hourly, daily, weekly, or monthly. Cron-based scheduling – Define complex recurrence patterns using cron expressions. Controlled test runs – Set an end condit…
    Automate Your Load Tests: Introducing Scheduled Load Tests in Azure Load Testing
    Why Schedule Load Tests? Automating your load tests through scheduling enhances performance validation in multiple ways: Consistent performance monitoring – Set up tests to run daily, weekly, or monthly, ensuring continuous assessment of your application's stability and scalability. Reduced development disruptions – Schedule tests to run during off-peak hours, minimizing interference with active development and live production environments. Key Features of Scheduled Load Testing Azure Load Testing’s scheduling functionality offers flexible options to meet different testing needs: Customizable scheduling – Set up tests to run once, hourly, daily, weekly, or monthly. Cron-based scheduling – Define complex recurrence patterns using cron expressions. Controlled test runs – Set an end condit…
  • Open

    Customize AOAI Embeddings with contrastive learning
    Introduction Embeddings are used to generate a representation of unstructured data in a dense vector space. An embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlates to semantic similarity between two inputs in the original format (eg., text / image). When text is embedded, the meaning of each word is encoded so that words closer together in the vector space are expected to have similar meanings. A large number of embedding models that support such text representations are available and benchmarks like MTEB help understand their performance. One of the pitfalls / risks in embedding models is that sometimes models may not be able to adequately represent the underlying data. This could be due to the following scenarios…  ( 44 min )
    Customize AOAI Embeddings with contrastive learning
    Introduction Embeddings are used to generate a representation of unstructured data in a dense vector space. An embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlates to semantic similarity between two inputs in the original format (eg., text / image). When text is embedded, the meaning of each word is encoded so that words closer together in the vector space are expected to have similar meanings. A large number of embedding models that support such text representations are available and benchmarks like MTEB help understand their performance. One of the pitfalls / risks in embedding models is that sometimes models may not be able to adequately represent the underlying data. This could be due to the following scenarios…
    Customize AOAI Embeddings with contrastive learning
    Introduction Embeddings are used to generate a representation of unstructured data in a dense vector space. An embedding is a vector of floating-point numbers, such that the distance between two embeddings in the vector space is correlates to semantic similarity between two inputs in the original format (eg., text / image). When text is embedded, the meaning of each word is encoded so that words closer together in the vector space are expected to have similar meanings. A large number of embedding models that support such text representations are available and benchmarks like MTEB help understand their performance. One of the pitfalls / risks in embedding models is that sometimes models may not be able to adequately represent the underlying data. This could be due to the following scenarios…
    Prompt Engineering for OpenAI’s O1 and O3-mini Reasoning Models
    Important Attempting to extract the model's internal reasoning is prohibited, as it violates the acceptable use guidelines.   This section explores how O1 and O3-mini differ from GPT-4o in input handling, reasoning capabilities, and response behavior, and outlines prompt engineering best practices to maximize their performance. Finally, we apply these best practices to a legal case analysis scenario. Differences Between O1/O3-mini and GPT-4o Input Structure and Context Handling Built-in Reasoning vs. Prompted Reasoning: O1-series models have built-in chain-of-thought reasoning, meaning they internally reason through steps without needing explicit coaxing from the prompt​.​ In contrast, GPT-4o often benefits from external instructions like “Let’s think step by step” to solve complex proble…  ( 77 min )
    Prompt Engineering for OpenAI’s O1 and O3-mini Reasoning Models
    Important Attempting to extract the model's internal reasoning is prohibited, as it violates the acceptable use guidelines.   This section explores how O1 and O3-mini differ from GPT-4o in input handling, reasoning capabilities, and response behavior, and outlines prompt engineering best practices to maximize their performance. Finally, we apply these best practices to a legal case analysis scenario. Differences Between O1/O3-mini and GPT-4o Input Structure and Context Handling Built-in Reasoning vs. Prompted Reasoning: O1-series models have built-in chain-of-thought reasoning, meaning they internally reason through steps without needing explicit coaxing from the prompt​.​ In contrast, GPT-4o often benefits from external instructions like “Let’s think step by step” to solve complex proble…
    Prompt Engineering for OpenAI’s O1 and O3-mini Reasoning Models
    This section explores how O1 and O3-mini differ from GPT-4o in input handling, reasoning capabilities, and response behavior, and outlines prompt engineering best practices to maximize their performance. Finally, we apply these best practices to a legal case analysis scenario. Differences Between O1/O3-mini and GPT-4o Input Structure and Context Handling Built-in Reasoning vs. Prompted Reasoning: O1-series models have built-in chain-of-thought reasoning, meaning they internally reason through steps without needing explicit coaxing from the prompt​.​ In contrast, GPT-4o often benefits from external instructions like “Let’s think step by step” to solve complex problems, since it doesn’t automatically engage in multi-step reasoning to the same extent​. With O1/O3, you can present the problem…
  • Open

    Error handling benefits with Fabric’s API for GraphQL
    You can take advantage of this with Fabric’s API for GraphQL. However, like any technology, effective error handling is crucial to ensure a smooth user experience and robust application performance. In this blog post, we’ll dive into the intricacies of error handling in GraphQL and share some best practices for managing errors effectively.  ( 6 min )
  • Open

    Fine Tune Mistral Models on Azure AI Foundry
    We're excited to announce the general availability of fine-tuning for Mistral models on Azure is now live! Starting today, Mistral Large 2411, Mistral Nemo, and Ministral 3B fine-tuning are available to all our Azure AI Foundry customers, providing unmatched customization and performance.  This also establishes Azure AI Foundry as the second platform, after Mistral's own, where fine-tuning of Mistral models is currently available. Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.   Finetuning enabled Mistral Model…  ( 32 min )
    Fine Tune Mistral Models on Azure AI Foundry
    We're excited to announce the general availability of fine-tuning for Mistral models on Azure is now live! Starting today, Mistral Large 2411, Mistral Nemo, and Ministral 3B fine-tuning are available to all our Azure AI Foundry customers, providing unmatched customization and performance.  This also establishes Azure AI Foundry as the second platform, after Mistral's own, where fine-tuning of Mistral models is currently available. Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.   Finetuning enabled Mistral Model…
    Fine Tune Mistral Models on Azure AI Foundry
    We're excited to announce the general availability of fine-tuning for Mistral models on Azure is now live! Starting today, Mistral Large 2411, Mistral Nemo, and Ministral 3B fine-tuning are available to all our Azure AI Foundry customers, providing unmatched customization and performance.  This also establishes Azure AI Foundry as the second platform, after Mistral's own, where fine-tuning of Mistral models is currently available. Azure AI Foundry lets you tailor large language models to your personal datasets by using a process known as fine-tuning. Fine-tuning provides significant value by enabling customization and optimization for specific tasks and applications. It leads to improved performance, cost efficiency, reduced latency, and tailored outputs.   Finetuning enabled Mistral Model…
  • Open

    Using Azure OpenAI Chat Completion with data source and Function Calling
    Azure OpenAI Chat Completion with data source provides powerful capabilities for integrating conversational AI into applications. However, using a data source and function calling in a single request is not supported yet. When both features are enabled, function calling is ignored, and only the data source is used. This presents a challenge when retrieving information, […] The post Using Azure OpenAI Chat Completion with data source and Function Calling appeared first on Semantic Kernel.  ( 26 min )
  • Open

    Full web support for conditional access policies across Azure DevOps and partner web properties
    We’re happy to announce that we’ve made significant progress in updating our web authentication stack on Azure DevOps services and partner web properties to utilize Microsoft Entra tokens to handle web sessions. By replacing our previous cookies with Entra tokens, we’ve deepened the integration we have with Microsoft Entra ID on our web experience. This […] The post Full web support for conditional access policies across Azure DevOps and partner web properties appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    ICYMI: Ask the Expert – Fabric Databases
    On Wednesday, January 29th, several members of the team gathered for an hour of Q&A on SQL database in Fabric. SQL database in Fabric is the first SaaS database in Fabric bringing transactional and analytical data together without compromising application performance, so it’s no surprise that the hour was packed with questions.  Let’s jump right … Continue reading “ICYMI: Ask the Expert – Fabric Databases “  ( 7 min )
    Simplifying access control: Assigning default schemas to users in Fabric Warehouse
    You can now assign default schema to your users on Fabric Data Warehouse and to Analytics SQL Endpoints. This means you can now combine the ability to create schemas on Fabric Lakehouse and benefit from this enhancement for SQL Endpoints,. A default schema defines how database objects—like tables, views, and procedures—are organized and associated with … Continue reading “Simplifying access control: Assigning default schemas to users in Fabric Warehouse”  ( 5 min )
    Introducing enhanced conversation with Microsoft Fabric Copilot (Preview)
    Enhancing AI Functionalities with a Commitment to Privacy and Security At Microsoft, we continuously strive to enhance our services based on valuable customer feedback and evolving needs. In response to our data storage commitments, we are introducing improvements to AI functionalities in Microsoft Fabric. We are thrilled to introduce a new way to store chat prompts … Continue reading “Introducing enhanced conversation with Microsoft Fabric Copilot (Preview)”  ( 6 min )
  • Open

    Full-Text Search in Azure Cosmos DB
    What is full-text search? Full-text search is a technique that finds specific information within a large corpus of text. It goes beyond keyword matching and analyzes the content of documents to identify relevant results based on the user’s search query. Azure Cosmos DB for NoSQL now offers a powerful Full Text Search feature in preview, designed to enhance the search capabilities of your applications. Read more about it here. How does full-text search work? A full-text search involves two primary stages: Indexing Searching Indexing During the indexing stage, the system analyzes the text content of documents and stores the data in a structured format. This process typically involves: Tokenization: Breaking down text into individual words or units called tokens. This is like separating a …  ( 33 min )
    Why Every JavaScript Developer Should Try TypeScript
    Introduction "Why did the JavaScript developer break up with TypeScript?" "Because they couldn’t handle the commitment!" As a student entrepreneur, you're constantly juggling coursework, projects, and maybe even a startup idea. You don’t have time to debug mysterious JavaScript errors at 2 AM. That's where TypeScript comes in helping you write cleaner, more reliable code so you can focus on building, not debugging. In this post, I’ll show you why TypeScript is a must-have skill for any student developer and how it can set your projects up for success. Overview of TypeScript JavaScript, the world's most-used programming language, powers cross-platform applications but wasn't designed for large-scale projects. It lacks some features needed for managing extensive codebases, making it challeng…  ( 30 min )
    Full-Text Search in Azure Cosmos DB
    What is full-text search? Full-text search is a technique that finds specific information within a large corpus of text. It goes beyond keyword matching and analyzes the content of documents to identify relevant results based on the user’s search query. Azure Cosmos DB for NoSQL now offers a powerful Full Text Search feature in preview, designed to enhance the search capabilities of your applications. Read more about it here. How does full-text search work? A full-text search involves two primary stages: Indexing Searching Indexing During the indexing stage, the system analyzes the text content of documents and stores the data in a structured format. This process typically involves: Tokenization: Breaking down text into individual words or units called tokens. This is like separating a …
    Why Every JavaScript Developer Should Try TypeScript
    Introduction "Why did the JavaScript developer break up with TypeScript?" "Because they couldn’t handle the commitment!" As a student entrepreneur, you're constantly juggling coursework, projects, and maybe even a startup idea. You don’t have time to debug mysterious JavaScript errors at 2 AM. That's where TypeScript comes in helping you write cleaner, more reliable code so you can focus on building, not debugging. In this post, I’ll show you why TypeScript is a must-have skill for any student developer and how it can set your projects up for success. Overview of TypeScript JavaScript, the world's most-used programming language, powers cross-platform applications but wasn't designed for large-scale projects. It lacks some features needed for managing extensive codebases, making it challeng…
    Full-Text Search in Azure Cosmos DB
    What is full-text search? Full-text search is a technique that finds specific information within a large corpus of text. It goes beyond keyword matching and analyzes the content of documents to identify relevant results based on the user’s search query. Azure Cosmos DB for NoSQL now offers a powerful Full Text Search feature in preview, designed to enhance the search capabilities of your applications. Read more about it here. How does full-text search work? A full-text search involves two primary stages: Indexing Searching Indexing During the indexing stage, the system analyzes the text content of documents and stores the data in a structured format. This process typically involves: Tokenization: Breaking down text into individual words or units called tokens. This is like separating a …
    Why Every JavaScript Developer Should Try TypeScript
    Introduction "Why did the JavaScript developer break up with TypeScript?" "Because they couldn’t handle the commitment!" As a student entrepreneur, you're constantly juggling coursework, projects, and maybe even a startup idea. You don’t have time to debug mysterious JavaScript errors at 2 AM. That's where TypeScript comes in helping you write cleaner, more reliable code so you can focus on building, not debugging. In this post, I’ll show you why TypeScript is a must-have skill for any student developer and how it can set your projects up for success. Overview of TypeScript JavaScript, the world's most-used programming language, powers cross-platform applications but wasn't designed for large-scale projects. It lacks some features needed for managing extensive codebases, making it challeng…
  • Open

    Semantic Kernel Roadmap H1 2025: Accelerating Agents, Processes, and Integration
    As we move into the first half of 2025, I’m excited to share our ambitious roadmap that we hope will enable you to build even more sophisticated AI applications with Semantic Kernel. Agent Framework 1.0 By the end of Q1 2025, the SK Agent Framework will transition from preview to general availability (GA). This milestone […] The post Semantic Kernel Roadmap H1 2025: Accelerating Agents, Processes, and Integration appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Voice Bot: GPT-4o-Realtime Best Practices - A learning from customer journey
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Realtime API powered by GPT-4o, developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening endless possibilities for innovation. For building voice AI solution introduction of GPT-4o-Realtime was a game changing technology which handles key features like interruption, language switching, emotion handling out of the box with low latency and optimized architecture. GPT-4o-Realtime based voice bot are the simplest to implement as they used Foundational Speech model as it could refer to a model that directly takes speech as an input and generates speech as output, without the need for t…  ( 85 min )
    Voice Bot: GPT-4o-Realtime Best Practices - A learning from customer journey
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Realtime API powered by GPT-4o, developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening endless possibilities for innovation. For building voice AI solution introduction of GPT-4o-Realtime was a game changing technology which handles key features like interruption, language switching, emotion handling out of the box with low latency and optimized architecture. GPT-4o-Realtime based voice bot are the simplest to implement as they used Foundational Speech model as it could refer to a model that directly takes speech as an input and generates speech as output, without the need for t…
    Voice Bot: GPT-4o-Realtime Best Practices - A learning from customer journey
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Realtime API powered by GPT-4o, developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening endless possibilities for innovation. For building voice AI solution introduction of GPT-4o-Realtime was a game changing technology which handles key features like interruption, language switching, emotion handling out of the box with low latency and optimized architecture. GPT-4o-Realtime based voice bot are the simplest to implement as they used Foundational Speech model as it could refer to a model that directly takes speech as an input and generates speech as output, without the need for t…
  • Open

    Announcing new Networking Troubleshooter preview
    Networking in general and networking in App Service can be complex. There are many ways to configure the networking components and features to create a secure environment that matches the requirements of each specific customer. We are trying hard to create documentation and Azure portal experiences that guide you along the way, but sometimes networking can still be an issue.  ( 3 min )
  • Open

    Do more with Copilot and agents
    For February, we’re delving deep into Copilots and AI agents. We have live events and learning resources that will help developers get started and do more so you can take your productivity to a new level. Learn about tools for creating agents, find out how to use GitHub Copilot to develop apps more quickly, build intelligent apps with .NET, start creating customized experiences for Microsoft Teams, and more. GitHub Copilot BootcampJoin the GitHub Copilot Bootcamp to deep dive into the tools and skills you need to supercharge your development productivity and with GitHub Copilot. This is a 4-part live series happening February 4–13, 2025. AI agents — what they are, and how they’ll change the way we workWhat are AI agents? Discover what agents are, how they work autonomously around-the-clock…
    Do more with Copilot and agents
    For February, we’re delving deep into Copilots and AI agents. We have live events and learning resources that will help developers get started and do more so you can take your productivity to a new level. Learn about tools for creating agents, find out how to use GitHub Copilot to develop apps more quickly, build intelligent apps with .NET, start creating customized experiences for Microsoft Teams, and more. GitHub Copilot BootcampJoin the GitHub Copilot Bootcamp to deep dive into the tools and skills you need to supercharge your development productivity and with GitHub Copilot. This is a 4-part live series happening February 4–13, 2025. AI agents — what they are, and how they’ll change the way we workWhat are AI agents? Discover what agents are, how they work autonomously around-the-clock…
    Open Standard Enterprise Java and our Secure Future Initiative
    Microsoft Azure is the best place for enterprise Java workloads. Whether you are using plain Java SE, Spring Boot and its many sub-projects, or a Jakarta EE and MicroProfile runtime, our portfolio of Java support has first class, framework-native, compute offerings and detailed guidance to give you confidence in your choice of Azure for your mission critical Java workloads. This blog post covers the Jakarta EE and MicroProfile part of our Java on Azure portfolio, and specifically how our Secure Future Initiative (SFI) is supported by Jakarta EE and MicroProfile on Azure. What is Jakarta EE and MicroProfile on Azure? Our product offering for Jakarta EE and MicroProfile on Azure is partner driven and Azure native. We recognize that our partners are the experts in the Java frameworks that pow…
    Open Standard Enterprise Java and our Secure Future Initiative
    Microsoft Azure is the best place for enterprise Java workloads. Whether you are using plain Java SE, Spring Boot and its many sub-projects, or a Jakarta EE and MicroProfile runtime, our portfolio of Java support has first class, framework-native, compute offerings and detailed guidance to give you confidence in your choice of Azure for your mission critical Java workloads. This blog post covers the Jakarta EE and MicroProfile part of our Java on Azure portfolio, and specifically how our Secure Future Initiative (SFI) is supported by Jakarta EE and MicroProfile on Azure.   Oracle WebLogic Server on Azure Azure compute offer Deploy it now Paved paths SFI notes AKS https://aka.ms/wlsaks Deploy WebLogic Server on Azure Kubernetes Service using the Azure portal - Azure Kubernetes S…
  • Open

    Update to Azure DevOps Allowed IP addresses
    We are excited to announce some important upgrades to our networking infrastructure that will enhance the performance and reliability of our service. As part of these infrastructure upgrades, we are introducing new IP addresses that you will need to allow list in your firewall configurations. What’s Changing And Why? We are transitioning from the current […] The post Update to Azure DevOps Allowed IP addresses appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Announcing the activation of billing for SQL database in Fabric
    Since SQL database is a native item in Fabric, it utilizes Fabric capacity units like other Fabric workloads. Compute charges apply only when the database is actively used, so you only consume what you need. Storage is billed separately on a monthly basis, as are automatic backups, which are retained for seven days. Billing for … Continue reading “Announcing the activation of billing for SQL database in Fabric”  ( 5 min )
    Open Mirroring for SAP sources – dab and Simplement
    Fabric Mirroring Database mirroring has been a hot topic in Microsoft Fabric since its introduction in 2024. It offers a user-friendly replication capability that allows you to connect to a source, select the tables you want to replicate, and mirror them into Fabric OneLake. The mirroring engine fetches an initial snapshot from the source and … Continue reading “Open Mirroring for SAP sources – dab and Simplement”  ( 6 min )
    Introducing template dashboards for Workspace Monitoring
    Co-authors: Iris Kaminer, Gellert Gintli, Nick Salch, Xiaodong Zhang We’re thrilled to announce ready-to-use Power BI and real-time dashboard template reports that integrate seamlessly with the workspace monitoring in Microsoft Fabric! These community-built, open-source, templates can be downloaded through the fabric-toolbox GitHub repo and are designed to help you quickly visualize critical insights across your … Continue reading “Introducing template dashboards for Workspace Monitoring”  ( 6 min )
  • Open

    Secure your Kubernetes workloads with Cloud NGFW AKS Landing Zone
    Microsoft Azure and Palo Alto Networks - Azure Marketplace offer for setting up an Azure Kubernetes Service (AKS)  and securing it with the Cloud NGFW. This offer makes the provisioning of an AKS cluster straightforward while leveraging the advanced network security in Azure powered by Palo Alto Networks. What does it include? The solution follows the AKS landing zone accelerator reference architecture to build a scalable Azure Kubernetes Service (AKS) cluster while following the Cloud Adoption Framework. The deployment follows the AKS Secure Baseline architecture, including Azure networking, security, identity, management, and monitoring services. It deploys an AKS cluster, an Application Gateway for Ingress, a Container Registry with Private Endpoints, and more. The cluster is then conne…  ( 24 min )
    Secure your Kubernetes workloads with Cloud NGFW AKS Landing Zone
    Microsoft Azure and Palo Alto Networks - Azure Marketplace offer for setting up an Azure Kubernetes Service (AKS)  and securing it with the Cloud NGFW. This offer makes the provisioning of an AKS cluster straightforward while leveraging the advanced network security in Azure powered by Palo Alto Networks. What does it include? The solution follows the AKS landing zone accelerator reference architecture to build a scalable Azure Kubernetes Service (AKS) cluster while following the Cloud Adoption Framework. The deployment follows the AKS Secure Baseline architecture, including Azure networking, security, identity, management, and monitoring services. It deploys an AKS cluster, an Application Gateway for Ingress, a Container Registry with Private Endpoints, and more. The cluster is then conne…
    Secure your Kubernetes workloads with Cloud NGFW AKS Landing Zone
    Microsoft Azure and Palo Alto Networks - Azure Marketplace offer for setting up an Azure Kubernetes Service (AKS)  and securing it with the Cloud NGFW. This offer makes the provisioning of an AKS cluster straightforward while leveraging the advanced network security in Azure powered by Palo Alto Networks. What does it include? The solution follows the AKS landing zone accelerator reference architecture to build a scalable Azure Kubernetes Service (AKS) cluster while following the Cloud Adoption Framework. The deployment follows the AKS Secure Baseline architecture, including Azure networking, security, identity, management, and monitoring services. It deploys an AKS cluster, an Application Gateway for Ingress, a Container Registry with Private Endpoints, and more. The cluster is then conne…
  • Open

    Building a Basic Chatbot with Azure OpenAI
    Overview In this turorial, we'll build a simple chatbot that uses Azure OpenAI to generate responses to user queries. To create a basic chatbot, we need to set up a language model resource that enables conversation capabilities. In this tutorial, we will: Set up the Azure OpenAI resource using the Azure AI Foundry portal. Retrieve the API key needed to connect the resource to your chatbot application. Once the API key is configured in your code, you will be able to integrate the language model into your chatbot and enable it to generate responses. By the end of this tutorial, you'll have a working chatbot that can generate responses using the Azure OpenAI model. Signing In and Setting Up Your Azure AI Foundry Workspace Signing In to Azure AI Foundry Open the Azure AI Foundry page in you…  ( 32 min )
    Building a Basic Chatbot with Azure OpenAI
    Overview In this turorial, we'll build a simple chatbot that uses Azure OpenAI to generate responses to user queries. To create a basic chatbot, we need to set up a language model resource that enables conversation capabilities. In this tutorial, we will: Set up the Azure OpenAI resource using the Azure AI Foundry portal. Retrieve the API key needed to connect the resource to your chatbot application. Once the API key is configured in your code, you will be able to integrate the language model into your chatbot and enable it to generate responses. By the end of this tutorial, you'll have a working chatbot that can generate responses using the Azure OpenAI model. Signing In and Setting Up Your Azure AI Foundry Workspace Signing In to Azure AI Foundry Open the Azure AI Foundry page in you…
    Building a Basic Chatbot with Azure OpenAI
    Overview In this turorial, we'll build a simple chatbot that uses Azure OpenAI to generate responses to user queries. To create a basic chatbot, we need to set up a language model resource that enables conversation capabilities. In this tutorial, we will: Set up the Azure OpenAI resource using the Azure AI Foundry portal. Retrieve the API key needed to connect the resource to your chatbot application. Once the API key is configured in your code, you will be able to integrate the language model into your chatbot and enable it to generate responses. By the end of this tutorial, you'll have a working chatbot that can generate responses using the Azure OpenAI model. Signing In and Setting Up Your Azure AI Foundry Workspace Signing In to Azure AI Foundry Open the Azure AI Foundry page in you…
  • Open

    Log Analytics Simple Mode is Now Generally Available
    Over the past few months, we gradually rolled out the new Log Analytics experience to our users. The feedback has been positive, and the telemetry shows that users are more successful at working with their data. Today, we’re excited to announce that the new Log Analytics experience, including Simple Mode and other improvements, is now fully available and enabled by default.  How simple is it? Here are two quick examples:  Investigate Workspace Usage:  Double-click the Usage table to load the latest data.  Add an Aggregate operation to sum the Quantity column by DataType. Add a Sort operation by Quantity, and instantly see the results organized. At the top-right, click the three dots and create a New Alert Rule.    Troubleshoot Kubernetes Pods:  Select the KubePodInventory table and click Run to view the latest data.  Filter the PodStatus column to Pending.  Add an Aggregate operator to count the failed pods by Name.  Click Share and export the results to CSV.   That’s it - just a few clicks, and you’ve gained meaningful insights!   Seamless Transition for Advanced Users  If you’re comfortable with Kusto Query Language (KQL), you can switch to KQL Mode, edit the auto-generated query, and dive deeper. Once done, you can switch back to Simple Mode to continue exploring with updated results. You can also set your preferred default mode through the Settings menu for a customized experience.  Improved Usability The interface includes organized menus for key actions like Save, Share, and Export, and a collapsible pane for quick access to tables, saved queries, examples, and more.  To dive deeper into Simple Mode and other recent updates, visit our official documentation.    Your Feedback Matters  We’re committed to continuously improving Log Analytics to meet our users’ needs. Your input is invaluable in shaping its capabilities and user experience. For questions or feedback, feel free to reach out to Noyablanga@microsoft.com or use the Give Feedback form directly in Logs.  ( 22 min )
    Log Analytics Simple Mode is Now Generally Available
    Over the past few months, we gradually rolled out the new Log Analytics experience to our users. The feedback has been positive, and the telemetry shows that users are more successful at working with their data. Today, we’re excited to announce that the new Log Analytics experience, including Simple Mode and other improvements, is now fully available and enabled by default.  How simple is it? Here are two quick examples:  Investigate Workspace Usage:  Double-click the Usage table to load the latest data.  Add an Aggregate operation to sum the Quantity column by DataType. Add a Sort operation by Quantity, and instantly see the results organized. At the top-right, click the three dots and create a New Alert Rule.    Troubleshoot Kubernetes Pods:  Select the KubePodInventory table and click Run to view the latest data.  Filter the PodStatus column to Pending.  Add an Aggregate operator to count the failed pods by Name.  Click Share and export the results to CSV.   That’s it - just a few clicks, and you’ve gained meaningful insights!   Seamless Transition for Advanced Users  If you’re comfortable with Kusto Query Language (KQL), you can switch to KQL Mode, edit the auto-generated query, and dive deeper. Once done, you can switch back to Simple Mode to continue exploring with updated results. You can also set your preferred default mode through the Settings menu for a customized experience.  Improved Usability The interface includes organized menus for key actions like Save, Share, and Export, and a collapsible pane for quick access to tables, saved queries, examples, and more.  To dive deeper into Simple Mode and other recent updates, visit our official documentation.    Your Feedback Matters  We’re committed to continuously improving Log Analytics to meet our users’ needs. Your input is invaluable in shaping its capabilities and user experience. For questions or feedback, feel free to reach out to Noyablanga@microsoft.com or use the Give Feedback form directly in Logs.
    Log Analytics Simple Mode is Now Generally Available
    Over the past few months, we gradually rolled out the new Log Analytics experience to our users. The feedback has been positive, and the telemetry shows that users are more successful at working with their data. Today, we’re excited to announce that the new Log Analytics experience, including Simple Mode and other improvements, is now fully available and enabled by default.  How simple is it? Here are two quick examples:  Investigate Workspace Usage:  Double-click the Usage table to load the latest data.  Add an Aggregate operation to sum the Quantity column by DataType. Add a Sort operation by Quantity, and instantly see the results organized. At the top-right, click the three dots and create a New Alert Rule.    Troubleshoot Kubernetes Pods:  Select the KubePodInventory table and click Run to view the latest data.  Filter the PodStatus column to Pending.  Add an Aggregate operator to count the failed pods by Name.  Click Share and export the results to CSV.   That’s it - just a few clicks, and you’ve gained meaningful insights!   Seamless Transition for Advanced Users  If you’re comfortable with Kusto Query Language (KQL), you can switch to KQL Mode, edit the auto-generated query, and dive deeper. Once done, you can switch back to Simple Mode to continue exploring with updated results. You can also set your preferred default mode through the Settings menu for a customized experience.  Improved Usability The interface includes organized menus for key actions like Save, Share, and Export, and a collapsible pane for quick access to tables, saved queries, examples, and more.  To dive deeper into Simple Mode and other recent updates, visit our official documentation.    Your Feedback Matters  We’re committed to continuously improving Log Analytics to meet our users’ needs. Your input is invaluable in shaping its capabilities and user experience. For questions or feedback, feel free to reach out to Noyablanga@microsoft.com or use the Give Feedback form directly in Logs.

  • Open

    VPTQ Quantized 2-Bit Models: Principles, Steps, and Practical Implementation
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/Quantization-2-bit-VPTQ Welcome to this comprehensive guide where we delve into the application of VPTQ (Vector Post-Training Quantization) in quantizing models to 2 bits. This article aims to help you understand the core concepts of VPTQ, the key steps involved in the quantization process, and how to achieve efficient model compression and performance optimization using VPTQ. Introduction As large language models (LLMs) continue to grow in scale, the demand for storage and computational resources increases accordingly. To run these large models on hardware with …  ( 48 min )
    VPTQ Quantized 2-Bit Models: Principles, Steps, and Practical Implementation
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/Quantization-2-bit-VPTQ Welcome to this comprehensive guide where we delve into the application of VPTQ (Vector Post-Training Quantization) in quantizing models to 2 bits. This article aims to help you understand the core concepts of VPTQ, the key steps involved in the quantization process, and how to achieve efficient model compression and performance optimization using VPTQ. Introduction As large language models (LLMs) continue to grow in scale, the demand for storage and computational resources increases accordingly. To run these large models on hardware with …
    VPTQ Quantized 2-Bit Models: Principles, Steps, and Practical Implementation
    Please refer to my repo to get more AI resources, wellcome to star it: https://github.com/xinyuwei-david/david-share.git  This article if from one of my repo: https://github.com/xinyuwei-david/david-share/tree/master/Deep-Learning/Quantization-2-bit-VPTQ Welcome to this comprehensive guide where we delve into the application of VPTQ (Vector Post-Training Quantization) in quantizing models to 2 bits. This article aims to help you understand the core concepts of VPTQ, the key steps involved in the quantization process, and how to achieve efficient model compression and performance optimization using VPTQ. Introduction As large language models (LLMs) continue to grow in scale, the demand for storage and computational resources increases accordingly. To run these large models on hardware with …

  • Open

    Building a TOTP Authenticator App on Azure Functions and Azure Key Vault
    Two-factor authentication (2FA) has become a cornerstone of modern digital security, serving as a crucial defense against unauthorized access and account compromises. While many organizations rely on popular authenticator apps like Microsoft Authenticator, there's significant value in understanding how to build and customize your own TOTP (Time-based One-Time Password) solution. This becomes particularly relevant for those requiring specific customizations, enhanced security controls, or seamless integration with existing systems. In this blog, I'll walk through building a TOTP authenticator application using Azure's modern cloud services. Our solution demonstrates using Azure Functions for server-side operations with Azure Key Vault for secrets management. A bonus section covers integrati…
    Building a TOTP Authenticator App on Azure Functions and Azure Key Vault
    Two-factor authentication (2FA) has become a cornerstone of modern digital security, serving as a crucial defense against unauthorized access and account compromises. While many organizations rely on popular authenticator apps like Microsoft Authenticator, there's significant value in understanding how to build and customize your own TOTP (Time-based One-Time Password) solution. This becomes particularly relevant for those requiring specific customizations, enhanced security controls, or seamless integration with existing systems. In this blog, I'll walk through building a TOTP authenticator application using Azure's modern cloud services. Our solution demonstrates using Azure Functions for server-side operations with Azure Key Vault for secrets management. A bonus section covers integrati…
  • Open

    Unlocking Function Calling with vLLM and Azure Machine Learning
    Introduction In this post, we’ll explain how to deploy LLMs on vLLM using Azure Machine Learning’s Managed Online Endpoints for efficient, scalable, and secure real-time inference. Next, we will look at function calling, and how vLLM's engine can support you to achieve that. To get started, let’s briefly look into what vLLM and Managed Online Endpoints are. You can find the full code examples on vllm-on-azure-machine-learning. vLLM vLLM is a high-throughput and memory-efficient inference and serving engine designed for large language models (LLMs). It optimizes the serving and execution of LLMs by utilizing advanced memory management techniques, such as PagedAttention, which efficiently manages attention key and value memory. This allows for continuous batching of incoming requests and fas…  ( 46 min )
    Unlocking Function Calling with vLLM and Azure Machine Learning
    Introduction In this post, we’ll explain how to deploy LLMs on vLLM using Azure Machine Learning’s Managed Online Endpoints for efficient, scalable, and secure real-time inference. Next, we will look at function calling, and how vLLM's engine can support you to achieve that. To get started, let’s briefly look into what vLLM and Managed Online Endpoints are. You can find the full code examples on vllm-on-azure-machine-learning. vLLM vLLM is a high-throughput and memory-efficient inference and serving engine designed for large language models (LLMs). It optimizes the serving and execution of LLMs by utilizing advanced memory management techniques, such as PagedAttention, which efficiently manages attention key and value memory. This allows for continuous batching of incoming requests and fas…
    Unlocking Function Calling with vLLM and Azure Machine Learning
    Introduction In this post, we’ll explain how to deploy LLMs on vLLM using Azure Machine Learning’s Managed Online Endpoints for efficient, scalable, and secure real-time inference. Next, we will look at function calling, and how vLLM's engine can support you to achieve that. To get started, let’s briefly look into what vLLM and Managed Online Endpoints are. You can find the full code examples on vllm-on-azure-machine-learning. vLLM vLLM is a high-throughput and memory-efficient inference and serving engine designed for large language models (LLMs). It optimizes the serving and execution of LLMs by utilizing advanced memory management techniques, such as PagedAttention, which efficiently manages attention key and value memory. This allows for continuous batching of incoming requests and fas…
  • Open

    Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance
    DeepSeek-R1 is very popular, and it can achieve the same capabilities as OpenAI o1 in advanced reasoning. Microsoft has also added DeepSeek-R1 models to Azure AI Foundry and GitHub Models. We can compare DeepSeek-R1 ith other available models through GitHub Models Playground Note This series revolves around deployment of SLMs to Edge Devices 'Edge AI' we will focus on the deployment advanced reasoning models, with different application scenarios. You can learn more in the following session AI Tour BRK453. In this experiement we want to deploy advanced reasoning models to the edge, so that they can run on edge devices with limited computing power and offline environments. At this time, the recommendation is to use the traditional ONNX model . We can use Microsoft Olive to convert the DeepS…  ( 29 min )
    Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance
    DeepSeek-R1 is very popular, and it can achieve the same capabilities as OpenAI o1 in advanced reasoning. Microsoft has also added DeepSeek-R1 models to Azure AI Foundry and GitHub Models. We can compare DeepSeek-R1 ith other available models through GitHub Models Playground Note This series revolves around deployment of SLMs to Edge Devices 'Edge AI' we will focus on the deployment advanced reasoning models, with different application scenarios. You can learn more in the following session AI Tour BRK453. In this experiement we want to deploy advanced reasoning models to the edge, so that they can run on edge devices with limited computing power and offline environments. At this time, the recommendation is to use the traditional ONNX model . We can use Microsoft Olive to convert the DeepS…
    Using Advanced Reasoning Model on EdgeAI Part 1 - Quantization, Conversion, Performance
    DeepSeek-R1 is very popular, and it can achieve the same capabilities as OpenAI o1 in advanced reasoning. Microsoft has also added DeepSeek-R1 models to Azure AI Foundry and GitHub Models. We can compare DeepSeek-R1 ith other available models through GitHub Models Playground Note This series revolves around deployment of SLMs to Edge Devices 'Edge AI' we will focus on the deployment advanced reasoning models, with different application scenarios. You can learn more in the following session AI Tour BRK453. In this experiement we want to deploy advanced reasoning models to the edge, so that they can run on edge devices with limited computing power and offline environments. At this time, the recommendation is to use the traditional ONNX model . We can use Microsoft Olive to convert the DeepS…
  • Open

    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…  ( 35 min )
    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…
    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…
  • Open

    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…  ( 35 min )
    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…
    SecretLess App Registrations in Entra ID
    Identity is an area that has gotten a lot of attention the past few years and is now essential for every developer to have at least a basic level of understanding of. (We can probably debate what constitutes basic - not everyone has to be able to implement an authentication library from scratch, but everyone should have an idea of things to think about when adding a sign-in button to their web app.) On the .NET side we've seen good things with standards compliant libraries provided by Microsoft. With Blazor it has gotten easier to implement patterns like Backend for Frontends (BFF) to move the auth process away from the client side. In .NET 9 this was improved upon further with built-in authentication state handling. When it comes to authenticating across services, like your web app needin…
  • Open

    Introducing the Microsoft Graph Export-Import APIs for Exchange in public preview
    We are happy to announce the launch of Export-Import APIs in limited Public Preview (Beta), a set of Microsoft Graph APIs that empower applications to discover, export and import contents from Exchange Online mailboxes in full fidelity. The post Introducing the Microsoft Graph Export-Import APIs for Exchange in public preview appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    Microsoft Translator Pro is now Generally Available (GA)
    In November 2024, we introduced the gated public preview release of Microsoft Translator Pro, our robust solution crafted to help enterprises break down language barriers in the workplace. Today, we are thrilled to announce that Microsoft Translator Pro is now generally available on iOS. Screenshots of Microsoft Translator Pro app New features of the gated GA release Below are the latest features in this release. For more information on the core features, please refer to the public preview release announcement.   Customized phrasebook: Upload a phrasebook with your organization’s phrases to facilitate quick and efficient communication in another language.   International availability: The app is now accessible in selected countries outside the United States. To view the complete list of supported countries, please refer to the Microsoft Translator Pro availability by country   Availability in US Government cloud: Microsoft Translator Pro, which is already available in commercial cloud, is now also available within the US Government cloud. US Government agencies can now operate the app within the US Government cloud. For detailed information on regional availability, please refer to the Microsoft Translator Pro availability by region   Expanded language coverage: The app now supports additional languages when connected to the internet, enhancing its usability for a broader range of users. For more details, please visit the Microsoft Translator Pro language support   Join the gated GA To onboard the GA version of the app, please complete the gating form. Upon meeting the criteria, we will grant your organization access to the paid version of the Microsoft Translator Pro app.    Learn more and get started: Microsoft Translator Pro documentation Microsoft Translator Pro FAQ  ( 21 min )
    Microsoft Translator Pro is now Generally Available (GA)
    In November 2024, we introduced the gated public preview release of Microsoft Translator Pro, our robust solution crafted to help enterprises break down language barriers in the workplace. Today, we are thrilled to announce that Microsoft Translator Pro is now generally available on iOS. Screenshots of Microsoft Translator Pro app New features of the gated GA release Below are the latest features in this release. For more information on the core features, please refer to the public preview release announcement.   Customized phrasebook: Upload a phrasebook with your organization’s phrases to facilitate quick and efficient communication in another language.   International availability: The app is now accessible in selected countries outside the United States. To view the complete list of supported countries, please refer to the Microsoft Translator Pro availability by country   Availability in US Government cloud: Microsoft Translator Pro, which is already available in commercial cloud, is now also available within the US Government cloud. US Government agencies can now operate the app within the US Government cloud. For detailed information on regional availability, please refer to the Microsoft Translator Pro availability by region   Expanded language coverage: The app now supports additional languages when connected to the internet, enhancing its usability for a broader range of users. For more details, please visit the Microsoft Translator Pro language support   Join the gated GA To onboard the GA version of the app, please complete the gating form. Upon meeting the criteria, we will grant your organization access to the paid version of the Microsoft Translator Pro app.    Learn more and get started: Microsoft Translator Pro documentation Microsoft Translator Pro FAQ
    Microsoft Translator Pro is now Generally Available (GA)
    In November 2024, we introduced the gated public preview release of Microsoft Translator Pro, our robust solution crafted to help enterprises break down language barriers in the workplace. Today, we are thrilled to announce that Microsoft Translator Pro is now generally available on iOS. Screenshots of Microsoft Translator Pro app New features of the gated GA release Below are the latest features in this release. For more information on the core features, please refer to the public preview release announcement.   Customized phrasebook: Upload a phrasebook with your organization’s phrases to facilitate quick and efficient communication in another language.   International availability: The app is now accessible in selected countries outside the United States. To view the complete list of supported countries, please refer to the Microsoft Translator Pro availability by country   Availability in US Government cloud: Microsoft Translator Pro, which is already available in commercial cloud, is now also available within the US Government cloud. US Government agencies can now operate the app within the US Government cloud. For detailed information on regional availability, please refer to the Microsoft Translator Pro availability by region   Expanded language coverage: The app now supports additional languages when connected to the internet, enhancing its usability for a broader range of users. For more details, please visit the Microsoft Translator Pro language support   Join the gated GA To onboard the GA version of the app, please complete the gating form. Upon meeting the criteria, we will grant your organization access to the paid version of the Microsoft Translator Pro app.    Learn more and get started: Microsoft Translator Pro documentation Microsoft Translator Pro FAQ
    Introducing Enhanced Azure OpenAI Distillation and Fine-Tuning Capabilities
    As we continue to push the boundaries of AI capabilities, we are excited to announce significant updates to our Azure OpenAI Service, specifically focused on enhancing our distillation and fine-tuning features. Following our recent announcement on the public preview of distillation in Azure OpenAI Service, we're releasing a compare experience for evaluation and expanding our model and region coverage for stored completions -- making distillation easier than ever! We are also providing more deployment types in our evaluation offering. These updates aim to provide more robust, flexible, and efficient AI solutions to meet diverse business needs. Overview of Distillation in Azure OpenAI Service Azure OpenAI Service distillation involves three main components: Stored Completions: Easily genera…  ( 26 min )
    Introducing Enhanced Azure OpenAI Distillation and Fine-Tuning Capabilities
    As we continue to push the boundaries of AI capabilities, we are excited to announce significant updates to our Azure OpenAI Service, specifically focused on enhancing our distillation and fine-tuning features. Following our recent announcement on the public preview of distillation in Azure OpenAI Service, we're releasing a compare experience for evaluation and expanding our model and region coverage for stored completions -- making distillation easier than ever! We are also providing more deployment types in our evaluation offering. These updates aim to provide more robust, flexible, and efficient AI solutions to meet diverse business needs. Overview of Distillation in Azure OpenAI Service Azure OpenAI Service distillation involves three main components: Stored Completions: Easily genera…
    Introducing Enhanced Azure OpenAI Distillation and Fine-Tuning Capabilities
    As we continue to push the boundaries of AI capabilities, we are excited to announce significant updates to our Azure OpenAI Service, specifically focused on enhancing our distillation and fine-tuning features. Following our recent announcement on the public preview of distillation in Azure OpenAI Service, we're releasing a compare experience for evaluation and expanding our model and region coverage for stored completions -- making distillation easier than ever! We are also providing more deployment types in our evaluation offering. These updates aim to provide more robust, flexible, and efficient AI solutions to meet diverse business needs. Overview of Distillation in Azure OpenAI Service Azure OpenAI Service distillation involves three main components: Stored Completions: Easily genera…

  • Open

    Profiling Python applications with high CPU on App Service Linux
    This blog post will go over different toolsets that can be used to profile a Python application experiencing high CPU and/or slowness.  ( 8 min )
  • Open

    Microsoft Fabric January 2025 update
    We’ve got a lot of exciting updates this month. To name a few, NotebookUtils session management utilities, Enhancing COPY INTO operations with Granular Permissions in Data Warehouse, Application Lifecycle Management (ALM) and Fabric REST APIs. Keep reading to hear about everything we have in store for you this month. Microsoft Fabric Community Conference 2025 After … Continue reading “Microsoft Fabric January 2025 update”  ( 33 min )
  • Open

    Leveraging Azure Container Apps Labels for Environment-based Routing and Feature Testing
    Introduction In modern cloud applications, managing different environments (such as development, staging, and production) is crucial for ensuring smooth deployment and testing workflows. Azure Container Apps offers a powerful feature through labels and traffic splitting that can help developers easily manage multiple versions of an app, route traffic based on different environments, and enable controlled feature testing without disrupting live users. In this blog, we'll walk through a practical scenario where we deploy an experimental feature in a staging revision, test it with internal developers, and then switch the feature to production once it’s validated. We'll use Azure Container Apps labels and traffic splitting to achieve this seamless deployment process. Note: labels is a feature …
    Leveraging Azure Container Apps Labels for Environment-based Routing and Feature Testing
    Introduction In modern cloud applications, managing different environments (such as development, staging, and production) is crucial for ensuring smooth deployment and testing workflows. Azure Container Apps offers a powerful feature through labels and traffic splitting that can help developers easily manage multiple versions of an app, route traffic based on different environments, and enable controlled feature testing without disrupting live users. In this blog, we'll walk through a practical scenario where we deploy an experimental feature in a staging revision, test it with internal developers, and then switch the feature to production once it’s validated. We'll use Azure Container Apps labels and traffic splitting to achieve this seamless deployment process. Note: labels is a feature …
  • Open

    Getting started with Generative AI in Azure
    On 4th February at 9 AM PST (6 PM CET) we are kicking off the Generative AI Level Up Tuesday Microsoft Reactor series. Don't miss it, register now. Along with the first episode of the series, we are also starting a new Microsoft Learn Challenge. Join the challenge now  and get a curated selection of free AI resources as well as a digital badge of completion!  And that's not all! You'll have the opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 5th February at 8AM PST (5PM CET) to get all your questions answered. About this session: Getting started with Generative AI in Azure Discover the basics of generative AI, including core models and functionalities. Learn how to util…  ( 21 min )
    Getting started with Generative AI in Azure
    On 4th February at 9 AM PST (6 PM CET) we are kicking off the Generative AI Level Up Tuesday Microsoft Reactor series. Don't miss it, register now. Along with the first episode of the series, we are also starting a new Microsoft Learn Challenge. Join the challenge now  and get a curated selection of free AI resources as well as a digital badge of completion!  And that's not all! You'll have the opportunity to interact with the speakers after the session in a community roundtable call on Discord. Join the community at https://discord.gg/uwUyWw9xdn and tune in on 5th February at 8AM PST (5PM CET) to get all your questions answered. About this session: Getting started with Generative AI in Azure Discover the basics of generative AI, including core models and functionalities. Learn how to util…
    Getting started with Generative AI in Azure
    On 4th February at 9 AM PST (6 PM CET) we are kicking off the Generative AI Level Up Tuesday Microsoft Reactor series. Don't miss it, register now. Along with the first episode of the series, we are also starting a new Microsoft Learn Challenge. Join the challenge now  and get a curated selection of free AI resources as well as a digital badge of completion!  About this session: Getting started with Generative AI in Azure Discover the basics of generative AI, including core models and functionalities. Learn how to utilize these models within the Azure ecosystem, leveraging various services to build your own generative AI applications. Learn more: Getting started with Generative AI in Azure | Microsoft Reactor Explore session content: aka.ms/Feb4GenerativeAIAzureRepo1 Speakers: Bruno Capuano currently works as a Principal Cloud Advocate at Microsoft focused on empowering the Toronto area to build awesome things with Azure. Bruno was a Microsoft MVP for 14 years and has over 20 years of experience as a software developer and loves to tinker with electronics. He lives in a small town near Toronto with his wife and two adorable kids. Bruno speaks two languages, English and Spanish. Ayca Bas joined Microsoft with Developer Experience (DX) team where closely worked with MSP/MVP communities. Followed by Services Apps domain that she extensively focused on Azure App Services, Bots, Cognitive Services and IoT, and worked onsite with more than a hundred customers around EMEA. She joined the Advocacy team, concentrating on Microsoft Graph for developers. She hold a double major degree in Electrical & Electronics Engineering and Software Engineering. In her free time, she likes playing piano, wake-surfing, camping, photography and doing smart home improvements with Raspberry Pi or Arduino.
    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!  ( 21 min )
    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!
    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!
  • Open

    DeepSeek-R1 on Azure with LangChain4j Demo
    DeepSeek-R1 has been announced on GitHub Models as well as on Azure AI Foundry, and the goal of this blog post is to demonstrate how to use it with LangChain4j and Java. We concentrate here on GitHub Models as they are easier to use (you just need a GitHub token, no Azure subscription required), then Azure AI Foundry […] The post DeepSeek-R1 on Azure with LangChain4j Demo appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!  ( 21 min )
    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!
    AI Toolkit for VS Code January Update
    AI Toolkit is a VS Code extension aiming to empower AI engineers in transforming their curiosity into advanced generative AI applications. This toolkit, featuring both local-enabled and cloud-accelerated inner loop capabilities, is set to ease model exploration, prompt engineering, and the creation and evaluation of generative applications. We are pleased to announce the January Update to the toolkit with support for OpenAI's o1 model and enhancements in the Model Playground and Bulk Run features. What's New? January’s update brings several exciting new features to boost your productivity in AI development. Here's a closer look at what's included: Support for OpenAI’s new o1 Model: We've added access to GitHub hosted OpenAI’s latest o1 model. This new model replaces the o1-preview and offers even better performance in handling complex tasks. You can start interacting with the o1 model within VS Code for free by using the latest AI Toolkit update. Chat History Support in Model Playground: We have heard your feedback that tracking past model interactions is crucial. The Model Playground has been updated to include support for chat history. This feature saves chat history as individual files stored entirely on your local machine, ensuring privacy and security. Bulk Run with Prompt Templating: The Bulk Run feature, introduced in the AI Toolkit December release, now supports prompt templating with variables. This allows users to create templates for prompts, insert variables, and run them in bulk. This enhancement simplifies the process of testing multiple scenarios and models. Stay tuned for more updates and enhancements as we continue to innovate and support your journey in AI development. Try out the AI Toolkit for Visual Studio Code, share your thoughts, and file issues and suggest features in our GitHub repo. Thank you for being a part of this journey with us!
  • Open

    Dev Proxy v0.24 with improved generating OpenAPI specs
    Try the latest version of Dev Proxy to simulate APIs and test your applications under real-world conditions The post Dev Proxy v0.24 with improved generating OpenAPI specs appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    Automating Developer Environments with Microsoft Dev Box and Teams Customizations
    The following blog walks through the experience of defining and automating the creation of developer environments with the newly announced Teams Customizations feature in Microsoft Dev Box. This allows developer team leads or managers to define the software installed by default every time a developer creates a new environment, ensuring that every team member has […] The post Automating Developer Environments with Microsoft Dev Box and Teams Customizations appeared first on Develop from the cloud.  ( 28 min )
  • Open

    Real Time, Real You: Announcing General Availability of Face Liveness Detection
    A Milestone in Identity Verification We are excited to announce the general availability of our face liveness detection features, a key milestone in making identity verification both seamless and secure. As deepfake technology and sophisticated spoofing attacks continue to evolve, organizations need solutions that can verify the authenticity of an individual in real time. During the preview, we listened to customer feedback, expanded capabilities, and made significant improvements to ensure that liveness detection works across three platforms and for common use cases. What’s New Since the Preview? During the preview, we introduced several features that laid the foundation for secure and seamless identity verification, including active challenge in JavaScript library. Building on that found…  ( 25 min )
    Real Time, Real You: Announcing General Availability of Face Liveness Detection
    A Milestone in Identity Verification We are excited to announce the general availability of our face liveness detection features, a key milestone in making identity verification both seamless and secure. As deepfake technology and sophisticated spoofing attacks continue to evolve, organizations need solutions that can verify the authenticity of an individual in real time. During the preview, we listened to customer feedback, expanded capabilities, and made significant improvements to ensure that liveness detection works across three platforms and for common use cases. What’s New Since the Preview? During the preview, we introduced several features that laid the foundation for secure and seamless identity verification, including active challenge in JavaScript library. Building on that found…
    Real Time, Real You: Announcing General Availability of Face Liveness Detection
    A Milestone in Identity Verification We are excited to announce the general availability of our face liveness detection features, a key milestone in making identity verification both seamless and secure. As deepfake technology and sophisticated spoofing attacks continue to evolve, organizations need solutions that can verify the authenticity of an individual in real time. During the preview, we listened to customer feedback, expanded capabilities, and made significant improvements to ensure that liveness detection works across three platforms and for common use cases. What’s New Since the Preview? During the preview, we introduced several features that laid the foundation for secure and seamless identity verification, including active challenge in JavaScript library. Building on that found…
  • Open

    Using DeepSeek models in Microsoft Semantic Kernel
    DeepSeek recently awed the AI community by open sourcing two new state-of-the-art models, the DeepSeek-V3 and a reasoning model, the DeepSeek-R1, that not only claim to be op-par with the most capable models from OpenAI but are also extremely cost-effective. We’d like to highlight the recent announcement from the Azure AI Foundry team highlighting DeepSeek […] The post Using DeepSeek models in Microsoft Semantic Kernel appeared first on Semantic Kernel.  ( 24 min )
2025-06-28T01:41:29.103Z osmosfeed 1.15.1