
    Announcing the Data Wrangler powered Notebook Results Table
    An introduction to the new Results Table integrated into the output cell of Notebooks, powered by the VS Code extension called Data Wrangler. The post Announcing the Data Wrangler powered Notebook Results Table appeared first on Microsoft for Python Developers Blog.  ( 23 min )

    Generating Classes with Custom Naming Conventions Using GitHub Copilot and a Custom MCP Server
GitHub Spark and GitHub Copilot are powerful development tools that can significantly boost productivity even when used out of the box. However, in enterprise settings, a common request is for development support that aligns with specific compliance requirements or regulations. While GitHub Copilot allows you to choose models like GPT-4o or others, it does not currently support the use of custom fine-tuned models. Additionally, many users might find it unclear how to integrate Copilot with external services, which can be a source of frustration. To address such needs, one possible approach is to build a custom MCP server and connect it to GitHub Copilot. For a basic “Hello World” style guide on how to set this up, please refer to the articles below. https://devblogs.microsoft.com/dotnet/build-a-model…  ( 34 min )
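
    For context, a minimal custom MCP server with a naming-convention tool might look like the sketch below. It is not taken from the post; it assumes the official `mcp` Python SDK (FastMCP) and a hypothetical in-house naming rule.

    ```python
    # Hedged sketch: a tiny MCP server exposing one naming-convention tool that a
    # GitHub Copilot MCP client could call once the server is registered in VS Code.
    # Uses the official `mcp` Python SDK (FastMCP); the convention itself is hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("naming-conventions")

    @mcp.tool()
    def class_name(entity: str, layer: str) -> str:
        """Build a class name following a (hypothetical) in-house rule:
        PascalCase entity + layer suffix, e.g. class_name('customer order', 'repository')
        -> 'CustomerOrderRepository'."""
        pascal = "".join(part.capitalize() for part in entity.replace("-", " ").split())
        return f"{pascal}{layer.capitalize()}"

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default, which local MCP clients can launch
    ```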


    Test and validate your functions with Develop mode in Fabric User Data Functions (Preview)
    We’ve made a major update to User Data Functions! This update addresses the most critical feedback we have received since the start of the preview: the testing and development experience of your functions. Important note: In order to use this feature, you need to upgrade to the latest version of the fabric-user-data-functions library. Beginning in … Continue reading “Test and validate your functions with Develop mode in Fabric User Data Functions (Preview)”  ( 7 min )
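
    As a rough idea of what a function item looks like, here is a minimal sketch assuming the decorator-based pattern from the fabric-user-data-functions library; names are illustrative and not taken from the post, so verify against the docs.

    ```python
    # Minimal sketch of a Fabric User Data Function, assuming the decorator-based
    # pattern from the fabric-user-data-functions library; the preview API may change.
    import fabric.functions as fn

    udf = fn.UserDataFunctions()

    @udf.function()
    def hello_fabric(name: str) -> str:
        # Once published, Develop mode lets you invoke this with test inputs
        # directly from the Fabric portal.
        return f"Hello, {name}! Welcome to Fabric User Data Functions."
    ```
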
    OpenAPI specification code generation now available in Fabric User Data Functions
The OpenAPI Specification, formerly known as the Swagger Specification, is a widely used, language-agnostic description format for REST APIs. It allows humans and computers alike to discover and understand the capabilities of a service in a standardized format. This is critical for creating integrations with external systems, AI agents, and code generators. Now, you can automatically generate … Continue reading “OpenAPI specification code generation now available in Fabric User Data Functions”  ( 6 min )

    Announcing Early Preview: BYO Remote MCP Server on Azure Functions
If you’ve already built Model Context Protocol (MCP) servers with the MCP SDKs and wished you could turn them into world-class Remote MCP servers using a hyperscale, serverless platform, then this one’s for you! We’ve published samples showing how to host bring-your-own (BYO) Remote MCP servers on Azure Functions, so you can run the servers you’ve already built with the MCP SDKs—Python, Node, and .NET—with minimal changes and full serverless goodness. Why this is exciting Keep your code. If you’ve already implemented servers with the MCP SDKs (Python, Node, .NET), deploy them to Azure Functions as remote MCP servers with just a one-line code change. Serverless scale when you need it. Functions on the Flex Consumption plan handles bursty traffic, scales out and back to zero automatically,…  ( 22 min )


    How to Use Custom Models with Foundry Local: A Beginner's Guide
What is Foundry Local? Foundry Local is a user-friendly tool that helps you run small AI language models directly on your Windows or Mac computer. Think of it as a way to have your own personal ChatGPT running locally on your machine, without needing an internet connection or sending your data to external servers. Currently, Foundry Local works great with several popular model families: Phi models (Microsoft's small but powerful models), Qwen models (Alibaba's multilingual models), and DeepSeek models (efficient reasoning models). In this tutorial, we'll learn how to set up the Qwen3-0.6B model step by step. Don't worry if you're new to AI - we'll explain everything along the way! Why Do We Need to Convert AI Models? When you download AI models from websites like Hugging Face (think of it as G…  ( 37 min )
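
    To give a feel for how you talk to a locally hosted model once it is loaded, here is a hedged sketch using the OpenAI-compatible endpoint that Foundry Local exposes; the port and model alias below are placeholders you would replace with what the Foundry Local CLI reports.

    ```python
    # Hedged sketch: Foundry Local serves loaded models over an OpenAI-compatible
    # endpoint, so the standard `openai` client can call them. The base URL, port,
    # and model alias here are assumptions for illustration only.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:5273/v1",   # placeholder local endpoint
        api_key="not-needed-for-local-use",    # local inference typically requires no key
    )

    response = client.chat.completions.create(
        model="qwen3-0.6b",                    # hypothetical alias; use the name Foundry Local reports
        messages=[{"role": "user", "content": "In one sentence, what does Foundry Local do?"}],
    )
    print(response.choices[0].message.content)
    ```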


    Supercharge Your App Service Apps with AI Foundry Agents Connected to MCP servers
Introduction The integration of AI into web applications has reached a new milestone with the introduction of Model Context Protocol (MCP) support in Azure AI Foundry Agent Service. This powerful new feature allows developers to extend the capabilities of their Azure AI Foundry agents by connecting them to tools hosted on remote MCP servers—and the best part? Both your applications and MCP servers can be hosted seamlessly on Azure App Service. No custom code is required to get your agents hooked up to these MCP servers either; it's built right into this new functionality. In this blog post, we'll explore how this new functionality works, demonstrate a practical implementation, and show you how to get started with your own MCP-enabled applications on App Service. What is Model Context Proto…  ( 32 min )

    Building AI Agents with Ease: Function Calling in VS Code AI Toolkit
Function calling is a powerful technique that allows Large Language Models (LLMs) to go beyond their static training data and interact with real-world, dynamic information sources like databases and APIs. This capability turns a simple chat interface into a powerful tool for fetching real-time data, executing code, and much more. The process of Tool/Function calling typically involves two main components: a client application and the LLM. A user's request, such as "Do I need to carry an umbrella in Bangalore today?", is sent from the client application to the LLM. Along with this message, a tool definition is provided. This definition gives the LLM the context it needs to understand which tools are available and how to use them. The LLM analyses the user's request and the list of available …  ( 34 min )
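
    The flow described above can be sketched with a generic OpenAI-style chat-completions client; the weather tool and model name below are placeholders, not the AI Toolkit's actual API.

    ```python
    # Sketch of tool/function calling: the tool definition is sent with the user's
    # message, and the model replies with a structured tool call instead of prose.
    # Endpoint and model are placeholders; the contract mirrors the generic
    # OpenAI-style function-calling flow rather than any AI Toolkit-specific API.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes a chat-completions-compatible endpoint is configured

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool the client app implements
            "description": "Get today's weather forecast for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Do I need to carry an umbrella in Bangalore today?"}],
        tools=tools,
    )

    call = resp.choices[0].message.tool_calls[0]
    print(call.function.name, json.loads(call.function.arguments))  # e.g. get_weather {'city': 'Bangalore'}
    ```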

    Deploying OpenAI’s First Open-Source Model on Azure AKS with KAITO
Special Thanks: Thanks to Andrew Thomas, Kurt Niebuhr, and Sachi Desai for their invaluable support, insightful discussions, and for providing the compute resources that made testing this deployment possible. Your contributions were essential in bringing this project to life. Introduction: OpenAI recently released GPT-OSS, its first open-source large language model. With the rise of high-performance GPUs in the cloud, running advanced AI inference workloads has become easier than ever. Microsoft Azure’s AKS (Azure Kubernetes Service) paired with KAITO (Kubernetes AI Toolchain Operator) provides a powerful, scalable environment for deploying such models. KAITO simplifies provisioning GPU nodes, managing inference workloads, and integrating AI-optimized runtimes like vLLM. In this tutorial, we’l…  ( 28 min )


    Case study: TypeError: Invalid URL - ERR_INVALID_URL with Next.js (and Node.js) apps
This blog covers a niche issue with Next.js, though it can potentially affect any Node.js-based app, when using the Node.js URL API  ( 6 min )

    Send signals from Micronaut applications to Azure Monitor through zero-code instrumentation
The original post (Japanese) was written on 13 August 2025. Zero code instrumentationでMicronautアプリケーションからAzure Monitorにtraceやmetricを送信したい – Logico Inside This entry is part of the series of posts below. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub I received another question from the customer: I understand that I can get metrics and traces, but is it possible to send them to Azure Monitor (Application Insights) without using code? If you are not familiar with zero-code instrumentation, please check the following URL. Zero-code | OpenTelemetry The customer wondered whether, by specifying only the dependencies and destinations, everything else would be taken care of automatically. To confirm this (and to prov…  ( 36 min )
    Send logs from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 29 July 2025. MicronautからAzure Monitorにlogを送信したい – Logico Inside This entry is related to the following one. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Where can we post logs? The log destination differs depending on the managed service, such as App Service, Container Apps, etc. We can also send logs to a destination other than the default one. In the case of Azure Container Apps, for instance, we have several options for sending logs, summarized in a table (Type / Destination / How): writing console output to a log goes to ContainerAppConsoleLogs_CL; if diagnostic settings are configured, the destination table may differ from the above. The output destination can …  ( 40 min )
    Send traces from Micronaut native image applications to Azure Monitor
    The original post (Japanese) was written on 23 July 2025. MicronautからAzure Monitorにtraceを送信したい – Logico Inside This entry is related to the following one. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Prerequisites Maven: 3.9.10 JDK version 21 Micronaut: 4.9.0 or later The following tutorial was used as a reference. OpenTelemetry Tracing with Oracle Cloud and the Micronaut Framework As of 13 August 2025, GDK (Graal Dev Kit) guides are also available. Create and Trace a Micronaut Application Using Azure Monitor Create an archetype We can create an archetype using Micronaut’s CLI (mn) or Micronaut Launch. In this entry, use application.yml instead of application.properties for application conf…  ( 31 min )
    Send signals from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 20 July 2025. MicronautアプリケーションからAzure Monitorにlog/metric/traceを送信したい – Logico Inside Query One of my customers, using Micronaut to develop their applications, asked me the following question. We are currently creating microservices with Micronaut and orchestrating them on Azure (Azure Container Apps). We would like to ask about the following two points. We found information on how to send logs and metrics to Azure Monitor in the Micronaut documentation and guides, but we need to know how to send traces. Is it also available with GraalVM Native Image? In short, their question is: Spring Boot and Quarkus provide dependencies for this (the Spring Boot starter and the Quarkus OpenTelemetry extension), but does Micronaut do the same? Spring Boot starter Quarkus Opentelemetry E…  ( 24 min )
    Send metrics from Micronaut native image applications to Azure Monitor
The original post (Japanese) was written on 20 July 2025. MicronautからAzure Monitorにmetricを送信したい – Logico Inside This entry is related to the following one. Please take a look for background information. Send signals from Micronaut native image applications to Azure Monitor | Microsoft Community Hub Prerequisites Maven: 3.9.10 JDK version 21 Micronaut: 4.9.0 or later The following tutorials were used as a reference. Create a Micronaut Application to Collect Metrics and Monitor Them on Azure Monitor Metrics Collect Metrics with Micronaut Create an archetype We can create an archetype using Micronaut’s CLI (mn) or Micronaut Launch. In this entry, use application.yml instead of application.properties for application configuration. So, we need to specify the feature “yaml” so that we can includ…  ( 37 min )


    Model Mondays S2E9: Models for AI Agents
    1. Weekly Highlights This episode kicked off with the top news and updates in the Azure AI ecosystem: GPT-5 and GPT-OSS Models Now in Azure AI Foundry: Azure AI Foundry now supports OpenAI’s GPT-5 lineup (including GPT-5, GPT-5 Mini, and GPT-5 Nano) and the new open-weight GPT-OSS models (120B, 20B). These models offer powerful reasoning, real-time agent tasks, and ultra-low latency Q&A, all with massive context windows and flexible deployment via the Model Router. Flux 1 Context Pro & Flux 1.1 Pro from Black Forest Labs: These new vision models enable in-context image generation, editing, and style transfer, now available in the Image Playground in Azure AI Foundry. Browser Automation Tool (Preview): Agents can now perform real web tasks—search, navigation, form filling, and more—via na…  ( 32 min )

    Exciting News: Azure AI Blogs Have Come Together in the New Azure AI Foundry Blog
    We’re excited to share that we’ve entered a new chapter in how we bring you Azure AI content! To streamline your experience, several Azure AI Tech Community blogs have been consolidated into a single destination—the Azure AI Foundry Blog. What’s Changed Multiple Azure AI blogs have merged into one. The Azure AI technical blogs – including the AI Platform Blog, Azure AI Services Blog, and Azure Machine Learning Blog – are now part of the new Azure AI Foundry Blog. By consolidating these resources into a single blog, you have one place to find technical news, tutorials, and announcements. Seamless Access to Past Content All existing content has been preserved. Posts from the legacy blogs have been migrated into the Azure AI Foundry Blog archive. Older posts remain accessible through redirect…  ( 21 min )

    Azure Developer CLI: From Dev to Prod with Azure DevOps Pipelines
    Building on our previous post about implementing dev-to-prod promotion with GitHub Actions, this follow-up demonstrates the same “build once, deploy everywhere” pattern using Azure DevOps Pipelines. You’ll learn how to leverage Azure DevOps YAML pipelines with Azure Developer CLI (azd). This approach ensures consistent, reliable deployments across environments. Environment-Specific Infrastructure The infrastructure approach is identical […] The post Azure Developer CLI: From Dev to Prod with Azure DevOps Pipelines appeared first on Azure DevOps Blog.  ( 25 min )


    Azure DevOps OAuth Client Secrets Now Shown Only Once
    We’re making an important change to how Azure DevOps displays OAuth client secrets to align with industry best practices and improve our overall security posture. Starting September, newly generated client secrets will be shown only once at the time of creation. After that, they will no longer be retrievable via the UI or API. This […] The post Azure DevOps OAuth Client Secrets Now Shown Only Once appeared first on Azure DevOps Blog.  ( 21 min )
    Hunting Living Secrets: Secret Validity Checks Arrive in GitHub Advanced Security for Azure DevOps
    If you’ve ever waded through a swamp of secret scanning alerts wondering, “Which of these are actually dangerous right now?” — this enhancement is for you. Secret validity checks in GitHub Advanced Security for Azure DevOps (and the standalone Secret Protection experience) add a high‑signal field to each alert: Active (still usable), or Unknown (couldn’t […] The post Hunting Living Secrets: Secret Validity Checks Arrive in GitHub Advanced Security for Azure DevOps appeared first on Azure DevOps Blog.  ( 24 min )

    Build lightweight AI Apps on Azure App Service with gpt-oss-20b
    OpenAI recently introduced gpt-oss as an open-weight language model that delivers strong real-world performance at low cost. Available under the flexible Apache 2.0 license, these models outperform similarly sized open models on reasoning tasks, demonstrate strong tool use capabilities, and are optimized for efficient deployment on consumer hardware; see the announcement: https://openai.com/index/introducing-gpt-oss/. It’s an excellent choice for scenarios where you want the security and efficiency of a smaller model running on your application instance — while still getting impressive reasoning capabilities. By hosting it on Azure App Service, you can take advantage of enterprise-grade features without worrying about managing infrastructure: Built-in autoscaling Integration with VNet Ent…  ( 26 min )
    Azure App Testing: Playwright Workspaces for Local-to-Cloud Test Runs
    Azure App Testing centralizes functional and performance testing for your apps. With Playwright Workspaces, you can author, execute, and analyze browser tests at scale, both locally and in the cloud, with shared reporting and parallel, cloud-hosted browsers. If you’re new to the service, start with this Overview and QuickStart. Create a Playwright Workspace (portal quick tour) In the Azure portal, search for “Azure App Testing”, open it, and select “Create.” Choose “Playwright Workspaces” (use “Azure Load Testing” for performance/load). Provide a name, region, and subscription, then create the workspace. Open your Playwright Workspace to author and run tests locally or on cloud-hosted browsers, and view results in “Test runs.” Note: A workspace lets you publish test results and artifact…  ( 27 min )

    Simplifying Data Ingestion with Copy job – Reset Incremental Copy, Auto Table Creation, and JSON Format Support
    Copy Job has become the go-to solution in Microsoft Fabric Data Factory for simplified data movement, whether you’re moving data across clouds, from on-premises systems, or between services. With native support for multiple delivery styles, including bulk copy, incremental copy, and change data capture (CDC) replication, Copy Job offers the flexibility to handle a wide … Continue reading “Simplifying Data Ingestion with Copy job – Reset Incremental Copy, Auto Table Creation, and JSON Format Support”  ( 7 min )

    How Microsoft Semantic Kernel Transforms Proven Workflows into Intelligent Agents
    Most developers today face a common challenge when integrating AI into their applications: the gap between natural language prompts and actual code execution. While services like OpenAI's ChatGPT excel at generating responses, they can't directly interact with your existing systems, databases, or business logic. You're left building complex orchestration layers, managing function calls manually, and creating brittle workflows that break when requirements change. Microsoft Semantic Kernel changes this paradigm entirely. Unlike traditional LLM integrations where you send a prompt and receive text, Semantic Kernel acts as an AI orchestration layer that bridges natural language with your existing codebase. Semantic Kernel intelligently decides which of your trusted functions to execute, chains…  ( 36 min )
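
    As a rough illustration of that orchestration layer, the sketch below registers an existing "trusted" function as a Semantic Kernel plugin in Python; class and deployment names are placeholders, and auto tool-invocation generally also needs function-choice settings enabled.

    ```python
    # Hedged sketch: expose an existing business function to Semantic Kernel as a plugin
    # so the model can decide when to call it. Names are placeholders; auto-invocation
    # typically also requires enabling function-choice behavior in the execution settings.
    import asyncio
    from semantic_kernel import Kernel
    from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
    from semantic_kernel.functions import kernel_function

    class OrdersPlugin:
        @kernel_function(description="Look up the status of an order by its id")
        def order_status(self, order_id: str) -> str:
            return f"Order {order_id} shipped yesterday."  # stand-in for real business logic

    async def main():
        kernel = Kernel()
        # Reads the Azure OpenAI endpoint/key from environment variables; deployment name is a placeholder.
        kernel.add_service(AzureChatCompletion(deployment_name="gpt-4o"))
        kernel.add_plugin(OrdersPlugin(), plugin_name="orders")
        result = await kernel.invoke_prompt("Where is order 1234?")
        print(result)

    asyncio.run(main())
    ```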


    Real-Time Security with Continuous Access Evaluation (CAE) comes to Azure DevOps
    We’re thrilled to announce that Continuous Access Evaluation (CAE) is now supported on Azure DevOps, bringing a new level of near real-time security enforcement to your development workflows. 🔐 What Is CAE? Continuous Access Evaluation (CAE) is a feature from Microsoft Entra ID that enables near real-time enforcement of Conditional Access policies. Traditionally, Microsoft Entra […] The post Real-Time Security with Continuous Access Evaluation (CAE) comes to Azure DevOps appeared first on Azure DevOps Blog.  ( 23 min )

    Private Pod Subnets in AKS Without Overlay Networking
    When deploying AKS clusters, a common concern is the amount of IP address space required. If you are deploying your AKS cluster into your corporate network, the size of the IP address space you can obtain may be quite small, which can cause problems with the number of pods you are able to deploy. The simplest and most common solution to this is to use an overlay network, which is fully supported in AKS. In an overlay network, pods are deployed to a private, non-routed address space that can be as large as you want. Translation between the routable and non-routed network is handled by AKS. For most people, this is the best option for dealing with IP addressing in AKS, and there is no need to complicate things further. However, there are some limitations with overlay networking, primarily th…  ( 39 min )

Log Analytics tables in Container Apps not being created
    This post will cover why you may see the message of “The name ‘ContainerAppConsoleLogs_CL’ does not refer to any known table, tabular variable or function.”  ( 3 min )

    Load data from network-protected Azure Storage accounts to Microsoft OneLake with AzCopy
    AzCopy is a powerful and performant tool for copying data between Azure Storage and Microsoft OneLake, and is the preferred tool for large-scale data movement due to its ease of use and built-in performance optimizations. AzCopy now supports copying data from firewall-enabled Azure Storage accounts into OneLake using trusted workspace access. Now you can use … Continue reading “Load data from network-protected Azure Storage accounts to Microsoft OneLake with AzCopy”  ( 6 min )
    OneLake costs simplified: lowering capacity utilization when accessing OneLake
    We’re thrilled to share a major update and simplification to OneLake’s capacity utilization model that will make it even easier to manage Fabric capacity and scale your data workloads. We are reducing the consumption rate of OneLake transactions via proxy to match the rate for transactions via redirect. This means you no longer have to … Continue reading “OneLake costs simplified: lowering capacity utilization when accessing OneLake”  ( 7 min )

    mssql-python vs pyodbc: Benchmarking SQL Server Performance
    Reviewed by Imran Masud and Sumit Sarabhai When it comes to working with Microsoft SQL Server in Python, pyodbc has long been the de facto driver. It’s mature, trusted and has been serving the community well for years. But as applications scale and Python becomes more central to modern data workflows — from microservices to […] The post mssql-python vs pyodbc: Benchmarking SQL Server Performance appeared first on Microsoft for Python Developers Blog.  ( 25 min )
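
    The shape of such a benchmark is simple enough to sketch; the connection strings and the mssql-python surface below are assumptions, so treat this as illustrative rather than the post's actual harness.

    ```python
    # Hedged micro-benchmark sketch: run the same query through pyodbc and through the
    # new mssql-python driver and compare average wall time. Connection strings, the
    # table name, and the mssql_python interface are placeholders/assumptions.
    import time
    import pyodbc
    import mssql_python  # assumed DB-API-style module name for the new Microsoft driver

    CONN_STR = "Server=tcp:myserver.database.windows.net,1433;Database=mydb;Uid=...;Pwd=...;Encrypt=yes;"  # placeholder

    def avg_query_time(connect, conn_str, sql, runs=10):
        conn = connect(conn_str)
        cur = conn.cursor()
        start = time.perf_counter()
        for _ in range(runs):
            cur.execute(sql)
            cur.fetchall()
        elapsed = (time.perf_counter() - start) / runs
        conn.close()
        return elapsed

    sql = "SELECT TOP 10000 * FROM SalesLT.SalesOrderDetail"  # hypothetical table

    print("pyodbc      :", avg_query_time(pyodbc.connect, "Driver={ODBC Driver 18 for SQL Server};" + CONN_STR, sql))
    print("mssql-python:", avg_query_time(mssql_python.connect, CONN_STR, sql))
    ```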

    Fix Broken Migrations with AI Powered Debugging in VS Code Using GitHub Copilot
    Data is at the heart of every application. But evolving your schema is risky business. One broken migration, and your dev or prod environment can go down. We've all experienced it: mismatched columns, orphaned constraints, missing fields, or that dreaded "table already exists" error. But what if debugging migrations didn’t have to be painful? What if you could simply describe the error or broken state, and AI could fix your migration in seconds? In this blog, you’ll learn how to: Use GitHub Copilot to describe and fix broken migrations with natural language Catch schema issues like incorrect foreign keys before they block your workflow Validate and deploy your database changes using GibsonAI CLI Broken migrations are nothing new. Whether you're working on a side project or part of a larg…  ( 27 min )
    Building Application with gpt-oss-20b with AI Toolkit
OpenAI has released the open-source models gpt-oss-20b and gpt-oss-120b. Enterprises and developers can now deploy these models on edge devices without relying on cloud APIs, enabling local deployment. The AI Toolkit for VS Code extension provides developers with a complete workflow—from model testing and local deployment to building intelligent agent applications. We used the AI Toolkit in conjunction with gpt-oss-20b to build local AI applications. Understanding gpt-oss OpenAI has released gpt-oss-120b and gpt-oss-20b, the first open-weight language models since GPT-2. Both models utilize mixture-of-experts (MoE) architecture with MXFP4 quantization, delivering exceptional reasoning capabilities and tool use functio…  ( 26 min )


    Customer-managed keys for Fabric workspaces is now in Public Preview
    We’re excited to share that customer-managed keys (CMK) for Microsoft Fabric workspaces are now available in public preview in all public regions! This expansion makes it easier for customers worldwide to meet compliance requirements and implement robust data protection strategies. Note: This feature was released in public preview in a limited set of regions earlier … Continue reading “Customer-managed keys for Fabric workspaces is now in Public Preview”  ( 5 min )
    Introducing support for Workspace Identity Authentication in Fabric Connectors
    Co-author: Meenal Srivastva Managing secure, seamless access to data sources is a top priority for organizations using Microsoft Fabric. With workspace identity authentication, teams can simplify credential management, enhance security, and streamline data access across their enterprise. Workspace identity in Fabric is an automatically managed service principal associated with workspaces (excluding My Workspaces). When you … Continue reading “Introducing support for Workspace Identity Authentication in Fabric Connectors”  ( 7 min )
    How Microsoft OneLake seamlessly provides Apache Iceberg support for all Fabric Data
    Co-authored with Kevin Liu, Apache Iceberg Committer and Principal Engineer at Microsoft.  Microsoft Fabric is a unified SaaS data and analytics platform designed for the era of AI. All workloads in Microsoft Fabric use Delta Lake as the standard, open-source table format. With Microsoft OneLake, Fabric’s unified SaaS data lake, customers can unify their data … Continue reading “How Microsoft OneLake seamlessly provides Apache Iceberg support for all Fabric Data”  ( 7 min )

    Finding the Right Page number in PDFs with AI Search
Why Page Numbers Matter in AI Search When users search for content within large PDFs—such as contracts, manuals, or reports—they often need to know not just what was found, but where it was found. Associating search results with page numbers enables: Contextual navigation within documents. Precise citations in knowledge bases or chatbots. Improved user trust in AI-generated responses. Prerequisites for Azure Blob Storage & Azure AI Search Setup Summary 1. Azure Blob Storage A container is configured to store PDF files. 2. Appropriate permissions: The AI Search service must have Storage Blob Data Reader access to the container. If using RBAC, ensure the managed identity is properly assigned. Ref: AI Search Search-blob-indexer-role-based-access, How to Index Azure Blobs. Technical Ap…  ( 27 min )
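
    A hedged sketch of the core idea follows: split the PDF into per-page documents so each indexed chunk carries its page number. The index name, fields, and PDF library are illustrative choices, not the post's exact setup.

    ```python
    # Hedged sketch: index one document per PDF page, each with a page_number field,
    # so search hits can cite "document X, page N". Index/field names are hypothetical.
    from pypdf import PdfReader
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    search = SearchClient(
        endpoint="https://<your-search>.search.windows.net",  # placeholder
        index_name="pdf-pages",                               # hypothetical index with id/source/page_number/content fields
        credential=AzureKeyCredential("<admin-key>"),
    )

    reader = PdfReader("contract.pdf")  # placeholder file
    docs = [
        {
            "id": f"contract-{i + 1}",
            "source": "contract.pdf",
            "page_number": i + 1,
            "content": page.extract_text() or "",
        }
        for i, page in enumerate(reader.pages)
    ]
    search.upload_documents(documents=docs)  # every search hit can now cite its page number
    ```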


    Simplifying Outbound Connectivity Troubleshooting in AKS with Connectivity Analysis (Preview)
    Background Troubleshooting outbound connectivity issues in Azure Kubernetes Service (AKS) can be daunting. The complexity arises from the layered nature of Azure networking, which can span NSGs, UDRs, route tables, firewalls, and private endpoints. When a pod fails to reach an external service, pinpointing the root cause often requires deep familiarity with Azure networking and manual inspection of multiple network resources. This complex process slows down resolution and increases operational overhead for cluster operations and platform teams. Announcing the Connectivity Analysis feature for AKS To simplify the network troubleshooting process, we're excited to announce the Connectivity Analysis feature for AKS, now available in Public Preview. This feature leverages the same underlying en…  ( 27 min )


    Python in Visual Studio Code – August 2025 Release
    The August 2025 release includes Python shell integration support for Python 3.13+, Python Environments extension improvements, enhanced terminal suggestions with documentation, and more! The post Python in Visual Studio Code – August 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 23 min )

    Introducing Azure App Testing: Scalable End-to-end App Validation
    Azure App Testing enables developers and QA teams to run large-scale functional and performance tests to pinpoint issues in their applications, across frameworks like Playwright, JMeter, or Locust. It brings together two powerful testing capabilities—Azure Load Testing and Microsoft Playwright Testing—into a single hub in the Azure Portal, providing a consistent experience for resource provisioning, access control, and consolidated billing. With Azure App Testing, our goal is to help you spend less time managing infrastructure and more time harnessing AI-driven test automation to boost quality and innovation. Key Benefits AI-driven testing: Accelerate test creation and insights with AI-powered tooling. Limitless scale: Simulate real-world traffic from multiple regions with load tests, and…  ( 23 min )


    Model Mondays S2E8: On-Device & Local AI
Core Topics Discussed 1. Weekly Highlights RFT Observability tools in Azure AI Foundry: Real-time model telemetry, auto evals, quick evals, Python grader. GitHub Copilot Pro with Spark: AI pair programmer for code explanation and workflow suggestions. Synthetic Data for Vision Models: Training accurate models with procedurally generated data. Agent-Friendly Websites: Making sites accessible to AI agents via APIs, semantic markup, and OpenAPI specs. MCP (Model Context Protocol): Standardizing agent memory and context for scalable AI. 2. Spotlight: Foundry Local – On-Device AI What is Foundry Local? A toolkit to run open-source AI models directly on your device (CPU, GPU, NPU) for fast, private, and efficient inference. Why does it matter? Developers can work with data locally, switch seaml…  ( 29 min )

    Terraform Provider for Microsoft Fabric: #4 Deploying a Fabric config with Terraform in GitHub Actions
    If you have been following this blog post series then you should have a working Terraform config from the first two posts, plus a managed identity from the third post that has the correct authorizations as a workload identity for use in deployment pipelines. In this last post we will configure OpenID Connect for the … Continue reading “Terraform Provider for Microsoft Fabric: #4 Deploying a Fabric config with Terraform in GitHub Actions”  ( 9 min )

    Dev Proxy v1.0 with new features for building robust AI-powered apps
    Introducing Dev Proxy v1.0, with new language model-specific testing capabilities to help developers build more reliable AI-powered applications by simulating real-world scenarios and tracking resource usage. The post Dev Proxy v1.0 with new features for building robust AI-powered apps appeared first on Microsoft 365 Developer Blog.  ( 26 min )

    How to Secure your pro-code Custom Engine Agent of Microsoft 365 Copilot?
Prerequisite This article assumes that you’ve already gone through the following post. Please make sure to read it before proceeding: Developing a Custom Engine Agent for Microsoft 365 Copilot Chat Using Pro-Code In that article, we showed how to publish a Custom Engine Agent using pro-code approaches such as C#. In this post, I’d like to shift the focus to security, specifically how to protect the endpoint of our custom Microsoft 365 Copilot. Through several architectural explorations, we found an approach that seems to work well. However, I strongly encourage you to review and evaluate it carefully for your production environment. Which Endpoints Can Be Controlled? In the current architecture, there are three key endpoints to consider from a security perspective: Teams Endpoint: This…  ( 53 min )
    How to Secure Azure Bot Service Endpoints with Teams Channel?
Prerequisite This article assumes that you’ve already gone through the following post. Please make sure to read it before proceeding: Developing a Custom Engine Agent for Microsoft 365 Copilot Chat Using Pro-Code In that article, we showed how to publish a Custom Engine Agent using pro-code approaches such as C#. In this post, I’d like to shift the focus to security, specifically how to protect the endpoint of our custom Microsoft 365 Copilot. Through several architectural explorations, we found an approach that seems to work well. However, I strongly encourage you to review and evaluate it carefully for your production environment. Which Endpoints Can Be Controlled? In the current architecture, there are three key endpoints to consider from a security perspective: Teams Endpoint: This…  ( 53 min )
    Open AI's gpt-oss models on Azure Container Apps serverless GPUs
Just yesterday, OpenAI announced the release of gpt-oss-120b and gpt-oss-20b, two new state-of-the-art open-weight language models. These models are designed to run on lighter-weight GPU resources, making them highly accessible for developers who want to self-host powerful language models within their own environments. If you’re looking to deploy these models in the cloud, Azure Container Apps serverless GPUs are a great option. With support for both A100 and T4 GPUs, serverless GPUs support both the gpt-oss-120b and gpt-oss-20b models, providing a cost-efficient and scalable platform with minimal infrastructure overhead. (Image: OpenAI gpt-oss-120b running on Azure Container Apps serverless GPUs.) In this blog post, we’ll walk thro…  ( 31 min )

    “HTTP header exceeding 8,192 bytes” error when using the Azure SDK for Java
    A recent issue was identified in the azure-resourcemanager libraries for Java regarding HTTP headers exceeding 8,192 bytes. Following analysis, this was addressed within the Azure SDK for Java by increasing the maximum response header size limit of “reactor-netty-http” to 256 KB, accommodating larger headers. To benefit from this mitigation, it is necessary to upgrade azure-core-http-netty […] The post “HTTP header exceeding 8,192 bytes” error when using the Azure SDK for Java appeared first on Microsoft for Java Developers.  ( 21 min )


    Introducing Wassette: WebAssembly-based tools for AI agents
    Wassette empowers AI agents to securely fetch and run Wasm tools, enabling dynamic, permission-controlled capabilities with zero dependencies. The post Introducing Wassette: WebAssembly-based tools for AI agents appeared first on Microsoft Open Source Blog.  ( 14 min )

    Deploy LangChain applications to Azure App Service
    LangChain is a powerful framework that simplifies the development of applications powered by large language models (LLMs). It provides essential building blocks like chains, agents, and memory components that enable developers to create sophisticated AI workflows beyond simple prompt-response interactions. LangChain's importance lies in its ability to orchestrate complex AI operations, integrate multiple data sources, and maintain conversation context—making it the go-to choice for production-ready AI applications. In this blog post, we'll explore a sample application that demonstrates how you can easily deploy a LangChain application integrated with Azure OpenAI Foundry models to Azure App Service. We'll walk through this complete example that showcases a conversational AI chat interface …  ( 28 min )
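
    For a sense of the kind of code being deployed, here is a minimal sketch of a LangChain call backed by an Azure OpenAI deployment; the environment variable names, API version, and deployment name are assumptions, and the sample in the post wraps this in a web UI on App Service.

    ```python
    # Hedged sketch: a single LangChain chat call against an Azure OpenAI deployment.
    # Environment variable names, the API version, and the deployment name are placeholders.
    import os
    from langchain_core.messages import HumanMessage
    from langchain_openai import AzureChatOpenAI

    llm = AzureChatOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com/
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-06-01",          # assumed API version; check your resource
        azure_deployment="gpt-4o-mini",    # placeholder deployment name
    )

    reply = llm.invoke([HumanMessage(content="Say hello from Azure App Service!")])
    print(reply.content)
    ```
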
    Azure at KubeCon India 2025 | Hyderabad, India – 6-7 August 2025
    Welcome to KubeCon + CloudNativeCon India 2025! We’re thrilled to join this year’s event in Hyderabad as a Gold sponsor, where we’ll be highlighting the newest innovations in Azure and Azure Kubernetes Service (AKS) while connecting with India’s dynamic cloud-native community. We’re excited to share some powerful new AKS capabilities that bring AI innovation to the forefront, strengthen security and networking, and make it easier than ever to scale and streamline operations. Innovate with AI AI is increasingly central to modern applications and competitive innovation, and AKS is evolving to support intelligent agents more natively. The AKS Model Context Protocol (MCP) server, now in public preview, introduces a unified interface that abstracts Kubernetes and Azure APIs, allowing AI agents …  ( 28 min )
    Announcing General Availability of App Service Inbound IPv6 Support
    Inbound IPv6 support on public multi-tenant App Service has been in public preview for a while now, so we're excited to finally be able to announce that it is now generally available across all public Azure regions for multi-tenant apps on all Basic, Standard, and Premium SKUs, Functions Consumption, Functions Elastic Premium, and Logic Apps Standard! The limitations called out in the previous blog post have been removed except for IP-SSL IPv6 bindings still not being supported. How it works IPv6 inbound requires two things: an IPv6 address that accepts traffic coming in, and a DNS record that returns an IPv6 (AAAA) record. You’ll also need a client that can send and receive IPv6 traffic. This means that you may not be able to test it from your local machine since many networks today only …  ( 21 min )
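
    A quick way to check both requirements from a machine with IPv6 connectivity is sketched below; the hostname is a placeholder for your app's default or custom domain.

    ```python
    # Hedged sketch: verify that the app's hostname publishes an AAAA record and,
    # optionally, that it answers over IPv6 from this network.
    import socket

    host = "myapp.azurewebsites.net"  # placeholder hostname

    # AAAA lookup: prints IPv6 addresses if an AAAA record is published for the host.
    for *_, sockaddr in socket.getaddrinfo(host, 443, socket.AF_INET6):
        print("AAAA ->", sockaddr[0])

    # Optional reachability probe (only meaningful if your network routes IPv6).
    with socket.create_connection((host, 443), timeout=5) as s:
        print("connected over", "IPv6" if s.family == socket.AF_INET6 else "IPv4")
    ```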

    Generative AI for Beginners – Java Edition launched
In one hour, all online, complete our new training, Generative AI for Beginners – Java Edition – part of the incredible Microsoft Generative AI series (Python, .NET and JavaScript). In this training, get first-hand experience with core generative AI techniques and tools we’re using inside Microsoft: Foundry Local, MCP, Responsible AI. Quick Start: Star and […] The post Generative AI for Beginners – Java Edition launched appeared first on Microsoft for Java Developers.  ( 22 min )

    General Availability of Azure Monitor Network Security Perimeter Features
    We’re excited to announce that Azure Monitor Network Security Perimeter features are now generally available! This update is an important step forward for Azure Monitor’s security, providing comprehensive network isolation for your monitoring data. In this post, we’ll explain what Network Security Perimeter is, why it matters, and how it benefits Azure Monitor users. Network Security Perimeter is purpose-built to strengthen network security and monitoring, enabling customers to establish a more secure and isolated environment. As enterprise interest grows, it’s clear that this feature will play a key role in elevating the protection of Azure PaaS resources against evolving security threats. What is Network Security Perimeter and Why Does It Matter? Network Security Perimeter is a network i…  ( 31 min )


    General Availability of Auxiliary Logs and Reduced Pricing
    Azure Monitor logs are trusted by hundreds of thousands of organizations to monitor mission-critical workloads. But with such a diverse customer base, there’s no one-size-fits-all solution. That’s why we’re excited to announce a series of major advancements to Auxiliary Logs, the Azure Monitor plan that is designed for high volume logs. Auxiliary Logs works in tandem with all other Azure Monitor tools including the more powerful Basic and Analytics Logs plans. Together, they are the one-stop-shop for all the logging needs of an organization. Auxiliary Logs were introduced last year and have gained a lot of traction since. There are many customers that ingest data into Auxiliary Logs, with several teams ingesting more than a petabyte of logs per day. Over the last few months, we have moved …  ( 27 min )

    How to enable alerts in Batch especially when a node is encountering high disk usage
Batch users often encounter issues like nodes suddenly getting into an unusable state due to high CPU or disk usage. Alerts allow you to identify and address issues in your system. This blog focuses on how users can enable alerts when a node is consuming a high amount of disk space by configuring a threshold limit. With this, users can be notified before a node becomes unusable and can pre-emptively take measures to avoid service disruptions. Task output data is written to the file system of the Batch node. When this data exceeds 90 percent of the disk capacity of the node SKU, the Batch service marks the node as unusable and blocks it from running any other tasks until the Batch service does a cleanup. The Batch node agent reserves 10 percent capacity of …  ( 29 min )

    Decoupling Semantic Model for Mirroring Customers
    Overview Semantic models are evolving to work more seamlessly with Mirrored artifacts—giving you greater flexibility, control, and transparency when working with mirrored data. Why are we decoupling semantic models from Mirroring Artifacts? Historically, Mirrored artifacts were created with an automatically ‘coupled’ semantic model. While convenient out-of-the-box, this approach limited how you could shape, interpret, and … Continue reading “Decoupling Semantic Model for Mirroring Customers”  ( 6 min )
    What’s new in Fabric Warehouse – July 2025 Recap
    Introduction Welcome to What’s New in Fabric Warehouse, where we’ll spotlight our work improving quality, delivering major performance enhancements, boosting developer productivity, and our continuous investments in security. Whether you’re migrating from Synapse, optimizing your workloads, writing SQL in VS Code, or exploring new APIs, this roundup has something for every data professional. With quality … Continue reading “What’s new in Fabric Warehouse – July 2025 Recap”  ( 10 min )
    Coming soon: Streamlining Data Management with Collation Settings in Microsoft Fabric Warehouse and SQL Endpoint
    Microsoft Fabric’s Warehouse and SQL Endpoint now support both Case Sensitive (CS) and Case Insensitive (CI) collations, providing users with powerful tools to ensure data consistency and flexibility. This blog delves into the upcoming change – updated collation configuration requirements at the workspace and item levels, offering insights into how these features enhance data workflows, … Continue reading “Coming soon: Streamlining Data Management with Collation Settings in Microsoft Fabric Warehouse and SQL Endpoint”  ( 7 min )
    Unlocking Flexibility in Fabric: Introducing Multiple Scheduler and CI/CD Support
    Multiple Scheduler: A Game-Changer for Complex Scheduling Needs In today’s data environments, one size rarely fits all. Yet until now, Fabric only allowed one scheduler per item, forcing users to duplicate pipelines, manually configure jobs, or build brittle workarounds to meet real-world needs. That changes today. We’re excited to announce Multiple Scheduler support in Fabric — a long-awaited … Continue reading “Unlocking Flexibility in Fabric: Introducing Multiple Scheduler and CI/CD Support”  ( 6 min )

    Automate your open-source dependency scanning with Advanced Security
Any experience that requires additional setup is cumbersome, especially when multiple people are needed. In GitHub Advanced Security for Azure DevOps, we’re working to make it easier to enable features and scale out enablement across your enterprise. You can now automatically inject the dependency scanning task into any pipeline run targeting your default branch. […] The post Automate your open-source dependency scanning with Advanced Security appeared first on Azure DevOps Blog.  ( 23 min )


    Fabric Platform Support for Transport Layer Security (TLS) 1.1 and earlier versions has ended
    We have officially ended the support for TLS 1.1 and earlier on the Fabric platform. As previously announced, starting July 31, 2025, all outbound connections from Fabric to customer data sources must use TLS 1.2 or later. This update follows our earlier announcement in the TLS Deprecation for Fabric blog, where we outlined the rationale and timeline for this … Continue reading “Fabric Platform Support for Transport Layer Security (TLS) 1.1 and earlier versions has ended”  ( 5 min )
    Mirroring for Google BigQuery in Microsoft Fabric (Private Preview)
    Mirroring for Google BigQuery is now available in Private Preview—bringing the power of Microsoft Fabric to your BigQuery data with zero-ETL, near real-time replication, and native integration across the Fabric experience. This release marks a major milestone in our cross-cloud data strategy, enabling customers to unify analytics across various platforms without the complexity of traditional data movement. What’s New With Mirroring … Continue reading “Mirroring for Google BigQuery in Microsoft Fabric (Private Preview)”  ( 5 min )

    S2:E7 · AI-Assisted Azure Development
Why MCP & Copilot for Azure Matter What is AI-Assisted Azure Development? AI-assisted development means you can use natural language to manage Azure resources. The Azure MCP Server acts as a bridge between AI agents (like Copilot) and Azure services, exposing tools in a standard way. GitHub Copilot for Azure lets you chat with your cloud—no need to memorize commands or scripts. Key takeaway: These tools make Azure more accessible and powerful. You can ask questions, deploy resources, and get best-practice advice—all in plain English. Technical Deep Dive: How Does Azure MCP Server Work? Implements the Model Context Protocol (MCP): Standardizes how AI agents talk to Azure services. Tool Discovery: Exposes a list of available tools (like listing VMs, deploying models, checking status). Na…  ( 31 min )


    AI Inference Task Migration from CPU to GPU: Methodology Overview
Please refer to my repo for more AI resources; you're welcome to star it: https://github.com/xinyuwei-david/david-share.git This repository ties together the entire methodology with a minimalistic example: first identify computational hotspots on the CPU, then rewrite loops characterized by "high parallelism, sequential memory access, and simple branching" into CUDA kernels to offload CPU workload and unleash GPU computing power. Running the code once can compare CPU and GPU execution times, separate transfer and computation overheads, and verify result errors. This quickly validates the feasibility and expected gains of "CPU → GPU" migration, providing a template for subsequent large-scale migration, pipeline optimization, and MIG resource partitioning in a real business environment. Overal…  ( 57 min )
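
    The repository walks through this with hand-written CUDA kernels; as a compact stand-in for the same pattern (time the CPU loop, move it to a GPU kernel, time the compute separately from transfers, check the error), here is a sketch using Numba's CUDA JIT rather than the repo's code. It requires a CUDA-capable GPU.

    ```python
    # Hedged stand-in for the CPU -> GPU migration pattern described above,
    # expressed with Numba's CUDA JIT instead of raw CUDA kernels.
    import time
    import numpy as np
    from numba import cuda

    @cuda.jit
    def saxpy_kernel(a, x, y, out):
        i = cuda.grid(1)            # one thread per element: high parallelism, simple branching
        if i < x.shape[0]:
            out[i] = a * x[i] + y[i]

    n = 10_000_000
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    t0 = time.perf_counter()
    cpu_out = 2.0 * x + y           # CPU baseline (vectorized by NumPy)
    t_cpu = time.perf_counter() - t0

    d_x, d_y = cuda.to_device(x), cuda.to_device(y)   # transfers measured outside the kernel timing
    d_out = cuda.device_array_like(x)
    threads = 256
    blocks = (n + threads - 1) // threads

    t1 = time.perf_counter()
    saxpy_kernel[blocks, threads](np.float32(2.0), d_x, d_y, d_out)
    cuda.synchronize()
    t_gpu = time.perf_counter() - t1

    print(f"CPU {t_cpu:.4f}s  GPU kernel {t_gpu:.4f}s  "
          f"max err {np.abs(d_out.copy_to_host() - cpu_out).max():.2e}")
    ```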


    Announcing a flexible, predictable billing model for Azure SRE Agent
    At Microsoft Build 2025, we announced Azure SRE Agent, an AI tool designed to streamline incident response, improve service uptime, and cut operational costs. The SRE Agent uses AI Models in Azure AI Foundry to quickly analyze logs and metrics for root cause analysis and issue resolution. This advanced AI agent enhances incident and infrastructure management in Azure, allowing site reliability engineers (SRE) to focus on higher-value tasks.   Starting September 1, 2025, billing will begin for Azure SRE Agent. Please visit the Azure pricing page to see the pricing in your region. This article describes the billing model and example scenarios.  Overview of Azure SRE Agent Azure SRE Agent is a powerful tool that uses machine learning and advanced monitoring capabilities to proactively manage …  ( 33 min )


    Converting Page or Append Blobs to Block Blobs with ADF
In certain scenarios, a storage account may contain a significant number of page blobs classified under the hot access tier that are infrequently accessed or retained solely for backup purposes. To optimise costs, it is desirable to transition these page blobs to the archive tier. However, as indicated in the following documentation - https://learn.microsoft.com/en-us/azure/storage/blobs/access-tiers-overview - the ability to set the access tier is only available for block blobs; this functionality is not supported for append or page blobs. The Azure Blob Storage connector in Azure Data Factory can copy data from block, append, or page blobs, but can write data only to block blobs. https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-blob-storage?tabs=data-factory#…  ( 24 min )
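
    The post solves this at scale with an ADF copy activity; for a one-off blob, the same conversion can be sketched directly with the azure-storage-blob SDK, as below. Account, container, and blob names are placeholders.

    ```python
    # Hedged sketch of the same conversion done manually: read a page blob, re-upload it
    # as a block blob, then move the new blob to the archive tier.
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder
    container = service.get_container_client("backups")                        # placeholder container

    src = container.get_blob_client("disk-backup.vhd")            # existing page blob
    dst = container.get_blob_client("disk-backup.vhd.blockcopy")  # will be created as a block blob

    data = src.download_blob().readall()       # fine for small blobs; chunk/stream large ones
    dst.upload_blob(data, overwrite=True)      # upload_blob creates a block blob by default
    dst.set_standard_blob_tier("Archive")      # archive tier applies to block blobs only
    ```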

    What’s New in Azure AI Foundry Finetuning: July 2025
RFT Observability ("Auto-Evals") Reinforcement Finetuning (RFT) observability, now in public preview, provides real-time, in-depth visibility into your RFT job by automatically kicking off evaluations (“auto-evals”) that show detailed finetuning progress at each checkpoint. What is RFT? RFT adjusts model weights using a reward model (grader) to score outputs against the reference data. The grader’s results are then used to reward or penalize the model response, steering the model’s reasoning direction and quality towards desired outcomes. The benefit of using RFT over supervised fine-tuning (SFT) is that RFT incorporates these reward signals in real time during training – but it’s slower and more expensive than SFT. Previously, customers could only evaluate RFT results after a training …  ( 28 min )


    Boost Performance with Fast Copy in Dataflows Gen2 for Snowflake
Fast Copy in Dataflows Gen2 is a game-changer for enhancing the performance and cost-efficiency of your Dataflows Gen2. By leveraging the same optimized backend as the Copy Activity in data pipelines, Fast Copy significantly reduces data processing time and enhances cost efficiency. Fast Copy in Dataflows Gen2 (Generally Available) is enabled by default in all newly created Dataflows … Continue reading “Boost Performance with Fast Copy in Dataflows Gen2 for Snowflake”  ( 6 min )
    OneLake as a Source for COPY INTO and OPENROWSET (Preview)
    Simplified, workspace-governed ingestion without external dependencies COPY INTO and OPENROWSET from OneLake are now available in Preview for Microsoft Fabric Data Warehouse. With this release, you can load and query files directly from Lakehouse File folders, without relying on external staging storage, SAS tokens, or complex IAM configurations. This improvement reinforces Fabric’s vision of a … Continue reading “OneLake as a Source for COPY INTO and OPENROWSET (Preview)”  ( 6 min )
    Experience the New Visual SQL Audit Logs Configuration in Fabric Warehouse
    In April, we announced the preview of SQL Audit Logs for Microsoft Fabric Data Warehouse, giving organizations the power to track and retain critical warehouse events for enhanced visibility and control. Today, we’re taking that experience to the next level. We’re introducing an all-new, intuitive visual experience—making it easier than ever to configure, enable, and … Continue reading “Experience the New Visual SQL Audit Logs Configuration in Fabric Warehouse”  ( 6 min )

    Personal Voice upgraded to v2.1 in Azure AI Speech, more expressive than ever before
At the Build conference on May 21, 2024, we announced the general availability of Personal Voice, a feature designed to empower customers to build applications where users can easily create and utilize their own AI voices (see the blog). Today we're thrilled to announce that Azure AI Speech Service has been upgraded with a new zero-shot TTS (text-to-speech) model, named “DragonV2.1Neural”. This new model delivers more natural-sounding and expressive voices, offering improved pronunciation accuracy and greater controllability compared to the earlier zero-shot TTS model. In this blog, we’ll present the new zero-shot TTS model's audio quality, new features, and benchmark results. We’ll also share a guide for controlling pronunciation and accent using the Personal Voice API with the new zero-shot TTS model. …  ( 39 min )

    S2:E6 Understanding Research & Innovation with SeokJin Han and Saumil Shrivastava
This week in Model Mondays, we dive into the cutting-edge research happening at Azure AI Foundry Labs. From the MCP Server that makes it easy to experiment with new models and tools, to Magentic-UI that brings human-centered agent workflows to life, there’s a lot to unpack! Spotlight On: Research & Innovation in Azure AI Foundry Labs 1. What is this topic and why is it important? Research and innovation are the engines that drive progress in AI. This episode spotlights Azure AI Foundry Labs—a hub where Microsoft’s latest breakthroughs in models, agent frameworks, and developer tools are shared with the world. The MCP Server makes it easy for anyone to experiment with new models and tools, while Magentic-UI brings human-centered agent workflows to life. These projects help bridge the gap …  ( 37 min )

    MCP Bootcamp: APAC, LATAM and Brazil
The Model Context Protocol (MCP) is transforming how AI systems interact with real-world applications. From intelligent assistants to real-time streaming, MCP is already being adopted by leading companies—and now is your chance to get ahead. Join us for a four-part technical series designed to give you practical, production-ready skills in MCP development, integration, and deployment. Whether you're a developer, AI engineer, or cloud architect, this series will equip you with the tools to build and scale MCP-based solutions. 📅 English edition - 6 PM IST (India Standard Time) ✅ Register at MCP Bootcamp APAC. Sessions (Session Title / Date & Time IST): Creating Your First MCP Server – learn the fundamental concepts of the protocol and test your implementation using official tools – August 28, 6:00 PM; MC…  ( 23 min )


    On-premises data gateway July 2025 release
    Here is the July 2025 release of the on-premises data gateway (version 3000.278).  ( 5 min )
    Autoscale Billing for Spark in Microsoft Fabric (Generally Available)
    We’re thrilled to announce the general availability (GA) of Autoscale Billing for Apache Spark in Microsoft Fabric — a serverless billing model designed to offer greater flexibility, transparency, and cost efficiency for running Spark workloads at scale. With this model now fully supported, Spark Jobs can run independently of your Fabric capacity and are billed … Continue reading “Autoscale Billing for Spark in Microsoft Fabric (Generally Available)”  ( 7 min )
    Fabric July 2025 Feature Summary
    Welcome to the July 2025 Fabric Feature Summary! This month’s update covers major events like the return of the Microsoft Fabric Community Conference in Vienna, and the 10th anniversary of Power BI. Key platform enhancements include new domain tags and updates to the default category in the OneLake catalog. You’ll also find highlights on data … Continue reading “Fabric July 2025 Feature Summary”  ( 12 min )
    Terraform Provider for Microsoft Fabric: #3 Creating a workload identity with Fabric permissions
    The Microsoft Fabric tooling ecosystem has evolved in recent months as both the Fabric CLI and the Terraform Provider for Microsoft Fabric have become generally available, providing new opportunities for automating Fabric Administration tasks for those already experienced with Terraform for declarative deployment of infrastructure as code. This short blog series will provide practical guidance … Continue reading “Terraform Provider for Microsoft Fabric: #3 Creating a workload identity with Fabric permissions”  ( 10 min )

    New data lake in Microsoft Sentinel
Correlate signals, run advanced analytics, and perform forensic investigations from a single copy of data — without costly migrations or data silos. Detect persistent, low-and-slow attacks with greater visibility, automate responses using scheduled jobs, and generate predictive insights by combining Copilot, KQL, and machine learning. Vandana Mahtani, Microsoft Sentinel Principal Product Manager, shows how to uncover long-running threats, streamline investigations, and automate defenses — all within a unified, AI-powered SIEM experience. Store security data for up to 12 years. Perfect for long-term investigations and compliance. Check out our new data lake in Microsoft Sentinel. Streamline your data strategy. Send high-volume logs to the new low-cost data lake tier and control retent…  ( 40 min )

  • Open

    JSON Lines Support in OPENROWSET for Fabric Data Warehouse and Lakehouse SQL Endpoints (Preview)
    We’re happy to announce the preview of JSON Lines (JSONL) support in the OPENROWSET(BULK) function for Microsoft Fabric Data Warehouse and SQL endpoints for Lakehouses. The OPENROWSET(BULK) function allows you to query external data stored in the lake using well-known T-SQL syntax. With this update, you can now also query files in JSON Lines format, expanding the range of supported formats and simplifying access to semi-structured … Continue reading “JSON Lines Support in OPENROWSET for Fabric Data Warehouse and Lakehouse SQL Endpoints (Preview)”  ( 7 min )
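    To make the OPENROWSET(BULK) usage above concrete, here is a minimal sketch of running such a query from Python over a Fabric SQL analytics endpoint. The server name, database, file URL, and the FORMAT = 'jsonl' option are assumptions based on the preview description, not the official sample.
```python
# Minimal sketch: querying a JSON Lines file with OPENROWSET(BULK) from Python via pyodbc.
# Server, database, file URL, and the FORMAT = 'jsonl' keyword are illustrative assumptions.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-endpoint>.datawarehouse.fabric.microsoft.com;"  # hypothetical endpoint
    "Database=<your-lakehouse>;"
    "Authentication=ActiveDirectoryInteractive;"
)

query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://<storage>.dfs.core.windows.net/data/events.jsonl',  -- hypothetical path
    FORMAT = 'jsonl'                                                  -- assumed format keyword
) AS rows;
"""

for row in conn.cursor().execute(query):
    print(row)
```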
    Standardizing Audit Operations for Warehouse, DataMarts and SQL Analytics Endpoint.
    As we continue to mature the Microsoft Fabric platform, we’re taking steps to streamline and simplify the experience for administrators and compliance teams who rely on audit logs. Beginning July 2025, we’re consolidating a number of redundant audit operations into a unified model that reflects our broader platform architecture. This change impacts how operations for … Continue reading “Standardizing Audit Operations for Warehouse, DataMarts and SQL Analytics Endpoint.”  ( 6 min )
    Expanded Data Agent Support for Large Data Sources
    We are continuously enhancing data agents in Fabric to deliver more powerful and flexible data experiences. In February of this year, we introduced a host of new improvements coming to the AI Skill—including support for additional Data Sources such as Eventhouse KQL and Semantic Models. Initially, integration to data sources was limited to sources with … Continue reading “Expanded Data Agent Support for Large Data Sources”  ( 6 min )
    AI Ready Apps: From RAG to Chat – Interacting with SQL Database in Microsoft Fabric using GraphQL and MCP
    In the current digital environment, applications are expected to offer more than basic functionality, they must demonstrate intelligence, adaptability, and provide seamless user experiences. To remain competitive, organizations increasingly recognize that effective data utilization is fundamental to driving innovation. Data powers real-time insights and supports advanced agentic AI systems capable of reasoning, acting, and learning. … Continue reading “AI Ready Apps: From RAG to Chat – Interacting with SQL Database in Microsoft Fabric using GraphQL and MCP”  ( 17 min )
  • Open

    Develop Custom Engine Agent to Microsoft 365 Copilot Chat with pro-code
    Getting Started Here is the original article written in Japanese. There are some great articles that explain how to integrate an MCP server built on Azure with a declarative agent created using Microsoft Copilot Studio. These approaches aim to extend the agent’s capabilities rather than defining a role. Here were some of the challenges we encountered: The agent's behavior can only be tested through Copilot Studio. We want to use pro-code development tools. We can't choose the LLM used as the orchestrator—for example, there's no way to specify GPT-4o. The agent doesn’t always respond the same way it would if we were prompting the LLM directly. These limitations got me thinking: why not build the entire agent myself? At the same time, I still wanted to take advantage of the familiar Micros…  ( 56 min )

  • Open

    Creating Intelligent Video Summaries and Avatar Videos with Azure AI Services
    Introduction Imagine a world where every second of your organization’s video content, whether it’s a crucial training session, a product demo, or an expert-led seminar, becomes instantly accessible, searchable, and actionable. What if, instead of sifting through hours of footage, you could surface key insights, create concise summaries, and deliver dynamic, avatar-driven presentations in minutes? In a landscape overwhelmed by video, it’s not about storing more; it’s about unlocking value from every frame.  Join me as we dive into the future of intelligent video analytics, where Microsoft Azure’s cutting-edge AI transforms raw footage into powerful knowledge assets. The Business Challenge Traditional video content management faces several critical limitations: Content Discovery: Finding spec…  ( 40 min )

  • Open

    Fabric Influencers Spotlight July 2025
    Welcome to the July 2025 edition of the Fabric Influencers Spotlight, a recurring monthly post here to shine a bright light on the places on the internet where Microsoft MVPs & Fabric Super Users are doing some amazing work on all aspects of Microsoft Fabric. The Microsoft Fabric Community team has created the Fabric Influencers Spotlight to … Continue reading “Fabric Influencers Spotlight July 2025”  ( 8 min )
  • Open

    From Manual Testing to AI-Generated Automation: Our Azure DevOps MCP + Playwright Success Story
    In today’s fast-paced software development cycles, manual testing often becomes a significant bottleneck. Our team was facing a growing backlog of test cases that required repetitive manual execution—running the entire test suite every sprint. This consumed valuable time that could be better spent on exploratory testing and higher-value tasks. We set out to solve this […] The post From Manual Testing to AI-Generated Automation: Our Azure DevOps MCP + Playwright Success Story appeared first on Azure DevOps Blog.  ( 25 min )
  • Open

    Strategic Solutions for Seamless Integration of Third-Party SaaS
    Introduction Modern systems must be modular and interoperable by design. Integration is no longer a feature, it’s a requirement. Developers are expected to build architectures that connect easily with third-party platforms, but too often, core systems are designed in isolation. This disconnect creates friction for downstream teams and slows delivery. At Microsoft, SaaS platforms like SAP SuccessFactors and Eightfold support Talent Acquisition by handling functions such as requisition tracking, application workflows, and interview coordination. These tools help reduce costs and free up engineering focus for high-priority areas like Azure and AI. The real challenge is integrating them with internal systems such as Demand Planning, Offer Management, and Employee Central. This blog post outlin…  ( 40 min )
  • Open

    Bring Auxiliary Logs to the next level
    Azure Monitor logs are trusted by hundreds of thousands of organizations to monitor mission-critical workloads. But with such a diverse customer base, there’s no one-size-fits-all solution. That’s why we’re excited to announce a series of major advancements to Auxiliary Logs, the Azure Monitor plan that is designed for high volume logs, making it more cost-effective and improving the tools for use. Auxiliary Logs works in tandem with all other Azure Monitor tools including the more powerful Basic and Analytics Logs plans. Together, they are the one-stop-shop for all the logging needs of an organization. Auxiliary Logs is also one of the underlying services behind Microsoft Sentinel and the newly announced Sentinel data lake enabling customers to use the same data for both observability and…  ( 28 min )
  • Open

    New Surface Laptop 5G for Business, Copilot+ PC
    Stay securely connected with rearchitected 5G design — including six smart-switching antennas, eSIM and Wi-Fi 7 — without relying on hotspots. As the first Surface Laptop to feature 5G, it enables enterprise-ready AI features for deeper insights, productivity boosts, and powerful local inferencing wherever work happens. Stay connected anywhere. The first Surface laptop with built-in 5G — supporting NanoSIM, eSIM, smart signal switching, and international roaming. See it here. High-performance AI experiences. Surface Laptop 5G is powered by Intel Core Ultra processors with AI Boost. Watch here. No IT setup required. Surface Laptop 5G can arrive business-ready with zero-touch deployment and managed 5G policies. Check it out. QUICK LINKS: 00:00 — Surface Laptop 5G for Business 00:28 — Buil…  ( 29 min )

  • Open

    From Signals to Insights: Building a Real-Time Streaming Data Platform with Fabric Eventstream
    How Contoso uses MQTT sensors, public weather feeds and Fabric Real-Time Intelligence to monitor smart buildings. Jointly authored by Alicia Li and Arindam Chatterjee Why Real-Time Stream Processing Matters In the age of AI, as organizations embrace intelligent systems and data-driven decision-making, the ability to act on data the moment it arrives is unlocking new … Continue reading “From Signals to Insights: Building a Real-Time Streaming Data Platform with Fabric Eventstream”  ( 8 min )
  • Open

    Smart AI integration with the Model Context Protocol (MCP) ... part 3B
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Link part 1 Part 2 - What MCP is, including its architecture and core components Link part 2 Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 3 - A demo of MCP using Visual Studio Code Link Part 3 Part 3B - A demo of MCP using GitHub Coding Agent  Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Link part 4 Part 5 - How to add OAuth authentication/authorization to our MCP Server Link part 5 Part 3B - MCP Demo with GitHub Coding Agent This MCP demo will use the GitHub Copilot Coding Agent. The Coding Agent is an autonomous, AI-powered software development agent integrated into GitHub. It can work ind…  ( 23 min )

  • Open

    Refresh SQL analytics endpoint Metadata REST API (Generally Available)
    Last month we introduced the SQL analytics endpoint Metadata Sync REST API in preview, and we’re excited to announce that this API is now Generally available (GA). With this API you can programmatically trigger a refresh of your SQL analytics endpoint to keep tables in sync with any changes made in your lakehouse, native and mirrored databases, … Continue reading “Refresh SQL analytics endpoint Metadata REST API (Generally Available)”  ( 6 min )
    Empowering Workload Developers with Language Choice and Simplicity
    Celebrating Enhanced Flexibility in the WDK Sample We’re thrilled to announce a major step forward for the Workload Development Kit (WDK): developers can now transform our workload sample into their favorite programming languages. Whether you prefer Python, Java, Node.js, or Go, we’ve made it easier to start building innovative workloads with the tools and languages … Continue reading “Empowering Workload Developers with Language Choice and Simplicity”  ( 6 min )
    Fabric Data Agents + Microsoft Copilot Studio: A New Era of Multi-Agent Orchestration (Preview)
    Additional authors: Joanne Wong The preview of the integration between Fabric data agents and Microsoft Copilot Studio is now available, introducing a robust capability that enables agents to interoperate seamlessly across tools. This enhancement is designed to streamline the development, deployment, and scaling of intelligent agents across enterprise data environments, reinforcing automation and extensibility within … Continue reading “Fabric Data Agents + Microsoft Copilot Studio: A New Era of Multi-Agent Orchestration (Preview)”  ( 7 min )
    Terraform Provider for Microsoft Fabric: #2 Using MCP servers and Fabric CLI to help define your fabric resources
    The Microsoft Fabric tooling ecosystem has evolved in recent months as both the Fabric CLI and the Terraform Provider for Microsoft Fabric have become generally available, providing new opportunities for automating Fabric Administration tasks for those already experienced with Terraform for declarative deployment of infrastructure as code. This short blog series will provide practical guidance … Continue reading “Terraform Provider for Microsoft Fabric: #2 Using MCP servers and Fabric CLI to help define your fabric resources”  ( 9 min )
  • Open

    Agentic AI research-methodology - part 1
    It's nice to meet you virtually! We are a team consisting of Tamara Gaidar, a data scientist, and Fady Copty, a researcher. We both work at Microsoft Security. In the upcoming blog posts, we will share insights our team has gained from building AI agents that can be leveraged for any agentic-ai application. We hope you enjoy reading and that it adds value to your day-to-day job. Cheers! TL;DR In this blog we provide a practical guide for transitioning agentic AI projects from research to production. We will be writing it in several parts. In part one we will explore how to effectively design agentic AI systems - AI agents that can reason over and generate unstructured data - by emphasizing the importance of clearly defining the problem and understanding why AI is needed. We will outline a …  ( 29 min )
  • Open

    Call Function App from Azure Data Factory with Managed Identity Authentication
    Integrating Azure Function Apps into your Azure Data Factory (ADF) workflows is a common practice. To enhance security beyond the use of function API keys, leveraging managed identity authentication is strongly recommended. Given the fact that many existing guides were outdated with recent updates to Azure services, this article provides a comprehensive, up-to-date walkthrough on configuring managed identity in ADF to securely call Function Apps. The provided methods can also be adapted to other Azure services that need to call Function Apps with managed identity authentication. The high level process is: Enable Managed Identity on Data Factory Configure Microsoft Entra Sign-in on Azure Function App Configure Linked Service in Data Factory Assign Permissions to the Data Factory in Azure …  ( 23 min )
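    The core of that pattern is acquiring an Entra token for the Function App's app registration and presenting it as a Bearer token. The hedged Python sketch below reproduces the same flow outside ADF, assuming it runs on an Azure resource with a managed identity; the App ID URI and function URL are hypothetical placeholders.
```python
# Hedged sketch of the managed-identity call pattern described above, from Python.
# Assumes a managed identity is available; the audience and URL are placeholders.
import requests
from azure.identity import ManagedIdentityCredential

credential = ManagedIdentityCredential()
# Audience of the Function App's Entra app registration (hypothetical value)
token = credential.get_token("api://<function-app-client-id>/.default")

resp = requests.post(
    "https://<function-app>.azurewebsites.net/api/process",  # hypothetical function URL
    headers={"Authorization": f"Bearer {token.token}"},
    json={"payload": "example"},
)
resp.raise_for_status()
print(resp.json())
```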
    Capture Java Thread Dump from Kudu console on Windows App Service
    A Java thread dump is a snapshot of the current state of all the threads that are part of a Java process. It provides a lot of valuable information about the Java process like its current execution point, stack trace, thread priority, and the status of the thread (running, waiting, or blocked). Thread dumps are especially useful for diagnosing and troubleshooting performance issues, deadlocks, and unresponsive applications in a Java application. They are crucial for understanding the behavior of an application at a certain point in time and can provide insights into the application's performance that would be difficult to obtain in any other way.   In Azure App Service, we provide the ability to capture a Java thread dump of your Java application by using "Diagnose and solve problems". The tool is…  ( 25 min )
  • Open

    Introducing the Improved Search Job Experience in Azure Monitor Log Analytics
    A search job is an asynchronous query that runs on any data in your Log Analytics workspace, including data from the long-term retention, making the results available for further queries in a new Analytics table within your workspace.  To efficiently search massive datasets, Search Job divides queries into smaller time-based segments, processes them in parallel, and returns the results. This approach optimizes scalability and enables reliable analysis, even over petabytes of data.  We’re excited to announce significant enhancements to Search Jobs, designed to make large-scale data exploration faster, easier, and more efficient.  What’s New in Search Job  Our latest update includes several powerful improvements:  Intuitive and streamlined UI experience for faster and simpler setup.  Cost e…  ( 24 min )
  • Open

    Azure VMware Solution now available in Spain Central
    We are pleased to announce that Azure VMware Solution is now available in Spain Central. Now in 35 Azure regions, Azure VMware Solution empowers you to seamlessly extend or migrate existing VMware workloads to Azure without the cost, effort or risk of re-architecting applications or retooling operations.  Azure VMware Solution supports: Rapid cloud migration of VMware-based workloads to Azure without refactoring. Datacenter exit while maintaining operational consistency for the VMware environment. Business continuity and disaster recovery for on-premises VMware environments. Attach Azure services and innovate applications at your own pace. Includes the VMware technology stack and lets you leverage existing Microsoft licenses for Windows Server and SQL Server. For updates on current and upcoming region availability, visit the product by region page here. Streamline migration with new offers and licensing benefits, including a 20% discount. We recently announced the VMware Rapid Migration Plan, where Microsoft provides a comprehensive set of licensing benefits and programs to give you price protection and savings as you migrate to Azure VMware Solution. Azure VMware Solution is a great first step to the cloud for VMware customers, and this plan can help you get there. Learn More  ( 19 min )

  • Open

    Connect to your SQL database in Fabric using Python Notebook
    Read/Write to SQL database in Fabric using Python Notebook You created your SQL database in Fabric because you wanted to take full advantage of all the existing workloads without worrying about integration pain points. Python notebook integration with SQL was highly requested. We are thrilled to share you can seamlessly use a python notebook to … Continue reading “Connect to your SQL database in Fabric using Python Notebook”  ( 6 min )
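    As a rough illustration of that connection, here is a generic sketch using an Entra token with pyodbc; it is not necessarily the notebook-native method the post describes, and the server and database names are placeholders.
```python
# Generic sketch: connecting to a SQL database in Fabric from Python with an Entra
# access token passed to pyodbc. Server, database, and table names are placeholders.
import struct
import pyodbc
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://database.windows.net/.default").token
token_bytes = token.encode("utf-16-le")
token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=<your-server>.database.fabric.microsoft.com;"  # placeholder server
    "Database=<your-database>;",
    attrs_before={1256: token_struct},  # 1256 = SQL_COPT_SS_ACCESS_TOKEN
)

rows = conn.cursor().execute("SELECT TOP 5 * FROM dbo.Customers").fetchall()  # hypothetical table
print(rows)
```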
    Fabric Data Warehouse Migration Assistant: Better, Faster, More Reliable
    Building on the excitement of our public preview, we’re excited to announce the latest updates to the Migration Assistant for Fabric Data Warehouse, making it even easier, faster, and more reliable to move your data to Fabric. The Migration Assistant simplifies what can be a complicated process by automatically taking care of it. Since its … Continue reading “Fabric Data Warehouse Migration Assistant: Better, Faster, More Reliable”  ( 6 min )
    Simplifying Data Ingestion with Copy job – Copy Data from Database View,  Sample dataset and New Connectors
    Copy Job is now the preferred solution in Microsoft Fabric Data Factory for simplified data movement, whether you’re moving data across clouds, from on-premises systems, or between different services. With robust support for both batch and incremental copying, it offers the flexibility to tackle a wide range of scenarios with ease. We continuously improve Copy … Continue reading “Simplifying Data Ingestion with Copy job – Copy Data from Database View,  Sample dataset and New Connectors”  ( 6 min )
  • Open

    Building Enterprise-Grade Deep Research Agents In-House: Architecture and Implementation
    As generative AI adoption accelerates, more professionals are recognizing the limitations of basic Retrieval-Augmented Generation (RAG). Many point out that traditional RAG provides only superficial analysis and that search results often appear as isolated points, rather than forming holistic insights. Most RAG implementations rely on single-query searches and summarization, which makes it difficult to explore information in depth or to perform repeated validation. These limitations become especially clear in complex enterprise scenarios. Deep Research is designed to address these challenges. Unlike RAG, Deep Research refers to advanced capabilities where AI receives a user’s query and collects and analyzes information from a variety of perspectives to generate detailed reports. This appro…  ( 46 min )
  • Open

    Java OpenJDK July 2025 Patch & Security Update
    Hello Java customers! We are happy to announce the latest July 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today. OpenJDK 21.0.8 OpenJDK 17.0.16 OpenJDK 11.0.28 Check our release notes page for details on fixes and enhancements. The source code of our builds is available now […] The post Java OpenJDK July 2025 Patch & Security Update appeared first on Microsoft for Java Developers.  ( 22 min )
  • Open

    Important Changes to App Service Managed Certificates: Is Your Certificate Affected?
    Overview  As part of an upcoming industry-wide change, DigiCert, the Certificate Authority (CA) for Azure App Service Managed Certificates (ASMC), is required to migrate to a new validation platform to meet multi-perspective issuance corroboration (MPIC) requirements.   While most certificates will not be impacted by this change, certain site configurations and setups may prevent certificate issuance or renewal starting July 28, 2025.   What Will the Change Look Like?  For most customers: No disruption. Certificate issuance and renewals will continue as expected for eligible site configurations.  For impacted scenarios: Certificate requests will fail (no certificate issued) starting July 28, 2025, if your site configuration is not supported. Existing certificates will remain valid until th…  ( 35 min )
  • Open

    Build Smarter with the Microsoft 365 Agents Toolkit MCP Server
    As AI agents become central to how we build and interact with modern productivity apps, developers need tools that are flexible, standard-based, and deeply integrated with AI. That’s where the Microsoft 365 Agents Toolkit MCP Server comes in.  This is a local Model Context Protocol (MCP) server that acts as a bridge between your AI coding agents and […] The post Build Smarter with the Microsoft 365 Agents Toolkit MCP Server appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Azure Developer CLI: From Dev to Prod with One Click
    This post walks through how to implement a “build once, deploy everywhere” pattern using Azure Developer CLI (azd) that provisions environment-specific infrastructure and promotes applications from dev to prod with the same build artifacts. You’ll learn how to use conditional Bicep deployment, environment variable injection, package preservation across environments, and automated CI/CD promotion from development […] The post Azure Developer CLI: From Dev to Prod with One Click appeared first on Azure DevOps Blog.  ( 24 min )

  • Open

    Sunsetting Default Semantic Models – Microsoft Fabric
    Overview Microsoft Fabric is officially sunsetting Default Semantic Models. This change is part of our ongoing efforts to simplify and improve the manageability, deployment, and governance of Fabric items such as warehouse, lakehouse, SQL database, and mirrored databases. Why the Change? Default Semantic Models were initially designed to provide a lightweight, out-of-the-box experience—automatically generating models … Continue reading “Sunsetting Default Semantic Models – Microsoft Fabric”  ( 7 min )
    Serve real-time predictions seamlessly with ML model endpoints
    Fabric offers a wide variety of data-science capabilities, from automated machine learning with FLAML to batch inferencing with the SynapseML PREDICT function. We’re pleased to announce that ML models can now serve real-time predictions from secure, scalable, and easy-to-use online endpoints. In addition to generating batch predictions in Spark, you can use endpoints to bring … Continue reading “Serve real-time predictions seamlessly with ML model endpoints”  ( 6 min )
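    For a sense of what calling such an online endpoint looks like, here is a hedged Python sketch: a plain HTTPS POST with a bearer token and a JSON payload. The scoring URL, token scope, and input schema are placeholders; the endpoint's generated code snippet defines the real contract.
```python
# Hedged sketch: scoring against a (hypothetical) real-time ML model endpoint.
import requests
from azure.identity import DefaultAzureCredential

# Assumed token scope; replace with whatever the endpoint actually requires.
token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token

resp = requests.post(
    "https://<endpoint-host>/score",                           # placeholder scoring URL
    headers={"Authorization": f"Bearer {token}"},
    json={"inputs": [{"feature_a": 1.2, "feature_b": 0.7}]},   # placeholder input schema
)
resp.raise_for_status()
print(resp.json())
```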
    What’s new and coming soon in SQL analytics endpoint in Fabric
    We’re thrilled to share a series of exciting updates and upcoming enhancements to the SQL analytics endpoint in Microsoft Fabric. These improvements are designed to make your experience more powerful and reliable. From Preview to GA – Metadata sync REST API Last month, we introduced the SQL analytics endpoint metadata sync REST API in preview, … Continue reading “What’s new and coming soon in SQL analytics endpoint in Fabric”  ( 6 min )
    Using Microsoft Fabric Git integration for User Data Functions
    Microsoft Fabric offers native Git integration and deployment pipelines to facilitate version control, collaboration, and automated releases for workspace items like user data functions. This guide explains how to set up and manage Git integration for user data functions within a Fabric workspace. • Workspace preparation and Git linking: Users start by selecting or creating a Fabric workspace containing user data functions, then enable Git integration via workspace settings by connecting to a Git provider and repository branch, optionally specifying a folder for organization. • Branching strategy configuration: Teams are advised to adopt branching strategies such as main/develop, feature, and release branches, along with pull request and code review policies to maintain code quality and collaboration. • Managing user data functions in Git: Each data function is stored in a function_app.py file; users clone the repository locally, edit or add functions, and update the definition.json file to reflect new functions and required libraries like numpy. • Committing, syncing, and publishing changes: After committing changes in VS Code, users sync with the Fabric portal, update the function via source control, and publish to deploy the new or updated functions, making them available for invocation.  ( 8 min )
  • Open

    Customising Node-Level Configuration in AKS
    When you deploy AKS, you deploy the control plane, which is managed by Microsoft, and one or more node pools, which contain the worker nodes used to run your Kubernetes workloads. These node pools are usually deployed as Virtual Machine Scale Sets. These scale sets are visible in your subscription, but generally you would not make changes to these directly, as they will be managed by AKS and all of the configuration and management of these is done through AKS. However, there are some scenarios where you do need to make changes to the underlying node configuration to be able to handle the workloads you need to run. Whilst you can make some changes to these nodes, you need to make sure you do it in a supported manner, which will be applied consistently to all your nodes. An example of this re…  ( 38 min )
  • Open

    Guest Blog: Building Multi-Agent Solutions with Semantic Kernel and A2A Protocol
    In the rapidly evolving landscape of AI application development, the ability to orchestrate multiple intelligent agents has become crucial for building sophisticated, enterprise-grade solutions. While individual AI agents excel at specific tasks, complex business scenarios often require coordination between specialized agents running on different platforms, frameworks, or even across organizational boundaries. This is where the […] The post Guest Blog: Building Multi-Agent Solutions with Semantic Kernel and A2A Protocol appeared first on Semantic Kernel.  ( 29 min )

  • Open

    Autonomous Visual Studio Code Desktop Automation using Computer Use Agent & PyAutoGUI
    Project Overview The system replicates a developer's workflow by autonomously launching VS Code, configuring the environment, and generating code via GitHub Copilot Agent mode. This automation is ideal for scenarios where: Codebase is not hosted on GitHub – GitHub Agents on github.com are not applicable. GitHub Codespaces cannot access the code – VS Code in the browser is ruled out. Desktop automation is required – Must work with VS Code desktop client on a local computer. Key Innovation: CUA Model + PyAutoGUI Integration The project demonstrates the synergy between CUA model and PyAutoGUI: PyAutoGUI: Executes desktop actions (launching apps, typing, clicking). CUA Model: Analyzes screenshots and determines next actions. Deterministic Outcomes: Enables autonomous detection and correctio…  ( 22 min )
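    A minimal sketch of that screenshot-analyze-act loop is shown below; analyze_screenshot() stands in for the CUA model call and is hypothetical, while pyautogui performs the actual desktop actions.
```python
# Minimal sketch of the CUA + PyAutoGUI loop: capture the screen, ask the model for
# the next action, then execute it. analyze_screenshot() is a hypothetical placeholder.
import time
import pyautogui

def analyze_screenshot(image) -> dict:
    """Placeholder for the CUA model call: returns the next action as a dict,
    e.g. {"type": "click", "x": 640, "y": 360} or {"type": "type", "text": "hello"}."""
    raise NotImplementedError

for _ in range(10):                       # bounded loop for the sketch
    screenshot = pyautogui.screenshot()   # capture the current desktop state
    action = analyze_screenshot(screenshot)
    if action["type"] == "click":
        pyautogui.click(action["x"], action["y"])
    elif action["type"] == "type":
        pyautogui.write(action["text"], interval=0.02)
    elif action["type"] == "done":
        break
    time.sleep(1.0)                       # let the UI settle before the next capture
```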

  • Open

    AI for Operations - Copilot Agent Integration
    Solution ideas The original framework introduced several Logic App and Function App patterns for SQL BPA, Update Manager, Cost Management, Anomaly Detection, and Smart Doc creation. In this article we add two Copilot Studio Agents, packaged in the GitHub repository Microsoft Azure AI for Operation Framework, designed to be deployed in a dedicated subscription (e.g., OpenAI-CoreIntegration): Copilot FinOps Agent – interactive cost & usage analysis Copilot Update Manager Agent – interactive patch status & one-time updates Architecture   Copilot FinOps Agent A Copilot Studio agent that lets stakeholders chat in natural language to retrieve, compare, and summarise cost data—without leaving Teams. Dataflow (# / Stage / Description): Initial Trigger – User message (Teams / Copilot Studio web) invoke t…  ( 27 min )

  • Open

    Deep Dive on Availability Zones in Azure App Service
    With the recent release of our new Availability Zone features, a common question we've been getting from our customers is: "How do I know if my existing App Service supports Availability Zones?" While this information is documented in the App Service Resiliency Microsoft Learn Docs, in this blog post I will try and walk you through some essential aspects of enabling Availability Zones in Azure App Service. This article outlines how to confirm if your App Service plan supports Availability Zones, guides you through upgrading from Free, Basic, or Standard tiers to Premium SKUs, and describes the process for enabling this feature, including creating new, properly configured plans. The overview also addresses specific considerations for App Service Environments, along with common issues and so…  ( 44 min )
  • Open

    Take Control of Your Azure VMware Solution Maintenance Schedule
    Overview Azure VMware Solution is a VMware validated first party Azure service from Microsoft that provides private clouds containing VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. It enables customers to leverage their existing investments in VMware skills and tools, allowing them to focus on developing and running their VMware-based workloads on Azure. At Microsoft, we’re continuously evolving our services based on customer feedback and Azure VMware Solution is no exception. As a fully Microsoft-managed service, Azure VMware Solution takes care of the end-to-end lifecycle management of your VMware environment, from ESXi host patching to vCenter Server and NSX upgrades. This ensures your private cloud stays secure, compliant, and up-to-date.  Historically, p…  ( 25 min )
  • Open

    Accelerating Insights from Unstructured Text with AI Powered OneLake Shortcut Transformations
    Shortcut‑based AI transformations in Microsoft Fabric convert raw text files into governed Delta Lake tables within minutes, removing the need for complex data‑integration pipelines and significantly reducing time to insight. Why adopt AI transformations? Supported AI transforms (Transform – Purpose): Summarization – generates concise summaries from long-form text; Translation – translates text between supported languages; Sentiment analysis – labels text … Continue reading “Accelerating Insights from Unstructured Text with AI Powered OneLake Shortcut Transformations”  ( 6 min )
  • Open

    Geo-Replication is Here! Now generally available for Event Hubs Premium & Dedicated
    Today, we are thrilled to announce the General Availability of the Geo-replication feature for Azure Event Hubs, now available in both Premium and Dedicated tiers. This milestone marks a significant enhancement in our service, providing our customers with robust business continuity and disaster recovery capabilities – ensuring high availability for their mission-critical applications.  The Geo-replication feature allows you to replicate your Event Hubs data across multiple regions either synchronously or asynchronously, ensuring that your data remains accessible in the event of maintenance activities, regional degradation, or a regional outage. With Geo-replication, you can seamlessly promote a secondary region to a primary, minimizing downtime and ensuring business continuity.        …  ( 21 min )
  • Open

    Join Us for an AMA on Improving Your MCP Servers with Azure API Management
    What Will We Cover? In this interactive AMA, you'll learn how to: Expose Azure API Management instances as MCP servers, enabling remote access to your APIs using AI combined with Model Context Protocol. Configure API Management policies to enhance your MCP servers with enterprise-grade capabilities such as rate limiting, authentication, and centralized monitoring. Why This Matters Model Context Protocol (MCP) bridges the gap between AI agents and the real-world data they need to be effective. By integrating MCP with Azure API Management, developers can expose tools to their AI agents while enforcing consistent policies and security standards. Whether you’re deploying custom tools or remote services, this AMA will show you how Azure API Management can be your go-to platform for controlling and scaling MCP access. How to Join Register to Join the Azure AI Foundry Discord Community Event. See the events channel 📅 Monday, July 21st, 2025 ⏰ 10:00 AM Pacific Time (UTC−07:00) Event Highlights Learn how to expose MCP servers through Azure API Management See how to configure and test policies such as rate limiting Get answers directly from Microsoft product managers and engineers Connect with fellow developers building with MCP, Azure API Management, and Azure AI Foundry Get a Head Start Before the event, check out the documentation to learn how to Expose a REST API in API Management as an MCP server, view the Build and protect MCPs faster with governance in Azure API Manager session from Build 2025, or explore the AI-Gateway labs on GitHub and learn how to use APIM and MCP in the MCP for Beginners course. Don’t miss this opportunity to deepen your understanding of API Management and MCP integration—and get your questions answered live!  ( 21 min )

  • Open

    Decoding Data with Confluent Schema Registry Support in Eventstream (Preview)
    We are pleased to announce that Eventstream’s Confluent Cloud for Apache Kafka streaming connector now supports decoding data from Confluent Cloud for Apache Kafka topics that are associated with a data contract in Confluent Schema Registry. The Challenge with Schema Registry Encoded Data The Confluent Schema Registry serves as a centralized service for managing and … Continue reading “Decoding Data with Confluent Schema Registry Support in Eventstream (Preview)”  ( 7 min )
    Accelerating Insights from Unstructured Text with AI‑Powered OneLake Shortcut Transformations in Microsoft Fabric (Public Preview)
    Shortcut‑based AI transformations in Microsoft Fabric convert raw text files into governed Delta Lake tables within minutes, removing the need for complex data‑integration pipelines and significantly reducing time to insight. Why adopt AI transformations? How to set up AI-powered shortcut transformations (step 3: monitor progress – status can be observed under Manage shortcut). Practical implementation patterns: all these … Continue reading “Accelerating Insights from Unstructured Text with AI‑Powered OneLake Shortcut Transformations in Microsoft Fabric (Public Preview)”  ( 3 min )
    Enhancing Data Transformation Flexibility with Multiple-Schema Inferencing in Eventstream (Preview)
    Introducing multiple-schema inferencing in Eventstream! This feature empowers you to work seamlessly with data sources that emit varying schemas by inferring and managing multiple schemas simultaneously. It eliminates the limitations of single-schema inferencing by enabling more accurate and flexible transformations, preventing field mismatches when switching between Live and Edit modes, and allowing you to view … Continue reading “Enhancing Data Transformation Flexibility with Multiple-Schema Inferencing in Eventstream (Preview)”  ( 8 min )
    User Data Functions now support async functions and pandas DataFrame, Series types
    Microsoft Fabric has introduced new features for its User Data Functions (UDFs), enhancing Python-based data processing capabilities within the platform. These updates include support for asynchronous functions and the use of pandas DataFrame and Series types for input and output, enabling more efficient handling of large-scale data. • Async function support: Developers can now write async functions in Fabric UDFs to improve responsiveness and efficiency, especially for managing high volumes of I/O-bound operations, such as reading files asynchronously from a Lakehouse. • Pandas DataFrame and Series integration: UDFs can accept and return pandas DataFrames and Series, allowing batch processing of rows with improved speed and performance in data analysis tasks. An example function calculates total revenue by driver using pandas groupby operations. • Usage in notebooks: These functions can be invoked directly from notebooks using pandas objects, facilitating efficient aggregation and analysis of large datasets interactively within Microsoft Fabric. • Getting started and benefits: Users can enable these features by updating the fabric-user-data-functions library to version 1.0.0. The enhancements reduce I/O operations, enable concurrent task handling, and improve performance on datasets with millions of rows.  ( 7 min )
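    The sketch below illustrates the two capabilities described above, assuming the fabric.functions decorator style used by Fabric User Data Functions; the column names and the async file-read body are illustrative placeholders.
```python
# Hedged sketch: a pandas-based UDF and an async UDF, assuming the fabric.functions
# decorator style; column names and the awaited I/O are illustrative.
import asyncio
import pandas as pd
import fabric.functions as fn

udf = fn.UserDataFunctions()

@udf.function()
def revenue_by_driver(trips: pd.DataFrame) -> pd.DataFrame:
    """Aggregate total revenue per driver from a trips DataFrame."""
    return trips.groupby("driver_id", as_index=False)["fare_amount"].sum()

@udf.function()
async def read_lakehouse_file(path: str) -> str:
    """Illustrative async, I/O-bound function; the awaited call is a stand-in."""
    await asyncio.sleep(0)  # placeholder for an awaited read from the Lakehouse
    return f"read {path}"
```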
  • Open

    Model Mondays S2:E5 – Fine Tuning & Distillation with Dave Voutila
    This post was generated with AI help and human revision & review. To learn more about our motivation and workflows, please refer to this document in our Model Mondays website. About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ, one week at a time. Here’s what to expect: 5-Minute Highlights – Quick updates on Azure AI models and tools (Mondays) 15-Minute Spotlight – A deeper look at a key model, protocol, or feature (Mondays) 30-Minute AMA – Friday Q&A with experts from Monday’s episode Whether you’re just starting out or already working with models, this series is your chance to grow with the community. Quick links to explore: Register for Model Mondays Watch Past Episodes Join the AMA on July 18 Visit the Discussion Forum Sp…  ( 26 min )
  • Open

    Microsoft is headed to VMware Explore 2025 in Las Vegas
    If you want to know about Azure, the work we are doing in partnership with VMware by Broadcom, or have a conversation about your VMware workloads, stop by our booth #107! This year Microsoft will have several break-out sessions on a variety of topics where you will discover how Microsoft can streamline your VMware workloads’ migration to the cloud. Learn how to maximize your existing on-premises investments and fast path your applications into an AI-innovation platform. We’ll reveal the quickest, least disruptive migration path, and share enticing offers that make this an easy decision. Learn how transitioning from on-premises to Azure will give your business a competitive edge. As you are building out your VMware Explore schedule, don't miss these Microsoft sessions: Monday, August 25th 9:00-9:45 AM From On-Prem to Intelligent Cloud with Azure VMware Solution    CLOB1948LVS  45-min. Breakout Session Monday, August 25th 12:45-1:30 PM Building End-to-End Networking with Azure VMware Solution   CLOB1950LVS  45-min. Breakout Session Monday, August 25th 2:00-3:30 PM Azure & AI: How to unleash the power of Azure VMware Solution   CLOT1951LVS  90-min. Tutorial Tuesday, August 26th 10:30-11:15 AM Enterprise Security and Migration with Azure VMware Solution   CLOB1952LVS  45-min. Breakout Session Tuesday, August 26th 1:00-1:45 PM Build Resilient VCF Private Clouds: Azure VMware Solution (AVS) + BCDR & Storage   CLOB1953LVS  45-min. Breakout  Wednesday, August 27th 11:00-11:30 AM Ask Me Anything About Azure VMware Solution   CLOM1949LVS  Meet the Expert Roundtable Wednesday, August 27th 3:30-4:00 PM Ask Me Anything About Azure VMware Solution   CLOM1947LVS  Meet the Expert Roundtable  ( 20 min )
  • Open

    Bring your own agents into Microsoft 365 Copilot
    Custom Engine Agents now generally available—build and integrate your own AI into the flow of work Microsoft 365 Copilot is redefining how people interact with AI—embedding it directly into the flow of work as the intuitive, natural interface for agents: the ‘UI for AI’ As Copilot becomes the interface for AI in the workplace, we’re […] The post Bring your own agents into Microsoft 365 Copilot appeared first on Microsoft 365 Developer Blog.  ( 26 min )
  • Open

    Swagger Auto-Generation on MCP Server
    Remote MCP servers are common in many use cases. In particular, if you're using Azure API Management (APIM), Azure API Center (APIC) or Copilot Studio in Power Platform, integrating with remote MCP servers is inevitable.   MCP (Model Context Protocol) is a protocol used for defining server endpoints that support AI agent workflows, often in Copilot scenarios.   Background   A remote MCP server is basically an HTTP API app that exposes an endpoint of either /mcp or /sse depending on the spec the server supports. This endpoint should be accessible from Copilot Studio through a custom connector, APIM or APIC. The easiest and simplest way of exposing this endpoint is to provide an OpenAPI doc. The OpenAPI doc is very simple because it has only one endpoint, and the structure is a…  ( 28 min )
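    To make that single-endpoint OpenAPI doc concrete, here is an illustrative minimal document expressed as a Python dict; the server URL and field values are placeholders, not the post's generated output.
```python
# Illustrative minimal OpenAPI document for a remote MCP server exposing one /mcp endpoint.
import json

openapi_doc = {
    "openapi": "3.0.1",
    "info": {"title": "My MCP Server", "version": "1.0.0"},
    "servers": [{"url": "https://<your-mcp-server>.azurewebsites.net"}],  # placeholder
    "paths": {
        "/mcp": {
            "post": {
                "summary": "MCP endpoint (JSON-RPC over HTTP)",
                "requestBody": {
                    "content": {"application/json": {"schema": {"type": "object"}}}
                },
                "responses": {"200": {"description": "MCP response"}},
            }
        }
    },
}

print(json.dumps(openapi_doc, indent=2))
```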

  • Open

    Organising the AI Foundry: A Practical Guide for Enterprise Readiness
    The purpose of this document is to provide an overview of AI Foundry and how it can be set up and organised at scale for an enterprise. This document should be treated as guidance and recommendations; individual organisations should also consider other factors such as security, policy, governance and the number of business units. AI Foundry Resource: Azure AI Foundry is Microsoft’s unified platform for building, customising, and managing AI applications and agents—designed to accelerate innovation and operationalise AI at scale. It brings together: Data, models, and operations into a single, integrated environment. A developer-first experience with native support for GitHub, Visual Studio, and Copilot Studio. A low-code portal and code-first SDKs/APIs for flexibility across skill levels. …  ( 43 min )
    Downloading tagged images from Azure Custom Vision
    Azure Custom Vision is an easy-to-use yet powerful cognitive service from Microsoft that empowers developers to build, train, and deploy custom image classification or object detection models - all without being machine learning experts. Through a simple web interface or SDKs, you can upload images, tag them with meaningful labels, and train a model that understands your specific use case with the help of transfer learning. Once trained, you might realize you need access to those tagged images again - maybe for auditing, retraining on different platforms, migrating data, or simply backing them up. Unfortunately, the Custom Vision portal doesn’t offer a “Download All Tagged Images” button. Thankfully, Microsoft provides a RESTful Get Tagged Images API, a hidden gem that enables you to reco…  ( 39 min )
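    As a hedged sketch of that workflow, the snippet below pages through tagged images with the Custom Vision training SDK and downloads the originals; the endpoint, training key, and project ID are placeholders, and the image-URI attribute should be checked against the SDK version you use.
```python
# Hedged sketch: download tagged images via the Custom Vision training SDK and requests.
import requests
from msrest.authentication import ApiKeyCredentials
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient

trainer = CustomVisionTrainingClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",     # placeholder
    credentials=ApiKeyCredentials(in_headers={"Training-key": "<key>"}),  # placeholder
)

skip, take = 0, 50
while True:
    images = trainer.get_tagged_images("<project-id>", take=take, skip=skip)
    if not images:
        break
    for image in images:
        data = requests.get(image.original_image_uri, timeout=30).content
        with open(f"{image.id}.jpg", "wb") as f:
            f.write(data)
    skip += take
```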
  • Open

    Microsoft and LangChain4j: A Partnership for Secure, Enterprise-Grade Java AI Applications
    The artificial intelligence landscape presents unprecedented opportunities for Java developers. With decades of experience building robust enterprise applications using Spring, Quarkus and Jakarta EE, Java developers are uniquely positioned to create sophisticated AI solutions. The emergence of specialized AI libraries like LangChain4j enables them to leverage their existing expertise while building secure, scalable, and maintainable […] The post Microsoft and LangChain4j: A Partnership for Secure, Enterprise-Grade Java AI Applications appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    NEW Conditional Access Optimization Agent in Microsoft Entra + Security Copilot in Entra updates
    Instead of switching between logs, PowerShell, and spreadsheets, Security Copilot centralizes insights for faster, more focused action.  Resolve compromised accounts, uncover ownerless or high-risk apps, and tighten policy coverage with clear insights, actionable recommendations, and auto-generated policies. Strengthen security posture and reclaim time with a smarter, more efficient approach powered by Security Copilot.       Diana Vicezar, Microsoft Entra Product Manager, shares how to streamline investigations and policy management using AI-driven insights and automation. Skip the scripting.  Ask questions in plain language and get back policy and risk insights in seconds. Microsoft Entra now has built-in AI with Security Copilot. Stay ahead of threats.  Use AI to track auth changes, …  ( 38 min )
  • Open

    Secure Data Streaming with Private Endpoints in Eventstream (Generally Available)
    We’re excited to announce the General Availability of Managed Private Endpoints (MPE) in Fabric Eventstream. This network security feature allows you to stream data from Azure resources to Fabric over a private and secure network without the complexity of manual network configurations. Why Network Security Matters for Streaming As organizations increasingly adopt real-time data streaming … Continue reading “Secure Data Streaming with Private Endpoints in Eventstream (Generally Available)”  ( 6 min )
    Permission model improvements for Azure and Fabric Events
    Azure and Fabric Events offer a powerful capability within Real-Time Intelligence that enables you to ingest system events that are generated in Microsoft Fabric and Azure to deliver them to consumers in Microsoft Fabric like Activator for setting event-based triggers or Eventstream to stream and process events to other destinations. Permission model To subscribe to Azure and Fabric events … Continue reading “Permission model improvements for Azure and Fabric Events”  ( 7 min )
    Terraform Provider for Microsoft Fabric: #1 Accelerating first steps using the CLIs
    The Microsoft Fabric tooling ecosystem has evolved in recent months as both the Fabric CLI and the Terraform Provider for Microsoft Fabric have become generally available, providing new opportunities for automating Fabric Administration tasks for those already experienced with Terraform for declarative deployment of infrastructure as code. This short blog series will provide practical guidance … Continue reading “Terraform Provider for Microsoft Fabric: #1 Accelerating first steps using the CLIs”  ( 10 min )
  • Open

    CampusSphere: Building the Future of Campus AI with Microsoft's Agentic Framework
    Project Overview We are a team of Imperial College Students committed to improving campus life through innovative multi-agent solutions. CampusSphere leverages Microsoft Azure AI capabilities to automate core university campus services. We created an end-to-end solution that allows both students and staff to access a multi-agent framework for room/gym booking, attendance tracking, calendar management, IoT monitoring and more. 🔭 Our Initial Vision: Reimagining Campus Technology When our team at Imperial College London embarked on the CampusSphere project as part of Microsoft's Agentic Campus initiative, we had one clear ambition: to create an intelligent campus ecosystem that would fundamentally change how students, faculty, and staff interact with university services. The inspiration came…  ( 31 min )
  • Open

    Smart AI Integration with the Model Context Protocol (MCP) ... part 5
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Link part 1 Part 2 - What MCP is, including its architecture and core components Link part 2 Part 3 - A demo of MCP — including how to configure it so anyone can run it Link part 3 Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Link part 4 Part 5 - How to add OAuth authentication/authorization to our MCP Server  Part 5 ... MCP Authentication / Authorization In the previous section we covered the development of an MCP Server. In this section we shall extend the server code so that it supports authentication / authorization. Objectives Authentication and authorization are essential for both the MCP service provider and the user. Each…  ( 40 min )
  • Open

    Azure VMware Solution Broadcom VMSA-2025-0013 Remediation
    Broadcom has released a new Critical Security Advisory, VMSA-2025-0013 with a CVSS base score range of 7.1 to 9.3. With Microsoft’s commitment to the security of our platform and our improved lifecycle management process, we were able to quickly assemble a global team to work on the acceleration and validation of the ESXi 8.0 U3f + Hot Patch (VAIO bug fix) Build 24797835 security patch. We have nearly finished qualifying the security patch that will mitigate VMSA-2025-0013 across our fleet. As a result, with the public release of this vulnerability we expect to be able to patch your existing Azure VMware Solution infrastructure next week. We are committing to completing the remediation within 30 days. Microsoft will communicate the scheduled date of patching over the next three weeks. Any…  ( 22 min )

  • Open

    Introduction of access limits in a Fabric workspace
    In August 2025, Microsoft Fabric will introduce workspace access limits to improve service quality, reliability, and to encourage workspace access control hygiene. This limit will be permanent once it is rolled out – each Fabric & Power BI workspace will be limited to a maximum of 1,000 users or groups in workspaces roles (Admin, Member, … Continue reading “Introduction of access limits in a Fabric workspace”  ( 6 min )
    Unified by design: mirroring Azure Databricks Unity Catalog to Microsoft OneLake in Fabric (Generally Available)
    We are thrilled to announce the general availability of Mirroring for Azure Databricks Unity Catalog in Microsoft Fabric—a secure, high-performance integration that provides seamless access to Azure Databricks tables from Fabric. With Fabric and Azure Databricks, we are building the future of data platforms on a lakehouse foundation, powered by open data formats, full interoperability, … Continue reading “Unified by design: mirroring Azure Databricks Unity Catalog to Microsoft OneLake in Fabric (Generally Available)”  ( 9 min )
  • Open

    Building Agent-to-Agent (A2A) Applications on Azure App Service
    The world of AI agents is evolving rapidly, with new protocols and frameworks emerging to enable sophisticated multi-agent communication. Google's Agent-to-Agent (A2A) protocol represents one of the most promising approaches for building distributed AI systems that can coordinate tasks across different platforms and services. I'm excited to share how you can leverage Azure App Service to build, deploy, and scale A2A applications. Today, I'll walk you through a practical example that combines Microsoft Semantic Kernel with the A2A protocol to create an intelligent travel planning assistant. What We Built: An A2A Travel Agent on App Service I've taken an existing A2A travel planning sample and enhanced it to run seamlessly on Azure App Service. This demonstrates how A2A concepts can be adapt…  ( 32 min )
  • Open

    Supercharge your EWS migration with AI and GitHub Copilot
    We're announcing a new tutorial that helps developers migrate solutions from EWS to Microsoft Graph using AI and GitHub Copilot. The post Supercharge your EWS migration with AI and GitHub Copilot appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Hyperlight: Debugging hardware-protected guests
    You can now interactively debug Hyperlight guest micro-VMs. Attach the GNU Debugger at runtime to step through the code. The post Hyperlight: Debugging hardware-protected guests appeared first on Microsoft Open Source Blog.  ( 16 min )
  • Open

    Fine-Tuning Together: How Our Developer Community Is Shaping the Future of Custom AI
    In a world where foundation models are increasingly accessible, the real magic happens when developers make them their own. Fine-tuning is no longer a niche capability; it’s becoming a core skill for developers who want to build AI that’s faster, smarter, and more aligned with their users, scaling expert knowledge across their organizations. Over the past few months, we’ve seen something remarkable: a growing community of builders, tinkerers, and innovators coming together to push the boundaries of what fine-tuning can do, making a powerful difference for everyday organizations.  A Community Making a Big Impact: Customer Stories At Build 2025, we saw firsthand how much the landscape has shifted. Just a year ago, many teams were still relying solely on prompt engineering or retrieval-augmente…  ( 29 min )

  • Open

    New Automation enhancements in AVS Landing Zone for Migration-Ready Infrastructure
    As organizations adopt Azure VMware Solution (AVS) to modernize their infrastructure, the need for repeatable, script-driven deployment patterns becomes increasingly important. Whether you're preparing for a new AVS deployment or enhancing an existing AVS environment, automation can significantly reduce manual effort, improve consistency, and accelerate readiness for workload migration.   AVS Landing Zone already provides a rich repository of Terraform, Azure Resource Manager (ARM), Bicep and PowerShell scripts to automate deployment as well as management operations. For example, we recently added a collection of PowerShell-based automation scripts that simplify the deployment of key AVS components. This article highlights three such scripts that help accelerate AVS migrations: Jumpbox De…  ( 30 min )
  • Open

    Enhancing Code Quality at Scale with AI-Powered Code Reviews
    Microsoft’s AI-powered code review assistant has transformed pull request workflows by automating routine checks, suggesting improvements, and enabling conversational Q&A, leading to faster PR completion, improved code quality, and enhanced developer onboarding. Its seamless integration and customizability have driven widespread adoption within Microsoft The post Enhancing Code Quality at Scale with AI-Powered Code Reviews appeared first on Engineering@Microsoft.  ( 27 min )
  • Open

    Announcing Full Cross-Platform Support for the mssql-python Driver
    After the successful release of Public Preview of mssql-python driver, we’re thrilled to announce a major milestone for the mssql-python driver: full support for all three major operating systems—Windows, macOS, and Linux. This release marks a significant leap forward in our mission to provide seamless, performant, and Pythonic connectivity to Microsoft SQL Server and the […] The post Announcing Full Cross-Platform Support for the mssql-python Driver appeared first on Microsoft for Python Developers Blog.  ( 24 min )
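    For flavor, here is a hedged sketch of connecting with the driver, assuming the DB-API-style connect() shown in the preview announcement; the connection string values are placeholders.
```python
# Hedged sketch: a simple query with the mssql-python driver, assuming a DB-API-style connect().
from mssql_python import connect

conn = connect(
    "Server=tcp:<your-server>.database.windows.net,1433;"
    "Database=<your-db>;Encrypt=yes;TrustServerCertificate=no;"
    "Authentication=ActiveDirectoryInteractive;"  # placeholder auth mode
)
cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")
for row in cursor.fetchall():
    print(row)
conn.close()
```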
  • Open

    Ingest Logs using Logstash into Real-Time Intelligence
    Coauthor: Ramachandran G, RTI Engineering Manager, Microsoft. In today’s data-driven world, organizations rely heavily on real-time insights to monitor systems, detect anomalies, and make informed decisions. One of the key challenges in achieving this is efficiently ingesting and transforming log data from diverse sources into a format that can be analysed instantly. Real-Time Intelligence Real-time … Continue reading “Ingest Logs using Logstash into Real-Time Intelligence”  ( 7 min )
    Announcing Cosmos DB in Microsoft Fabric Featuring New Capabilities! (Preview)
    The Preview of Cosmos DB in Microsoft Fabric is now available to all users. Following its initial announcement at Microsoft Build 2025, several new capabilities have been added to improve data workflows. With this release, you can seamlessly access and analyze your operational data across the Fabric ecosystem. Leverage Real-Time Intelligence, Copilot-powered Power BI, and … Continue reading “Announcing Cosmos DB in Microsoft Fabric Featuring New Capabilities! (Preview)”  ( 8 min )
    Access Amazon S3 Shortcuts Securely and Seamlessly with Microsoft Entra Service Principals (Preview)
    Microsoft Fabric now offers a preview of support for Microsoft Entra service principals when using Amazon S3 Shortcuts. This feature allows the use of Entra service principals to securely access S3 buckets without the need for long-term AWS access keys. Previously, S3 shortcuts required access keys. With this update, organizations can authorize access using Microsoft … Continue reading “Access Amazon S3 Shortcuts Securely and Seamlessly with Microsoft Entra Service Principals (Preview)”  ( 6 min )
  • Open

    Building Real-Time AI Apps with Model Context Protocol (MCP) and Azure Web PubSub
    Overview Model Context Protocol (MCP) is an open, standardized protocol that allows Large Language Models (LLMs) to interact with external tools and data sources in a consistent, extensible way. It gives models access to capabilities beyond their training data—such as calling APIs or querying databases. MCP builds on the idea of function calling, which lets LLMs invoke external operations by generating JSON payloads that match predefined schemas. While function calling was initially tied to proprietary formats (like OpenAI’s schemas), MCP unifies these patterns under a common JSON-RPC-based protocol. This makes it easier for developers to write tools once and expose them to many LLMs, regardless of vendor. In short: Function calling gave LLMs actions. MCP gives those actions structure, di…  ( 41 min )
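    To make the JSON-RPC framing concrete, here is a minimal sketch of what an MCP-style tool invocation could look like on the wire. The method name follows the MCP specification, while the tool name and arguments are hypothetical placeholders.

    ```python
    import json

    # Minimal sketch of an MCP tool call framed as JSON-RPC 2.0.
    # "tools/call" is the MCP method; the tool name and arguments are hypothetical.
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "get_weather",              # hypothetical tool exposed by a server
            "arguments": {"city": "Seattle"},   # arguments matching that tool's input schema
        },
    }

    print(json.dumps(request, indent=2))
    ```

    Because every server speaks this same envelope, a client can discover tools with `tools/list` and invoke any of them without vendor-specific glue code.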

  • Open

    AI Agent MCP Tools: QuickStart to MCP Tools Development with Azure AI Foundry SDK
    As AI agents become more sophisticated, the need for seamless integration with powerful cloud-based tools grows. The Azure AI Foundry SDK and MCP (Model Context Protocol) tools create a dynamic duo that empowers developers to build, deploy, and manage intelligent agents with ease. Solution Overview The AI-Foundry-Agent-MCP GitHub repository provides a hands-on solution for integrating MCP tools with the Azure AI Foundry SDK. This setup allows developers to: Access and deploy state-of-the-art models from Azure AI Foundry. Use MCP tools to manage model context, knowledge bases, and evaluation pipelines. Rapidly prototype and scale AI agent solutions in a cloud-native environment. Getting Started The repo includes a step-by-step guide to get your environment up and running: Clone th…  ( 23 min )
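    For a feel of the Azure AI Foundry SDK side of this pairing, here is a minimal sketch of creating an agent with the azure-ai-projects Python package. The model deployment name, environment variable, and instructions are assumptions for illustration; the repository's own guide remains the authoritative setup path.

    ```python
    import os

    from azure.ai.projects import AIProjectClient
    from azure.identity import DefaultAzureCredential

    # Assumes azure-ai-projects and azure-identity are installed and that
    # PROJECT_CONNECTION_STRING (illustrative name) points at an AI Foundry project.
    project = AIProjectClient.from_connection_string(
        credential=DefaultAzureCredential(),
        conn_str=os.environ["PROJECT_CONNECTION_STRING"],
    )

    # Create a simple agent; "gpt-4o" is an assumed model deployment name.
    agent = project.agents.create_agent(
        model="gpt-4o",
        name="mcp-quickstart-agent",
        instructions="You are a helpful assistant that can call MCP-backed tools.",
    )
    print(f"Created agent {agent.id}")
    ```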

  • Open

    Python in Visual Studio Code – July 2025 Release
    The July 2025 release includes TBA and more! The post Python in Visual Studio Code – July 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 23 min )

  • Open

    ML Model Scoring in Fabric Eventhouse via Update Policy
    In this blog post, we will describe how to train an ML model in Fabric Spark notebook, save it in Fabric’s models registry and use it for scoring new streaming data by Fabric Eventhouse via update policy in real time. We describe a typical workflow that can be implemented to monitor cloud resources or IoT … Continue reading “ML Model Scoring in Fabric Eventhouse via Update Policy”  ( 11 min )
    New in OneLake: Access your Delta Lake tables as Iceberg automatically (Preview)
    Effortlessly read Delta Lake tables using Apache Iceberg readers Microsoft Fabric is a unified, SaaS data and analytics platform designed for the era of AI. All workloads in Microsoft Fabric use Delta Lake as the standard, open-source table format. With Microsoft OneLake, Fabric’s unified SaaS data lake, customers can unify their data estate across multiple … Continue reading “New in OneLake: Access your Delta Lake tables as Iceberg automatically (Preview)”  ( 6 min )
    From Clicks to Code: SQL Operator under Fabric Eventstream (Preview)
    A new feature has been added to Eventstream—the SQL Operator—which enables real-time data transformation within the platform. Whether you’re filtering, aggregating, or joining data streams, or handling complex data transformation needs like conditional logic, nested expression, string manipulation etc. SQL Operator gives you the flexibility and control to craft custom transformations using the language you … Continue reading “From Clicks to Code: SQL Operator under Fabric Eventstream (Preview)”  ( 7 min )
    What’s new with SAP connectivity in Microsoft Fabric – July 2025
    For enterprise customers, SAP represents one of the most valuable data sources. Integration of SAP data with other non-SAP data estates is a frequent requirement of Microsoft Fabric customers. Consequently, we are continually working to expand the options for SAP data integration to ensure that every customer has a viable solution tailored to their specific … Continue reading “What’s new with SAP connectivity in Microsoft Fabric – July 2025”  ( 9 min )
    Connecting AI Agents to Microsoft Fabric with GraphQL and the Model Context Protocol (MCP)
    Have you ever wondered how to give AI assistants access to your organization’s data in a clean, structured way? The Model Context Protocol (MCP) is an open standard that creates a bridge between large language models and external tools, APIs, and data sources. Think of it as a universal translator that lets AI agents understand … Continue reading “Connecting AI Agents to Microsoft Fabric with GraphQL and the Model Context Protocol (MCP)”  ( 8 min )
  • Open

    Preparing for Your Organization’s AI Workloads – Student Learning Pathways
    This structured plan (Plans | Microsoft Learn) helps students: Build foundational knowledge of AI in the cloud. Learn how enterprise-level infrastructure supports responsible, scalable AI deployments. Explore governance and monitoring strategies to ensure security and compliance. And the best part? It’s built using Microsoft’s existing training resources plus some brand-new modules to give you an edge. Your AI Readiness Journey on Azure 🎯 Milestone 1: Getting Started with AI on Azure https://learn.microsoft.com/training/paths/introduction-to-ai-on-azure/ Begin with the basics—from machine learning concepts to practical uses of Azure AI services. 🛡️ Milestone 2: Infrastructure Essentials https://learn.microsoft.com/training/paths/manage-iam-for-ai-workloads-on-azure/ https://learn.micro…  ( 20 min )
    Curious About Model Context Protocol? Dive into MCP with Us!
    Global Workshops for All Skill Levels We’re hosting a series of free online workshops to introduce you to MCP—available in multiple languages and programming languages! You’ll get hands-on experience building your first MCP server, guided by friendly experts ready to answer your questions. Register now: https://aka.ms/letslearnmcp Who Should Join? This workshop is built for: Students exploring tech careers Beginner devs eager to learn how AI agents and MCP work Curious coders and seasoned pros alike If you’ve got some code curiosity and a laptop, you’re good to go. Workshop Schedule (English Sessions): July 9: C# (Join Here); July 15: Java (Join Here); July 16: Python (Join Here); July 17: C# + Visual Studio (Join Here); July 21: TypeScript (Join Here). Multilingual Sessions We’re…  ( 21 min )
    Multi-Agent Systems and MCP Tools Integration with Azure AI Foundry
    The Power of Connected Agents: Building Multi-Agent Systems Imagine trying to build an AI system that can handle complex workflows like managing support tickets, analyzing data from multiple sources, or providing comprehensive recommendations. Sounds challenging, right? That's where multi-agent systems come in! The Develop a multi-agent solution with Azure AI Foundry Agent Services module introduces you to the concept of connected agents, a game-changing approach that allows you to break down complex tasks into specialized roles handled by different AI agents. Why Connected Agents Matter As a student developer, you might wonder why you'd need multiple agents when a single agent can handle many tasks. Here's why this approach is transformative: 1. Simplified Complexity: Instead of building …  ( 29 min )
    S2:E4 Understanding AI Developer Experiences with Leo Yao
    This week in Model Mondays, we put the spotlight on the AI Toolkit for Visual Studio Code - and explore the tools and workflows that make building generative AI apps and agents easier for developers. Read on for my recap. This post was generated with AI help and human revision & review. To learn more about our motivation and workflows, please refer to this document in our website. About Model Mondays Model Mondays is a weekly series designed to help you grow your Azure AI Foundry Model IQ step by step. Each week includes: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from the Monday livestream If you're lo…  ( 28 min )
  • Open

    Collaborative Function App Development Using Repo Branches
    In this example, I demonstrate a Windows-based Function App using PowerShell, with deployment via Azure DevOps (ADO) and a Bicep template. Local development is done in VSCode.   Scenario: Your Function App project resides in a shared repository maintained by a team. Each developer works on a separate branch. Whenever a branch is updated, the Function App is deployed to a slot named after that branch. If the slot doesn't exist, it will be automatically created.   How to use it: Create a Function App You can create a Function App using any method of your choice.   Prepare a corresponding repo in Azure DevOps Set up your repo structure for the Function App source code.   Create Function App code using the VSCode wizard In this example, we use PowerShell and create an anonymous H…  ( 25 min )
  • Open

    Customize GitHub Copilot in JetBrains with Custom Instructions
    Today, you can use Custom Instructions in JetBrains to speed up development while staying aligned with your team’s coding standards and personal preferences. Whether you’re working on a solo project or part of a larger team, adhering to consistent coding standards is essential. Custom Instructions provide a mechanism to reinforce team coding guidelines or embed […] The post Customize GitHub Copilot in JetBrains with Custom Instructions appeared first on Microsoft for Java Developers.  ( 22 min )

  • Open

    July Patches for Azure DevOps Server
    Today we are releasing patches that impact the latest version of our self-hosted product, Azure DevOps Server. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps Server download page. […] The post July Patches for Azure DevOps Server appeared first on Azure DevOps Blog.  ( 21 min )
  • Open

    How SharePoint Embedded works and how to build AI apps on it
    SharePoint Embedded is a fully managed, cloud-based, API-only document management system that lets you securely integrate your custom web or mobile apps, whether built on Azure or other clouds, with Microsoft 365 file storage. It’s especially ideal for ISVs building multi-tenant apps because content stays within each customer’s Microsoft 365 tenant.    Design apps that include Microsoft 365 Copilot and agent capabilities, connected Office experiences like Word, and Microsoft Purview compliance and data protection, all within your own user experience. Use built-in retrieval augmented generation (RAG) or bring your own models to create intelligent, secure solutions that reason over your business content, support real-time co-authoring, and scale with granular permissions and storage control.…  ( 39 min )
  • Open

    Fabric Workspace Identity: Removing Default Contributor Access for Workspace Identity
    A Fabric workspace identity is an automatically managed service principal that can be associated with a Fabric workspace. Fabric workspaces with a workspace identity can securely read or write to firewall-enabled Azure Data Lake Storage Gen2 accounts through trusted workspace access for OneLake shortcuts. Fabric items can use the identity when connecting to resources that support Microsoft … Continue reading “Fabric Workspace Identity: Removing Default Contributor Access for Workspace Identity”  ( 7 min )
  • Open

    Rehydrating Archived Blobs via Storage Task Actions
    Azure Storage Actions is a fully managed platform designed to automate data management tasks for Azure Blob Storage and Azure Data Lake Storage. You can use it to perform common data operations on millions of objects across multiple storage accounts without provisioning extra compute capacity and without requiring you to write code. Storage task actions can be used to rehydrate the archived blobs in any tier as required. Please note that there is currently no option to set the rehydration priority; it defaults to Standard. Note: Azure Storage Actions are generally available in the following public regions: https://learn.microsoft.com/en-us/azure/storage-actions/overview#supported-regions Azure Storage Actions is currently in PREVIEW in the following regions. Please refer: https://learn.mic…  ( 22 min )
  • Open

    Improved Node.js Deployment Performance on Azure App Service
    We’ve made significant improvements to how Node.js applications are deployed on Azure App Service — with deployment times now up to 8× faster in some cases.  ( 2 min )

  • Open

    Zero-Trust Agents: Adding Identity and Access to Multi-Agent Workflows
    Executive Summary AI agents need identity and trust just like humans. In this article, we demonstrate a zero-trust approach to autonomous AI agents by integrating identity and access management into an enterprise agentic workflow. Using a hotel booking scenario, we show how each AI agent has an identity and is issued an access token for every action. This ensures no implicit trust between entities – every interaction is authenticated and authorized, following the principle of “never trust, always verify”[1]. The solution marries Asgardeo by WSO2 (for identity and access management) with Microsoft’s Azure OpenAI service on Azure AI Foundry and Autogen (for powerful generative AI models and multi-agent orchestration). The result is a working example of autonomous agents that carry digital ID tokens, ena…  ( 59 min )
    June Fine-Tuning Updates: Preference Alignment, Global Training, and More!
    With all our big announcements at Build, you might think we'd kick back and take a break for a few weeks... but fine tuning never stops! We're wrapping up June with a release of direct preference optimization for the 4.1 family of models, fine tuning available in more regions than ever before, and Responses API support for fine-tuned models.  GPT-4.1, GPT-4.1-mini support Direct Preference Optimization (DPO) 😍 GPT-4.1 and GPT-4.1-mini now support Direct Preference Optimization (DPO). DPO is a finetuning technique that adjusts model weights based on human preferences. You provide a prompt along with two responses: a preferred and non-preferred example; using this data, you can align a fine-tuned model to match your own style, preferences, or safety requirements.   Unlike Reinforcement Lea…  ( 23 min )
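    To make the preference data concrete, here is a small illustrative sketch of how one preferred/non-preferred training pair could be assembled as a JSON Lines record. The field names mirror the published preference fine-tuning format at the time of writing, but treat them as an assumption and confirm against the current Azure OpenAI fine-tuning documentation before uploading data.

    ```python
    import json

    # One illustrative DPO training example: a prompt plus a preferred and a
    # non-preferred assistant response. Field names are assumptions based on the
    # published preference format; verify against the current documentation.
    example = {
        "input": {
            "messages": [
                {"role": "user", "content": "Summarize our refund policy in one sentence."}
            ]
        },
        "preferred_output": [
            {"role": "assistant", "content": "Refunds are issued within 14 days of purchase with proof of receipt."}
        ],
        "non_preferred_output": [
            {"role": "assistant", "content": "Refunds are sometimes possible, it depends."}
        ],
    }

    # Training files are uploaded as JSON Lines: one example object per line.
    with open("dpo_train.jsonl", "w", encoding="utf-8") as f:
        f.write(json.dumps(example) + "\n")
    ```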
    What If You Could Cut AI Costs by 60% Without Losing Quality?
    That’s the promise behind the new pricing model for Azure AI Content Understanding. We’ve restructured how you pay for document, audio, and video analysis—moving from rigid field-based pricing to a flexible, token-based system that lets you pay only for what you use. Whether you're extracting layout from documents or identifying actions in a video, the new pricing structure delivers up to 60% cost savings for many typical tasks and more control over your spend. Why We’re Moving to Token-Based Pricing Field-based pricing was easy to understand, but it didn’t reflect the real work being done. Some fields are simple. Others require deep reasoning, cross-referencing, and contextual understanding. So we asked: What if pricing scaled with complexity? Enter tokens. Tokens are the atomic units of …  ( 35 min )
  • Open

    Markdown Support Arrives for Work Items
    After several months in private preview and many bug fixes along the way, we’re excited to announce that Markdown support in large text fields is now generally available! 🎉 🦄 How it works By default, all existing and new work items will continue using the HTML editor for large text fields. However, you now have […] The post Markdown Support Arrives for Work Items appeared first on Azure DevOps Blog.  ( 24 min )

  • Open

    Optimizing memory usage in large language models fine-tuning with KAITO: Best practices from Phi-3
    The Cloud Native team at Azure is working to make AI on Kubernetes more cost-effective and approachable for a broader range of users.  The post Optimizing memory usage in large language models fine-tuning with KAITO: Best practices from Phi-3  appeared first on Microsoft Open Source Blog.  ( 15 min )
  • Open

    AI Repo of the Week: Generative AI for Beginners with JavaScript
    Introduction Ready to explore the fascinating world of Generative AI using your JavaScript skills? This week’s featured repository, Generative AI for Beginners with JavaScript, is your launchpad into the future of application development. Whether you're just starting out or looking to expand your AI toolbox, this open-source GitHub resource offers a rich, hands-on journey. It includes interactive lessons, quizzes, and even time-travel storytelling featuring historical legends like Leonardo da Vinci and Ada Lovelace. Each chapter combines narrative-driven learning with practical exercises, helping you understand foundational AI concepts and apply them directly in code. It’s immersive, educational, and genuinely fun. What You'll Learn 1. 🧠 Foundations of Generative AI and LLMs Start with t…  ( 23 min )

  • Open

    Smart AI Integration with the Model Context Protocol (MCP) ... part 4
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Link part 1 Part 2 - What MCP is, including its architecture and core components Link part 2 Part 3 - A demo of MCP — including how to configure it so anyone can run it Link part 3 Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Part 4 - MCP Development In this section we shall discuss creating an MCP Server - specifically, the Colour MCP server that was covered in the previous section, and used in the demo. MCP SDK To help build MCP servers with minimal effort, there are Software Development Kits (SDKs) for the following languages: C# Java Kotlin Python Ruby Swift TypeScript Links to the different SDKs are at https://modelcontex…  ( 26 min )
    Model Context Protocol (MCP) - Why What How ... part 4
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Part 2 - What MCP is, including its architecture and core components Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Part 4 - MCP Development In this section we shall discuss creating an MCP Server - specifically, the Colour MCP server that was covered in the previous section, and used in the demo. MCP SDK To help build MCP servers with minimal effort, there are Software Development Kits (SDKs) for the following languages: C# Java Kotlin Python Ruby Swift TypeScript Links to the different SDKs are at https://modelcontextprotocol.io/ Colors MCP The source …  ( 26 min )
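    As a rough illustration of how small such a server can be with the Python SDK, here is a minimal sketch in the spirit of the Colors server described in the series; the package import is from the official MCP Python SDK, while the server name, tool, and color table are assumptions for illustration, not the author's actual source.

    ```python
    # Minimal MCP server sketch using the official Python SDK (pip install mcp).
    # The server name, tool, and color table below are illustrative only.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("colors")

    @mcp.tool()
    def get_color_hex(name: str) -> str:
        """Return the hex code for a named color, or 'unknown' if not recognized."""
        colors = {"red": "#FF0000", "green": "#00FF00", "blue": "#0000FF"}
        return colors.get(name.lower(), "unknown")

    if __name__ == "__main__":
        # Runs over stdio by default, which is how clients such as
        # GitHub Copilot Agent Mode in VS Code typically launch local servers.
        mcp.run()
    ```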
    Smart AI integration with the Model Context Protocol (MCP) ... part 3
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Link part 1 Part 2 - What MCP is, including its architecture and core components Link part 2 Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Link part 4 Part 3 - MCP Demo This section shows the use of several MCP tools - specifically, these will provide knowledge about Colors, Timer, Prices and Azure Resources. The demo will use Visual Studio Code and GitHub Copilot Agent Mode. Demo video Configuration The remainder of this section explains how to configure and run the demo. The project can be cloned from https://github.com/markharrisonorg/mcpdemo. Alternative…  ( 26 min )
    Model Context Protocol (MCP) - Why What How ... part 3
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Part 2 - What MCP is, including its architecture and core components Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Part 3 - MCP Demo This section shows the use of several MCP tools - specifically, these will provide knowledge about Colors, Timer, Prices and Azure Resources. The demo will use Visual Studio Code and GitHub Copilot Agent Mode. Demo video Configuration The remainder of this section explains how to configure and run the demo. The project can be cloned from https://github.com/markharrisonorg/mcpdemo. Alternatively: Create an empty folder and open wi…  ( 25 min )
    Smart AI integration with the Model Context Protocol (MCP) ... part 2
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Link part 1 Part 2 - What MCP is, including its architecture and core components Part 3 - A demo of MCP — including how to configure it so anyone can run it  Link part 3 Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Link part 4 Part 2 - What is MCP? The Model Context Protocol (MCP) is an open standard designed to enable intelligent AI applications to seamlessly communicate and interoperate with a wide range of external tools and services. MCP abstracts away integration complexities by defining a unified, extensible communication layer that supports context-aware interactions. https://modelcontextprotocol.io/ Overview of MCP Archit…  ( 21 min )
    Model Context Protocol (MCP) - Why What How ... part 2
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Part 2 - What MCP is, including its architecture and core components Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Part 2 - What is MCP? The Model Context Protocol (MCP) is an open standard designed to enable intelligent AI applications to seamlessly communicate and interoperate with a wide range of external tools and services. MCP abstracts away integration complexities by defining a unified, extensible communication layer that supports context-aware interactions. https://modelcontextprotocol.io/ Overview of MCP Architecture MCP is structured around a modul…  ( 20 min )
    Smart AI integration with the Model Context Protocol (MCP) ... part 1
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Part 2 - What MCP is, including its architecture and core components Link part 2 Part 3 - A demo of MCP — including how to configure it so anyone can run it Link Part 3 Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Link part 4 Part 1 - Why MCP? To understand why the Model Context Protocol (MCP) is needed, let's start by looking at the patterns that modern AI Agents rely on. Common AI Patterns Chat The Chat pattern is a conversational interface between an Agent and an AI Endpoint, typically using natural language, though it can also involve structured inputs or metadata depending on the use case. The first message sent to an…  ( 31 min )
    Model Context Protocol (MCP) - Why What How ... part 1
    A discussion on Model Context Protocol: Part 1 - Why is there a need for MCP Part 2 - What MCP is, including its architecture and core components Part 3 - A demo of MCP — including how to configure it so anyone can run it Part 4 - An example of how to develop an MCP server — a potential starter project for connecting to your own knowledge resources Part 1 - Why MCP? To understand why MCP is needed, let's start by looking at the patterns that modern AI Agents rely on. Common AI Patterns Chat The Chat pattern is a conversational interface between an Agent and an AI Endpoint, typically using natural language, though it can also involve structured inputs or metadata depending on the use case. The first message sent to an AI endpoint includes a system prompt, which defines the AI's behav…  ( 31 min )
  • Open

    S2:E3 Understanding SLMs and Reasoning with Mojan Javaheripi
    This week in Model Mondays, we focus on Small Language Models (SLMs) and Reasoning — and learn how reasoning models leverage inference-time scaling to execute complex tasks, but how can we use these in resource-constrained devices? Read on for my recap of Mojan Javaheripi's insights on Phi-4 reasoning models that are redefining small language models (SLM) for the agentic era of apps. About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here's how it works: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream If you want…  ( 29 min )

  • Open

    Automate a multi-step business process, using turnkey MCP, Logic App Integration in AI Foundry
    This article walks you through an application for Procure-to-Pay anomaly detection using Azure AI Foundry Agent Service. It analyzes purchase invoice images to detect procurement anomalies and compliance issues. The key capabilities of Azure AI Foundry it showcases are: Agent Service that performs a multi-step business process with only natural language instructions. The Application has little to no business logic in the code. Invocation of a variety of automated tool actions required during the multi steps business process. It showcases the recently announced turnkey integration with MCP Server, Azure Logic Apps. In addition, it uses a) visual reasoning of images using gpt-4o models, b) turnkey vector search, and c) reasoning to apply business rules and perform p2p anomaly detection. The…  ( 45 min )
    Understanding the ICL impact
    .  ( 16 min )

  • Open

    Integrating Azure Monitor in Azure Batch to monitor Batch Pool nodes performance
    In Azure Batch, to monitor node performance metrics such as CPU or disk usage, users are required to use Azure Monitor. The Azure Monitor service collects and aggregates metrics and logs from every component of the node. Azure Monitor provides you with a view of availability, performance, and resilience. When you create an Azure Batch pool, you can install any of the following monitoring-related extensions on the compute nodes to collect and analyse data. Previously, users leveraged Batch Insights to get system statistics for Azure Batch account nodes, but it is now deprecated and no longer supported. The Log Analytics agent virtual machine (VM) extension installs the Log Analytics agent on Azure VMs and enrols VMs into an existing Log Analytics workspace. The Log Analytics agent is on a depr…  ( 31 min )
  • Open

    Modernizing Enterprise IT & Knowledge Support with Azure-Native Multiagent AI and LangGraph.
    Industry: Energy Location: North America Executive Summary: AI-Driven Multi-Agent Knowledge and IT Support Solution for an Energy Industry Firm A North American energy company sought to modernize its legacy knowledge and IT support chatbot, which was underperforming across key metrics. The existing system, built on static rules and scripts, delivered slow and often inaccurate responses, failing to meet the organization’s standards for employee engagement and operational efficiency. To address this challenge, we proposed and designed a cloud-native, AI-powered multi-agent system hosted on Microsoft Azure. Built on the LangGraph orchestration framework and Azure AI Foundry, this solution integrates advanced AI agent hierarchies, allowing for contextual, domain-specific knowledge retrieval an…  ( 22 min )
  • Open

    Testing Modern AI Systems: From Rule-Based Systems to Deep Learning and Large Language Models
    1. Introduction 1.1 Evolution from Expert Systems to Modern AI The transition from rule-based expert systems to modern AI represents one of the most significant paradigm shifts in computer science[1]. Where the original 1992 paper by Kiper focused on testing deterministic rule-based systems with clear logical pathways, today's AI systems operate through complex neural architectures that process information in fundamentally different ways[2]. Modern AI systems, particularly deep neural networks and transformer models, exhibit emergent behaviors that cannot be easily traced through simple logical paths[3]. Traditional expert systems operated on explicit if-then rules that could be mapped to logical path graphs, making structural testing relatively straightforward[4]. Contemporary AI systems,…  ( 65 min )
  • Open

    Announcing App Service Outbound IPv6 Support in Public Preview
    We are excited to announce the public preview of IPv6 outbound support in App Service for Windows! Public preview of outbound IPv6 support for Windows multi-tenant apps is supported on all App Service plan SKUs, Functions Consumption, Functions Elastic Premium, and Logic Apps Standard running Windows. This is the next announcement in our series of IPv6 related feature work on App Service. Public preview of Inbound IPv6 Support on App Service multi-tenant This announcement: IPv6 (dual-stack) non-vnet outbound support (multi-tenant) Backlog - IPv6 vnet outbound support (multi-tenant and App Service Environment v3) Backlog - IPv6 vnet inbound support (App Service Environment v3 - both internal and external) Limitations Linux sites are NOT supported at this time but will be available in the…  ( 22 min )

  • Open

    Let's Learn - MCP Events: A Beginner's Guide to the Model Context Protocol
    The Model Context Protocol (MCP) has rapidly become the industry standard for connecting AI agents to a wide range of external tools and services in a consistent way. In a matter of months, this protocol has become a hot topic in developer events and forums and has been implemented by companies large and small. With such rapid change comes the need for training and upskilling to meet the moment! That's why we're planning a series of virtual training events across different languages (both natural and programming) to introduce you to MCP. ⭐ Register: https://aka.ms/letslearnmcp 👩‍💻 Who Should Join? Whether you're a beginner developer, a university student, or a seasoned tech professional, this workshop was designed with you in mind. At each event, experts will guide you through an exciti…  ( 22 min )
  • Open

    Building Safe, AI-Powered Data Resilience with Veeam and Azure AI Foundry
    Veeam Data Cloud (VDC) provides cloud-native, SaaS-based data resilience for mission-critical environments including Microsoft 365, Entra ID, Azure, and Salesforce. As enterprise data protection needs evolve, so does the intelligence behind it. With Veeam Intelligence, VDC is leveraging the power of generative AI that helps users get the most value from VDC services using an advanced AI assistant, with the vision to deliver real-time insights through backup telemetry in the future. To build an assistant that delivers accurate, reliable, and safe responses, the VDC team partnered with Azure AI Foundry, Microsoft’s AI application development platform, which provides built-in responsible AI evaluation and testing at scale. From Backup to Breakthrough: Enabling Smart, Safe AI Assistance Veeam…  ( 24 min )

  • Open

    Azure AI Voice Live API: what’s new and the pricing announcement
    At the //Build conference in May 2025, we announced the public preview of Azure AI Voice Live API (Breakout Session 144). Today we are excited to share some updates to this API and the latest pricing. Recap: What is the Voice Live API and why does it matter? Voice is the next generation interface between humans and computers. In the era of voice-driven technologies, creating smooth and intuitive speech-based systems has become a priority for developers. The Voice Live API simplifies the process by combining essential voice processing components into a unified interface. Whether you're building conversational agents for customer support, automotive assistants, or educational tools, this API is designed to streamline workflows, reduce latency, and deliver high-quality, real-time voice inter…  ( 38 min )
  • Open

    Simplifying Data Ingestion with Copy job – Incremental Copy GA, Lakehouse Upserts, and New Connectors
    Copy job has been a go-to tool for simplified data ingestion in Microsoft Fabric, offering a seamless data movement experience from any source to any destination. Whether you need batch or incremental copying, it provides the flexibility to meet diverse data needs while maintaining a simple and intuitive workflow. We continuously refine Copy job based … Continue reading “Simplifying Data Ingestion with Copy job – Incremental Copy GA, Lakehouse Upserts, and New Connectors”  ( 7 min )
  • Open

    What’s New for Java Developers in Azure Cosmos DB (NoSQL)?
    The Java ecosystem around Azure Cosmos DB has quietly grown a lot stronger in the first half of 2024. And the updates are especially exciting if you’re working on scalable apps or exploring GenAI workloads. From new support for vector search and indexing, to better throughput management and improved integrations with Apache Kafka, Spring Data, […] The post What’s New for Java Developers in Azure Cosmos DB (NoSQL)? appeared first on Microsoft for Java Developers.  ( 22 min )

  • Open

    Expanding platform engineering capabilities with Radius Resource Types
    Now, with Radius Resource Types, platform engineers can define resource types specific to their organizations. The post Expanding platform engineering capabilities with Radius Resource Types appeared first on Microsoft Open Source Blog.  ( 13 min )
  • Open

    Secure your AI apps with user-context-aware controls | Microsoft Purview SDK
    With built-in protections, prevent data leaks, block unsafe prompts, and avoid oversharing without rewriting your app. As a developer, focus on innovation while meeting evolving security and compliance requirements. And as a security admin, gain full visibility into AI data interactions, user activity, and policy enforcement across environments.     Shilpa Ranganathan, Microsoft Purview Principal GPM, shares how new SDKs and Azure AI Foundry integrations bring enterprise-grade security to custom AI apps. Stop data leaks.     Detect and block sensitive content in real-time with Microsoft Purview. Get started. Adapt AI security based on user roles.     Block or allow access without changing your code. See it here. Prevent oversharing with built-in data protections.     Only authorized us…  ( 40 min )

  • Open

    Microsoft 365 Copilot APIs: Unlocking enterprise knowledge for AI with the Retrieval API — Now in Public Preview
    Read how the Retrieval API gives developers a secure, compliant and scalable way to integrate enterprise content into their AI workflows. The post Microsoft 365 Copilot APIs: Unlocking enterprise knowledge for AI with the Retrieval API — Now in Public Preview appeared first on Microsoft 365 Developer Blog.  ( 26 min )

  • Open

    Model Mondays S2:E2 - Understanding Model Context Protocol (MCP)
    This week in Model Mondays, we focus on the Model Context Protocol (MCP) — and learn how to securely connect AI models to real-world tools and services using MCP, Azure AI Foundry, and industry-standard authorization. Read on for my recap   About Model Mondays Model Mondays is a weekly series designed to help you build your Azure AI Foundry Model IQ step by step. Here’s how it works: 5-Minute Highlights – Quick news and updates about Azure AI models and tools on Monday 15-Minute Spotlight – Deep dive into a key model, protocol, or feature on Monday 30-Minute AMA on Friday – Live Q&A with subject matter experts from Monday livestream If you want to grow your skills with the latest in AI model development, Model Mondays is the place to start. Want to follow along? Register Here - to watc…  ( 31 min )
    Deploy Machine Learning Models the Smart Way with Azure Blob & Web App
    💡 Why This Approach? Traditional deployments often include models inside the app, leading to: Large container sizes Long build times Slow cold starts Painful updates when models change With Azure Blob Storage, you can offload the model and only fetch it at runtime — reducing size, improving flexibility, and enabling easier updates. What You will Need An ML model (model.pkl, model.pt, etc.) An Azure Blob Storage account A Python web app (FastAPI, Flask, or Streamlit) Azure Web App (App Service for Python) Azure Python SDK: azure-storage-blob Step 1: Save and Upload Your Model to Blob Storage First, save your trained model locally: # PyTorch example import torch torch.save(model.state_dict(), "model.pt") Then, upload it to Azure Blob Storage: from azure.storage.blob import BlobServiceC…  ( 26 min )
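    A self-contained sketch of the same upload-then-download pattern with the azure-storage-blob package is shown below: upload the trained model once, then fetch it at app startup instead of baking it into the image. The connection string variable, container name, and file paths are assumptions for illustration.

    ```python
    import os

    from azure.storage.blob import BlobServiceClient

    # Assumes azure-storage-blob is installed and AZURE_STORAGE_CONNECTION_STRING is set;
    # the "models" container and file names are illustrative.
    service = BlobServiceClient.from_connection_string(os.environ["AZURE_STORAGE_CONNECTION_STRING"])
    blob = service.get_blob_client(container="models", blob="model.pt")

    # 1) Publish step: upload the trained model once.
    with open("model.pt", "rb") as f:
        blob.upload_blob(f, overwrite=True)

    # 2) App startup: download the model at runtime before serving requests.
    local_path = "/tmp/model.pt"
    with open(local_path, "wb") as f:
        f.write(blob.download_blob().readall())
    # Load local_path with torch.load / joblib.load as appropriate for your framework.
    ```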
  • Open

    Using Azure API Management as a proxy for Application Insights Telemetry
    Introduction Organizations enforcing Entra ID authentication on their Application Insights resources often face a sudden problem: browser-based telemetry stops flowing. This happens when local authentication is disabled — a necessary step to enforce strict identity controls — but sending data from browser environments comes with inherent security challenges, and the Application Insights JavaScript SDK is no exception. As a result, telemetry from web clients is silently dropped, leaving gaps in monitoring and frustrated teams wondering how to re-enable secure telemetry ingestion. This article provides a solution: using Azure API Management (APIM) as a secure proxy that authenticates telemetry using a managed identity before forwarding it to Application Insights. This pattern restores observ…  ( 34 min )
  • Open

    Anthropic Claude Sonnet 4 and Opus 4 Now Available in GitHub Copilot for JetBrains and Eclipse
    Anthropic Claude Sonnet 4 and Claude Opus 4 are now available in GitHub Copilot Chat for JetBrains IDEs and Eclipse. Model Availability✨ Claude Sonnet 4: Available for users on Pro, Pro+, Business, and Enterprise plans. Claude Opus 4: Available for users on Pro+ and Enterprise plans. For full details, please see this documentation. How to Get Started🚀 JetBrains IDEs: Click the GitHub Copilot icon –> Open GitHub Copilot Chat-> […] The post Anthropic Claude Sonnet 4 and Opus 4 Now Available in GitHub Copilot for JetBrains and Eclipse appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Announcing Shortcut Transformations: from files to Delta tables. Always in sync, no pipelines required.
    Shortcut transformations is a new capability in Microsoft Fabric that simplifies the process of converting raw files, starting with .CSV files, into Delta tables. This feature eliminates the need for traditional ETL pipelines, enabling users to transform data directly on top of files with minimal setup. Why use Shortcut transformations Shortcut transformations help users: What … Continue reading “Announcing Shortcut Transformations: from files to Delta tables. Always in sync, no pipelines required.”  ( 7 min )
  • Open

    Distributed Databases: Adaptive Optimization with Graph Neural Networks and Causal Inference
    Introduction and Motivation The explosive growth of data-driven applications has pushed distributed database systems to their limits, especially as organizations demand real-time consistency, high availability, and efficient resource utilization across global infrastructures. The CAP theorem—asserting that a distributed system can guarantee at most two out of consistency, availability, and partition tolerance—forces architects to make challenging trade-offs. Traditional distributed databases rely on static policies and heuristics, which cannot adapt to the dynamic nature of modern workloads and evolving data relationships. Recent advances in Graph Neural Networks (GNNs) offer a new paradigm for modeling and optimizing distributed systems. Unlike conventional machine learning, GNNs naturall…  ( 30 min )
  • Open

    Voice Conversion in Azure AI Speech
    We are delighted to announce the availability of the Voice Conversion (VC) feature in Azure AI Speech service, which is currently in preview. What is Voice Conversion? Voice Conversion (or voice changer, speech-to-speech conversion) is the process of transforming the voice characteristics of a given audio to a target voice speaker, and after Voice Conversion, the resulting audio preserves the source audio’s linguistic content and prosody while the voice timbre sounds like the target speaker. Below is a diagram of Voice Conversion. The purpose of Voice Conversion There are 3 reasons users need Voice Conversion functionality: Voice Conversion can replicate your content using a different voice identity while maintaining the original prosody and emotion. For instance, in education, teachers can …  ( 30 min )

  • Open

    Microsoft Intune data-driven management | Device Query & Copilot
    Proactively manage and secure all your devices — whether they run Windows, macOS, iOS, or Android. With cross-platform analytics, multi-device queries, and in-depth troubleshooting tools, you can pinpoint problems fast and take targeted action at scale.  Even without deep technical expertise, you can generate complex queries, identify vulnerabilities, and deploy remediations — all in a few clicks. With built-in Copilot support, daily tasks like policy validation, device comparison, and risk analysis become faster, smarter, and easier to act on.  Jeremy Chapman, Director of Microsoft 365, shares how to stay ahead of potential issues and keep endpoints running smoothly.  Spot and fix performance issues before users contact support.  Use Advanced Analytics in Microsoft Intune. Check it out.…  ( 42 min )
  • Open

    On-premises data gateway June 2025 release
    The June 2025 release of the on-premises data gateway is version 3000.274. What’s New Fabric pipeline – Azure database for PostgreSQL connector version 2.0 is now generally available. This new version is enhanced to support TLS 1.3, new table action – upsert, as well as the script activity in data pipeline. Fabric pipeline – Enhanced … Continue reading “On-premises data gateway June 2025 release”  ( 6 min )
    Announcing new features for Microsoft Fabric Extension in VS Code
    The Microsoft Fabric Extension for VS Code introduces two new features that enhance the management of Fabric items directly within the editor. Users can now perform CRUD operations on Fabric items and switch between multiple tenants easily. These updates aim to improve workflow efficiency and are based on customer feedback, inviting further suggestions for enhancement.  ( 6 min )
  • Open

    Dev Proxy v0.29 with refactored architecture, MCP server, and exposed LM prompts
    Introducing Dev Proxy v0.29, with a major architectural overhaul, control over language model prompts, and improved diagnostics. The post Dev Proxy v0.29 with refactored architecture, MCP server, and exposed LM prompts appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Quest 9: I want to use a ready-made template
    Building robust, scalable AI apps is tough, especially when you want to move fast, follow best practices, and avoid being bogged down by endless setup and configuration. In this quest, you’ll discover how to accelerate your journey from prototype to production by leveraging ready-made templates and modern cloud tools. Say goodbye to decision fatigue and hello to streamlined, industry-approved workflows you can make your own. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🚀 What You’ll Build A fully functional AI application deployed on Azure, customized to solve a real problem that matters to you.  A codebase powered by a producti…  ( 23 min )
  • Open

    Removing Azure Resource Manager reliance on Azure DevOps sign-ins
    Azure DevOps will no longer depend on the Azure Resource Manager (ARM) resource (https://management.azure.com) when you sign in or refresh Microsoft Entra access tokens. Previously, Azure DevOps required the ARM audience during sign-in and token refresh flows. This requirement meant administrators had to allow all Azure DevOps users to bypass ARM-based Conditional Access policies (CAPs) […] The post Removing Azure Resource Manager reliance on Azure DevOps sign-ins appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Customer Managed Keys in OneLake: Strengthening Data Protection and Control
    One of the highly requested features in Microsoft Fabric is now available: the ability to encrypt data in OneLake using your own keys. As organizations face growing data volumes and tighter regulatory expectations, Customer-Managed Keys (CMK) offer a powerful way to enforce enterprise-grade security and ensure strict ownership of encryption keys and access. With Microsoft’s … Continue reading “Customer Managed Keys in OneLake: Strengthening Data Protection and Control”  ( 7 min )
    New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses
    We’re excited to introduce Data Source Instructions, a powerful new feature in the Fabric Data Agent that helps you get more precise, relevant answers from your structured data. What are Data Source instructions? When you use the Data Agent to ask questions in natural language, the agent must determine which data source to use and … Continue reading “New in Fabric Data Agent: Data source instructions for smarter, more accurate AI responses”  ( 6 min )
    Fabric June 2025 Feature Summary
    Welcome to the June 2025 update. The June 2025 Fabric update introduces several key enhancements across multiple areas. Power BI celebrates its 10th anniversary with a range of community events, contests, expert-led sessions, and special certification exam discounts. In Data Engineering, Fabric Notebooks now support integration with variable libraries in preview, empowering users to manage … Continue reading “Fabric June 2025 Feature Summary”  ( 17 min )
  • Open

    Create Stunning AI Videos with Sora on Azure AI Foundry!
    Special credit to Rory Preddy for creating the GitHub resource that enables us to learn more about Azure Sora. Reach out to him on LinkedIn to say thanks. Introduction Artificial Intelligence (AI) is revolutionizing content creation, and video generation is at the forefront of this transformation. OpenAI's Sora, a groundbreaking text-to-video model, allows creators to generate high-quality videos from simple text prompts. When paired with the powerful infrastructure of Azure AI Foundry, you can harness Sora's capabilities with scalability and efficiency, whether on a local machine or a remote setup. In this blog post, I’ll walk you through the process of generating AI videos using Sora on Azure AI Foundry. We’ll cover the setup for both local and remote environments. Requirements: Azure AI …  ( 26 min )
  • Open

    Semantic Kernel Python Gets a Major Vector Store Upgrade
    We’re excited to announce a significant update to Semantic Kernel Python’s vector store implementation. Version 1.34 brings a complete overhaul that makes working with vector data simpler, more intuitive, and more powerful. This update consolidates the API, improves developer experience, and adds new capabilities that streamline AI development workflows. What Makes This Release Special? The […] The post Semantic Kernel Python Gets a Major Vector Store Upgrade appeared first on Semantic Kernel.  ( 25 min )

  • Open

    Introducing Microsoft Purview Alert Triage Agents for Data Loss Prevention & Insider Risk Management
    Surface the highest-risk alerts across your environment, no matter their default severity, and take action. Customize how your agents reason, teach them what matters to your organization, and continuously refine to reduce time-to-resolution.    Talhah Mir, Microsoft Purview Principal GPM, shows how to triage, investigate, and contain potential data risks before they escalate.  Focus on the most high-risk alerts in your queue.  Save time by letting Alert Triage Agents for DLP and IRM surface what matters. Watch how it works. Stay in control.  Tailor triage priorities with your own rules to focus on what really matters. See how to customize your alert triage agent using Security Copilot. View alert triage agent efficiency stats.  Know what your agent is doing and how well it’s performing…  ( 28 min )
  • Open

    OneLake security – updates and news
    It’s been almost 3 months since we announced OneLake security at FabCon 2025 in Las Vegas, and while the interest has not slowed down, we’ve also been working behind the scenes to improve the feature and address your feedback. In this blog post, we’ll go through some of the latest updates on OneLake security including … Continue reading “OneLake security – updates and news”  ( 6 min )
  • Open

    🎉 Now in Public Preview: Create Dev Boxes on Behalf of Your Developers
    We’re excited to announce that one of our most-requested features is officially in Public Preview: You can now create Dev Boxes on behalf of your developers. Waiting around to get started is a thing of the past.  Whether you’re onboarding a new hire, running a hackathon, or setting up for a customer demo, this feature makes it […] The post 🎉 Now in Public Preview: Create Dev Boxes on Behalf of Your Developers  appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Deploying MCP Server Using Azure Container Apps
    Authors: Joel Borellis, Mohamad Al Jazaery, Hwasu Kim, Kira Soderstrom  GitHub Link  This repository is a great starting point. It demonstrates deploying an MCP server with Azure Container Apps and includes three example clients, each using a different agent framework to interact with the server. What’s Inside MCP Server with Sport News Tools: The sample server, built using the fastmcp package, exposes tools like “Get NFL News”. It supports API key authentication and is designed to be easily extended with additional tools or data sources. You can run it locally or deploy it to Azure Container Apps for scalable, cloud-native hosting. Three Client Samples: Each example demonstrates how different agent frameworks can consume tools from the MCP server. All the examples use Azure-OpenAI as a…  ( 25 min )

  • Open

    Deprecation of MS-APP-ACTS-AS header in Shifts Management Microsoft Graph APIs
    In app-only access scenarios, Shifts Management Graph APIs previously required the MS-APP-ACTS-AS: userId header to indicate the user on whose behalf the application was acting. However, this conflicted with the Microsoft Graph permission model where there is no signed-in user for app-only access scenarios. To align Shifts Graph APIs with this model, the MS-APP-ACTS-AS header […] The post Deprecation of MS-APP-ACTS-AS header in Shifts Management Microsoft Graph APIs appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Simplifying Secrets Management in Strapi on Azure App Service
    We’re excited to announce a major enhancement to the deployment experience for Strapi on Azure App Service. Building on the foundation laid out in our  overview, quick start, and FAQ , this update introduces automated and secure secrets management using Azure Key Vault. What’s New? The updated ARM template now provisions an Azure Key Vault instance alongside your Strapi application. This integration enables secure storage of sensitive credentials such as database passwords and Strapi-specific secrets. Here’s what makes this enhancement powerful: Secure by Default: Public access to the Key Vault is disabled out of the box. Instead, private endpoints are configured to ensure secure communication within your virtual network. Auto-Generated Secrets: Strapi secrets are now automatically genera…  ( 24 min )
  • Open

    How to make your SQL scalar user-defined function (UDF) inlineable in Microsoft Fabric Warehouse
    In our previous blog post Inline Scalar user-defined functions in Microsoft Fabric Warehouse (Preview) we announced the availability of SQL native scalar UDFs. We also emphasized the importance of inlining and how that can affect scenarios in which UDFs can be used. In this post, we aim to highlight common patterns that prevent inlining … Continue reading “How to make your SQL scalar user-defined function (UDF) inlineable in Microsoft Fabric Warehouse “  ( 9 min )
    Inline Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse (Preview)
    SQL native Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse and SQL analytics endpoint are now in preview. A scalar UDF is a custom code implemented in T-SQL that accepts parameters, performs an action such as complex calculation, and returns a result of that action as a single value. They can contain local variables, calls … Continue reading “Inline Scalar user-defined functions (UDFs) in Microsoft Fabric Warehouse (Preview)”  ( 8 min )
  • Open

    Quest 7: Create an AI Agent with Tools from an MCP Server
    The JS AI Build-a-thon is in full swing — and we’re turning up the power in Quest 7! If you're just joining us, this is part of an ongoing challenge to help JavaScript and TypeScript developers build AI-powered apps from scratch. Catch up here and join the conversation on Discord.   Last quest, you dipped your toes into agentic development — giving your AI the ability to act and reason. This time, we’re taking it further. In Quest 7, you’ll explore the Model Context Protocol (MCP) — a growing protocol in agentic development that unlocks standardized tool usage in AI agents via an MCP server. Available tools for the OS Patrol agent 🎯 What You’ll Build This quest is all about connecting your AI agent to tools that do real things. You’ll: Create and spin up an MCP server using the MCP TypeScri…  ( 25 min )

  • Open

    💻 Spring Cleaning for Dev Boxes: Mastering Manual & Automatic Offboarding
    Let’s face it. Sometimes your Dev Box just… hangs around too long. Whether you’ve moved to a new project, left the company, or want to create a new dev box with the latest tools, it’s time to clean things up. 🎯 With Dev Box Auto-Deletion now in public preview, offboarding just got a whole lot easier. […] The post 💻 Spring Cleaning for Dev Boxes: Mastering Manual & Automatic Offboarding appeared first on Develop from the cloud.  ( 24 min )

  • Open

    S2E01 Recap: Advanced Reasoning Session
    About Model Mondays Want to know what reasoning models are and how you can build advanced reasoning scenarios like a Deep Research agent using Azure AI Foundry? Check out this recap from Model Mondays Season 2 Ep 1. Model Mondays is a weekly series to help you build your model IQ in three steps: 1. Catch the 5-min Highlights on Monday, to get up to speed on model news. 2. Catch the 15-min Spotlight on Monday, for a deep dive into a model or tool. 3. Catch the 30-min AMA on Friday, for a Q&A session with subject matter experts. Want to follow along? Register Here - to watch upcoming livestreams for Season 2. Visit The Forum - to see the full AMA schedule for Season 2. Register Here - to join the AMA on Friday Jun 20. Spotlight On: Advanced Reasoning This week, the Model Mondays spotlight was on Adv…  ( 31 min )
    Getting Started with the AI Toolkit: A Beginner’s Guide with Demos and Resources
    If you're curious about building AI solutions but don’t know where to start, Microsoft’s AI Toolkit is a great place to begin. Whether you’re a student, developer, or just someone exploring AI for the first time, this toolkit helps you build real-world solutions using Microsoft’s powerful AI services. In this blog, I’ll walk you through what the AI Toolkit is, how you can get started, and where you can find helpful demos and ready-to-use code samples. What is the AI Toolkit? The AI Toolkit is a collection of tools, templates, and sample apps that make it easier to build AI-powered applications and copilots using Microsoft Azure. With the AI Toolkit, you can: Build intelligent apps without needing deep AI expertise. Use templates and guides that show you how everything works. Quickly proto…  ( 24 min )
  • Open

    Boosting Productivity with Ansys RedHawk-SC and Azure NetApp Files Intelligent Data Infrastructure
    Table of Contents Abstract Introduction Using Ansys Access with Azure NetApp Files Architecture Diagram Ansys Redhawk Scenario Details Overview and Context HPC Simulation Environment Cloud Shift Drivers Why Azure NetApp Files Capacity and Scale Fits for Ansys Access to support RedHawk-SC Use Cases Power Integrity Simulations Transient Simulations with Frequent Checkpoints Multiple Concurrent Simulation Runs End-to-End Engineering Workflows Azure Well-Architected Pillars And Considerations Performance Efficiency Parallel I/O and Low Latency Large Volumes Dynamic Service Levels and Volume Resizing Protocol Optimization Performance Isolation and QoS Cluster Right-Sizing Cost Optimization Pay-As-You-Go Model Storage Efficiency for data protection Reserved Capacity Tiering for Cold Data Archiva…  ( 41 min )
  • Open

    Balance governance and flexibility with Dev Box project policies
    As organizations scale their development efforts, managing access to cloud resources becomes critical. Platform engineers need to strike a balance between enforcing governance and enabling developer agility. At Build 2025, we announced the general availability of project policies in Microsoft Dev Box, which provides a powerful way to improve resource control and governance for cloud […] The post Balance governance and flexibility with Dev Box project policies appeared first on Develop from the cloud.  ( 24 min )
    Control cloud costs with Dev Box hibernation features
    Cost is one of the most important concerns in any cloud-native rollout. IT admins need powerful tools to control costs without slowing development. At Build 2025, we were excited to announce the general availability of hibernation in Microsoft Dev Box. This feature empowers platform engineers to optimize resource usage while empowering developers to get what […] The post Control cloud costs with Dev Box hibernation features appeared first on Develop from the cloud.  ( 23 min )

  • Open

    Engaging Employees: A Journey Through Data Analytics
    Our Team (Sorted Alphabetically): Ashkan Allahyari | ashkan.allahayri@ru.nl Ole Bekker | ole.bekker2@ru.nl Robin Elster | robin.elster@ru.nl Waad Hegazy | waad.hegazy@ru.nl Lea Hierl | lea.hierl@ru.nl Linda Pham | linda.pham@ru.nl Master Business Analysis & Modelling, Radboud University Master Strategic Human Resources Leadership, Radboud University Student Exchange at Radboud University Project Overview At Radboud University, our team in the course Data-Driven Analytics for Responsible Business Solutions embraced a unique opportunity to apply our passion for data analytics to a real-world challenge. Tasked with analyzing employee turnover at VenturaGear—a company committed to fostering a thriving workplace—we conducted an in-depth study to uncover the root causes of attrition. By le…  ( 36 min )
  • Open

    Announcing Public Preview of the Root Cert API in App Service Environment v3
    What is the Root Cert API? The Root Cert API allows customers to programmatically add root certificates to their ASE, making them available during the startup of apps. Root certificates are public certificates that identify a root certificate authority (CA). These are essential for establishing trust in secure communications. By adding root certificates to your ASE, all web apps hosted within that ASE will have them installed in their root store. This ensures that apps can securely communicate with internal services or APIs that use certificates issued by private or enterprise CAs. Previously, this functionality was only available in private preview through a workaround involving certificate uploads and a special app setting and included a number of limitations. With the new Root Cert API,…  ( 27 min )

  • Open

    Mastering Model Context Protocol (MCP): Building Multi Server MCP with Azure OpenAI
    The Model Context Protocol (MCP) is rapidly becoming a prominent framework for building truly agentic, interoperable AI applications. While many articles document MCP servers for single-server use, this project stands out as a starter template that combines Azure OpenAI integration with a multi-server MCP architecture, enabling you to connect and orchestrate multiple tool servers through a customized UI. Here, we will deep dive into the multi-server MCP implementation, connecting both local custom and ready-made MCP servers in a single client session through the MultiServerMCP library from the LangChain adapters, enabling agentic orchestration across different domains. While most triggers to out-of-the-box (OOB) MCP servers have leveraged GitHub Copilot for input, this project allows for custom app i…  ( 31 min )
    Announcing the Extension of Some Language Understanding Intelligence Service (LUIS) Functionality
    In 2022, we announced the deprecation of LUIS by September 30, 2025, with a recommendation to migrate to conversational language understanding (CLU). In response to feedback from our valued customers, we have decided to extend the availability of certain functionalities in LUIS until March 31, 2026. This extension aims to support our customers in their smooth migration to CLU, ensuring minimal disruption to their operations. Extension Details Here are some details on when and how the LUIS functionality will change: October 2022: LUIS resource creation is no longer available. October 31, 2025: The LUIS portal will no longer be available; LUIS authoring (via REST API only) will continue to be available. March 31, 2026: LUIS authoring, including via REST API, will no longer be available; the LUIS runtime will no longer be available. Before these retirement dates, please migrate to conversational language understanding (CLU), a capability of Azure AI Service for Language. CLU provides many of the same capabilities as LUIS, plus enhancements such as: enhanced AI quality using state-of-the-art machine learning models; the LLM-powered Quick Deploy feature to deploy a CLU model with no training; multilingual capabilities that allow you to train in one language and predict in 99+ others; built-in routing between conversational language understanding and custom question answering projects using orchestration workflow; and access to a suite of features available on Azure AI Service for Language in the Azure AI Foundry. Looking Ahead On March 31, 2026, LUIS will be fully deprecated, and any LUIS inferencing requests will return an error message. We encourage all our customers to complete their migration to CLU as soon as possible to avoid any disruptions. We appreciate your understanding and cooperation as we work together to ensure a smooth migration. Thank you for your continued support and trust in our services.  ( 21 min )
  • Open

    Drive carbon reductions in cloud migrations with Sustainability insights in Azure Migrate
    Introduction As sustainability becomes a core priority for organizations worldwide, Azure Migrate now empowers customers to quantify environmental impact alongside cost savings when planning their cloud journey. With the new Sustainability Benefits capability in Azure Migrate's Business Case, customers can now view estimated emissions savings when migrating from on-premises environments to Azure — making sustainability a first-class consideration in cloud transformation. Align with Global Sustainability Goals With governments and enterprises racing to meet net-zero targets — including a 55% emissions reduction target by 2030 in the EU and net-zero goals in the US and UK by 2050 — cloud migration offers a meaningful path to emissions reduction. With Azure’s carbon-efficient infrastructure p…  ( 25 min )
  • Open

    Connect Spring AI to Local AI Models with Foundry Local
    What is Azure AI Foundry and Foundry Local? Azure AI Foundry is Microsoft’s comprehensive platform for enterprise AI development and deployment, enabling organizations to build, customize, and operate AI solutions at scale. It provides tools, services, and infrastructure to develop, fine-tune and deploy AI models in production environments with enterprise-grade security and compliance. Foundry Local […] The post Connect Spring AI to Local AI Models with Foundry Local appeared first on Microsoft for Java Developers.  ( 25 min )
  • Open

    Azure DevOps MCP Server, Public Preview
    A few weeks ago at BUILD, we announced the upcoming Azure DevOps MCP Server: 👉 Azure DevOps with GitHub Repositories – Your path to Agentic AI Today, we’re excited to share that the local Azure DevOps MCP Server is now available in public preview. This lets GitHub Copilot in Visual Studio and Visual Studio Code […] The post Azure DevOps MCP Server, Public Preview appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Validating Change Requests with Kubernetes Admission Controllers
    Promoting an application or infrastructure change into production often comes with a requirement to follow a change control process. This ensures that changes to production are properly reviewed and that they adhere to required approvals, change windows, and QA processes. Often this change request (CR) process will be conducted using a system for recording and auditing the change request and the outcome. When deploying a release, there will often be places in the process to go through this change control workflow. This may be part of a release pipeline, managed in a pull request, or handled as a manual process. Ultimately, by the time the actual changes are made to production infrastructure or applications, they should already be approved. This relies on the appropriate controls an…  ( 40 min )
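    As a rough illustration of the enforcement side, here is a minimal sketch of a validating admission webhook that admits a workload only if it carries an approved change-request annotation. The annotation key, the Flask-based handler, and the absence of a real lookup against the CR system are all assumptions for illustration, not the approach from the post:

```python
# Minimal sketch of a Kubernetes validating admission webhook (Flask-based).
# It admits an object only if it carries a change-request annotation; the
# annotation key and the missing CR-system lookup are illustrative assumptions.
from flask import Flask, request, jsonify

app = Flask(__name__)
CR_ANNOTATION = "example.com/change-request-id"  # hypothetical annotation key

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    obj = review["request"]["object"]
    annotations = obj.get("metadata", {}).get("annotations", {})
    cr_id = annotations.get(CR_ANNOTATION)

    allowed = bool(cr_id)  # a real webhook would verify cr_id against the CR system
    response = {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": review["request"]["uid"], "allowed": allowed},
    }
    if not allowed:
        response["response"]["status"] = {
            "message": f"Missing '{CR_ANNOTATION}' annotation; change not approved."
        }
    return jsonify(response)

if __name__ == "__main__":
    # Admission webhooks must be served over TLS in a real cluster.
    app.run(host="0.0.0.0", port=8443, ssl_context=("tls.crt", "tls.key"))
```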
  • Open

    Connecting Azure Kubernetes Service Cluster to Azure Machine Learning for Multi-Node GPU Training
    TLDR Create an Azure Kubernetes Service cluster with GPU nodes and connect it to Azure Machine Learning to run distributed ML training workloads. This integration provides a managed data science platform while maintaining Kubernetes flexibility under the hood, enables multi-node training that spans multiple GPUs, and bridges the gap between infrastructure and ML teams. The solution works for both new and existing clusters, supporting specialized GPU hardware and hybrid scenarios. Why Should You Care? Integrating Azure Kubernetes Service (AKS) clusters with GPUs into Azure Machine Learning (AML) offers several key benefits: Utilize existing infrastructure: Leverage your existing AKS clusters with GPUs via a managed data science platform like AML Flexible resource sharing: Allow both AKS wo…  ( 34 min )
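    Once the AKS cluster is attached to the workspace as a Kubernetes compute target, submitting a multi-node job looks roughly like the following sketch with the Azure ML Python SDK v2. The workspace details, compute name, environment, and training script are placeholders, and the compute attach step itself is done separately as described in the post:

```python
# Rough sketch: submit a 2-node PyTorch job to an attached Kubernetes compute
# target with the Azure ML Python SDK v2. All names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, command

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<aml-workspace>",
)

job = command(
    code="./src",                           # folder containing train.py
    command="python train.py --epochs 10",
    environment="<curated-or-custom-environment>",  # adjust to your setup
    compute="aks-gpu-compute",              # name of the attached Kubernetes compute
    instance_count=2,                       # two nodes
    distribution={"type": "PyTorch", "process_count_per_instance": 1},
    experiment_name="multi-node-gpu-training",
)

returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)
```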
  • Open

    Introducing MCP Support for Real-Time Intelligence (RTI)
    Co-author: Alexei Robsky, Data Scientist Manager Overview  As organizations increasingly rely on real-time data to drive decisions, the need for intelligent, responsive systems has never been greater. At the heart of this transformation is Fabric Real-Time Intelligence (RTI), a platform that empowers users to act on data as it arrives. Today, we’re excited to announce … Continue reading “Introducing MCP Support for Real-Time Intelligence (RTI) “  ( 7 min )
    Fabric Eventhouse now supports Eventstream Derived Streams in Direct Ingestion mode (Preview)
    The Eventstreams artifact in the Microsoft Fabric Real-Time Intelligence experience lets you bring real-time events into Fabric, transform them, and then route them to various destinations such as Eventhouse, without writing any code (no-code). You can ingest data from an Eventstream to Eventhouse seamlessly either from Eventstream artifact or Eventhouse Get Data Wizard. This capability … Continue reading “Fabric Eventhouse now supports Eventstream Derived Streams in Direct Ingestion mode (Preview)”  ( 7 min )
    Introducing new item creation experience in Fabric
    Have you ever found yourself frustrated by inconsistent item creation? Maybe you’ve struggled to select the right workspace or folder when creating a new item or ended up with a cluttered workspace due to accidental item creation. We hear you—and we’re excited to introduce the new item creation experience in Fabric! This update is designed … Continue reading “Introducing new item creation experience in Fabric”  ( 6 min )
    Surge Protection for Background Operation (Generally Available)
    We’re excited to announce Surge Protection for background operations is now Generally Available (GA). Using surge protection, capacity admins can limit overuse by background operations in their capacities.  ( 6 min )
  • Open

    Modernizing Loan Processing with Gen AI and Azure AI Foundry Agentic Service
    Scenario Once a loan application is submitted, financial institutions must process a variety of supporting documents—including pay stubs, tax returns, credit reports, and bank statements—before a loan can be approved. This post-application phase is often fragmented and manual, involving data retrieval from multiple systems, document verification, eligibility calculations, packet compilation, and signing. Each step typically requires coordination between underwriters, compliance teams, and loan processors, which can stretch the processing time to several weeks. This solution automates the post-application loan processing workflow using Azure services and Generative AI agents. Intelligent agents retrieve and validate applicant data, extract and summarize document contents, calculate loan eli…  ( 42 min )
  • Open

    Python in Visual Studio Code – June 2025 Release
    The June 2025 release includes Copilot chat tools in the Python extension, project creation from a template, language server based terminal suggest, and more! The post Python in Visual Studio Code – June 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )

  • Open

    Result Set Caching for Microsoft Fabric Data Warehouse (Preview)
    Result Set Caching is now available in preview for Microsoft Fabric Data Warehouse and Lakehouse SQL analytics endpoint. This performance optimization works transparently to cache the results of eligible T-SQL queries. When the same query is issued again, it directly retrieves the stored result, instead of recompiling and recomputing the original query. This operation drastically … Continue reading “Result Set Caching for Microsoft Fabric Data Warehouse (Preview)”  ( 6 min )
  • Open

    Quest 4 - I want to connect my AI prototype to external data using RAG
    In this quest, you'll teach your AI app to talk to external data using the Retrieval-Augmented Generation (RAG) technique. You'll overcome the limitations of pre-trained language models by allowing them to reference your own data, using it as context to deliver accurate, fact-based responses. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🔧 What You’ll Build In this quest, you’ll: Connect your AI app to external documents (like PDFs) Allow your app to “read” and respond using your real-world content Why does this matter? Because LLMs are powerful, but they don’t know your business, reports, or research papers, etc. With RAG, yo…  ( 25 min )
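    The core RAG pattern is language-agnostic even though the quest itself uses JavaScript; here it is sketched in Python with hypothetical embed() and ask_llm() helpers standing in for whatever embedding and chat APIs you use: retrieve the chunks most similar to the question, then pass them to the model as context.

```python
# Language-agnostic RAG pattern, sketched in Python. embed() and ask_llm()
# are hypothetical stand-ins for your embedding and chat completion APIs.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(question: str, chunks: list[str], embed, top_k: int = 3) -> list[str]:
    """Return the top_k chunks most similar to the question."""
    q_vec = embed(question)
    scored = [(cosine_similarity(q_vec, embed(c)), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [chunk for _, chunk in scored[:top_k]]

def answer(question: str, chunks: list[str], embed, ask_llm) -> str:
    """Ground the model's answer in the retrieved chunks."""
    context = "\n\n".join(retrieve(question, chunks, embed))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)
```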
    Use Prompty with Foundry Local
    Prompty is a powerful tool for managing prompts in AI applications. Not only does it allow you to easily test your prompts during development, but it also provides observability, understandability and portability. Here's how to use Prompty with Foundry Local to support your AI applications with on-device inference. Foundry Local At the Build '25 conference, Microsoft announced Foundry Local, a new tool that allows developers to run AI models locally on their devices. Foundry Local offers developers several benefits, including performance, privacy, and cost savings. Why Prompty? When you build AI applications with Foundry Local, but also other language model hosts, consider using Prompty to manage your prompts. With Prompty, you store your prompts in separate files, making it easy to test a…  ( 27 min )
  • Open

    AI Automation in Azure Foundry through turnkey MCP Integration and Computer Use Agent Models
    The Fashion Trends Discovery Scenario In this walkthrough, we'll explore a sample application that demonstrates the power of combining Computer Use (CUA) models with Playwright browser automation to autonomously compile trend information from the internet, while leveraging MCP integration to intelligently catalog and store insights in Azure Blob Storage. The User Experience A fashion analyst simply provides a query like "latest trends in sustainable fashion" to our command-line interface. What happens next showcases the power of agentic AI—the system requires no further human intervention to: Autonomous Web Navigation: The agent launches Pinterest, intelligently locates search interfaces, and performs targeted queries Intelligent Content Discovery: Systematically identifies and interacts …  ( 43 min )
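    The browser-automation piece of such a pipeline can be sketched with Playwright's Python API. The selectors, the hard-coded Pinterest search flow, and the trend query below are illustrative assumptions rather than the sample's actual code, in which the navigation decisions come from the Computer Use model:

```python
# Simplified Playwright sketch of the browser-automation step. In the actual
# sample the navigation is driven by a Computer Use model; here the steps,
# selectors, and query are hard-coded for illustration.
from playwright.sync_api import sync_playwright

QUERY = "latest trends in sustainable fashion"

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(f"https://www.pinterest.com/search/pins/?q={QUERY.replace(' ', '%20')}")
    page.wait_for_load_state("networkidle")

    # Grab visible pin titles as a rough proxy for trend snippets.
    titles = page.locator("[data-test-id='pinTitle']").all_inner_texts()
    for title in titles[:10]:
        print(title)

    browser.close()
```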

  • Open

    12 GitLens Features that Revolutionized My Coding Workflow in VS Code
    Let me walk you through 12 features that have become indispensable in my daily coding life. Understanding Code Changes & History Inline Blame Annotations: GitLens adds small annotations at the end of each line of code, showing who last modified that line, when, and in which commit. This feature provides instant context about the code's history without leaving your editor. While working on an e-commerce platform's checkout process, I noticed an unexpected behavior. With inline blame, I quickly identified that the change was introduced in a recent sprint, saving hours of backtracking.  Heatmap: This feature adds a color-coded heatmap to the scroll bar, visually representing the age of the code. Newer changes appear in warmer colors, while older code is shown in cooler colors, helping you q…  ( 30 min )
  • Open

    Quest 3 - I want to add a simple chat interface to my AI prototype
    In this quest, you’ll give your Gen AI prototype a polished chat interface using Vite and Lit. Along the way, you’ll also manage application infrastructure with Bicep and Azure Developer CLI (azd), making your prototype more structured and ready for deployment. This step is all about UX, making your AI prototype not just functional, but interactive and user-friendly. 👉 Want to catch up on the full program or grab more quests? https://aka.ms/JSAIBuildathon 💬 Got questions or want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🔧 What You’ll Build By the end of this quest, you’ll have: A chat UI built with Vite and Lit  A structured codebase with infrastructure-as-code (IaC) using Bicep  Seamless local deployment workflow using the Azure Develop…  ( 26 min )
  • Open

    Cohere Models Now Available on Managed Compute in Azure AI Foundry Models
    Over the course of the last year, we have launched several Cohere models on Azure as a Serverless Standard (pay-go) offering. We’re excited to announce that Cohere's latest models—Command A, Rerank 3.5, and Embed 4—are now available on Azure AI Foundry models via Managed Compute. This launch allows enterprises and developers to deploy Cohere models instantly with their own Azure quota, with per-hour GPU pricing that compensates the model provider—unlocking a scalable, low-friction path to production-ready GenAI. What is Managed Compute? Managed Compute is a deployment option within Azure AI Foundry Models that lets you run large language models (LLMs), SLMs, HuggingFace models, and custom models fully hosted on Azure infrastructure. Why Use Managed Compute? Azure Managed Compute…  ( 23 min )

  • Open

    Monitor your Quarkus native application on Azure
    Introduction Quarkus is a general-purpose Java framework focused on efficient use of resources, fast startup, and rapid development. It allows developers to create and run services in the Java Virtual Machine (JVM) or native binary executables (native mode). In this blog post we are going to focus on using Quarkus to create and monitor a […] The post Monitor your Quarkus native application on Azure appeared first on Microsoft for Java Developers.  ( 25 min )
  • Open

    Handling unexpected job terminations on Azure Container Apps
    This post goes over situations where you may notice jobs being terminated or stopped briefly and some things that can be done to help alleviate interruptions.  ( 7 min )
  • Open

    Integrating Azure API Management with Fabric API for GraphQL
    Introduction Integrating Azure API Management (APIM) with Microsoft Fabric’s API for GraphQL can significantly enhance your API’s capabilities by providing robust scalability and security features such as identity management, rate limiting, and caching. This post will guide you through the process of setting up and configuring these features. You may not be familiar with API … Continue reading “Integrating Azure API Management with Fabric API for GraphQL”  ( 8 min )
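    Once the Fabric GraphQL endpoint is fronted by APIM, calling it is an ordinary HTTP POST. This sketch assumes a hypothetical gateway URL, a placeholder query, and the standard APIM subscription-key header, with the Entra ID access token obtained separately:

```python
# Sketch of calling a Fabric API for GraphQL endpoint through an APIM gateway.
# Gateway URL, API suffix, and query are placeholders; the subscription-key
# header is APIM's standard mechanism when subscriptions are required.
import requests

APIM_URL = "https://<your-apim>.azure-api.net/fabric-graphql"   # placeholder
SUBSCRIPTION_KEY = "<apim-subscription-key>"                     # placeholder
ACCESS_TOKEN = "<entra-id-access-token>"  # acquired separately, e.g. via MSAL

query = """
query {
  customers(first: 5) {
    items { customerId name }
  }
}
"""

resp = requests.post(
    APIM_URL,
    json={"query": query},
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
        "Content-Type": "application/json",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```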
    Privacy by Design: PII Detection and Anonymization with PySpark on Microsoft Fabric
    Introduction Whether you’re building analytics pipelines or conversational AI systems, the risk of exposing sensitive data is real. AI models trained on unfiltered datasets can inadvertently memorize and regurgitate PII, leading to compliance violations and reputational damage. This blog explores how to build scalable, secure, and compliant data workflows using PySpark, Microsoft Presidio, and Faker—covering … Continue reading “Privacy by Design: PII Detection and Anonymization with PySpark on Microsoft Fabric”  ( 9 min )
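    The heart of such a pipeline is wrapping Presidio's analyzer and anonymizer in a Spark UDF. A minimal sketch follows, assuming the presidio-analyzer and presidio-anonymizer packages (plus a spaCy model) are installed on the cluster; the 'comments' column is a placeholder, and a production pipeline would cache the engines rather than rebuild them per row:

```python
# Minimal sketch: detect and mask PII in a Spark column with Microsoft Presidio.
# Assumes presidio-analyzer and presidio-anonymizer are installed on the cluster;
# the 'comments' column is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def scrub(text: str) -> str:
    """Detect PII with Presidio and replace it with entity placeholders."""
    if not text:
        return text
    # Imported and constructed inside the UDF so each executor builds its own
    # engines; a real pipeline would cache these instead of rebuilding per row.
    from presidio_analyzer import AnalyzerEngine
    from presidio_anonymizer import AnonymizerEngine
    analyzer = AnalyzerEngine()
    anonymizer = AnonymizerEngine()
    results = analyzer.analyze(text=text, language="en")
    return anonymizer.anonymize(text=text, analyzer_results=results).text

scrub_udf = udf(scrub, StringType())

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [("Contact John Doe at john.doe@example.com",)], ["comments"]
)
df.withColumn("comments_clean", scrub_udf("comments")).show(truncate=False)
```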
  • Open

    AgentCon Comes to Milan on Tuesday, June 17
    📍 Milan, Sala Luigiana @ Coperni.co Centrale - Via Copernico 38 📅 June 17, 2025, from 9:00 to 17:30 🧠 Focus: AI agents for developers, researchers, entrepreneurs, and IT professionals. The global AgentCon – AI Agents World Tour series comes to Milan for a full day dedicated to those building the future with artificial intelligence. After the success of the international stops in Kenya, the Netherlands, and Brazil, the tour lands in Italy to offer technical content, live demos, and networking with industry experts. Why attend? AgentCon is not just another conference. It is the place where concrete use cases and practical code examples are discussed. The event is designed for developers, software architects, data scientists, and IT professionals who want to go beyond the models and …  ( 22 min )

  • Open

    Introducing upgrades to AI functions for better performance—and lower costs
    Earlier this year, we released AI functions in public preview, allowing Fabric customers to apply LLM-powered transformations to OneLake data simply and seamlessly, in a single line of code. Since then, we’ve continued iterating on AI functions in response to your feedback. Let’s explore the latest updates, which make AI functions more powerful, more cost-effective, … Continue reading “Introducing upgrades to AI functions for better performance—and lower costs”  ( 7 min )
  • Open

    Configure Embedding Models on Azure AI Foundry with Open Web UI
    Introduction Let’s take a closer look at an exciting development in the AI space. Embedding models are the key to transforming complex data into usable insights, driving innovations like smarter chatbots and tailored recommendations. With Azure AI Foundry, Microsoft’s powerful platform, you’ve got the tools to build and scale these models effortlessly. Add in Open Web UI, an intuitive interface for engaging with AI systems, and you’ve got a winning combo that’s hard to beat. In this article, we’ll explore how embedding models on Azure AI Foundry, paired with Open Web UI, are paving the way for accessible and impactful AI solutions for developers and businesses. Let’s dive in! To proceed with configuring the embedding model from Azure AI Foundry on Open Web UI, please first configure the…  ( 24 min )
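    For reference, calling an embedding deployment hosted behind an Azure OpenAI-compatible endpoint looks roughly like this with the OpenAI Python SDK. The endpoint, API version, and deployment name are placeholders you would take from your own Azure AI Foundry project:

```python
# Sketch of calling an Azure-hosted embedding deployment with the OpenAI SDK.
# Endpoint, API key, API version, and deployment name are placeholders from
# your own Azure AI Foundry project.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-02-01",
)

response = client.embeddings.create(
    model="<embedding-deployment-name>",
    input=["Open Web UI can use this endpoint for retrieval-augmented chat."],
)

vector = response.data[0].embedding
print(len(vector), vector[:5])
```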
  • Open

    June Patches for Azure DevOps Server
    Today we are releasing patches that impact the latest version of our self-hosted product, Azure DevOps Server. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps Server download page. […] The post June Patches for Azure DevOps Server appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Fix Identity Sprawl + Optimize Microsoft Entra
    Enforce MFA, block legacy authentication, and apply risk-based Conditional Access policies to reduce exposure from stale accounts and weak authentication methods. Use built-in tools for user, group, and device administration to detect and clean up identity sprawl — like unused credentials, inactive accounts, and expired apps — before they become vulnerabilities.  Jeremy Chapman, Microsoft 365 Director, shares steps to clean up your directory, strengthen authentication, and improve overall identity security.  Prioritize top risks.  Take action across MFA, risk policies, and stale objects with Microsoft Entra recommendations. Start here. Block over 99% of identity attacks.  Enforce MFA for admins and users in Microsoft Entra. Detect and delete stale user accounts.  See how to fix account …  ( 45 min )

  • Open

    Drasi accepted into CNCF sandbox for change-driven solutions
    The Azure Incubations team is proud to share that Drasi has officially been accepted into the Cloud Native Computing Foundation Sandbox. The post Drasi accepted into CNCF sandbox for change-driven solutions  appeared first on Microsoft Open Source Blog.  ( 13 min )
  • Open

    Running Self-hosted APIM Gateways in Azure Container Apps with VNet Integration
    With Azure Container Apps we can run containerized applications, completely serverless. The platform itself handles all the orchestration needed to dynamically scale based on your configured triggers (such as KEDA) and can even scale to zero! I have been working with customers recently on Azure API Management (APIM), and a recurring topic is how to use APIM to manage internal APIs without exposing a public IP while staying compliant from a security standpoint, which leads to the use of a self-hosted gateway. This offers a managed gateway deployed within their network, allowing a unified approach to managing their APIs while keeping all API communication in-network. The self-hosted gateway is deployed as a container, and in this article we will go through how to provis…  ( 27 min )

  • Open

    Pre-Migration Vulnerability Scans:
    Migrating applications to the cloud or modernizing infrastructure requires thorough preparation. Whether the destination is a cloud platform, a new data center, or a hybrid infrastructure, the move is a complex process. While organizations focus on optimizing performance, costs, and scalability, security often takes a backseat, leading to potential risks post-migration. One crucial step before migration is conducting a pre-migration scan to identify security vulnerabilities, licensing risks, and code quality issues. Several tools help in pre-migration scanning, including Blackduck, Coverity, Gitleaks, and Semgrep. In this article, we will explore the role of these tools in migration readiness. Why Perform a Pre-Migration Scan? When an application moves from an on-premises environment to the cloud, it interacts…  ( 31 min )
    Introducing Azure Migrate Explore with AI Assistant
    We're thrilled to announce the Public Preview of Azure Migrate Explore with an AI assistant! This exciting update enhances our existing Azure Migrate Explore (AME) utility, making migration assessments smarter, faster, and more impactful. What is Azure Migrate? Azure Migrate serves as a comprehensive hub designed to simplify the migration journey of on-premises infrastructure, including servers, databases, and web applications, to Azure Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) targets at scale. It provides a unified platform with a suite of tools and features to help you identify the best migration path, assess Azure readiness, estimate the cost of hosting workloads on Azure, and execute the migration with minimal downtime and risk. Revolutionizing Executive Pres…  ( 22 min )
  • Open

    Microsoft and F5 join forces on OpenTelemetry with Apache Arrow in Rust
    Microsoft and F5 are collaborating on Phase 2 of the OpenTelemetry with Apache Arrow project. The post Microsoft and F5 join forces on OpenTelemetry with Apache Arrow in Rust appeared first on Microsoft Open Source Blog.  ( 12 min )
  • Open

    What's new in SQL Server 2025
    Add deep AI integration with built-in vector search and DiskANN optimizations, plus native support for large object JSON and new Change Event Streaming for live data updates.  Join and analyze data faster with the Lakehouse shortcuts in Microsoft Fabric that unify multiple databases — across different SQL Server versions, clouds, and on-prem — into a single, logical schema without moving data. Build intelligent apps, automate workflows, and unlock rich insights with Copilot and the unified Microsoft data platform, including seamless Microsoft Fabric integration, all while leveraging your existing SQL skills and infrastructure.  Bob Ward, lead SQL engineer, joins Jeremy Chapman to share how the latest SQL Server 2025 innovations simplify building complex, high-performance workloads with les…  ( 57 min )
  • Open

    Refresh SQL analytics endpoint Metadata REST API (Preview)
    We’re excited to announce that the long-awaited refresh SQL analytics endpoint metadata REST API is now available in preview. You can now programmatically trigger a refresh of your SQL analytics endpoint to keep tables in sync with any changes made in the parent artifact, ensuring that you can keep your data up to date as needed. … Continue reading “Refresh SQL analytics endpoint Metadata REST API (Preview)”  ( 6 min )
    How to debug user data functions locally in VS Code
    Debugging your code is a big deal, especially when you’re working with user data functions. You want to make sure everything works as it should, and that’s where local debugging comes in: it lets you catch problems in your code without messing with the live environment. In this blog post, I’ll walk you through the steps to make local debugging easier, faster, and less of a headache.  ( 7 min )
  • Open

    A Recap of the Build AI Agents with Custom Tools Live Session
    Artificial Intelligence is evolving, and so are the ways we build intelligent agents. On a recent Microsoft YouTube Live session, developers and AI enthusiasts gathered to explore the power of custom tools in AI agents using Azure AI Studio. The session walked through concepts, use cases, and a live demo that showed how integrating custom tools can bring a new level of intelligence and adaptability to your applications. 🎥 Watch the full session here:  https://www.youtube.com/live/MRpExvcdxGs?si=X03wsQxQkkshEkOT What Are AI Agents with Custom Tools? AI agents are essentially smart workflows that can reason, plan, and act — powered by large language models (LLMs). While built-in tools like search, calculator, or web APIs are helpful, custom tools allow developers to tailor agents for busin…  ( 23 min )
  • Open

    Quest 1 – I Want to Build a Local Gen AI Prototype
    Part of the JS AI Build-a-thon, this quest is where the rubber meets the road. Whether you're here to explore the raw power of local models or just curious what Generative AI looks like under the hood, this guide is for you. 👉 Want to catch up on the full program or grab more quests? Start here. 💬 Got questions or just want to hang with other builders? Join us on Discord — head to the #js-ai-build-a-thon channel. 🧩 What You’ll Build in This Quest In this quest, you’ll build a fully working local Gen AI prototype — no cloud APIs, no credits needed. You’ll explore the GitHub Models catalog, run local inference using an open model, and convert a hand-drawn UI sketch into a working webpage. Along the way, you’ll get hands-on with AI developer tooling built right into Visual Studio Code. Wh…  ( 31 min )
    JS AI Build-a-thon Setup in 5 Easy Steps
    Want to build next-gen apps using AI and JavaScript (or TypeScript)? The JS AI Build-a-thon is your launchpad — a hands-on, quest-driven journey designed to take you from curious coder to AI-powered app builder. It’s self-paced, community-backed, and packed with real-world use cases. No fluff. Just code, context, and creative exploration. 👾 Ready to roll? Here's your setup guide: 🔧 Step 1: Start Your Journey Head to aka.ms/JSAIBuild-a-thon and find the GitHub repository. Before you hit “Start Course,” take a moment to scroll through the README — it’s packed with useful info, including a global list of Study Jams where you can connect with other developers learning alongside you. Find one near you or join virtually to level up with the community. Screenshot of available study jams Once y…  ( 25 min )

  • Open

    HCX 4.11.0 Upgrade and What it means for Current HCX Users
    Overview Azure VMware Solution is a VMware validated first party Azure service from Microsoft that provides private clouds containing VMware vSphere clusters built from dedicated bare-metal Azure infrastructure. It enables customers to leverage their existing investments in VMware skills and tools, allowing them to focus on developing and running their VMware-based workloads on Azure. VMware HCX is the mobility and migration software used by the Azure VMware Solution to connect remote VMware vSphere environments to the Azure VMware Solution. These remote VMware vSphere environments can be on-premises, co-location or cloud-based instances.     Figure 1 – Azure VMware Solution with VMware HCX Service Mesh Broadcom has announced the end-of-life (EOL) for VMware HCX version 4.10.x, effective …  ( 27 min )
    Azure VMware Solution now available in Korea Central
    We are pleased to announce that Azure VMware Solution is now available in Korea Central. Now in 34 Azure regions, Azure VMware Solution empowers you to seamlessly extend or migrate existing VMware workloads to Azure without the cost, effort or risk of re-architecting applications or retooling operations.  Azure VMware Solution supports: Rapid cloud migration of VMware-based workloads to Azure without refactoring. Datacenter exit while maintaining operational consistency for the VMware environment. Business continuity and disaster recovery for on-premises VMware environments. Attach Azure services and innovate applications at your own pace. Includes the VMware technology stack and lets you leverage existing Microsoft licenses for Windows Server and SQL Server. For updates on current and upcoming region availability, visit the product by region page here.   Streamline migration with new offers and licensing benefits, including a 20% discount. We recently announced the VMware Rapid Migration Plan, where Microsoft provides a comprehensive set of licensing benefits and programs to give you price protection and savings as you migrate to Azure VMware Solution. Azure VMware Solution is a great first step to the cloud for VMware customers, and this plan can help you get there. Learn More  ( 19 min )
  • Open

    From Cloud to Edge: Navigating the Future of AI with LLMs, SLMs, and Azure AI Foundry
    Use Cases: From Automation to Edge AI. Generative AI is transforming industries through: content creation, summarization, and translation; customer engagement via chatbots and personalization; edge deployment for low-latency, privacy-sensitive applications; and domain-specific tasks like legal, medical, or technical document processing. LLMs vs. SLMs: Choosing the Right Fit. Parameters: LLMs have billions (e.g., GPT-4), SLMs have millions. Performance: LLMs offer high accuracy and nuanced understanding; SLMs are fast and efficient for simpler tasks. Deployment: LLMs are cloud-based and resource-intensive; SLMs are ideal for edge and mobile. Cost: LLMs require high compute and energy; SLMs are cost-effective. SLMs are increasingly viable thanks to optimized runtimes and hardware, making them perfect for on-device AI. Azure AI Foundry: Your AI Launchpad. Azure AI Foundry offers: a model catalogue with open-source and proprietary models; tools for fine-tuning, evaluation, and deployment; integration with GitHub, VS Code, and Azure DevOps; and support for both cloud and local inferencing. Local AI: The Edge Advantage. With tools like Foundry Local and Windows AI Foundry, developers can: run models on-device with ONNX Runtime; use APIs for summarization, translation, and more; optimize for CPU, GPU, and NPU; and ensure privacy, low latency, and offline capability. Customization: RAG vs. Fine-Tuning. Knowledge updates: dynamic with RAG, static with fine-tuning. Interpretability: high with RAG, low with fine-tuning. Latency: higher with RAG, lower with fine-tuning. Hallucination risk: lower with RAG, moderate with fine-tuning. Use case: real-time external data with RAG; domain-specific tasks with fine-tuning. Both methods enhance model relevance: RAG by retrieving external data, and fine-tuning by adapting model weights. Developer Resources Get started with: Foundry Local SDK, Windows AI Foundry | Microsoft Developer, AI Toolkit for VS Code, Windows ML, Azure AI Learn Courses. Join the Azure AI Discord Community.  ( 21 min )
  • Open

    Restricting PAT Creation in Azure DevOps Is Now in Preview
    As organizations continue to strengthen their security posture, restricting usage of personal access tokens (PATs) has become a critical area of focus. With the latest public preview of the Restrict personal access token creation policy in Azure DevOps, Project Collection Administrators (PCAs) now have another powerful tool to reduce unnecessary PAT usage and enforce tighter […] The post Restricting PAT Creation in Azure DevOps Is Now in Preview appeared first on Azure DevOps Blog.  ( 25 min )
  • Open

    Meet the Supercomputer that runs ChatGPT, Sora & DeepSeek on Azure (feat. Mark Russinovich)
    Orchestrate multi-agent apps and high-scale inference solutions using open-source and proprietary models, no infrastructure management needed. With Azure, connect frameworks like Semantic Kernel to models from DeepSeek, Llama, OpenAI’s GPT-4o, and Sora, without provisioning GPUs or writing complex scheduling logic. Just submit your prompt and assets, and the models do the rest. Using Azure’s Model as a Service, access cutting-edge models, including brand-new releases like DeepSeek R1 and Sora, as managed APIs with autoscaling and built-in security. Whether you’re handling bursts of demand, fine-tuning models, or provisioning compute, Azure provides the capacity, efficiency, and flexibility you need. With industry-leading AI silicon, including H100s, GB200s, and advanced cooling, your solut…  ( 57 min )

  • Open

    Enhancing Plugin Metadata Management with SemanticPluginForge
    In the world of software development, flexibility and adaptability are key. Developers often face challenges when it comes to updating plugin metadata dynamically without disrupting services or requiring redeployment. This is where SemanticPluginForge, an open-source project, steps in to improve the way we manage plugin metadata. LLM Function Calling Feature The function calling feature in LLMs […] The post Enhancing Plugin Metadata Management with SemanticPluginForge appeared first on Semantic Kernel.  ( 25 min )
    Smarter SK Agents with Contextual Function Selection
    In today’s fast-paced AI landscape, developers are constantly seeking ways to make AI interactions more efficient and relevant. The new Contextual Function Selection feature in the Semantic Kernel Agent Framework is here to address this need. By dynamically selecting and advertising only the most relevant functions based on […] The post Smarter SK Agents with Contextual Function Selection appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Secure Your Data from Day One: Best Practices for Success with Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric
    As data volume and complexity soar, protecting sensitive information has become non-negotiable. With the latest enhancements to Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric, organizations now have the power to proactively secure their data in Onelake. Whether you’re just getting started or looking to take your data governance to the next level, following … Continue reading “Secure Your Data from Day One: Best Practices for Success with Purview Data Loss Prevention (DLP) Policies in Microsoft Fabric”  ( 7 min )
  • Open

    Skill Up On The Latest AI Models & Tools on Model Mondays - Season 2 starts Jun 16!
    Quick Links To RSVP for each episode: EP1: Advanced Reasoning Models: https://developer.microsoft.com/en-us/reactor/events/25905/  EP2: Model Context Protocol:https://developer.microsoft.com/en-us/reactor/events/25906/  EP3: SLMs (and Reasoning):https://developer.microsoft.com/en-us/reactor/events/25907/  Get All The Details: https://aka.ms/model-mondays    Azure AI Foundry offers the best model choice  Did you manage to catch up on all the talks from Microsoft Build 2025? If, like me, you are interested in building AI-driven applications on Azure, you probably started by looking at what’s new in Azure AI Foundry.  I recommend you read Asha Sharma’s post for the top 10 things you need to know in this context. And it starts with New Models & Smarter Models!  New Models | Azure AI Foundr…  ( 28 min )
  • Open

    Dev Proxy v0.28 with LLM usage and costs tracking
    The latest version of Dev Proxy introduces a new ability to help you understand language models’ usage and costs in your applications, alongside many improvements to mocking, TypeSpec generation, and plugin flexibility. The post Dev Proxy v0.28 with LLM usage and costs tracking appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    GraphRAG and PostgreSQL integration in docker with Cypher query and AI agents
    Why should I care? ⚡️ In under 15 minutes, you’ll have a Cypher-powered, semantically rich knowledge graph you can query interactively. How can GraphRAG help? GraphRAG extracts structured knowledge from raw, unstructured data like .txt files by building a knowledge graph. This enables more precise and context-aware retrieval, making it easier to surface relevant insights from messy or disconnected content. What are the challenges? While the standard GraphRAG indexing process typically expects input and output directories, some users already store their data in a database (DB) and prefer to run GraphRAG directly against it, using the DB for both input and output. This eliminates the need for intermediate blob storage and simplifies the pipeline. Additionally, customers often request support …  ( 48 min )
  • Open

    Host Remote MCP Servers on App Service: Updated samples now with new languages and auth support
    If you haven't seen my previous blog post introducing MCP on Azure App Service, check that out here for a quick overview and getting started. In this blog post, I’m excited to share some updates for our App Service MCP samples: new language samples, updated functionality to replace deprecated methods, and built-in authentication and authorization—all designed to make it easier for developers to host MCP servers on Azure App Service. 🔄 Migrating from SSE to Streamable HTTP The original .NET sample I shared used Server-Sent Events (SSE) for streaming responses. However, SSE has since been deprecated in favor of streamable HTTP, which offers better compatibility and performance across platforms. To align with the latest MCP specification, I’ve updated the .NET sample to use streamable HTTP: …  ( 23 min )
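    For the Python flavor of this change, switching a FastMCP-based server from SSE to streamable HTTP is roughly a one-argument change. This sketch assumes the official MCP Python SDK and a placeholder tool; it is not the sample's actual code:

```python
# Sketch: a FastMCP server using the streamable HTTP transport instead of SSE.
# Assumes the official MCP Python SDK; the tool below is a placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast (placeholder logic)."""
    return f"Sunny with light winds in {city}."

if __name__ == "__main__":
    # Previously: mcp.run(transport="sse")
    mcp.run(transport="streamable-http")
```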

  • Open

    Build like Microsoft: Developer agents in action
    Take a deep dive into Athena, an AI-powered collaborative agent, to learn how it was built and how to create your own version of Athena right within Microsoft Teams. The post Build like Microsoft: Developer agents in action appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Announcing Azure Command Launcher for Java
    Optimizing JVM Configuration for Azure Deployments Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, thereby relying on the default settings of the HotSpot JVM. The default settings in OpenJDK are intentionally conservative, designed to work across a wide range of environments and scenarios. However, these defaults often lead to suboptimal resource utilization in cloud-based deployments, where memory and CPU tend to be dedicated to application workloads (use of containers and VMs) but still require intelligent management to maximize efficiency and cost-effectiveness. To address this, we are excited to introduce jaz, a new JVM launcher optimized specifically for …
  • Open

    Intelligent Email Automation with Azure AI Agent Service
    Do you ever wish you could simply tell your agent to send an email, without the hassle of typing everything — from recipient list to the subject and the body? If so, this guide on building an email-sending agent might be exactly what you’re looking for. Technically, this guide won’t deliver a fully automated agent right out of the box - you’ll still need to add a speech-to-text layer and carefully curate your prompt instructions. By the end of this post, you’ll have an agent capable of interacting with users through natural conversation and generating emails with dynamic subject lines and content. Overview Azure AI Agent Service offers a robust framework for building conversational agents, making it an ideal choice for developers seeking enterprise-grade security and compliance. This ensu…  ( 29 min )

  • Open

    Secure Mirrored Azure Databricks Data in Fabric with OneLake security
    We’re excited to announce that OneLake security capabilities have been extended to support mirrored data through Azure Mirrored Databricks Catalog. This enhancement brings the full suite of OneLake’s enterprise-grade security features to these mirrored assets, empowering organizations to manage access using table, column, or row level security across all engines.  What’s New?  With this update, Azure … Continue reading “Secure Mirrored Azure Databricks Data in Fabric with OneLake security “  ( 6 min )
  • Open

    How to add custom logging in Azure WebJobs Storage Extensions SDK in dotnet isolated function app
    In our previous blog, we discussed how to debug live issues in the Azure WebJobs Storage Extensions SDK. This approach is particularly effective when issues can be consistently reproduced. However, for intermittent problems, live debugging may not be the most efficient solution. In such cases, integrating custom logging can provide deeper insights and facilitate troubleshooting.   In this blog post, we will provide a step-by-step guide on how to implement custom logging within the Azure WebJobs Storage Extensions SDK. This will help you capture valuable information and better understand the behavior of your applications.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask…  ( 27 min )
    Highlights from Microsoft Build 2025
    Microsoft just held its annual Microsoft Build event for developers. The live event might be over, but we have highlights and other content that will keep the excitement going. Explore on-demand sessions, learn about recent product announcements, watch deep technical demos, and discover fresh resources for learning cutting-edge developer skills.   Microsoft Build opening keynoteThe world of development—its tools and its possibilities—is rapidly evolving. In the Microsoft Build keynote, Satya Nadella discusses the agentic web, current dev tools, the dev landscape right now, and where it’s headed.   GitHub Copilot: Meet the new coding agentCheck out the exciting new coding agent for GitHub Copilot. Just assign a task or issue to Copilot and it will run in the background, pushing commits to a…  ( 25 min )
  • Open

    Teaching Python with GitHub Codespaces
    Whenever I teach Python workshops, tutorials, or classes, I love to use GitHub Codespaces. Every repository on GitHub can be opened inside a GitHub Codespace, which gives the student a full Python environment and a browser-based VS Code. Students spend less time setting up their environment and more time actually coding - the fun part! In this post, I'll walk through my tips for using Codespaces for teaching Python, particularly for classes about web apps, data science, or generative AI. Getting started You can start a GitHub Codespace from any repository. Navigate to the front page of the repository, then select "Code" > "Codespaces" > "Create codespace on main": By default, the Codespace will build an environment based off a universal Docker image, which includes Python, NodeJS, Java, a…  ( 51 min )

  • Open

    Microsoft Fabric Community Conference Comes to Atlanta!
    The Microsoft Fabric Community Conference is back for its third year—and we’re bringing everything and everybody you’ve loved at past events with us to Atlanta, Georgia. After unforgettable experiences at FabCon in Las Vegas and Stockholm, the Fabric community proved just how powerful it can be when we come together. With more than 13,000 attendees across our last three conferences, it’s clear: the Microsoft Fabric community is here to drive the future of data!    And yes, we’re pleased to announce: it’s happening again! Mark your calendars … Continue reading “Microsoft Fabric Community Conference Comes to Atlanta!”  ( 6 min )
    Azure Synapse Runtime for Apache Spark 3.5 (Preview)
    We’re thrilled to announce that we have made Azure Synapse Runtime for Apache Spark 3.5 for our Azure Synapse Spark customers in preview, while they get ready and prepare for migrating to Microsoft Fabric Spark. Apache Spark 3.5 You can now create Azure Synapse Runtime for Apache Spark 3.5. The essential changes include features which come from … Continue reading “Azure Synapse Runtime for Apache Spark 3.5 (Preview)”  ( 5 min )
  • Open

    GitHub Secret Protection and GitHub Code Security for Azure DevOps
    Following the changes to GitHub Advanced Security on GitHub, we’re launching the standalone security products of GitHub Secret Protection and GitHub Code Security for Azure DevOps today. You can bring the protection of Advanced Security to your enterprise with the flexibility to enable the right level of protection for your repositories. GitHub Secret Protection for […] The post GitHub Secret Protection and GitHub Code Security for Azure DevOps appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    Using DeepSeek-R1 on Azure with JavaScript
    The pace at which innovative AI models are being developed is outstanding! DeepSeek-R1 is one such model that focuses on complex reasoning tasks, providing a powerful tool for developers to build intelligent applications. This week, we announced its availability on GitHub Models as well as on Azure AI Foundry. In this article, we’ll take a look at how you can deploy and use the DeepSeek-R1 models in your JavaScript applications. TL;DR key takeaways DeepSeek-R1 models focus on complex reasoning tasks and are not designed for general conversation. You can quickly switch your configuration to use Azure AI, GitHub Models, or even local models with Ollama. You can use the OpenAI Node SDK or LangChain.js to interact with DeepSeek models. What you'll learn here Deploying DeepSeek-R1 model on Azure. …  ( 32 min )

  • Open

    Understanding Idle Usage in Azure Container Apps
    Introduction Azure Container Apps provides a serverless platform for running containers at scale, and one of the big benefits is that you can easily scale workloads to zero when they are not getting any traffic. Scaling to zero ensures you only pay when your workloads are actively receiving traffic or performing work. However, for some workloads, scaling to zero might not be possible for a variety of reasons. Some workloads must always be able to respond to requests quickly, and the time it takes to scale from 0 to 1 replicas, while short, is too long. Some applications need to be able to always respond to health checks, and so removing all replicas is not possible. In these scenarios, there may still be time periods where there is no traffic, or the application isn't doing any work. While…  ( 39 min )
    Throughput Testing at Scale for Azure Functions
    Introduction Ensuring reliable, high-performance serverless applications is central to our work on Azure Functions. With new plans like Flex Consumption expanding the platform’s capabilities, it's critical to continuously validate that our infrastructure can scale—reliably and efficiently—under real-world load. To meet that need, we built PerfBench (Performance Benchmarker), a comprehensive benchmarking system designed to measure, monitor, and maintain our performance baselines—catching regressions before they impact customers. This infrastructure now runs close to 5,000 test executions every month, spanning multiple SKUs, regions, runtimes, and workloads—with Flex Consumption accounting for more than half of the total volume. This scale of testing helps us not only identify regressions ea…  ( 51 min )
  • Open

    A visual introduction to vector embeddings
    Vector embeddings have become very popular over the last few years, but most of us developers are brand new to the concept. In this post, I'll give a high-level overview of embedding models, similarity metrics, vector search, and vector compression approaches. Vector embeddings A vector embedding is a mapping from an input (like a word, list of words, or image) into a list of floating point numbers. That list of numbers represents that input in the multidimensional embedding space of the model. We refer to the length of the list as its dimensions, so a list with 1024 numbers would have 1024 dimensions.   Embedding models Each embedding model has its own dimension length, allowed input types, similarity space, and other characteristics. word2vec For a long time, word2vec was the most well-…  ( 47 min )
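    As a quick illustration of the similarity metrics the excerpt alludes to, here is a minimal sketch (not taken from the article) that compares two made-up embedding vectors with cosine similarity using NumPy; real embeddings have hundreds or thousands of dimensions.

        import numpy as np

        # Two made-up 4-dimensional "embeddings"; real models produce far more
        # dimensions (for example 1024, as mentioned in the excerpt).
        a = np.array([0.10, 0.30, -0.20, 0.70])
        b = np.array([0.05, 0.25, -0.10, 0.80])

        # Cosine similarity, a common metric for comparing embedding vectors.
        cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        print(f"cosine similarity: {cosine:.3f}")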
  • Open

    Announcing enterprise-grade, Microsoft Entra-based document-level security in Azure AI Search
    Introduction AI agentic grounding, Retrieval-Augmented Generation (RAG) apps/copilots, and enterprise search are game-changers, but they stay safe only when every response obeys the file-level permissions you already set in your data source. Without built-in help, developers have to hand-code security trimming, unravel nested groups, and tweak the logic whenever someone’s role shifts. Starting with REST API version 2025-05-01-preview, Azure AI Search introduces native Microsoft Entra-based POSIX-style Access Control List (ACL) and Azure Role-Based Access Control (RBAC) support, alongside expanded capabilities in the Azure Data Lake Storage Gen2 (ADLS Gen2) built-in indexer. These enhancements make it easier to enforce document-level security across ingestion and query workflows, whether yo…  ( 40 min )

  • Open

    New pipeline Activities Now Support OPDG and VNET
    We’re excited to announce that Microsoft Fabric Data Pipelines now support both On-Premises Data Gateway (OPDG) and Virtual Network (VNET) Gateway across a broader set of external activity types. What’s New? You can now securely connect to on-premises and network-isolated resources using OPDG or VNET Gateway for the following Fabric Data Factory pipeline activities allowing for secure … Continue reading “New pipeline Activities Now Support OPDG and VNET”  ( 5 min )
    Integrating Fabric with Databricks using private network
    Microsoft Fabric and Azure Databricks are widely used data platforms. This article aims to address the requirement of customers who have large data estates in Azure databricks and want to unlock additional use cases in Microsoft Fabric arising out of different business teams. When integrating the two platforms, a crucial security requirement is to ensure … Continue reading “Integrating Fabric with Databricks using private network”  ( 7 min )
    Boost performance effortlessly with Automated Table Statistics in Microsoft Fabric
    We’re thrilled to introduce Automated Table Statistics in Microsoft Fabric Data Engineering — a major upgrade that helps you get blazing-fast query performance with zero manual effort. Whether you’re running complex joins, large aggregations, or heavy filtering workloads, Fabric’s new automated statistics will help Spark make smarter decisions, saving you time, compute, and money. What … Continue reading “Boost performance effortlessly with Automated Table Statistics in Microsoft Fabric”  ( 6 min )
    How to create a SQL database in Fabric using Fabric CLI
    Step into the future of simplicity with Fabric CLI. The Fabric Command Line Interface (CLI) has arrived in preview, bringing with it a seamless way to create SQL databases for your projects. This guide will take you through the steps to get started, ensuring you can leverage the power of Fabric CLI with ease. Prerequisites Before … Continue reading “How to create a SQL database in Fabric using Fabric CLI”  ( 6 min )
  • Open

    Troubleshooting Azure AI Foundry deployments on Azure App Service
    Troubleshooting Azure AI Foundry deployments on Azure App Service  ( 4 min )
  • Open

    Migration planning of MySQL workloads using Azure Migrate
    In our endeavor to increase coverage of OSS workloads in Azure Migrate, we are announcing discovery and modernization assessment of MySQL databases running on Windows and Linux servers. Customers previously had limited visibility into their MySQL workloads and often received generalized VM lift-and-shift recommendations. With this new capability, customers can now accurately identify their MySQL workloads and assess them for right-sizing into Azure Database for MySQL. MySQL workloads are a cornerstone of the LAMP stack, powering countless web applications with their reliability, performance, and ease of use. As businesses grow, the need for scalable and efficient database solutions becomes paramount. This is where Azure Database for MySQL comes into play. Migrating from on-premises to Azur…  ( 25 min )
  • Open

    Jakarta EE and Quarkus on Azure – June 2025
    Hi everyone, welcome to the June 2025 update for Jakarta EE and Quarkus on Azure. It covers topics such as DevServices support of Quarkus Azure Extension, and comprehensive guides on implementing Quarkus applications monitoring and Liberty applications monitoring. If you’re interested in providing feedback or collaborating on migrating Java workloads to Azure with the engineering […] The post Jakarta EE and Quarkus on Azure – June 2025 appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    Ways to simplify your data ingestion pipeline with Azure AI Search
    We are introducing multiple features in Azure AI Search that make AI agent grounding and RAG data preparation easier. Here is what’s new:  The GenAI prompt skill in public preview accesses Azure AI Foundry chat-completion models to enrich content when it’s being indexed  Logic app integration with AI Search in Azure portal for simple data ingestion  Introduction  Azure AI Search is introducing new features and integrations designed to simplify and accelerate the creation of RAG-ready indexes. The GenAI Prompt Skill leverages generative AI during the indexing process, enabling advanced context expansion, image verbalization, and other transformations to enhance multimodal search relevance. The GenAI Prompt Skill enables sophisticated data transformations during the indexing process and fa…  ( 39 min )

  • Open

    Introducing Multi-Vector and Scoring Profile integration with Semantic Ranking in Azure AI Search
    We're excited to announce two powerful new enhancements in Azure AI Search: Multi-Vector Field Support and Scoring Profiles Integration with Semantic Ranking. Developed based on your feedback, these features unlock more control and enable additional scenarios in your search experiences   Why these Enhancements Matter As search experiences become increasingly sophisticated, handling complex, multimodal data and maintaining precise relevance is crucial. These new capabilities directly address common pain points: Multi-Vector Field Support helps you manage detailed, multimodal, and segmented content more effectively. Scoring Profiles Integration with Semantic Ranking ensures consistent relevance throughout your search pipeline. Multi-Vector Field Support Previously, vector fields `(Collecti…  ( 27 min )
  • Open

    Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 2
    This is Part 2 of our series on integrating Microsoft.Extensions.AI with Semantic Kernel. In Part 1, we explored the relationship between these technologies and how they complement each other. Now, let’s dive into practical examples showing how to use Microsoft.Extensions.AI abstractions with Semantic Kernel in non-agent scenarios. Getting Started with Microsoft.Extensions.AI and Semantic Kernel Before we […] The post Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 2 appeared first on Semantic Kernel.  ( 25 min )
    Semantic Kernel: Multi-agent Orchestration
    The field of AI is rapidly evolving, and the need for more sophisticated, collaborative, and flexible agent-based systems is growing. With this in mind, Semantic Kernel introduces a new multi-agent orchestration framework that enables developers to build, manage, and scale complex agent workflows with ease. This post explores the new orchestration patterns, their capabilities, and […] The post Semantic Kernel: Multi-agent Orchestration appeared first on Semantic Kernel.  ( 26 min )
  • Open

    Understanding OneLake Security with Shortcuts
    OneLake allows for security to be defined once and enforced consistently across Microsoft Fabric. One of its standout features is its ability to work seamlessly with shortcuts, offering users the flexibility to access and organize data from different locations while maintaining robust security controls. In this blog post, we will look at how OneLake security … Continue reading “Understanding OneLake Security with Shortcuts”  ( 7 min )
    New regions supported for Fabric User Data Functions
    Fabric User Data Functions is a serverless platform that allows you to build and run functions on the Fabric platform. You can use your functions to interact with your data and the rest of your Fabric items via native integrations. After the Fabric User Data Functions preview, we have been working on increasing the number … Continue reading “New regions supported for Fabric User Data Functions”  ( 5 min )
    Creating SQL database workload in Fabric with Terraform: A Step-by-Step Guide (Preview)
    Infrastructure as Code (IaC) tools like Terraform have revolutionized the way developers and organizations deploy and manage infrastructure. With its declarative language and ability to automate provisioning, Terraform reduces human error, ensures consistency, and speeds up deployment across cloud and on-premises environments. In this document, we’ll explore how you can create SQL databases workloads in … Continue reading “Creating SQL database workload in Fabric with Terraform: A Step-by-Step Guide (Preview)”  ( 7 min )
    Introducing FinOps Toolkit in Fabric
    The FinOps toolkit is built on top of the FinOps Framework, providing you a collection of resources to help enterprises facilitate their FinOps Goals. Just announced in the May Updates is an exciting integration with Fabric Real-Time Intelligence. With the latest updates you can now analyze all your cloud spend utilizing Eventhouse. Unlocking the power … Continue reading “Introducing FinOps Toolkit in Fabric”  ( 5 min )
  • Open

    Azure DevOps with GitHub Repositories – Your path to Agentic AI
    GitHub Copilot has evolved beyond a coding assistant in the IDE into an agentic teammate – providing actionable feedback on pull requests, fixing bugs and implementing new features, creating pull requests and responding to feedback, and much more. These new capabilities will transform every aspect of the software development lifecycle, as we are already seeing […] The post Azure DevOps with GitHub Repositories – Your path to Agentic AI appeared first on Azure DevOps Blog.  ( 26 min )
  • Open

    New GitHub Copilot Global Bootcamp: Now with Virtual and In-Person Workshops!
    The GitHub Copilot Global Bootcamp started in February as a fully virtual learning journey — and it was a hit. More than 60,000 developers joined the first edition across multiple languages and regions. Now, we're excited to launch the second edition — bigger and better — featuring both virtual and in-person workshops, hosted by tech communities around the globe. This new edition arrives shortly after the announcements at Microsoft Build 2025, where the GitHub and Visual Studio Code teams revealed exciting news: The GitHub Copilot Chat extension is going open source, reinforcing transparency and collaboration. AI is being deeply integrated into Visual Studio Code, now evolving into an open source AI editor. New APIs and tools are making it easier than ever to build with AI and LLMs. This…  ( 30 min )

  • Open

    Azure Data Factory Item Mounting (Generally Available)
    We’re excited to announce the General Availability (GA) of the Azure Data Factory (Mounting) feature in Microsoft Fabric! This powerful capability allows you to seamlessly connect your existing Azure Data Factory (ADF) pipelines to Fabric workspaces, eliminating the need to manually rebuild or migrate them. What’s New with GA? Once your ADF instance is linked to a Fabric workspace, you … Continue reading “Azure Data Factory Item Mounting (Generally Available)”  ( 5 min )
    Introducing Aggregations in Fabric API for GraphQL: Query Smarter, Not Harder
    Summarize, group, and explore data in one step with the new GraphQL aggregations feature We’re excited to launch a powerful new capability in the Fabric API for GraphQL—Aggregations. This feature brings native support for summary-level insights directly into your GraphQL queries, making your data exploration faster, simpler, and more efficient. Why Aggregations? Until now, getting … Continue reading “Introducing Aggregations in Fabric API for GraphQL: Query Smarter, Not Harder”  ( 6 min )
    Updates to database development tools for SQL database in Fabric
    With SQL database in Fabric, the source control integration in Fabric enables you to keep your active work synced to git while following a branching strategy that best matches your team’s environments and deployment requirements. With the complexity of enterprise deployment scenarios, code-first deployment is also available for Fabric objects through tools like the Fabric-CICD … Continue reading “Updates to database development tools for SQL database in Fabric”  ( 6 min )
    Mirroring in Microsoft Fabric explained: benefits, use cases, and pricing demystified
    Co-Author: Maraki Ketema, Principal Product Manager Unlocking Data Value at Scale: Mirroring in Microsoft Fabric  In the modern data era, speed, scale, and simplicity are no longer luxuries—they’re expectations. Organizations want to harness the power of their operational data in real time, without the overhead of complex ETL pipelines or latency-filled data movement. This is … Continue reading “Mirroring in Microsoft Fabric explained: benefits, use cases, and pricing demystified”  ( 9 min )
    Eventhouse Accelerated OneLake Table Shortcuts – Generally Available
    Turbo charge queries over Delta Lake and Iceberg tables in OneLake Eventhouse accelerated OneLake Table shortcuts aka. Query Acceleration is now Generally Available! OneLake shortcuts are references from an Eventhouse that point to internal Fabric or external sources. Previously, queries run over OneLake shortcuts were less performant than on data that is ingested directly to … Continue reading “Eventhouse Accelerated OneLake Table Shortcuts – Generally Available”  ( 8 min )
    Intelligent Data Cleanup: Smart Purging for Smarter Data Warehouses
    In the era of Artificial Intelligence, organizations generate and accumulate large volumes of information every second. From transactional records to user logs and analytics data warehouses serve as a single source of truth that stores this information for a plethora of purposes. However, as data accumulates over time, not all remain relevant and valuable, leading … Continue reading “Intelligent Data Cleanup: Smart Purging for Smarter Data Warehouses”  ( 6 min )
    Eventhouse No-Code table creation and editing
    Creating and managing tables in Eventhouse just got even more flexible. While you can create tables using the Get Data wizard – which builds the structure based on a sample file or streaming data – there are times when you need to start from scratch. This could be during basic training, testing, or experimenting with … Continue reading “Eventhouse No-Code table creation and editing”  ( 6 min )
  • Open

    Public Preview: Granular RBAC in Azure Monitor Logs
    We are happy to announce our public preview for Granular RBAC in Azure Monitor Log Analytics! What is Granular RBAC in Azure Monitor Logs? Many organizations emphasize the need to segregate and control access to data in a fine-grained manner, while maintaining a centralized and consolidated logging platform.  On top of the existing capabilities of workspace and table level access provided over Azure RBAC, you can now maintain all your data in a single Log Analytics workspace and provide least privilege access at any level. This means you can control which users can access which tables and rows, based on your business or security needs and defined criteria, and completely separate data and control plane access, using Azure Attribute-based access control (ABAC) as part of your Azure RBAC role assignment. Granular RBAC in Azure Monitor Logs allows you to filter the data that each user can view or query, based on the conditions that you specify. Common examples are characteristics such as organizational roles and units, geographical locations, or data sensitivity levels.  How to set granular data access in Azure Monitor Logs To set up granular access: Create or edit an Azure role assignment. Under “Conditions”, select “Add condition”. In “Add action”, choose the new DataAction: “Read workspace data”. Under “Build expression”, click “Add expression” to define your access rules. You can use any combination of the “Table Name” and “Column Value” attributes to scope access, leveraging a wide range of supported operators to match your criteria. Once applied, users will only be able to access the data that matches the conditions you've configured.  Get started with Granular RBAC in Azure Monitor Logs Learn more about Granular RBAC and how to set it up in Azure Monitor Logs We hope you enjoy this new addition to Azure Monitor Log Analytics.  ( 21 min )
  • Open

    From Zero to Hero: Build your first voice agent with Voice Live API
    Voice technology is transforming how we interact with machines, making conversations with AI feel more natural than ever before. With the public beta release of the Voice Live API, developers now have the tools to create low-latency, multimodal voice experiences in their apps, opening up endless possibilities for innovation. Gone are the days when building a voice bot required stitching together multiple models for transcription, inference, and text-to-speech conversion. With the Realtime API, developers can now streamline the entire process with a single API call, enabling fluid, natural speech-to-speech conversations. This is a game-changer for industries like customer support, education, and real-time language translation, where fast, seamless interactions are crucial. In this blog, we’…  ( 64 min )

  • Open

    NLWeb Pioneers: Success Stories & Use Cases
    Imagine the web as a vast network of pages you click through—until HTML transformed them into living documents you could link and style with simple tags. Announced at Build 2025 during CEO Satya Nadella’s keynote with CTO Kevin Scott, NLWeb is the next leap for conversation—the “HTML for chat”—turning every site into a natural-language endpoint both people and AI agents can query directly. Watch Kevin’s announcement below, or keep reading to learn about our early adopters. NLWeb Pioneer Highlights Tripadvisor: Conversational travel planning, from “Where should I go this fall with kids?” to full itineraries in one go. Read the spotlight. Qdrant: Lightning-fast, intent-aware search via its vector database. Read the spotlight. O’Reilly Media: Queryable technical library, like chatting with a resident expert. Read the spotlight. Eventbrite: Discover events by intent, not keywords. Read the spotlight. Inception Labs: Sub-second conversational queries using diffusion LLMs. Read the spotlight. Delish (Hearst): Instant recipe matches—“quick vegan dinner with mushrooms and pasta”—tailored to your pantry. Read the spotlight. NLWeb instances also double as Model Context Protocol (MCP) servers—exposing your content to AI assistants and enabling peer-to-peer Agent-to-Agent (A2A) workflows across sites. More Pioneers: Chicago Public Media, Common Sense Media, DDM (Allrecipes & Serious Eats), Milvus, Shopify and Snowflake. Together, these collaborators are laying the protocol-based foundation for an open, agentic web—where sites don’t just display content, they converse. Ready to Build? Get started with the new NLWeb GitHub repository.  ( 22 min )

  • Open

    That’s a wrap for Build 2025!
    Microsoft Build 2025 delivered a powerful vision for the future of data and AI, with Microsoft Fabric and Power BI at the heart of the story. From AI-powered productivity with Copilot to deep integration with Cosmos DB, this year’s announcements reinforced Microsoft’s commitment to unifying the data experience across roles, tools, and industries. Fabric: The … Continue reading “That’s a wrap for Build 2025!”  ( 7 min )
  • Open

    How Amdocs CCoE leveraged Azure AI Agent Service to build intelligent email support agent.
    This post is co-authored with Shlomi Elkayam and Henry Hernandez from Amdocs CCoE. In this blog post, you will learn how the Amdocs CCoE team improved its SLA by providing technical support for IT and cloud infrastructure questions and queries. They used Azure AI Agent Service to build an intelligent email agent that helps Amdocs employees with their technical issues. This post will describe the development phases, solution details, and the roadmap ahead. About Amdocs CCoE Amdocs is a multinational telecommunications technology company. The company specializes in software and services for communications, media and financial services providers and digital enterprises. The CCoE team is responsible for automation, infrastructure and design of all our Azure solutions, either for internal use cases or …  ( 37 min )
  • Open

    Office Add-ins announces Copilot agents with add-in actions and more at Build 2025
    As part of the expanding capabilities for agents across Microsoft 365, Office Platform announces add-in actions for Copilot agents are available in preview. This blog is an overview of all the new capabilities across the platform: APIs, developer tools, and add-in distribution options—making it simpler to build new or iterate on JavaScript add-ins. The post Office Add-ins announces Copilot agents with add-in actions and more at Build 2025 appeared first on Microsoft 365 Developer Blog.  ( 31 min )
    Introducing the Agent Store: Build, publish, and discover agents in Microsoft 365 Copilot
    We’re excited to introduce the Agent Store — a centralized, curated marketplace that features agents built by Microsoft, trusted partners, and customers. The Agent Store offers a new, immersive experience within Microsoft 365 Copilot that enables users to browse, install, and try agents tailored to their needs. The post Introducing the Agent Store: Build, publish, and discover agents in Microsoft 365 Copilot appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    Enhance data prep with AI-powered capabilities in Data Wrangler (Preview)
    With AI-powered capabilities in Data Wrangler, you can now do even more to accelerate exploratory analysis and data preparation in Fabric.  ( 6 min )
    SharePoint files destination the first file-based destination for Dataflows Gen2
    The introduction of file-based destinations for Dataflows Gen2 marks a significant step forward in enhancing collaboration, historical tracking, and sharing data for business users. This development begins with SharePoint file destinations in CSV format, offering a streamlined way to share with users on the SharePoint platform. Overview of file-based destinations File-based destinations allow data to … Continue reading “SharePoint files destination the first file-based destination for Dataflows Gen2”  ( 6 min )
    AI-powered development with Copilot for Data pipeline – Boost your productivity in understanding and updating pipeline
    Understanding complex data pipelines created by others can often be a challenging task, especially when users must review each activity individually to understand its configurations, settings, and functions. Additionally, manual updates to general settings, such as timeout and retry parameters, across multiple activities can be time-consuming and tedious. Copilot for Data pipeline introduces advanced capabilities … Continue reading “AI-powered development with Copilot for Data pipeline – Boost your productivity in understanding and updating pipeline”  ( 6 min )
    Fabric CLI: explore and automate Microsoft Fabric from your terminal (Generally Available)
    During FabCon Las Vegas, we introduced the Fabric CLI — a developer-first command-line tool that brings a familiar, file-system-like experience to working with Microsoft Fabric. Since then, thousands of developers have jumped in: exploring, scripting, and embedding the CLI into local workflows. But for many enterprise teams, one question kept coming up: “When will it … Continue reading “Fabric CLI: explore and automate Microsoft Fabric from your terminal (Generally Available)”  ( 8 min )
    What’s new with Fabric CI/CD – May 2025
    As the capabilities of our Fabric Platform Git Integration continue to evolve, we’re happy to share some updates about Fabric’s latest CI/CD capabilities. These advancements are designed to enhance the developer experience and simplify the integration of DevOps practices into everyday workflows. With these innovations, Fabric continues to empower teams to build, test, and deploy … Continue reading “What’s new with Fabric CI/CD – May 2025 “  ( 6 min )
  • Open

    How to use OpenSSL to Send HTTP(S) Requests
    OpenSSL is a powerful tool for working with SSL/TLS, but it can also be used to send custom HTTP requests, which is very useful for debugging and learning how HTTP(S) works at a low level.   1. How It Works Normally, tools like curl or your browser handle everything behind the scenes: they perform the TLS handshake and build the HTTP request. With OpenSSL, you manually create the HTTP request, and OpenSSL only handles the encrypted connection. TLS Handshake: OpenSSL’s s_client command establishes a secure (TLS) connection with the server. Send Request: You pipe (or redirect) a raw HTTP request into s_client. The server processes it and sends a response over the secure channel.   2. Sending a GET Request Here’s how you send a simple GET request to www.bing.com: ( printf "GET / HTTP/1.1\…  ( 23 min )
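    The openssl command in the excerpt is truncated above. As a rough companion sketch (not taken from the post), the same idea, sending a hand-written HTTP request over a TLS connection you open yourself, can be expressed with Python's standard ssl and socket modules:

        import socket
        import ssl

        host = "www.bing.com"  # same target as the post's example
        request = (
            "GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n"
        ).encode()

        context = ssl.create_default_context()
        with socket.create_connection((host, 443)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                tls.sendall(request)            # send the raw HTTP request over TLS
                response = b""
                while chunk := tls.recv(4096):  # read until the server closes the connection
                    response += chunk

        print(response.decode(errors="replace")[:500])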
    Introducing AI-Powered Actionable Insights in Azure Load Testing
    We’re excited to announce the preview of AI-powered Actionable Insights in Azure Load Testing—a new capability that helps teams quickly identify performance issues and understand test results through AI-driven analysis.  Performance testing is an essential part of ensuring application reliability and responsiveness, but interpreting the results can often be challenging. It typically involves manually correlating client-side load test telemetry with backend service metrics, which can be both time-consuming and error-prone. Actionable Insights simplifies this process by automatically analyzing test data, surfacing key issues, and offering clear, actionable recommendations—so teams can focus on fixing what matters, not sifting through raw data.  AI-powered diagnostics  Actionable Insights use…  ( 23 min )
  • Open

    Lifecycle Management of Blobs (Deletion) using Automation Tasks
    Background: We often encounter scenarios where we need to delete blobs that have been idle in a storage account for an extended period. For a small number of blobs, deletion can be handled easily using the Azure Portal, Storage Explorer, or inline scripts such as PowerShell or Azure CLI. However, in most cases, we deal with a large volume of blobs, making manual deletion impractical. In such situations, it's essential to leverage automation tools to streamline the deletion process. One effective option is using Automation Tasks, which can help schedule and manage blob deletions efficiently. Note: Behind the scenes, an automation task is actually a logic app resource that runs a workflow. So, the Consumption pricing model of Logic Apps applies to automation tasks.    Scenarios where “Automa…  ( 30 min )
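    To contrast with the scheduled Automation Task approach the post describes, here is a minimal inline-script sketch (not from the post) that deletes blobs untouched for 180 days using the azure-storage-blob Python SDK; the connection string, container name, and retention window are placeholders.

        from datetime import datetime, timedelta, timezone
        from azure.storage.blob import BlobServiceClient

        # Placeholder values for illustration only.
        CONNECTION_STRING = "<storage-account-connection-string>"
        CONTAINER_NAME = "archive"
        RETENTION = timedelta(days=180)

        cutoff = datetime.now(timezone.utc) - RETENTION
        service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
        container = service.get_container_client(CONTAINER_NAME)

        for blob in container.list_blobs():
            # Delete blobs whose last modification is older than the retention window.
            if blob.last_modified < cutoff:
                container.delete_blob(blob.name)
                print(f"deleted {blob.name}")

    An Automation Task moves the same cleanup onto a schedule without you having to host or trigger a script like this yourself.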

  • Open

    New Microsoft 365 Copilot Tuning | Create fine-tuned models to write like you do
    Fine-tuning adds new skills to foundational models, simulating experience in the tasks you teach the model to do. This complements Retrieval Augmented Generation, which uses search in real time to find related information, then adds it to your prompts for context. Fine-tuning helps ensure that responses meet your quality expectations for specific repeatable tasks, without needing to be a prompting expert. It’s great for drafting complex legal agreements, writing technical documentation, authoring medical papers, and more — using detailed, often lengthy precedent files along with what you teach the model. Using Copilot Studio, anyone can create and deploy these fine-tuned models to use with agents without data science or coding expertise. There, you can teach models using data labeling, grou…  ( 48 min )
  • Open

    Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 1
    This is the start of a series highlighting the integration between Microsoft Semantic Kernel and Microsoft.Extensions.AI. Future parts will provide detailed examples of using Semantic Kernel with Microsoft.Extensions.AI abstractions.  The most common questions are:  “Do Microsoft’s AI extensions replace Semantic Kernel?”  “When should I use Microsoft’s AI extensions instead of Semantic Kernel?”  This blog post […] The post Semantic Kernel and Microsoft.Extensions.AI: Better Together, Part 1 appeared first on Semantic Kernel.  ( 27 min )
    Transitioning to new Extensions AI IEmbeddingGenerator interface
    As Semantic Kernel shifts its foundational abstractions to Microsoft.Extensions.AI, we are obsoleting and moving away from our experimental embeddings interfaces to the new standardized abstractions that provide a more consistent and powerful way to work with AI services across the .NET ecosystem. The Evolution of Embedding Generation in Semantic Kernel Semantic Kernel has always aimed […] The post Transitioning to new Extensions AI IEmbeddingGenerator interface appeared first on Semantic Kernel.  ( 23 min )
    Vector Data Extensions are now Generally Available (GA)
    We’re excited to announce the release of Microsoft.Extensions.VectorData.Abstractions, a foundational library providing exchange types and abstractions for vector stores when working with vector data in AI-powered applications. This release is the result of a close collaboration between the Semantic Kernel and .NET teams, combining expertise in AI and developer tooling to deliver a robust, extensible […] The post Vector Data Extensions are now Generally Available (GA) appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Azure Kubernetes Service Baseline - The Hard Way, Third time's a charm
    1 Access management Azure Kubernetes Service (AKS) supports Microsoft Entra ID integration, which allows you to control access to your cluster resources using Azure role-based access control (RBAC). In this tutorial, you will learn how to integrate AKS with Microsoft Entra ID and assign different roles and permissions to three types of users: An admin user, who will have full access to the AKS cluster and its resources. A backend ops team, who will be responsible for managing the backend application deployed in the AKS cluster. They will only have access to the backend namespace and the resources within it. A frontend ops team, who will be responsible for managing the frontend application deployed in the AKS cluster. They will only have access to the frontend namespace and the resources wi…  ( 48 min )
  • Open

    Bringing AI to the edge: Hackathon Windows ML
    AI Developer Hackathon Windows ML  Hosted by Qualcomm on Snapdragon X. We’re excited to announce our support of and participation in the upcoming global series of Edge AI hackathons, hosted by Qualcomm Technologies. The first is on June 14-15 in Bangalore.  We see a world of hybrid AI, developing rapidly as a new generation of intelligent applications gets built for diverse scenarios. These range from mobile, desktop, and spatial computing all the way to industrial and automotive. Mission-critical workloads oscillate between decision-making in the moment, on device, and fine-tuning models in the cloud. We believe we are in the early stages of development of agentic applications that efficiently run on the edge for scenarios needing local deployment and on-device inferencing.  Microsoft W…  ( 27 min )
  • Open

    How to Use Postgres MCP Server with GitHub Copilot in VS Code
    GitHub Copilot has changed how developers write code, but when combined with an MCP (Model Context Protocol) server, it also connects your services. With it, Copilot can understand your database schema and generate relevant code for your API, data models, or business logic. In this guide, you'll learn how to use the Neon Serverless Postgres MCP server with GitHub Copilot in VS Code to build a sample REST API quickly. We'll walk through how to create an Azure Function that fetches data from a Neon database, all with minimal setup and no manual query writing. From Code Generation to Database Management with GitHub Copilot AI agents are no longer just helping write code—they’re creating and managing databases. When a chatbot logs a customer conversation, or a new region spins up in the Azure …  ( 29 min )
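    To give a feel for the end result, here is a hypothetical sketch (not from the guide) of the kind of HTTP-triggered Azure Function the post builds, written with the Python v2 programming model and psycopg2; the NEON_CONNECTION_STRING app setting, the route, and the products table are illustrative assumptions.

        import json
        import os

        import azure.functions as func
        import psycopg2

        app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

        @app.route(route="products")
        def get_products(req: func.HttpRequest) -> func.HttpResponse:
            # NEON_CONNECTION_STRING is a hypothetical app setting holding the
            # Neon Postgres connection string.
            with psycopg2.connect(os.environ["NEON_CONNECTION_STRING"]) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT id, name, price FROM products ORDER BY id")
                    rows = [
                        {"id": r[0], "name": r[1], "price": float(r[2])}
                        for r in cur.fetchall()
                    ]
            return func.HttpResponse(json.dumps(rows), mimetype="application/json")

    In the guide itself, Copilot with the MCP server generates this sort of code for you from the live database schema rather than you writing it by hand.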
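    For a sense of the end result, here is a minimal Python sketch of the kind of Azure Function the guide has Copilot generate (an illustrative assumption, not the article's code): an HTTP-triggered function in the Python v2 programming model that reads rows from a Neon Postgres database. The NEON_CONNECTION_STRING setting and the tasks table are placeholders.

    ```python
    # Illustrative sketch (not from the linked guide): an HTTP-triggered Azure Function
    # (Python v2 programming model) that reads rows from a Neon Postgres database.
    import json
    import os

    import azure.functions as func
    import psycopg2

    app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)


    @app.route(route="tasks", methods=["GET"])
    def get_tasks(req: func.HttpRequest) -> func.HttpResponse:
        # NEON_CONNECTION_STRING is a placeholder app setting holding the Neon
        # Postgres connection string.
        conn = psycopg2.connect(os.environ["NEON_CONNECTION_STRING"])
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT id, title, done FROM tasks ORDER BY id;")
                rows = [
                    {"id": r[0], "title": r[1], "done": r[2]} for r in cur.fetchall()
                ]
        finally:
            conn.close()
        return func.HttpResponse(json.dumps(rows), mimetype="application/json")
    ```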
  • Open

    [AI Search] LockedSPLResourceFound error when deleting AI Search
    Are you unable to delete AI Search with the following error? LockedSPLResourceFound: Unable to verify management locks on Resource '$Resource_Path'. If you still want to delete the search service, manually delete the SPL resource first and try again. If you are, this is the right place to find a quick resolution! Keep reading. [Solution – Delete the Shared Private Link] The error message appears if you still have a Shared Private Link configured in AI Search. AI Search will not let you delete the resource unless you delete the Shared Private Link first. You must delete the Shared Private Link manually in the Portal: move to Settings > Networking > Shared private access tab. Once the Shared Private Links are all deleted, please try again to delete the AI Search. Also, please allow at least 15 minutes for the Shared Private Links to be deleted completely, as it may take longer. [Extra - I tried to delete Shared Private Links but it’s been pending for a long while] There are occasions where your Shared Private Links remain in a deleting state for a long time (for example, 3 hours or more). In this case, please open a case with our Support team mentioning that you are having an issue with deleting AI Search due to a Shared Private Link not being deleted properly. Our team will take care of the issue from that point on!  ( 20 min )
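    The post walks through the portal steps above; as a hedged alternative sketch (not from the post), the same cleanup can likely be scripted with the azure-mgmt-search SDK by listing and deleting each Shared Private Link resource before retrying the service deletion. The resource names below are placeholders.

    ```python
    # Illustrative alternative to the portal steps described above (an assumption,
    # not from the post): list and delete Shared Private Link resources on an
    # AI Search service before deleting the service itself.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.search import SearchManagementClient

    subscription_id = "<subscription-id>"   # placeholder
    resource_group = "<resource-group>"     # placeholder
    service_name = "<search-service-name>"  # placeholder

    client = SearchManagementClient(DefaultAzureCredential(), subscription_id)

    for spl in client.shared_private_link_resources.list_by_service(
        resource_group, service_name
    ):
        print(f"Deleting shared private link: {spl.name}")
        client.shared_private_link_resources.begin_delete(
            resource_group, service_name, spl.name
        ).result()  # wait for each deletion; this can take several minutes
    ```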
  • Open

    On-premises data gateway May 2025 release
    Here is the May 2025 release of the on-premises data gateway (version 3000.270).  ( 6 min )
  • Open

    One Pipeline to Rule Them All: Ensuring CodeQL Scanning Results and Dependency Scanning Results Go to the Intended Repository
    “One Ring to rule them all, One Ring to find them, One Ring to bring them all, and in the darkness bind them.” – J.R.R. Tolkien, The Lord of the Rings In the world of code scanning and dependency scanning, your pipeline is the One Ring—a single definition that can orchestrate scans across multiple repositories. […] The post One Pipeline to Rule Them All: Ensuring CodeQL Scanning Results and Dependency Scanning Results Go to the Intended Repository appeared first on Azure DevOps Blog.  ( 26 min )
  • Open

    Introducing the Microsoft 365 Agents Toolkit
    Read how the Microsoft 365 Agents Toolkit, an evolution of Microsoft Teams Toolkit, is designed to help developers build agents and apps for Microsoft 365 Copilot, Microsoft Teams, and Microsoft 365. The post Introducing the Microsoft 365 Agents Toolkit appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Transforming Android Development: Unveiling MediaTek’s latest chipset with Microsoft's Phi models
    Imagine running advanced AI applications—like intelligent copilots and Retrieval-Augmented Generation (RAG)—directly on Android devices, completely offline. With the rapid evolution of Neural Processing Units (NPUs), this is no longer a future vision—it’s happening now. Optimized AI at the Edge: Phi-4-mini on MediaTek Thanks to MediaTek’s conversion and quantization tools, Microsoft’s Phi-4-mini and Phi-4-mini-reasoning models are now optimized for MediaTek NPUs. This collaboration empowers developers to build fast, responsive, and privacy-preserving AI experiences on Android—without needing cloud connectivity. MediaTek’s flagship Dimensity 9400 and 9400+ platform with Dimensity GenAI Toolkit 2.0 delivers excellent performance with the Phi-4 mini (3.8B) model where ​prefill speed is >800 t…  ( 29 min )
    Voice-enabled AI Agents: transforming customer engagement with Azure AI Speech
    We are seeing customers such as Indiana Pacers and Coca-Cola transform customer experiences using Azure AI Speech to power customer interactions. And in the new era of agentic AI, voice is increasingly becoming an important modality for interacting with AI Agents in a natural way. Today, we are excited to announce a number of new capabilities in Azure AI Speech that will further propel our customers into the voice-enabled agentic AI era, as AI Agents are being rapidly adopted by enterprise customers across a wide variety of industries. The updates we are announcing today include the new Voice Live API (Public Preview), which can be used to help simplify creating voice agents that provide fluent and natural speech-to-speech conversational experiences. To provide a robust conversatio…  ( 47 min )

  • Open

    How Anker soundcore Uses Azure AI Speech for Seamless Multilingual Communication
    “We’re excited to be part of Microsoft Build and to demonstrate what’s possible when AI meets everyday tech. Built on deep technical integration and shared innovation goals, we’re able to deliver smarter, more intuitive, and responsive audio products for users around the world.”— Dongping Zhao, President of Anker Innovations   Imagine talking to anyone, no matter the language. soundcore, Anker Innovations' audio brand, has incorporated Microsoft Azure AI Speech services into its new devices to eliminate language barriers. These wireless earbuds now offer real-time speech translation and voice interactions, showcasing how cloud-based AI speech technologies can create immersive, multilingual experiences on consumer devices.   Anker’s Mission and Challenges Anker Innovations is a global sma…  ( 28 min )
    NLWeb Pioneer Q&A: Delish
    As a Product Marketing Director at Azure, I’ve had a front-row seat to the evolution of generative AI—from early text-based bots to today’s intelligent systems that reason across images, documents, and real-world context. But some of the most exciting shifts aren’t just about models—they’re about the web itself. Enter NLWeb, Microsoft’s newly announced open initiative to make websites conversational and AI-native. Imagine asking a recipe site like Delish, “What’s a quick, vegan dinner I can make with mushrooms and pasta?”—and getting a smart, tailored response that pulls directly from your structured content, without relying on rigid keyword search.  Built on familiar tools like Schema.org and vector search, NLWeb is designed to be easy for developers and impactful for users. It’s led by R.V. Guha—the mind behind Schema.org, RSS, and RDF—who recently joined Microsoft to help reimagine the open web for the AI era. The new GitHub repo is now live for developers to explore and build upon. Here’s how NLWeb pioneer Delish is thinking about this shift: Q1: What inspired your team to try NLWeb?  We saw an opportunity to improve discovery for consumers by delivering more relevant results through faceted, natural language queries. Q2: How did the setup process go? Any surprises? Collaborating with the Microsoft engineers was valuable during the test planning, and the initial prototype results were promising as we explored the potential. Q3: What query or interaction made NLWeb click for you?  Users can ask naturally phrased questions—based on cultural moments or food types—and get accurate (and delicious) results. Q4: How are you blending NLWeb with your current experience?  We will actively be testing NLWeb embedded as part of the discovery experience on the Delish site. Q5: If NLWeb reaches its full potential, what could it unlock for your users or the web?  We’re excited about the potential to better serve our customers, drive deeper engagement, increase time spent and grow higher LTV audiences.  ( 21 min )
    From Extraction to Insight: Evolving Azure AI Content Understanding with Reasoning and Enrichment
    First introduced in public preview last year, Azure AI Content Understanding enables you to convert unstructured content—documents, audio, video, text, and images—into structured data. The service is designed to support consistent, high-quality output, directed improvements, built-in enrichment, and robust pre-processing to accelerate workflows and reduce cost. A New Chapter in Content Understanding Since our launch we’ve seen customers pushing the boundaries to go beyond simple data extraction with agentic solutions fully automating decisions. This requires more than just extracting fields. For example, a healthcare insurance provider’s decision to pay a claim requires cross-checking against insurance policies, applicable contracts, the patient’s medical history, and prescription datapoints. To …  ( 32 min )
    NLWeb Pioneer Q&A: Qdrant
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Qdrant is one of those early innovators shaping where this goes. Known for their open-source vector database purpose-built for semantic search, Qdrant is helping developers supercharge the intelligence of their search interfaces—without rebuilding their entire stack. By integrating with NLWeb, Qdrant makes it easy to add fast, intent-aware, and context-rich search to websites and apps of any size. Below, the Qdrant team shares how this integration came together, what developers can expect, and why NLWeb might be the unlock that brings semantic search to the mainstream.     Q1: Why Qdrant Sees NLWeb as a Practical St…  ( 27 min )
    NLWeb Pioneer Q&A: Eventbrite
    We just announced NLWeb at Microsoft Build—starting with a GitHub repo to help developers explore it, and a short list of enterprise pioneers testing it out in the real world. Eventbrite is one of those early innovators shaping where this goes. Known for their global events platform that helps millions discover and attend unique experiences, Eventbrite is now exploring how NLWeb can make that discovery process even more personal, conversational, and precise. Whether you're looking for a date night idea or planning your next creative outing, NLWeb enables natural, expressive search queries that understand intent—not just keywords. Below, Eventbrite shares their journey piloting NLWeb: what made them say yes, how it’s performing in early tests, and what they see on the horizon. Q1: What ins…  ( 27 min )
    NLWeb Pioneer Q&A: Inception
    As a Director of AI Product Marketing at Azure, I have spent the last three years deep in the GenAI ecosystem. From internal chatbots that evolved into multi-model agent orchestrations, I feel like I am witnessing history in my job and life. Every day I wonder what is coming next, to build on top of everything we have developed in just a short amount of time. Now I am excited to share the newest chapter in this incredible run of innovation with the announcement of Microsoft’s launch of NLWeb today, an open project aimed at facilitating the creation of natural language interfaces for websites, enabling users to interact with site content through natural language queries. The initiative strives to empower web publishers by making it easier to develop AI-driven applications that enhance user …  ( 24 min )
    NLWeb Pioneer Q&A: O'Reilly
    Over the past three years working in generative AI at Microsoft, I’ve had a front-row seat to some of the most exciting shifts in tech. From chat assistants to multi-modal agents, every layer we’ve built has opened up new questions—and new responsibilities. One of the biggest ones: how do we make the web itself more useful to humans and AI? Enter NLWeb—a new open project from Microsoft that aims to make websites natively conversational. Instead of keyword searches or rigid menus, users (and agents) can query a site’s content in natural language. For developers and web publishers, NLWeb offers a practical way to tap into your existing structured data—like Schema.org markup or product catalogs—and expose that content intelligently, without reinventing your site. I’m especially excited that O…  ( 27 min )
    NLWeb Pioneer Q&A: Tripadvisor
    As a Product Marketing Director at Azure, I’ve spent the last few years in the thick of generative AI, from early chatbot experiments to sophisticated agentic systems that now reason across modalities. It’s felt like living inside a tech documentary, with every month adding a new chapter. Today, I’m excited to share what might be one of the most quietly profound shifts yet: NLWeb. Just announced by Microsoft, NLWeb is an open initiative to make the entire web more conversational—enabling users (and AI agents) to interact with websites via natural language instead of dropdowns and keyword searches. For developers and publishers, NLWeb simplifies how you expose your content to AI by using structured data formats like Schema.org, vector indexes, and LLM tools you probably already use. What ma…  ( 29 min )
    Announcing Azure AI Language new features to accelerate your agent development
    In today’s fast-moving AI landscape, businesses are racing to embed conversational intelligence and automation into every customer touchpoint. However, building a reliable and scalable agent from scratch remains complex and time-consuming. Developers tell us they need a streamlined way to map diverse user intents, craft accurate responses, and support global audiences without wrestling with ad-hoc integrations. At the same time, rising expectations around data privacy and compliance introduce yet another layer of overhead. To meet these challenges, today, we’re excited to announce a suite of powerful new tools and templates designed to help developers build intelligent agents faster than ever with our Azure AI Language service. Working together with Azure AI Agent Service, whether you’re t…  ( 37 min )
    Unlock new dimensions of creativity: Gpt-image-1 and Sora
    We're excited to announce Sora, now available in AI Foundry. Learn more about Sora in Azure OpenAI, and its API and video playground, here. To dive deeper into the video playground experience, check out this blog post. Why does this matter? AI is no longer *just* about text. Here’s why: multimodal models enable deeper understanding. Today AI doesn’t just understand words: it understands context, visuals, and motion. From prompt to product, imagine going from a product description to a full marketing campaign. In many ways, the rise of multimodal AI today is comparable to the inception of photography in the 19th century—introducing a new creative medium that, like photography, didn’t replace painting but expanded the boundaries of artistic expression. Just a little over a month ago, we rele…  ( 29 min )
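    As a quick orientation (an illustrative sketch, not code from the announcement), generating an image with a gpt-image-1 deployment through the openai Python SDK against Azure OpenAI looks roughly like this; the endpoint, API version, and deployment name are assumptions.

    ```python
    # Illustrative sketch (deployment name, endpoint, and API version are assumptions):
    # generate an image with a gpt-image-1 deployment on Azure OpenAI and save the
    # returned base64 payload to disk.
    import base64
    import os

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2025-04-01-preview",  # assumed preview API version
    )

    result = client.images.generate(
        model="gpt-image-1",  # your Azure deployment name for the model
        prompt="A product hero shot of a reusable water bottle on a mossy rock",
        size="1024x1024",
    )

    # gpt-image-1 returns base64-encoded image data.
    with open("campaign_hero.png", "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
    ```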
    Introducing Built-in AgentOps Tools in Azure AI Foundry Agent Service
    A New Era of Agent Intelligence We’re thrilled to announce the public preview of Tracing, Evaluation, and Monitoring in Azure AI Foundry Agent Service, features designed to revolutionize how developers build, debug, and optimize AI agents. With detailed traces and customizable evaluators, AgentOps is here to bridge the gap between observability and performance improvement. Whether you’re managing a simple chatbot or a complex multi-agent system, this is the tool you’ve been waiting for. What Makes AgentOps Unique? AgentOps offers an unparalleled suite of functionalities that cater to the challenges AI developers face today. Here are the two cornerstone features: 1. Integrated Tracing Functionality AgentOps provides full execution tracing, offering a detailed, step-by-step breakdown of how…  ( 25 min )
    Azure OpenAI Fine Tuning is Everywhere
    If you’re building an AI agent and need to customize its behavior for specific domains, adjust its interaction tone, or improve its tool selection, you should be fine tuning! Our customers agree, but the challenge has been twofold: regional availability and the cost of experimentation. Today, we’re bringing fine tuning of Azure OpenAI models to a dozen new AI Foundry regions with the public previews of Global Training and Developer Tier. With reduced pricing and global availability, AI Foundry makes fine tuning the latest OpenAI models on Azure more accessible and affordable than ever to bring your agents to life. What is Global Training? Global Training expands the reach of model customization with the affordable pricing of our other Global offerings: 🏋️‍♂️ Train the latest OpenAI models …  ( 27 min )
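    For context on what a fine-tuning run involves (an illustrative sketch, not from the announcement), creating an Azure OpenAI fine-tuning job with the openai Python SDK looks roughly like this; the API version, base model identifier, and training file are assumptions.

    ```python
    # Illustrative sketch (file, model name, and API version are assumptions, not
    # from the post): kick off an Azure OpenAI fine-tuning job with the openai SDK.
    import os

    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2025-02-01-preview",  # assumed API version
    )

    # Upload JSONL training data, then start the job against a base model.
    training = client.files.create(
        file=open("train.jsonl", "rb"), purpose="fine-tune"
    )

    job = client.fine_tuning.jobs.create(
        training_file=training.id,
        model="gpt-4o-mini-2024-07-18",  # assumed base model identifier
    )
    print(job.id, job.status)
    ```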
    Expand Azure AI Foundry Agent Service with More Knowledge and Action Tools
    Customers often face challenges deploying AI agents that can handle complex, real-world scenarios due to limited tool access. To empower AI agents with more capabilities, Azure AI Foundry Agent Service now supports a growing set of integrated tools: Grounding with Bing Custom Search (preview), SharePoint (coming soon), Azure Logic Apps (preview), Triggers (preview), and third-party tools, enabling agents to retrieve richer information, take more meaningful actions, and deliver intelligent, goal-driven results. This expansion helps organizations build smarter, more adaptable agents that can better align with dynamic business needs.   Today, we are thrilled to announce Azure AI Foundry Agent Service is now Generally Available, including the following tools: Azure AI Search, Azure Functions, Code …  ( 43 min )
    Building a Digital Workforce with Multi-Agents in Azure AI Foundry Agent Service
    As organizations increasingly rely on AI to automate complex tasks and scale digital operations, the ability to coordinate multiple agents in a single, cohesive system is becoming critical. Moving beyond single-agent architectures to multi-agent systems enables richer, more dynamic automation – where specialized agents can collaborate, share context, and complete multi-step processes with minimal human intervention. This shift is unlocking the potential for organizations to build full digital workforces that can manage everything from customer support to supply chain automation.   Why Multi-Agents Matter Building a single agent is often straightforward – it’s designed to perform a specific task, like answering common support queries or generating summaries from documents. However, real-wor…  ( 49 min )
    Announcing General Availability of Azure AI Foundry Agent Service
    Agents are revolutionizing business automation by evolving from simple chatbots into sophisticated, collaborative systems. Fueled by advancements in model reasoning and efficiency, these agents can now handle complex, multi-step processes with speed and accuracy. This marks a shift from isolated tools to dynamic, scalable agent workforces that coordinate tasks, share context, and adapt in real time. This transformation enables businesses to optimize operations and elevate customer experiences through AI-driven workflows. Instead of relying on single-purpose bots, organizations are deploying ecosystems of specialized agents that can interact, reason, and respond to changing conditions with minimal oversight. Yet building and scaling these systems is not without challenges. Developers must i…  ( 39 min )
  • Open

    Announcing Azure Command Launcher for Java
    Optimizing JVM Configuration for Azure Deployments Tuning the Java Virtual Machine (JVM) for cloud deployments is notoriously challenging. Over 30% of developers tend to deploy Java workloads with no JVM configuration at all, thereby relying on the default settings of the HotSpot JVM. The default settings in OpenJDK are intentionally conservative, designed to work across […] The post Announcing Azure Command Launcher for Java appeared first on Microsoft for Java Developers.  ( 24 min )
    Vibe coding with GitHub Copilot: Agent mode and MCP support in JetBrains and Eclipse
    Today, we’re excited to announce that GitHub Copilot Agent Mode and MCP support are now in public preview for both JetBrains and Eclipse! Whether you’re working in IntelliJ IDEA, PyCharm, WebStorm or Eclipse, you can now access Copilot’s intelligent agent features and seamlessly manage your project workflows, all from within your IDE.  In this post, we’ll […] The post Vibe coding with GitHub Copilot: Agent mode and MCP support in JetBrains and Eclipse appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    Efficient JSON loading to Eventhouse in Fabric Real-Time Intelligence
    In the era of big data, efficiently parsing and analyzing JSON data is critical for gaining actionable insights. Leveraging Kusto, a powerful query engine developed by Microsoft, enhances the efficiency of handling JSON data, making it simpler and faster to derive meaningful patterns and trends. Perhaps more importantly, Kusto’s ability to easily parse simple or … Continue reading “Efficient JSON loading to Eventhouse in Fabric Real-Time Intelligence”  ( 8 min )
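    To make the JSON-parsing point concrete (an illustrative sketch, not from the post), the snippet below queries an Eventhouse KQL database from Python and uses KQL's parse_json() to project fields out of a raw JSON column; the cluster URI, database, table, and column names are assumptions.

    ```python
    # Illustrative sketch (cluster URI, database, and table/column names are
    # assumptions): query an Eventhouse KQL database and parse a raw JSON column
    # with KQL's parse_json() / dynamic support.
    from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

    cluster_uri = "https://<eventhouse-query-uri>"   # placeholder
    database = "<kql-database>"                      # placeholder

    kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(cluster_uri)
    client = KustoClient(kcsb)

    query = """
    RawEvents
    | extend payload = parse_json(RawJson)
    | project Timestamp, DeviceId = tostring(payload.deviceId),
              Temperature = toreal(payload.readings.temperature)
    | take 10
    """

    response = client.execute(database, query)
    for row in response.primary_results[0]:
        print(row["Timestamp"], row["DeviceId"], row["Temperature"])
    ```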
    Simplifying Data Ingestion with Copy job – Introducing Change Data Capture (CDC) Support (Preview)
    Copy job is designed to simplify your data ingestion experience without compromise from any source to any destination. It supports multiple data delivery styles, including both batch and incremental copy, providing the flexibility to meet diverse needs. We are excited to introduce the preview of native Change Data Capture (CDC) support in Copy job that … Continue reading “Simplifying Data Ingestion with Copy job – Introducing Change Data Capture (CDC) Support (Preview)”  ( 7 min )
    New Copilot experience in Dataflow Gen2: Natural language to custom column
    Dataflows have great capabilities to help you add new columns. It has an extensive section in the ribbon that helps you create new columns based on the data of your table. For scenarios in which you wish to have full control over how the new column should be created, you can opt for the Custom … Continue reading “New Copilot experience in Dataflow Gen2: Natural language to custom column”  ( 6 min )
    Continuous Ingestion from Azure Storage to Eventhouse (Preview)
    The integration of Azure Storage with Fabric Eventhouse for continuous ingestion represents a significant simplification of data ingestion process from Azure Storage to Eventhouse in Fabric Real-Time Intelligence. It automates extraction and loading from Azure storage and facilitates near real-time updates to Eventhouse KQL DB tables. With this feature, it is now easier for organizations … Continue reading “Continuous Ingestion from Azure Storage to Eventhouse (Preview)”  ( 9 min )
    Extracting deeper insights with Fabric Data Agents in Copilot in Power BI
    Co-author: Joanne Wong We’re excited to announce the upcoming integration of Fabric data agent with Copilot in Power BI, enhancing your ability to extract insights seamlessly. What’s new? A new chat with your data experience is launching soon in Power BI– a full-screen Copilot for users to ask natural language questions and receive accurate, relevant … Continue reading “Extracting deeper insights with Fabric Data Agents in Copilot in Power BI”  ( 7 min )
  • Open

    Configuring Key Vault with Java App Service Linux
    In this blog post we’ll cover the process of integrating Key Vault in the Java Spring Boot app that runs on App Service Linux.  ( 6 min )
  • Open

    Spring AI 1.0 GA is Here - Build Java AI Apps End-to-End on Azure Today
    Spring AI 1.0 is now generally available, and it is ready to help Java developers bring the power of AI into their Spring Boot applications. This release is the result of open collaboration and contributions across the Spring and Microsoft Azure engineering teams. Together, they have made it simple for Java developers to integrate LLMs, vector search, memory, and agentic workflows using the patterns they already know. Why This Matters for Java Developers? Spring AI 1.0, built and maintained by the Spring team at Broadcom with active contributions from Microsoft Azure, delivers an intuitive and powerful foundation for building intelligent apps. You can plug AI into existing Spring Boot apps with minimal friction, using starters and conventions familiar to every Spring developer. Whether you…  ( 32 min )
    Red Hat OpenShift Virtualization on Azure Red Hat OpenShift in Public Preview
    Today we're excited to announce the public preview of Red Hat OpenShift Virtualization on Azure Red Hat OpenShift (ARO). This new capability brings you:  The best of both worlds - Run your virtual machines and containers on a single platform with unified management  Modernization at your pace - Keep critical VMs running while gradually moving components to containers when you're ready  Maximum value from Azure - Leverage your existing Azure benefits and commitments while optimizing resource usage across all workloads  This collaboration between Microsoft and Red Hat helps you simplify operations, reduce costs, and accelerate your cloud journey - all while preserving your existing VM investments.  Why Organizations Need a Unified VM and Container Platform  Organizations today face sig…  ( 39 min )
    Configure health probes for Quarkus Apps on Azure Container Apps
    Overview This blog post shows you how to enable SmallRye Health in Quarkus applications with health probes in Azure Container Apps. These techniques help you effectively monitor the health and status of your Quarkus instances. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app uses SmallRye Health to expose application health to Azure Container Apps.   The Quarkus SmallRye Health extension provides the following REST endpoints:   /q/health/live:  Indicates that the application is up and running.   /q/health/ready:  Indicates that the application is ready to accept requests.   /q/health/started:  Indicates that the application starts…  ( 49 min )
    How to debug Azure WebJobs Storage Extensions SDK in dotnet isolated blob trigger function app
    This blog post provides step-by-step guidance on how to debug the Azure WebJobs Extension SDK both locally and remotely on the function app.   As you all know, Azure Function Apps are built on top of the Azure WebJobs SDK, which is an open-source framework that simplifies the task of writing background processing code that runs in Azure. The Azure WebJobs SDK includes a declarative binding and trigger system, commonly referred to as Azure WebJobs Extensions. These extensions allow developers to easily integrate with various Azure services and define how their background tasks respond to events.   If you encounter any issues while using the Azure WebJobs Extensions SDK, the best way to report them is via GitHub Issues. You can report bugs, request features, or ask questions there. When r…  ( 26 min )
    Public Preview: GitHub Copilot App Modernization for Java
    Modernizing Java applications and migrating to the cloud is typically a complex, labor-intensive, and fragmented process. GitHub Copilot App Modernization for Java is a powerful solution designed to simplify and accelerate your journey to the cloud. App Modernization and upgrade for Java offers an intelligent, guided approach that automates Java version upgrade and repetitive tasks and improves consistency — saving time, reducing risks, and accelerating time-to-cloud. GitHub Copilot App Modernization and upgrade for Java is in public preview and offered in a single extension pack, available in the VS Code marketplace. https://marketplace.visualstudio.com/items?itemName=vscjava.vscode-app-mod-pack   The GitHub Copilot App Modernization for Java provides six distinct value pillars, each spe…  ( 27 min )
    Powering the Next Generation of AI Apps and Agents on the Azure Application Platform
    Generative AI is already transforming how businesses operate, with organizations seeing an average return of 3.7x for every $1 of investment [The Business Opportunity of AI, IDC study commissioned by Microsoft]. Developers sit at the center of this transformation, and their need for speed, flexibility, and familiarity with existing tools is driving the demand for application platforms that integrate AI seamlessly into their current development workflows. To fully realize the potential of generative AI in applications, organizations must provide developers with frictionless access to AI models, frameworks, and environments that enable them to scale AI applications. We see this in action at organizations like Accenture, Assembly Software, Carvana, Coldplay (Pixel Lab), Global Travel Collecti…  ( 38 min )
    New Observability & Debugging Capabilities for Azure Container Apps
    Azure Container Apps gives you a strong foundation for monitoring and debugging, with built-in features that give you a holistic view of your container app’s health throughout its application lifecycle. As applications grow, developers need even deeper visibility and faster ways to troubleshoot issues. That’s why we’re excited to announce new observability and debugging features. These features will help you further monitor your environment and identify root causes faster. Generally Available: OpenTelemetry agent in Azure Container Apps OpenTelemetry agent in Azure Container Apps is now generally available. This feature enables you to use open-source standards to send your app’s data without setting up the OpenTelemetry collector yourself. You can use the managed agent to choose where to s…  ( 28 min )
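    As a rough sketch of how an app might feed the managed agent (illustrative, not from the announcement), the Python snippet below emits a span with the OpenTelemetry SDK over OTLP; it assumes the Container Apps OpenTelemetry agent supplies a collector endpoint through the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable, and the service and span names are placeholders.

    ```python
    # Illustrative sketch (not from the post): emit a trace from a containerized
    # Python app with the OpenTelemetry SDK and an OTLP exporter. Assumes the
    # managed agent exposes a collector endpoint via OTEL_EXPORTER_OTLP_ENDPOINT.
    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor

    provider = TracerProvider(resource=Resource.create({"service.name": "orders-api"}))
    # OTLPSpanExporter reads OTEL_EXPORTER_OTLP_ENDPOINT when no endpoint is passed.
    provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter()))
    trace.set_tracer_provider(provider)

    tracer = trace.get_tracer("orders-api")
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", "12345")
        # ... application work here ...
    ```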
    What's New in Azure App Service at #MSBuild 2025
    New App Service Premium v4 plan  The new App Service Premium v4 (Pv4) plan has entered public preview at Microsoft Build 2025 for both Windows and Linux!  This new plan is designed to support today's highly demanding application performance, scale, and budgets. Built on the latest "v6" general-purpose virtual machines and memory-optimized x64 Azure hardware with faster processors and NVMe temporary storage, it provides a noticeable performance uplift over prior generations of App Service Premium plans (over 25% in early testing). The Premium v4 offering includes nine new sizes ranging from P0v4 with a single virtual CPU and 4GB RAM all the way up through P5mv4, with 32 virtual CPUs and 256GB RAM, providing CPU and memory options to meet any business need. App Service Premium v4 plans provi…  ( 57 min )
    Unlocking new AI workloads in Azure Container Apps
    The rapid rise of AI has unlocked powerful new scenarios—from AI-powered chatbots and image generation to advanced agents. However, deploying AI models at scale presents real challenges including managing compute-intensive workloads, complexities with model deployment, and executing untrusted AI-generated code safely. Azure Container Apps addresses these challenges by offering a fully managed, flexible, serverless container platform designed for modern, cloud-native applications – now with GA of Dedicated GPUs, improved integrations for deploying Foundry models to Azure Container Apps, and the private preview of GPU powered dynamic sessions. Foundry Model Integration Azure AI Foundry Models support a wide collection of ready-to-deploy models. Traditionally, these models can be deployed wi…  ( 26 min )
    Azure App Service Premium v4 plan is now in public preview
    Azure App Service Premium v4 plan is the latest offering in the Azure App Service family, designed to deliver enhanced performance, scalability, and cost efficiency. We are excited to announce the public preview of this major upgrade to one of our most popular services. Key benefits: Fully managed platform-as-a-service (PaaS) to run your favorite web stack, on both Windows and Linux. Built using next-gen Azure hardware for higher performance and reliability. Lower total cost of ownership with new pricing tailored for large-scale app modernization projects. and more to come! Fully managed platform-as-a-service (PaaS) As the next generation of one of the leading PaaS solutions, Premium v4 abstracts infrastructure management, allowing businesses to focus on application development rather th…  ( 26 min )
    Azure Functions – Build 2025
    Azure Functions – Build 2025 update With Microsoft Build underway, the team is excited to provide an update on the latest releases in Azure Functions this year. Customers are leveraging Azure Functions to build AI solutions, thanks to its serverless capabilities that scale on demand and its native integration for processing real-time data. The newly launched capabilities enable the creation of AI and agentic applications with enhanced offerings, built-in security, and a pay-as-you-go model. Real-time retrieval augmented generation, making organizational data accessible through semantic search Native event driven tool function calling with the AI Foundry Agent service Hosted Model Context Protocol servers. Support for Flex consumption plans, including zone redundancy, increased regions, an…  ( 51 min )
    Build secure, flexible, AI-enabled applications with Azure Kubernetes Service
    Building AI applications has never been more accessible. With advancements in tools and platforms, developers can now create sophisticated AI solutions that drive innovation and efficiency across various industries. For many, Kubernetes stands out as a natural choice for running AI applications and agents due to its robust orchestration capabilities, scalability, and flexibility.  In this blog, we will explore the latest advancements in Azure Kubernetes Service (AKS) that we are announcing at Microsoft Build 2025, designed to enhance flexibility, bolster security, and seamlessly integrate AI capabilities into your Kubernetes environments. These updates will empower developers to create sophisticated AI solutions, improve operational efficiency, and drive innovation across various industries. Let’…  ( 38 min )
    Announcing Workflow in Azure Container Apps with the Durable task scheduler – Now in Preview!
    We are thrilled to announce the durable workflow capabilities in Azure Container Apps with the Durable task scheduler (preview). This new feature brings powerful workflow capabilities to Azure Container Apps, enabling developers to build and manage complex, durable workflows as code with ease. What is Workflow and the Durable task scheduler? If you’ve missed the initial announcement of the durable task scheduler, please see these existing blog posts: https://aka.ms/dts-early-access https://aka.ms/dts-public-preview In summary, the Durable task scheduler is a fully managed backend for durable execution. Durable Execution is a fault-tolerant approach to running code, designed to handle failures gracefully through automatic retries and state persistence. It is built on three core principle…  ( 26 min )
    What's new in Azure Container Apps at Build'25
    Azure Container Apps is a fully managed serverless container service that runs microservices and containerized applications on Azure. It provides built-in autoscaling, including scale to zero, and offers simplified developer experience with support for multiple programming languages and frameworks, including special features built for .NET and Java. Container Apps also provides many advanced networking and monitoring capabilities, offering seamless deployment and management of containerized applications without the need to manage underlying infrastructure. Following the features announced at Ignite’24, we've continued to innovate and enhance Azure Container Apps. We announced the general availability of Serverless GPUs, enabling seamless AI workloads with automatic scaling, optimized cold …  ( 37 min )
    Building Durable and Deterministic Multi-Agent Orchestrations with Durable Execution
    Durable Execution Durable Execution is a reliable approach to running code, designed to handle failures smoothly with automatic retries and state persistence. It is built on three core principles: Incremental Execution: Each operation runs independently and in order. State Persistence: The output of each step is durably saved to ensure progress is not lost. Fault Tolerance: If a step fails, the operation is retried from the last successful step, skipping previously completed steps. Durable Execution is particularly beneficial for scenarios requiring stateful chaining of operations, such as order-processing applications, data processing pipelines, ETL (extract, transform, load), and as we'll get into in this post, intelligent applications with AI agents. Durable execution simplifies the i…  ( 32 min )
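    To make those three principles concrete, here is a toy sketch in plain Python (not any particular SDK): a runner that checkpoints each step's output so a rerun after a failure resumes from the last successful step. The order-processing step names are made up for illustration.

```python
# Illustrative only: a tiny "durable" runner with file-based checkpoints.
import json, pathlib

CHECKPOINTS = pathlib.Path("checkpoints.json")

def run_durably(steps, state=None):
    done = json.loads(CHECKPOINTS.read_text()) if CHECKPOINTS.exists() else {}
    for name, fn in steps:                        # incremental execution, in order
        if name in done:                          # replay: skip completed steps
            state = done[name]
            continue
        state = fn(state)                         # may raise; a rerun resumes here
        done[name] = state
        CHECKPOINTS.write_text(json.dumps(done))  # state persistence after each step
    return state

# Example: a stateful order-processing chain
steps = [
    ("reserve_inventory", lambda _: {"order": 42, "reserved": True}),
    ("charge_payment",    lambda s: {**s, "charged": True}),
    ("ship_order",        lambda s: {**s, "shipped": True}),
]
print(run_durably(steps))
```

    If charge_payment fails, a second run skips reserve_inventory and resumes from the failure point; that replay-from-checkpoint pattern is what a managed durable-execution backend handles for you at scale.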
    Building the Agentic Future
    As a business built by developers, for developers, Microsoft has spent decades making it faster, easier and more exciting to create great software. And developers everywhere have turned everything from BASIC and the .NET Framework, to Azure, VS Code, GitHub and more into the digital world we all live in today. But nothing compares to what’s on the horizon as agentic AI redefines both how we build and the apps we’re building. In fact, the promise of agentic AI is so strong that market forecasts predict we’re on track to reach 1.3 billion AI Agents by 2028. Our own data, from 1,500 organizations around the world, shows agent capabilities have jumped as a driver for AI applications from near last to a top three priority when comparing deployments earlier this year to applications being define…  ( 42 min )
    New Networking Capabilities in Azure Container Apps
    New Networking Capabilities in Azure Container Apps Azure Container Apps is your go-to fully managed serverless container service that enables you to deploy and run containerized applications with per-second billing and autoscaling without having to manage infrastructure.   Today, Azure Container Apps is thrilled to announce several new enterprise capabilities that will take the flexibility, security, and manageability of your containerized applications to the next level. These capabilities include premium ingress, rule-based routing, private endpoints, Azure Arc integration, and planned maintenance. Let’s dive into the advanced networking features that Azure Container Apps has introduced. Public Preview: Premium Ingress in Azure Container Apps Azure Container Apps now supports premium ing…  ( 24 min )
    Reimagining App Modernization for the Era of AI
    If you’ve ever been to Microsoft Build, you know it’s not just another tech conference, it’s where ideas spark, connections happen, and the future of software takes shape. This year in Seattle, the energy is off the charts. And for those of us passionate about app modernization, it’s a moment we’ve been building toward (pun intended). At Microsoft, we believe modernization is more than just updating old systems—it’s about unlocking new possibilities with AI at the center. And at Build 2025, we’re showing exactly how we’re doing that. AI Is Changing Everything—And We’re Here for It Let’s start with the big picture: AI is transforming the entire software development lifecycle. From how we build apps to how we manage and scale them, AI is the force multiplier, reshaping how we work, build, de…  ( 35 min )
  • Open

    Announcing new features and updates in Azure Event Grid
    Discover powerful new features in Azure Event Grid, enhancing its functionality and user experience. This fully managed event broker now supports multi-protocol interoperability, including MQTT, for scalable messaging. It seamlessly connects Microsoft-native and third-party services, enabling robust event-driven applications. Streamline event management with flexible push-pull communication patterns. We are thrilled to announce the General Availability of cross-tenant delivery to Event Hubs, Service Bus, Storage Queues, and dead letter storage using managed identity with federated identity credentials (FIC) from Azure Event Grid topics, domains, system topics, and partner topics. New cross-tenant scenarios, currently in Public Preview, enable delivery to Event Hubs, webhooks, and dead let…  ( 23 min )
  • Open

    What’s new in Observability at Build 2025
    At Build 2025, we are excited to announce new features in Azure Monitor designed to enhance observability for developers and SREs, making it easier for you to streamline troubleshooting, improve monitoring efficiency, and gain deeper insights into application performance. With our new AI-powered tools, customizable alerts, and advanced visualization capabilities, we’re empowering developers to deliver high-quality, resilient applications with greater operational efficiency. AI-Powered Troubleshooting Capabilities We are excited to disclose two new AI-powered features, as well as share an update to a GA feature, which enhance troubleshooting and monitoring: AI-powered investigations (Public Preview): Identifies possible explanations for service degradations via automated analyses, consolid…  ( 29 min )
    Announcing the Public Preview of Azure Monitor health models
    Troubleshooting modern cloud-native workloads has become increasingly complex. As applications scale across distributed services and regions, pinpointing the root cause of performance degradation or outages often requires navigating a maze of disconnected signals, metrics, and alerts. This fragmented experience slows down troubleshooting and burdens engineering teams with manual correlation work.  We address these challenges by introducing a unified, intelligent concept of workload health that’s enriched with application context. Health models streamline how you monitor, assess, and respond to issues affecting your workloads. Built on Azure Service Groups, they provide an out-of-the-box model tailored to your environment, consolidate signals to reduce alert noise, and surface actionable in…  ( 29 min )
    Enhance your Azure visualizations using Azure Monitor dashboards with Grafana
    In line with our commitment to open-source solutions, we are announcing the public preview of Azure Monitor dashboards with Grafana. This service offers a powerful solution for cloud-native monitoring and visualizing Prometheus metrics. Dashboards with Grafana enable you to create and edit Grafana dashboards directly in the Azure portal without additional cost and less administrative overhead compared to self-hosting Grafana or using managed Grafana services. Start quickly with pre-built and community dashboards Pre-built Grafana dashboards for Azure Kubernetes Services, Azure Monitor, and dozens of other Azure resources are included and enabled by default. Additionally, you can import dashboards from thousands of publicly available Grafana community and open-source dashboards for Prometh…  ( 22 min )
    Public Preview: Simple Log Alerts in Azure Monitor
    Public Preview: Simple Log Alerts in Azure Monitor We are excited to announce the Public Preview of Simple Log Alerts in Azure Monitor, available starting in mid-May. This new feature is designed to provide a simplified and more intuitive experience for monitoring and alerting, enhancing your ability to detect and respond to issues in near real-time. Simple Log Alerts are a new type of Log Search Alerts in Azure Monitor, designed to provide a simpler and faster alternative to Log Search Alerts. Unlike Log Search Alerts that aggregate rows over a defined period, Simple Log Alerts evaluate each row individually. This feature is now available for customers using Basic Logs who want to enable alerting. Previously, when customers opted to configure the traces table in Azure Monitor Application …  ( 21 min )
    GA: Managed Prometheus visualizations in Azure Monitor for AKS — unified insights at your fingertips
    We’re thrilled to announce the general availability (GA) of Managed Prometheus visualizations in Azure Monitor for AKS, along with an enhanced, unified AKS Monitoring experience. Troubleshooting Kubernetes clusters is often time-consuming and complex, whether you're diagnosing failures, scaling issues, or performance bottlenecks. This redesign of the existing Insights experience brings all your key monitoring data into a single, streamlined view, reducing the time and effort it takes to diagnose, triage, and resolve problems, so you can keep your applications running smoothly with less manual work. By using Managed Prometheus, customers can also realize up to 80% savings on metrics costs and benefit from up to 90% faster blade load performance, delivering both a powerful and cost-efficient way…  ( 25 min )
    Public Preview: Smarter Troubleshooting in Azure Monitor with AI-powered Investigation
    Investigate smarter – click, analyze, and easily mitigate with Azure Monitor investigations! We are excited to introduce the public preview of Azure Monitor issue and investigation.      These new capabilities are designed to enhance your troubleshooting experience and streamline the process of resolving health degradations in your application and infrastructure. What it is  Azure Monitor investigation is an AI-powered, automated analysis designed to scan the telemetry gathered by Azure Monitor to troubleshoot and mitigate potential service health issues. Investigation provides a list of AI-generated findings that include a summary of what happened, potential causes, and steps for further troubleshooting and mitigation. Azure Monitor issue contains all observability related data and proce…  ( 23 min )
    Announcing the Launch of Customizable Email Subjects for Log Search Alerts V2 in Azure Monitor
    We are thrilled to announce the launch of a new feature in Azure Monitor: Customizable Email Subjects for Log Search Alerts V2, available during May. What it is Customizable Email Subjects for Log Search Alerts V2 is a new feature that enables customers to personalize the subject lines of alert emails, making it easier to quickly identify and respond to alerts with more relevant and specific information. How it works This feature allows you to override email subjects with dynamic values by concatenating information from the common schema and custom text. For example, you can customize email subjects to include specific details such as the name of the virtual machine (VM) or patching details, allowing for quick identification without opening the email. Getting Started To get started with Cu…  ( 22 min )
  • Open

    Supercharge AI development with new AI-powered features in Microsoft Dev Box
    AI is reshaping how we build, deploy, and scale software. As more apps become AI-powered, developers need environments that can keep up with speed and demands of innovation. We’ve heard from customers just how critical it is to have a zero-config experience when building AI applications, with streamlined access to compute, prebuilt models, and environment […] The post Supercharge AI development with new AI-powered features in Microsoft Dev Box appeared first on Develop from the cloud.  ( 32 min )
    Unlock developer potential with Microsoft Dev Box
    AI agents, model context protocols (MCPs), and emerging AI workflows are fundamentally transforming software development paradigms and broadening the range of applications we can create. Developers are expected to keep up with the pace of innovation, but that’s hard to do when working from traditional development environments or legacy VDI solutions. With Microsoft Dev Box, […] The post Unlock developer potential with Microsoft Dev Box appeared first on Develop from the cloud.  ( 27 min )
  • Open

    Introducing Microsoft 365 Copilot APIs
    Learn how Microsoft 365 Copilot APIs allow you to build solutions grounded in your organization’s content, context, and permissions, without needing to relocate or duplicate data. The post Introducing Microsoft 365 Copilot APIs appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Introducing Azure SRE Agent
    Today we’re thrilled to introduce Azure SRE Agent, an AI-powered tool that makes it easier to sustain production cloud environments. SRE Agent helps respond to incidents quickly and effectively, alleviating the toil of managing production environments. Overall, it results in better service uptime and reduced operational costs. SRE Agent leverages the reasoning capabilities of LLMs to identify the logs and metrics necessary for rapid root cause analysis and issue mitigation. Its advanced AI capabilities transform incident and infrastructure management in Azure, freeing engineers to focus on more meaningful work. To sign up for the SRE Agent preview, click here. As more companies move their services online, site reliability engineering (SRE) has become crucial to keeping critical systems reliab…  ( 28 min )

  • Open

    Announcing Public Preview of the GitHub Copilot app modernization for Java
    Modernizing Java applications and migrating to the cloud is often a complex, time-consuming, and fragmented process. GitHub Copilot app modernization for Java [and upgrade for Java] is a powerful solution designed to simplify and accelerate your journey to the cloud. Available now in Public Preview as a single extension pack in the Visual Studio Code Marketplace, […] The post Announcing Public Preview of the GitHub Copilot app modernization for Java appeared first on Microsoft for Java Developers.  ( 24 min )
  • Open

    From diagrams to dialogue: Introducing new multimodal functionality in Azure AI Search
    Introduction  We're thrilled to introduce a new suite of multimodal capabilities in Azure AI Search.     This set of features includes both new additions and incremental improvements that enable Azure AI Search to extract text from pages and inline images, generate image descriptions (verbalization), and create vision/text embeddings. It also facilitates storing cropped images in the knowledge store and returning text and image annotations in RAG (Retrieval Augmented Generation) applications to end users.  These features can be configured in our new Azure portal wizard with multimodal support or via the REST API 2025-05-01-preview version.  In addition, we're providing a new GitHub repo with sample code for a RAG app. This resource shows how you can use the index created by Azure AI Search…  ( 34 min )
    Introducing agentic retrieval in Azure AI Search
    Today we’re announcing agentic retrieval in Azure AI Search, a multiturn query engine that plans and runs its own retrieval strategy for improved answer relevance. Compared to traditional, single-shot RAG, agentic retrieval improves answer relevance to complex questions by up to 40%.  It transforms queries, runs parallel searches, and delivers results tuned for agents, along with references and a query activity log. Now available in public preview. What is agentic retrieval? Agentic retrieval in Azure AI Search uses a new query architecture that incorporates user conversation history and an Azure OpenAI model to plan, retrieve and synthesize queries. Here's how it works: An LLM analyzes the entire chat thread to identify the underlying information need. Instead of a single, catch-all que…  ( 26 min )
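    The sketch below illustrates that plan-then-fan-out idea in plain Python; plan_subqueries and search are hypothetical stand-ins for the LLM planner and the index, not the Azure AI Search API.

```python
# Conceptual sketch of fan-out retrieval; helper names are hypothetical.
import asyncio

async def plan_subqueries(chat_history: list[str]) -> list[str]:
    # The real service uses an LLM to turn the conversation into focused
    # subqueries; this stub just reuses the last user turn.
    last = chat_history[-1]
    return [last, f"background: {last}"]

async def search(subquery: str) -> list[str]:
    await asyncio.sleep(0)              # placeholder for an index query
    return [f"doc for '{subquery}'"]

async def agentic_retrieve(chat_history: list[str]) -> list[str]:
    subqueries = await plan_subqueries(chat_history)
    results = await asyncio.gather(*(search(q) for q in subqueries))  # parallel searches
    merged, seen = [], set()
    for hits in results:                # merge and dedupe for answer generation
        for doc in hits:
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

print(asyncio.run(agentic_retrieve(["How do I tune vector search for long documents?"])))
```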
    Up to 40% better relevance for complex queries with new agentic retrieval engine
    By Alec Berntson, Alina Stoica Beck, Amaia Salvador Aguilera, Arnau Quindós Sánchez, Thibault Gisselbrecht and Xianshun Chen   Agentic retrieval in Azure AI Search is a new API built to effectively answer complex queries by extracting the right content needed. The API defines and runs a query plan, incorporating conversation history and an Azure OpenAI model. It transforms complex queries, then performs multiple searches at once, combining the final results and delivering ready-to-use content for answer generation. In this post we detail the operations that take place while the API is called and walk through the numerous experiments and datasets to evaluate its relevance performance. We learned the agentic retrieval API automates optimal retrieval for complex user queries, so you can get m…  ( 74 min )
  • Open

    Microsoft Fabric Spark: Native Execution Engine now generally available
    The Fabric Spark Native Execution Engine (NEE) is now generally available (GA) as part of Fabric Runtime 1.3. This C++-based vectorized engine (built on Apache Gluten and Velox) runs Spark workloads directly on the lakehouse, requiring no code changes or new libraries. It supports Spark 3.5 APIs and both Parquet and Delta Lake formats, so … Continue reading “Microsoft Fabric Spark: Native Execution Engine now generally available”  ( 6 min )
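    Because the engine is enabled at the runtime level, existing PySpark code is expected to run unchanged; the sketch below is an ordinary Spark 3.5 job over Delta tables (paths and columns are illustrative).

```python
# Ordinary PySpark job; nothing here is specific to the Native Execution Engine.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.read.format("delta").load("Tables/orders")      # illustrative path
daily = (orders
         .where(F.col("status") == "shipped")
         .groupBy("order_date")
         .agg(F.sum("amount").alias("revenue")))
daily.write.format("delta").mode("overwrite").save("Tables/daily_revenue")
```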
    Simplify Your Data Strategy: Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Effortless Analytics on Transactional Data
    PostgreSQL is a popular relational database in application development, and Azure Database for PostgreSQL provides a fully managed service with enterprise-level security, availability, and scalability. Integrating Azure Database for PostgreSQL with Microsoft Fabric through Mirroring (Preview) enables seamless replication of transactional data for analytics, simplifying data processes and ensuring real-time insights and data consistency. Microsoft … Continue reading “Simplify Your Data Strategy: Mirroring for Azure Database for PostgreSQL in Microsoft Fabric for Effortless Analytics on Transactional Data”  ( 8 min )
    New features in Mirroring for Azure SQL Managed Instance – private endpoint support and more
    Mirroring in Microsoft Fabric allows you to seamlessly reflect your existing data estate from Azure SQL Managed Instance into OneLake. The mirrored data is automatically kept up to date in near real-time, enabling advanced analytics and reporting to generate essential business insights. Since our preview announcement of Mirroring for Azure SQL Managed Instance, we have listened … Continue reading “New features in Mirroring for Azure SQL Managed Instance – private endpoint support and more”  ( 6 min )
    What’s new with Mirroring in Fabric at Microsoft Build 2025
    At Microsoft Build 2025, we are thrilled to show you the latest innovations that we have delivered with Mirroring in Fabric. Mirroring is a powerful feature that allows you to seamlessly reflect your existing data estate continuously from any database or data warehouse into OneLake in Fabric. Once Mirroring starts the replication process, the mirrored … Continue reading “What’s new with Mirroring in Fabric at Microsoft Build 2025”  ( 9 min )
    Dataflow Gen2 CI/CD, GIT integration and Public APIs (Generally Available)
    We’re excited to announce the General Availability of Dataflow Gen2 CI/CD & Git integration support! With this set of features, you can now seamlessly integrate your Dataflow Gen2 items with your existing CI/CD pipelines and version control of your workspace in Fabric. This integration allows for better collaboration, versioning, and automation of your deployment process … Continue reading “Dataflow Gen2 CI/CD, GIT integration and Public APIs (Generally Available)”  ( 7 min )
    Encrypt data at rest in your Fabric workspaces using customer-managed keys (Preview)
    As organizations advance in their cloud platform journey, ensuring robust data security remains fundamental. Encryption plays a crucial role in defense-in-depth strategies used to safeguard sensitive information by adding a layer of protection against unauthorized access. In addition to strengthening your security posture, encryption helps you adhere to your organization’s internal security, data governance and … Continue reading “Encrypt data at rest in your Fabric workspaces using customer-managed keys (Preview)”  ( 7 min )
    Warehouse Snapshots in Microsoft Fabric (Preview)
    Maintaining data consistency during ETL (Extract, Transform, Load) processes has long been a critical challenge for data engineers. Whether it’s a nightly pipeline overwriting key records or a mid-day transformation introducing schema drift, the risk of disrupting downstream analytics is both real and costly. In today’s fast-paced, data-driven world, even brief inconsistencies can break dashboards, … Continue reading “Warehouse Snapshots in Microsoft Fabric (Preview)”  ( 11 min )
    Get to insights faster with SaaS databases and “chat with your data”
    A recent study from the Social Science Research Network looked at 5,000 developers using generative AI tools in their day-to-day work and found a 26% average increase in completed tasks. The massive opportunity generative AI presents for developers and data professionals was one of the key driving forces behind the initial development of Microsoft Fabric. … Continue reading “Get to insights faster with SaaS databases and “chat with your data””  ( 14 min )
    Simplifying Medallion Implementation with Materialized Lake Views in Fabric
    We are excited to announce Materialized Lake views (MLV) in Microsoft Fabric. Coming soon in preview, MLV is a new feature that allows you to build declarative data pipelines using SQL, complete with built-in data quality rules and automatic monitoring of data transformations. In essence, an MLV is a persisted, continuously updated view of your … Continue reading “Simplifying Medallion Implementation with Materialized Lake Views in Fabric”  ( 9 min )
    New Shortcut Type for Azure Blob Storage in OneLake shortcuts
    We’re excited to announce a new shortcut type for Azure Blob Storage in Microsoft Fabric! As a key platform for storing unstructured data — from images and documents to logs and media — Azure Blob Storage plays a vital role in powering AI and advanced analytics solutions. With this new shortcut type, you can easily … Continue reading “New Shortcut Type for Azure Blob Storage in OneLake shortcuts”  ( 6 min )
    Announcing Cosmos DB in Microsoft Fabric (Preview)
    Announcing the preview of Cosmos DB in Microsoft Fabric. Cosmos DB makes it easy to build AI apps, offering a database that scales automatically, is deeply integrated with OneLake, and is secure right out of the box. It’s built on the scalability and performance of Azure Cosmos DB.  ( 8 min )
  • Open

    Agent management updates in the Copilot Control System
    Control who can find, use, and create agents, define permissions, approve or block agent deployments, and configure billing models including pay-as-you-go or prepaid options. Get detailed visibility into how agents are used, which users and groups are driving consumption, and how much they’re costing you. With Microsoft Purview integration, get visibility into sensitive data exposure, track compliance risks, and audit agent activity to stay secure and aligned with your organization’s data policies. Jeremy Chapman, Director of Microsoft 365, shares how to configure, deploy, monitor, and secure AI agents at scale. Define agent access by group or user. Customize permissions with Microsoft 365 admin controls. See how to use the Copilot Control System. Enable pay-as-you-go agent billing with m…  ( 38 min )
  • Open

    Building an Enterprise RAG Pipeline in Azure with NVIDIA AI Blueprint for RAG and Azure NetApp Files
    Table of Contents Abstract Introduction Enterprise RAG: Challenges and Requirements NVIDIA AI Blueprint for RAG Adapting the Blueprint for Azure Deployment Azure NetApp Files: Powering High-Performance RAG Workloads Why Azure NetApp Files works well for RAG Service Levels for RAG Workloads Dynamic Service Level Adjustments Snapshot Capabilities for ML Versioning Cost Optimization Strategies Azure Reference Architecture for Enterprise RAG End-to-End Workflow Implementation Guide: Building the Pipeline Setup your Bash shell Setup your Azure Account Set environment variables Evaluating Your Enterprise RAG Pipeline Retrieval Accuracy Latency and Throughput GPU Utilization Cost Analysis What’s Coming Next Enterprise Use Cases and Real-World Applications Enterprise Search Customer Support Regula…  ( 81 min )
  • Open

    Unlocking AI Potential: Exploring the Model Context Protocol with AI Toolkit
    In the ever-evolving world of Generative AI, the pace of innovation is nothing short of breathtaking. Just a few short months ago, Large Language Models (LLMs) and their smaller counterparts, Small Language Models (SLMs), were the talk of the town. Then came Retrieval Augmented Generation (RAG), offering a powerful way to ground these models in specific knowledge bases. The emergence of Agents and Agentic AI further opened doors for new possibilities by using GenAI model capabilities. Now, as the next step in this exciting journey, we're witnessing the arrival of a new open protocol – one that standardizes how applications provide crucial context to LLMs. This new protocol is MCP. Model Context Protocol: Currently, there is a challenge when integrating LLMs with specific tools (like da…  ( 42 min )
  • Open

    Getting Started with .NET Aspire on Azure App Service
    We’re laying the groundwork to bring .NET Aspire to Azure App Service. While this is just the beginning, we wanted to give you an early preview of how to set up a basic Aspire application on App Service.  ( 5 min )

  • Open

    Kickstart Your AI Development with the Model Context Protocol (MCP) Course
    Model Context Protocol is an open standard that acts as a universal connector between AI models and the outside world. Think of MCP as “the USB-C of the AI world,” allowing AI systems to plug into APIs, databases, files, and other tools seamlessly. By adopting MCP, developers can create smarter, more useful AI applications that access up-to-date information and perform actions like a human developer would. To help developers learn this game-changing technology, Microsoft has created the “MCP for Beginners” course, a free, open-source curriculum that guides you from the basics of MCP to building real-world AI integrations. Below, we’ll explore what MCP is, who this course is for, and how it empowers both beginners and intermediate developers to get started with MCP. What is MCP and Why Shoul…  ( 43 min )
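    As a minimal sketch of the kind of server the course builds toward, the snippet below exposes a single tool, assuming the FastMCP helper from the official MCP Python SDK; the tool body is a stub for illustration.

```python
# Minimal MCP server sketch (assumes the `mcp` Python SDK is installed).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def lookup_order(order_id: str) -> str:
    """Return the status of an order (stubbed for illustration)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    mcp.run()  # serves over stdio so an MCP-capable client or agent can connect
```

    An MCP-capable client, such as an agent host or an IDE integration, can then discover and call lookup_order without bespoke glue code.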
  • Open

    Data security for agents and 3rd party AI in Microsoft Purview
    With built-in visibility into how AI apps and agents interact with sensitive data — whether inside Microsoft 365 or across unmanaged consumer tools — you can detect risks early, take decisive action, and enforce the right protections without slowing innovation. See usage trends, investigate prompts and responses, and respond to potential data oversharing or policy violations in real time. From compliance-ready audit logs to adaptive data protection, you’ll have the insights and tools to keep data secure as AI becomes a part of everyday work. Shilpa Ranganathan, Microsoft Purview Principal Group PM, shares how to balance AI innovation with enterprise-grade data governance and security. Move from detection to prevention. Built-in, pre-configured policies you can activate in seconds. Check o…  ( 43 min )
    Data security controls in OneLake
    Unify and secure your data — no matter where it lives — without sacrificing control using OneLake security, part of Microsoft Fabric.  With granular permissions down to the row, column, and table level, you can confidently manage access across engines like Power BI, Spark, and T-SQL, all from one place. Discover, label, and govern your data with clarity using the integrated OneLake catalog that surfaces the right items fast. Aaron Merrill, Microsoft Fabric Principal Program Manager, shows how you can stay in control, from security to discoverability — owning, sharing, and protecting data on your terms. Protect sensitive information at scale. Set precise data access rules — down to individual rows. Check out OneLake security in Microsoft Fabric. No data duplication needed. Hide sensitive…  ( 41 min )

  • Open

    Microsoft 365 Copilot Wave 2 Spring updates
    Streamline your day with new, user-focused updates to Microsoft 365 Copilot. Jump into work faster with a redesigned layout that puts Chat, Search, and your agents front and center. New Copilot Search lets you use natural language to find files, emails, and conversations — even if you don’t remember exact keywords — and get instant summaries and previews without switching apps. Create high-impact visuals, documents, and videos in seconds with the new Copilot Create experience, complete with support for brand templates. Tap into powerful agents like Researcher and Analyst to handle deep tasks or build your own with ease. And if you manage Copilot across your organization, you now have better tools to deploy, monitor, and secure AI use — all from a single view. Describe what you want. Don’…  ( 42 min )
  • Open

    Announcing the General Availability of New Availability Zone Features for Azure App Service
    What are Availability Zones?  Availability Zones, or zone redundancy, refers to the deployment of applications across multiple availability zones within an Azure region. Each availability zone consists of one or more data centers with independent power, cooling, and networking. By leveraging zone redundancy, you can protect your applications and data from data center failures, ensuring uninterrupted service.  Key Updates  The minimum instance requirement for enabling Availability Zones has been reduced from three instances to two, while still maintaining a 99.99% SLA.  Many existing App Service plans with two or more instances will automatically support Availability Zones without additional setup.  The zone redundant setting for App Service plans and App Service Environment v3 is now …  ( 32 min )
    Diagnose Web App Issues Instantly—Just Drop a Screenshot into Conversational Diagnostics
    It’s that time of year again—Microsoft Build 2025 is here! And in the spirit of pushing boundaries with AI, we’re thrilled to introduce a powerful new preview feature in Conversational Diagnostics. 📸 Diagnose with a ScreenshotNo more struggling to describe a tricky issue or typing out long explanations. With this new capability, you can simply paste, upload, or drag a screenshot into the chat. Conversational Diagnostics will analyze the image, identify the context, and surface relevant diagnostics for your selected Azure Resource—all in seconds. Whether you're debugging a web app or triaging a customer issue, this feature helps you move from problem to insight faster than ever. Thank you!  ( 18 min )

  • Open

    Allocating Azure ML Costs with Kubecost
    Cost tracking is a critical aspect of cloud operations—it helps you understand not just how much you're spending, but also where that spend is going and which teams are responsible. When running a Machine Learning capability with multiple consumers across your organisation, it becomes especially challenging to attribute compute costs to the teams building and deploying models. With the extensive compute use in Machine Learning, these costs can add up quickly. In this article, we’ll explore how tools like Kubecost can help bring visibility and accountability to ML workloads. Tracking costs in Azure can mostly be done through Azure Cost Management, however when we are running these ML models as endpoints and deployments in a Kubernetes cluster, things can get a bit trickier. Azure Cost Manag…  ( 41 min )
    Announcing Native Azure Functions Support in Azure Container Apps
    A New Way to Host Functions on ACA With the new native hosting model, Azure Functions are now fully integrated into ACA. This means you can deploy and run your functions directly on ACA, taking full advantage of the robust app platform.  Create via Portal: Option to optimize for Azure function If you are using CLI, you can deploy Azure Functions directly onto Azure Container Apps using the Microsoft.App resource provider by setting “kind=functionapp” property on the Container App resource. Create via CLI: Set “kind=functionapp” property Please note, in the new native hosting model, Azure Functions extensions will continue to work as before. Auto-scaling will remain available. Deployments are supported through ARM templates, Bicep, Azure CLI, and the Azure portal. Monitoring using Applicat…  ( 27 min )
  • Open

    Protect AI apps with Microsoft Defender
    Stay in control with Microsoft Defender. You can identify which AI apps and cloud services are in use across your environment, evaluate their risk levels, and allow or block them as needed — all from one place. Whether it’s a sanctioned tool or a shadow AI app, you’re equipped to set the right policies and respond fast to emerging threats.  Microsoft Defender gives you the visibility to track complex attack paths — linking signals across endpoints, identities, and cloud apps. Investigate real-time alerts, protect sensitive data from misuse in AI tools like Copilot, and enforce controls even for in-house developed apps using system prompts and Azure AI Foundry.  Rob Lefferts, Microsoft Security CVP, joins me in the Mechanics studio to share how you can safeguard your AI-powered environment…  ( 59 min )
    How Microsoft 365 Backup works and how to set it up
    Protect your Microsoft 365 data and stay in control with Microsoft 365 Backup — whether managing email, documents, or sites across Exchange, OneDrive, and SharePoint. Define exactly what you want to back up and restore precisely what you need to with speeds reaching 2TB per hour at scale. With flexible policies, dynamic rules, and recovery points up to 365 days back, you can stay resilient and ready.  In this introduction, I'll show you how to minimize disruption and keep your organization moving forward even in the event of a disaster with Microsoft 365 Backup.  Fine-tune what gets backed up.  Back up by user, site, group, or file type — to meet your exact needs. Get started with Microsoft 365 Backup.  Restore data in-place or to a new location.  Compare versions before committing. Tak…  ( 45 min )
  • Open

    Maximum Allowed Cores exceeded for the Managed Environment
    This post discusses the “Maximum Allowed Cores exceeded for the Managed Environment” message that may be seen on Azure Container Apps.  ( 8 min )
  • Open

    RAG Virtual Assistant - Built with Microsoft Fabric and Azure OpenAI
    Introduction In Kenya's evolving higher education landscape, access to accurate information about funding opportunities remains a critical challenge for students and their families. When the New Funding Model (NFM) was introduced in May 2023, many applicants found themselves navigating unfamiliar processes with limited guidance. This knowledge gap inspired our team to develop a solution that would bridge this divide using cutting-edge AI technology. Our project, RAG-Powered Virtual Assistant for the Higher Education Fund (HEF), emerged as the overall winner at the Microsoft Data + AI Hack Kenya 2025. By leveraging Microsoft Fabric's powerful Eventhouse capabilities alongside Azure OpenAI services, we created an intelligent assistant that provides instant, accurate responses to queries abou…  ( 40 min )
    Power Up Your Open WebUI with Azure AI Speech: Quick STT & TTS Integration
    Introduction Ever found yourself wishing your web interface could really talk and listen back to you? With a few clicks (and a bit of code), you can turn your plain Open WebUI into a full-on voice assistant. In this post, you’ll see how to spin up an Azure Speech resource, hook it into your frontend, and watch as user speech transforms into text and your app’s responses leap off the screen in a human-like voice. By the end of this guide, you’ll have a voice-enabled web UI that actually converses with users, opening the door to hands-free controls, better accessibility, and a genuinely richer user experience. Ready to make your web app speak? Let’s dive in. Why Azure AI Speech? We use Azure AI Speech service in Open Web UI to enable voice interactions directly within web applications. This …  ( 27 min )
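    As a rough illustration of the Azure Speech piece on its own (separate from the Open WebUI wiring described in the post), a minimal text-to-speech call with the azure-cognitiveservices-speech SDK might look like this; the key, region, voice, and text are placeholders.

        import azure.cognitiveservices.speech as speechsdk

        # Placeholders: use your own Azure Speech resource key and region.
        speech_config = speechsdk.SpeechConfig(subscription="YOUR_SPEECH_KEY", region="eastus")
        speech_config.speech_synthesis_voice_name = "en-US-JennyNeural"

        # Synthesize a reply to the default speaker.
        synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
        result = synthesizer.speak_text_async("Hello! Your web app can talk now.").get()

        if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
            print("Speech synthesized successfully.")
        else:
            print("Synthesis failed:", result.reason)

    Speech-to-text follows the same pattern with SpeechRecognizer instead of SpeechSynthesizer.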
  • Open

    Semantic Kernel: Package previews, Graduations & Deprecations
    Semantic Kernel: Package Previews, Graduations & Deprecations We are excited to share a summary of recent updates and continuous clean-up efforts across the Semantic Kernel .NET codebase. These changes focus on improving maintainability, aligning with the latest APIs, and ensuring a consistent experience for users. Below you’ll find details on package graduations, deprecations, and a […] The post Semantic Kernel: Package previews, Graduations & Deprecations appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Smart Mutations in Microsoft Fabric API for GraphQL with Stored Procedures
    Overview Microsoft Fabric API for GraphQL makes it easy to query and mutate data from a Fabric SQL database and other Fabric data sources such as Data Warehouse and Lakehouse, with strongly typed schemas and a rich query language allowing developers to create an intuitive API without writing custom server code. While you can’t customize … Continue reading “Smart Mutations in Microsoft Fabric API for GraphQL with Stored Procedures”  ( 7 min )
    Updates to Fabric Copilot Capacity
    Fabric Copilot Capacities are being updated to be more streamlined and easier to use.  ( 6 min )
    Improving productivity in Fabric Notebooks with Inline Code Completion
    We are excited to introduce Copilot Inline Code Completion, an AI-powered feature that helps data scientists and engineers write high-quality Python code faster and with greater ease. Inspired by GitHub Copilot, this feature offers intelligent code suggestions as you type, with no commands needed. By understanding the context of your notebook, Copilot Inline Code Completion … Continue reading “Improving productivity in Fabric Notebooks with Inline Code Completion”  ( 6 min )
  • Open

    Learn How to Build Smarter AI Agents with Microsoft’s MCP Resources Hub
    If you've been curious about how to build your own AI agents that can talk to APIs, connect with tools like databases, or even follow documentation, you're in the right place. Microsoft has created something called MCP, which stands for Model‑Context‑Protocol. And to help you learn it step by step, they’ve made an amazing MCP Resources Hub on GitHub. In this blog, I’ll walk you through what MCP is, why it matters, and how to use this hub to get started, even if you're new to AI development. What is MCP (Model‑Context‑Protocol)? Think of MCP like a communication bridge between your AI model and the outside world. Normally, when we chat with AI (like ChatGPT), it only knows what’s in its training data. But with MCP, you can give your AI real-time context from: APIs Documents Databases Websit…  ( 29 min )

  • Open

    How to Choose the Right Hosting Plan – WordPress on App Service
    Choosing the right hosting plan for your WordPress site on Azure App Service can feel overwhelming—but it doesn’t have to be. Whether you're just exploring WordPress or launching a high-traffic production site, we’ve created four tailored hosting plans to help you get started quickly and confidently. Let’s walk through how to pick the right plan for your needs. Which Hosting Plan Should You Choose? We’ve simplified the decision-making process with a clear recommendation based on your use case: Hobby or exploratory site → Free or Basic; Small production website → Standard; High-load production website → Premium.   💡 Important: Only the Premium plan supports High Availability (HA). This is the only setting that cannot be changed after deployment. If HA is a requirement,…  ( 25 min )
  • Open

    Orchestrate your Databricks Jobs with Fabric Data pipelines
    We’re excited to announce that you can now orchestrate Azure Databricks Jobs from your Microsoft Fabric data pipelines! Databricks Jobs allow you to schedule and orchestrate a task or multiple tasks in a workflow in your Databricks workspace. Since any operation in Databricks can be a task, this means you can now run anything in … Continue reading “Orchestrate your Databricks Jobs with Fabric Data pipelines”  ( 6 min )
  • Open

    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series
    Stop Translating Docs Manually! Automate Your Global Reach with Co-op Translator v0.8 Series   Is your team or open-source project drowning in the endless cycle of manually translating documentation? Every update to your source content triggers a wave of tedious, error-prone work across multiple languages, slowing down knowledge sharing and hindering your global impact.   This challenge became particularly clear within large-scale Microsoft educational projects like the "For Beginners" series, where manual translation simply couldn't keep pace. A scalable, automated solution was needed to ensure valuable technical knowledge reaches learners and developers worldwide, breaking down language barriers that limit participation and slow innovation.   Co-op Translator, a Microsoft Azure open-sour…  ( 29 min )
  • Open

    Mastering Query Fields in Azure AI Document Intelligence with C#
    Introduction Azure AI Document Intelligence simplifies document data extraction, with features like query fields enabling targeted data retrieval. However, using these features with the C# SDK can be tricky. This guide highlights a real-world issue, provides a corrected implementation, and shares best practices for efficient usage.   Use case scenario In the course of Azure AI Document Intelligence software engineering tasks or code reviews, many developers encounter an error while trying to extract fields like "FullName," "CompanyName," and "JobTitle" using `AnalyzeDocumentAsync`: the error is typically similar to Inner Error: The parameter urlSource or base64Source is required. This challenge stems from parameter errors and SDK changes. The most problematic code looks like the following…  ( 23 min )
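    The full post walks through the problematic and corrected C# calls; as a rough companion sketch of the corrected idea in Python (assuming the azure-ai-documentintelligence package and the prebuilt-layout model, with endpoint, key, and document URL as placeholders), passing the document source explicitly avoids the missing urlSource/base64Source error:

        from azure.ai.documentintelligence import DocumentIntelligenceClient
        from azure.ai.documentintelligence.models import (
            AnalyzeDocumentRequest,
            DocumentAnalysisFeature,
        )
        from azure.core.credentials import AzureKeyCredential

        # Placeholders: substitute your own endpoint, key, and document URL.
        client = DocumentIntelligenceClient(
            "https://<resource>.cognitiveservices.azure.com/",
            AzureKeyCredential("<key>"),
        )

        poller = client.begin_analyze_document(
            "prebuilt-layout",
            AnalyzeDocumentRequest(url_source="https://example.com/sample-resume.pdf"),
            features=[DocumentAnalysisFeature.QUERY_FIELDS],
            query_fields=["FullName", "CompanyName", "JobTitle"],
        )
        result = poller.result()

        doc = (result.documents or [None])[0]
        if doc and doc.fields:
            for name, field in doc.fields.items():
                print(name, "=", field.content)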
  • Open

    Azure NetApp Files solutions for three EDA Cloud-Compute scenarios
    Table of Contents Abstract Introduction EDA Cloud-Compute scenarios Scenario 1: Burst to Azure from on-premises Data Center Scenario 2: “24x7 Single Set Workload” Scenario 3: "Data Center Supplement" Summary Abstract Azure NetApp Files (ANF) is transforming Electronic Design Automation (EDA) workflows in the cloud by delivering unparalleled performance, scalability, and efficiency. This blog explores how ANF addresses critical challenges in three cloud compute scenarios: Cloud Bursting, 24x7 All-in-Cloud, and Cloud-based Data Center Supplement. These solutions are tailored to optimize EDA processes, which rely on high-performance NFS file systems to design advanced semiconductor products. With the ability to support clusters exceeding 50,000 cores, ANF enhances productivity, shortens desi…  ( 45 min )
    Natural Language to SQL Semantic Kernel Multi-Agent System
    In today’s data-driven landscape, the ability to access and interpret information in a human-readable format is increasingly valuable. Being able to interact with and query your database in natural language is the game changer. In this post, we’ll walk through how to build a SQL agent using the Semantic Kernel framework to interact with a PostgreSQL database containing DVD rental data. I’ll explain how to define Semantic Kernel functions through plugins and how to incorporate them into agents. We’ll also look at how to set up these agents and guide them with well-structured instructions. Our example uses a PostgreSQL database that stores detailed information about DVD rentals. This sample database contains 15 tables and can be found here: PostgreSQL Sample Database. With a natural language…  ( 41 min )
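    As a rough sketch of the plugin idea (assuming the semantic-kernel Python package's kernel_function decorator and the psycopg2 driver; the connection string, caps, and function body are placeholders), a SQL plugin can expose a query-execution function that an agent is allowed to call:

        import psycopg2
        from semantic_kernel.functions import kernel_function

        class SqlPlugin:
            """Illustrative plugin exposing read-only SQL execution to an agent."""

            def __init__(self, dsn: str):
                self.dsn = dsn  # e.g. "dbname=dvdrental user=postgres password=..."

            @kernel_function(name="run_query", description="Run a read-only SQL query and return rows.")
            def run_query(self, sql: str) -> str:
                with psycopg2.connect(self.dsn) as conn:
                    with conn.cursor() as cur:
                        cur.execute(sql)
                        rows = cur.fetchmany(20)  # cap output so the agent's context stays small
                return "\n".join(str(row) for row in rows)

    Registering an instance of such a plugin with the kernel is what lets the agent's instructions say, in effect, "generate SQL, then call run_query to execute it."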
  • Open

    High-volume batch transaction processing
    The architecture uses AKS to implement compute clusters of the applications that process high-volume batches of transactions. The applications receive the transactions in messages from Service Bus topics or queues. The topics and queues can be at Azure datacenters in different geographic regions, and multiple AKS clusters can read input from them.   Architecture       Workflow The numbered circles in the diagram correspond to the numbered steps in the following list. The architecture uses Service Bus topics and queues to organize the batch processing input and to pass it downstream for processing. Azure Load Balancer, a Layer 4 (TCP, UDP) load balancer, distributes incoming traffic among healthy instances of services defined in a load-balanced set. Load balancing and management of connec…  ( 59 min )
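    As a small sketch of the input side of this pattern (assuming the azure-servicebus Python package; the connection string and queue name are placeholders), a worker in the cluster pulls a batch of transaction messages from a queue and completes them after processing:

        from azure.servicebus import ServiceBusClient

        # Placeholders: supply your own namespace connection string and queue name.
        CONN_STR = "<service-bus-connection-string>"
        QUEUE_NAME = "transactions"

        with ServiceBusClient.from_connection_string(CONN_STR) as client:
            with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
                # Pull up to 50 messages per batch, waiting at most 5 seconds.
                for msg in receiver.receive_messages(max_message_count=50, max_wait_time=5):
                    print("processing transaction:", str(msg))
                    receiver.complete_message(msg)  # remove from the queue once processed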
  • Open

    Introducing Azure AI Content Understanding for Beginners
    Enterprises today face several challenges in processing and extracting insights from multimodal data, like managing diverse data formats, ensuring data quality, and streamlining workflows efficiently. Ensuring the accuracy and usability of extracted insights often requires advanced AI techniques, while inefficiencies in managing large data volumes increase costs and delay results. Azure AI Content Understanding addresses these pain points by offering a unified solution to transform unstructured data into actionable insights, improve data accuracy with schema extraction and confidence scoring, and integrate seamlessly with Azure’s ecosystem to enhance efficiency and reduce costs. Content Understanding makes it easy to extract custom task-specific output without advanced GenAI skills. It ena…  ( 25 min )

  • Open

    Query vs. Mutation in API for GraphQL – Understanding the difference
    GraphQL has revolutionized the way developers interact with APIs by offering a more flexible and efficient alternative to REST. Before getting started , Create an API for GraphQL in Fabric and add data to use GraphQL in Fabric. At the heart of GraphQL are two core operations: queries and mutations. While they may look similar on the surface, they serve very different purposes. Let’s explain it in detail.  ( 7 min )
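    To make the distinction concrete, here is a small, generic sketch (not Fabric-specific; the endpoint, schema, and token are hypothetical) that sends a read-only query and then a data-changing mutation to a GraphQL endpoint over plain HTTP:

        import requests

        # Hypothetical endpoint and token; a Fabric API for GraphQL item exposes its own endpoint URL.
        URL = "https://example.com/graphql"
        HEADERS = {"Authorization": "Bearer <token>"}

        # A query only reads data.
        query = """
        query { products { id name price } }
        """

        # A mutation changes data and typically returns the affected object.
        mutation = """
        mutation { addProduct(name: "Widget", price: 9.99) { id name } }
        """

        print(requests.post(URL, json={"query": query}, headers=HEADERS).json())
        print(requests.post(URL, json={"query": mutation}, headers=HEADERS).json())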
    Evaluate your Fabric Data Agents programmatically with the Python SDK (Preview)
    We’re excited to announce that native support for evaluating Data Agents through the Fabric SDK is now available in Preview. You can now run structured evaluations of your agent’s responses using Python — directly from notebooks or your own automation pipelines. Whether you’re validating accuracy before deploying to production, tuning prompts for better performance, or … Continue reading “Evaluate your Fabric Data Agents programmatically with the Python SDK (Preview)”  ( 7 min )
  • Open

    Part 2 - How to Create a VS Code Extension for API Health Checks?
    Introduction Have you ever thought about building a Visual Studio Code extension as your capstone project? That’s what I did: Part 1 - Develop a VS Code Extension for Your Capstone Project. I have created a Visual Studio Code Extension, API Guardian, that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing issues caused by breaking or non-breaking changes and to alleviate the difficulties in performing maintenance due to unclear or outdated documentation. Let's build your very own extension! Now, let’s do it step by step.   Step 1 – Install the NPM package for generator-code Ensure Node.js is installed before proceeding. Verify by running node -v, which will display the installed versi…  ( 27 min )
  • Open

    Create Your First AI Agent with JavaScript and Azure AI Agent Service!
    Introduction: The Era of AI Agents in JavaScript During the AI Agents Hackathon, one of the most anticipated sessions was presented by Wassim Chegham, Senior AI Developer Advocate for JavaScript at Microsoft. The topic? "How to Create Your First AI Agent with JavaScript and Azure AI Agent Service" — a powerful tool designed for modern developers looking to build AI-first applications with security, scalability, and productivity in mind. In this article, we explore the main highlights of the session, focusing on how you can create your own AI agent using JavaScript and Azure AI Agent Service. The video’s goal is clear: walk through the step-by-step process of creating AI agents using JavaScript and TypeScript with Azure AI Foundry, and explain all the key concepts behind this new developmen…  ( 35 min )
  • Open

    Azure Migrate - Build 2025 updates
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Cost-effective and sustainable innovation. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and gain access to innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help you with insights and actions that lead to better outcom…  ( 27 min )
  • Open

    Innovation in Action: Azure Red Hat OpenShift at Build and Red Hat Summit 2025
    The strategic partnership between Microsoft Azure and Red Hat continues to flourish in 2025, with both companies showcasing their joint innovations at two major tech events: Microsoft Build (May 19-22 in Seattle) and Red Hat Summit 2025. This collaboration represents one of tech's most impactful partnerships, combining Microsoft's cloud expertise with Red Hat's open-source leadership to create solutions that drive digital transformation across industries. Microsoft Build 2025: AI Innovation Meets Open Source Microsoft Build 2025 will take place from May 19-22 at the Seattle Convention Center, bringing together developers, creators, and AI innovators from around the world. This year's event has been extended to four days, offering more opportunities for learning and networking. A key highl…  ( 34 min )

  • Open

    The State of Coding the Future with Java and AI – May 2025
    Software development is changing fast, and Java developers are right in the middle of it – especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the […] The post The State of Coding the Future with Java and AI – May 2025 appeared first on Microsoft for Java Developers.  ( 44 min )
  • Open

    Accelerate AI on Oracle Databases with Open Mirroring, Fabric Data Agent, and Azure AI Foundry
    Additional contributors: Venkat Ramakrishnan, Amir Jafari, Wilson Lee, Maraki Ketema As organizations accelerate their hybrid cloud adoption strategies, Oracle Database@Azure has emerged as a critical platform for running Oracle database workloads using Exadata, Autonomous, Exadata Exascale and Base databases for the enterprise. However, deriving real-time, AI-powered insights in hybrid settings has long been challenging due … Continue reading “Accelerate AI on Oracle Databases with Open Mirroring, Fabric Data Agent, and Azure AI Foundry”  ( 9 min )
    Announcing Copilot for SQL Analytics Endpoint in Microsoft Fabric (Preview)
    We’re excited to introduce Copilot for SQL Analytics Endpoint, now in preview – a transformative, AI-powered assistant built to change how you query, explore, and analyze data within Microsoft Fabric’s SQL experience. With Copilot integrated directly into the SQL Analytics Endpoint, users can now express intent in natural language and instantly receive ready-to-run T-SQL. Whether … Continue reading “Announcing Copilot for SQL Analytics Endpoint in Microsoft Fabric (Preview)”  ( 7 min )
  • Open

    Build your code-first agent with Azure AI Foundry: Self-Guided Workshop
    Build your first Agent App Agentic AI is changing how we build intelligent apps - enabling software to reason, plan, and act for us. Learning to build AI agents is quickly becoming a must-have skill for anyone working with AI. Self-Guided Workshop Try our self-guided “Build your code-first agent with Azure AI Foundry” workshop to get hands-on with Azure AI Agent Service. You’ll learn to build, deploy, and interact with agents using Azure’s powerful tools. What is Azure AI Agent Service? Azure AI Agent Service lets you create, orchestrate, and manage AI-powered agents that can handle complex tasks, integrate with tools, and deploy securely. What Will You Learn? The basics of agentic AI apps and how they differ from traditional apps How to set up your Azure environment How to build your first agent How to test and interact with your agent Advanced features like tool integration and memory management Who Is This For? Anyone interested in building intelligent, goal-oriented agents — developers, data scientists, students, and AI enthusiasts. No prior experience with Azure AI Agent Service required. How Does the Workshop Work? Tip: Select the self-guided tab in Getting Started for the right instructions. Step-by-step guides at your own pace Code samples and templates Real-world scenarios Get Started See what agentic AI can do for you with the self-guided “Build your code-first agent with Azure AI Foundry” workshop. Build practical skills in one of AI’s most exciting areas. Try the workshop and start building agents that make a difference! Additional Resources Azure AI Foundry Documentation Azure AI Agent Service Overview Questions or feedback Questions or feedback? Visit the issues page. Happy learning and building with Azure AI Agent Service!  ( 21 min )
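    Before starting the workshop, a rough sketch of the core calls it builds on (assuming the azure-ai-projects preview package; the connection string, model deployment name, and exact parameter names may differ between preview versions) looks roughly like this:

        import os
        from azure.ai.projects import AIProjectClient
        from azure.identity import DefaultAzureCredential

        # Assumes PROJECT_CONNECTION_STRING points at your Azure AI Foundry project.
        project_client = AIProjectClient.from_connection_string(
            credential=DefaultAzureCredential(),
            conn_str=os.environ["PROJECT_CONNECTION_STRING"],
        )

        # Create an agent backed by a model deployment in your project.
        agent = project_client.agents.create_agent(
            model="gpt-4o-mini",
            name="my-first-agent",
            instructions="You are a helpful assistant.",
        )

        # Start a conversation thread and send a user message.
        thread = project_client.agents.create_thread()
        project_client.agents.create_message(thread_id=thread.id, role="user", content="Hello, agent!")

        # Run the agent over the thread (some preview versions name this keyword assistant_id).
        run = project_client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
        print(run.status)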
  • Open

    Dynamic Tool Discovery: Azure AI Agent Service + MCP Server Integration
    At the time of this writing, Azure AI Agent Service does not offer turnkey integration with Model Context Protocol (MCP) Servers. Discussed here is a solution that helps to leverage MCP's powerful capabilities while working within the Azure ecosystem. The integration approach piggybacks on the Function integration capability in the Azure AI Agent Service. By utilizing an MCP Client to discover and register tools from an MCP Server as Functions with the Agent Service, we create a seamless integration layer between the two systems. Built using the Microsoft Bot Framework, this application can be published as an AI Assistant across numerous channels like Microsoft Teams, Slack, and others. For development and testing purposes, we've used the Bot Framework Emulator to run and validate the appl…  ( 29 min )
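    A rough sketch of the discovery step (assuming the official Python mcp client package and a local stdio MCP server; the server command and the mapping format are illustrative) is to list the server's tools and reshape each one into a function-style definition the agent service can register:

        import asyncio
        from mcp import ClientSession, StdioServerParameters
        from mcp.client.stdio import stdio_client

        async def discover_tools() -> list[dict]:
            # Illustrative: launch a local MCP server over stdio and ask it for its tools.
            server = StdioServerParameters(command="python", args=["my_mcp_server.py"])
            async with stdio_client(server) as (read, write):
                async with ClientSession(read, write) as session:
                    await session.initialize()
                    tools = await session.list_tools()
                    # Reshape each MCP tool into a generic function definition an agent can register.
                    return [
                        {
                            "name": t.name,
                            "description": t.description,
                            "parameters": t.inputSchema,
                        }
                        for t in tools.tools
                    ]

        if __name__ == "__main__":
            print(asyncio.run(discover_tools()))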

  • Open

    From Complexity to Simplicity: The ASC and Azure AI Partnership
    ASC Technologies, a leader in compliance recording and AI-driven data analytics, provides cutting-edge software solutions for capturing and analyzing communication channels. Their innovative technology empowers more than 500 customers worldwide to record communications legally while extracting valuable insights and helping to prevent fraudulent activities. Many of their customers operate in heavily regulated industries where compliance recording is mandatory. These organizations rely on the ability to consolidate and analyze information shared across multiple channels including voice recordings, chat logs, speaker recognition, video analysis, and document and screen activity. As ASC’s customer base expanded, and their clients accumulated millions of calls and vast amounts of conversation m…  ( 25 min )

  • Open

    Seamlessly Integrating Azure Document Intelligence with Azure API Management (APIM)
    In today’s data-driven world, organizations are increasingly turning to AI for document understanding. Whether it's extracting invoices, contracts, ID cards, or complex forms, Azure Document Intelligence (formerly known as Form Recognizer) provides a robust, AI-powered solution for automated document processing. But what happens when you want to scale, secure, and load balance your document intelligence backend for high availability and enterprise-grade integration? Enter Azure API Management (APIM) — your gateway to efficient, scalable API orchestration. In this blog, we’ll explore how to integrate Azure Document Intelligence with APIM using a load-balanced architecture that works seamlessly with the Document Intelligence SDK — without rewriting your application logic. Azure Doc Intellige…  ( 36 min )
  • Open

    Java monitoring over SSH
    This post will cover how to remotely connect to the JVM when running on Azure App Service with Java.  ( 3 min )
  • Open

    Azure Container Apps with Application Gateway and custom domain: hostname mismatch
    Introduction Azure Container Apps offers a robust platform for deploying microservices and containerized applications. When integrating with Azure Application Gateway, an internal container app environment can be accessed via the public internet. Users often bind custom domains to enhance accessibility and user experience. A common challenge arises when we bind the custom domain on Application Gateway and try to access the container app: when the container app acts as a middleware service and needs to forward requests to another API server or complete an authentication process, users may encounter an HTTP 403 Forbidden error caused by a hostname/redirect URL mismatch. What's more, you definitely don't want to expose your backend service's default domain. This blog explores these challenges and offers pr…  ( 22 min )
  • Open

    Python in Visual Studio Code – May 2025 Release
    The May 2025 release includes updates in the Python Environments extension, a new color picker added by Pylance, branch coverage support, and more! The post Python in Visual Studio Code – May 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )
  • Open

    Exchange Web Services code analyzer and usage report
    We are less than 18 months away from the retirement of Exchange Web Services. Start planning your migration from EWS to Microsoft Graph. The post Exchange Web Services code analyzer and usage report appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    JWT it like it’s hot: A practical guide for Kubernetes Structured Authentication
    With this practical guide, you now know how to secure your Kubernetes cluster using the structured-authentication feature, offering flexible integration with any JWT-compliant token provider. The post JWT it like it’s hot: A practical guide for Kubernetes Structured Authentication appeared first on Microsoft Open Source Blog.  ( 16 min )
  • Open

    Shortcut cache and on-prem gateway support (Generally Available)
    Shortcut cache and on-prem gateway support are now generally available (GA) Shortcut cache Shortcuts in OneLake allow you to quickly and easily source data from external cloud providers and use it across all Fabric workloads such as Power BI reports, SQL, Spark and Kusto.  However, each time these workloads read data from cross-cloud sources, the … Continue reading “Shortcut cache and on-prem gateway support (Generally Available)”  ( 6 min )
    Manage connections for shortcuts
    Shortcuts in OneLake provide a quick and easy way to make your data available in Fabric without having to copy it.  Simply create a new shortcut and your data is instantly available to all Fabric workloads. When you first create a new shortcut, you also set up a shared cloud connection. These are the same connections … Continue reading “Manage connections for shortcuts”  ( 6 min )
  • Open

    Build faster with this simple AZD template for FastAPI on Azure App Service
    I’ve made this Simple FastAPI AZD template for Azure App Service to help you get to the fun part, and to cut out all the extra infrastructure that you don’t necessarily want or need. This FastAPI template for Azure App Service gives you all the infrastructure as code to deploy a basic “Hello World” FastAPI web app that you can spin up using AZD (Azure Developer CLI) with just three commands. How to do it To get started, you just need to install AZD, a command-line tool you can use right there in VS Code. Then you’re ready to grab the template and deploy. Run these commands and follow the prompts as you go.  Grab our new simple FastAPI template for Azure App Service: azd init --template Azure-Samples/azd-simple-fastapi-appservice It will ask you to give your environment a name. This will b…  ( 33 min )
  • Open

    Unlocking the Power of Model Distillation through Azure AI Foundry
    AI distillation is a powerful technique in machine learning that involves transferring knowledge from a large, complex model (often called the "teacher") to a smaller, more efficient "student" model. The goal is to retain the performance and accuracy of the larger model while drastically reducing computational requirements, making AI systems faster, cheaper, and more deployable especially in real-time or resource-constrained environments. In this post, we'll explore what AI distillation is, why it's gaining traction, and how it's being used to bring the power of advanced AI models to everyday applications. Stored completions in Azure OpenAI's AI Foundry provide a structured way to capture and reuse high-quality model responses, streamlining the model distillation process. By logging curate…  ( 38 min )
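    For illustration only (not code from the post): a minimal Python sketch of logging a teacher model's responses as stored completions with the OpenAI SDK against Azure OpenAI. The deployment name, API version, and metadata tags are assumptions.

        # Sketch: capture curated teacher responses as stored completions for later distillation.
        # Assumes the `openai` package and an Azure OpenAI chat deployment named "gpt-4o" (placeholder).
        import os
        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2025-02-01-preview",  # assumed; use the version your resource supports
        )

        response = client.chat.completions.create(
            model="gpt-4o",  # teacher deployment name (placeholder)
            messages=[
                {"role": "system", "content": "You are a concise support assistant."},
                {"role": "user", "content": "Summarize the refund policy in one sentence."},
            ],
            store=True,  # persist this exchange as a stored completion
            metadata={"task": "refund-summary", "quality": "curated"},  # illustrative tags for later filtering
        )
        print(response.choices[0].message.content)

    Completions captured this way can later be filtered by their metadata and exported as a fine-tuning dataset for the smaller student model.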
    Navigating AI Solutions: Microsoft Copilot Studio vs. Azure AI Foundry
    Are you looking to build custom Copilots but unsure about the differences between Copilot Studio and Azure AI Foundry? As a Microsoft Technical Trainer with over a decade of experience, I've spent the last 18 months focusing on Azure AI Solutions and Copilot. Through numerous workshops, I've seen firsthand how customers benefit from AI solutions beyond Microsoft Copilot. Microsoft 365 Copilot Chat offers seamless integration with Generative AI for tasks like document creation, content summarization, and insights from M365 solutions such as Email, OneDrive, SharePoint, and Teams. It ensures compliance with organizational security, governance, and privacy policies, making it ideal for immediate AI assistance without customization. On the other hand, platforms like Copilot Studio and Azure AI…  ( 37 min )
    Azure OpenAI o-series & GPT-4.1 Models Now Available in Azure AI Agent Service
    New Models Available!   We’re excited to announce the preview availability of the following Azure OpenAI Service models for use in the Azure AI Agent Service, starting 5/7:  o1  o3-mini  gpt-4.1  gpt-4.1-mini  gpt-4.1-nano    Azure OpenAI o-Series Models  Azure OpenAI o-series models are designed to tackle reasoning and problem-solving tasks with increased focus and capability. These models spend more time processing and understanding the user's request, making them exceptionally strong in areas like science, coding, and math compared to previous iterations.    o1: The most capable model in the o1 series, offering enhanced reasoning abilities.  o3 (coming soon): The most capable reasoning model in the o model series, and the first one to offer full tools support for agentic so…  ( 30 min )
  • Open

    Prepare your Office Add-in for the European Accessibility Act (EAA)
    Starting June 28, 2025, the European Accessibility Act (EAA) takes effect, requiring all digital products and services offered to EU customers to meet comprehensive accessibility standards. If you're developing Office Add-ins, this may impact you. This blog explains what you need to know to ensure your Office Add-ins are compliant. The post Prepare your Office Add-in for the European Accessibility Act (EAA) appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    The State of Coding the Future with Java and AI – May 2025
    Software development is changing fast, and Java developers are right in the middle of it - especially when it comes to using Artificial Intelligence (AI) in their apps. This report brings together feedback from 647 Java professionals to show where things stand and what is possible as Java and AI come together. One of the biggest takeaways is this: Java developers do not need to be experts in AI, machine learning, or Python. With tools like the Model Context Protocol (MCP) Java SDK, Spring AI, and LangChain4j, they can start adding smart features to their apps using the skills they already have. Whether it is making recommendations, spotting fraud, supporting natural language search or a world of possibilities, AI can be part of everyday Java development. The report walks through real-world…  ( 102 min )

  • Open

    Enabling broader adoption of XMLA-based tools and scenarios
    Starting on June 9, 2025, all Power BI and Fabric capacity SKUs will support XMLA read/write operations by default. This change is intended to assist customers using XMLA-based tools to create, edit, and maintain semantic models, such as DAX Query View in the web, Live Editing in Power BI Desktop, SQL Server Management Studio (SSMS), … Continue reading “Enabling broader adoption of XMLA-based tools and scenarios”  ( 5 min )
  • Open

    Part 1 - Develop a VS Code Extension for Your Capstone Project
    API Guardian - My Capstone Project As software and APIs evolve, developers encounter significant difficulties in maintaining and updating API endpoints. Breaking changes can lead to system instability, while outdated or unclear documentation makes maintenance less efficient. These challenges are further compounded by the time-consuming nature of updating dependencies and the tendency to prioritize new features over maintenance tasks. The absence of effective tools and processes to tackle these issues reduces overall productivity and developer efficiency. To address this, API Guardian was created as a Visual Studio Code extension that identifies API endpoints in a project and checks their functionality before deployment. This solution was developed to help developers save time spent fixing …  ( 26 min )
  • Open

    Custom Tracing in API Management
    Scenario: When you encounter an error in API Management, request tracing is an invaluable feature that serves as a debugger. It allows you to track the flow of requests as they pass through the various policy logic, providing detailed insight into the complete API Management (APIM) processing. Here is a link if you would like to read more on how to enable request tracing in API Management.  While it is the most common way to debug your API, consider a real-life scenario where you encounter a sporadic error or unexpected response while processing live APIM calls and need to drill down into the issue. In such cases, attaching a debugger or running request traces can be challenging, especially when the issue is intermittent or requires checking specific code logic. This often necessi…  ( 29 min )
  • Open

    Nested App Authentication: Now generally available across Microsoft 365
    Get started with Nested App Authentication, a modern protocol for simplifying authentication for Personal Tab Teams apps that run across Microsoft 365. The post Nested App Authentication: Now generally available across Microsoft 365 appeared first on Microsoft 365 Developer Blog.  ( 23 min )

  • Open

    RC1: Semantic Kernel for Java Agents API
    We’re excited to announce the release candidate of the Semantic Kernel for Java Agents API! This marks a major step forward in bringing the power of intelligent agents to Java developers, enabling them to build rich, contextual, and interactive AI experiences using the Semantic Kernel framework. What Are Agents in Semantic Kernel? Agents are intelligent, autonomous […] The post RC1: Semantic Kernel for Java Agents API appeared first on Semantic Kernel.  ( 22 min )
  • Open

    Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight
    In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works.  So, what's the answer for teams seeking simplicity without sacrificing effectiveness? This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process au…  ( 43 min )
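    As a rough illustration (not the post's code, and based on the preview azure-ai-projects SDK, whose surface may have changed): registering a Postgres lookup as a function tool for an Azure AI agent. Connection strings, table names, and the model deployment are placeholders.

        # Sketch: expose a Postgres billing lookup to an Azure AI agent as a function tool.
        # Preview SDK (azure-ai-projects); class and method names may differ across versions.
        import os, json
        import psycopg2
        from azure.identity import DefaultAzureCredential
        from azure.ai.projects import AIProjectClient
        from azure.ai.projects.models import FunctionTool, ToolSet

        def get_billing_total(customer_id: str) -> str:
            """Return the billed total for a customer (placeholder table and column names)."""
            with psycopg2.connect(os.environ["POSTGRES_CONNECTION_STRING"]) as conn:
                with conn.cursor() as cur:
                    cur.execute("SELECT SUM(amount) FROM billing WHERE customer_id = %s", (customer_id,))
                    (total,) = cur.fetchone()
            return json.dumps({"customer_id": customer_id, "total": float(total or 0)})

        project = AIProjectClient.from_connection_string(
            conn_str=os.environ["PROJECT_CONNECTION_STRING"],
            credential=DefaultAzureCredential(),
        )
        toolset = ToolSet()
        toolset.add(FunctionTool({get_billing_total}))
        agent = project.agents.create_agent(
            model="gpt-4o",  # placeholder deployment name
            name="audit-agent",
            instructions="Answer audit questions using the billing tool and cite the figures you retrieve.",
            toolset=toolset,
        )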
  • Open

    Hubs and Workspaces on Azure Machine Learning – General Availability
    We are pleased to announce that hubs and workspaces are now generally available on Azure Machine Learning, allowing teams to use a hub as a shared collaboration environment for machine learning applications.  Azure Hubs and Workspaces provide a centralized platform capability for Azure Machine Learning. This feature enables developers to innovate faster by creating project workspaces and accessing shared company resources without needing repeated assistance from IT administrators.  Quick Model Building and Experimentation without IT bottleneck  Hubs and Workspaces in Azure Machine Learning provide a centralized solution for managing machine learning resources. Hubs act as a central resource management construct that oversees security, connectivity, computing resources, and team quotas. Once cre…  ( 26 min )
  • Open

    Announcing the updated Teams AI Library and MCP support
    Discover the new and improved Teams AI Library, designed to help developers create even more powerful agents for Microsoft Teams. The post Announcing the updated Teams AI Library and MCP support appeared first on Microsoft 365 Developer Blog.  ( 24 min )

  • Open

    Help Shape the Future of Log Analytics: Your Feedback Matters
    We’re launching a quick survey to gather your feedback on Azure Monitor Log Analytics. Your input directly impacts our product roadmap and helps us prioritize the features and improvements that matter most to you. The survey takes just a few minutes, and your responses will remain confidential. Take the Survey   New to Log Analytics? Start here: Get Started with Azure Monitor Log Analytics Overview of Log Analytics in Azure Monitor - Azure Monitor | Microsoft Learn For questions or additional feedback, feel free to reach out to Noyablanga@microsoft.com. Thank you for being part of this journey!  ( 18 min )
  • Open

    Applications (and revisions) stuck in activating state on Azure Container Apps
    This post refers to issues where you may see revisions “stuck in ‘Activating’ state” when using Azure Container Apps - and some common causes and explanations behind this.  ( 6 min )
  • Open

    Where Does an LLM Keep All That Knowledge? A Peek into the Physical Side of AI
    We often hear about Large Language Models (LLMs) like GPT-4 having billions of parameters and being trained on massive datasets. But have you ever wondered: Where is all that data actually stored? And more fundamentally, how does a computer even store knowledge in the first place? Let’s take a journey from the world of electric charges to the vast neural networks powering today’s AI. Data at the Most Basic Level: 0s and 1s At its core, all digital data is just binary — a series of 0s and 1s. These bits are represented physically using electric charges or magnetic states: In RAM or CPU/GPU memory, bits are stored using transistors and capacitors that are either charged (1) or not charged (0). In SSDs, data is stored using floating-gate transistors that trap electrons to represent binary st…  ( 23 min )
  • Open

    Activator as an Orchestrator of the Fabric Event Driven flows
    With Fabric Events general availability, the role of Activator expands from setting notifications and acting on your data in real time to becoming an orchestration centerpiece. Activator acts as a connecting tissue enabling complex event-driven and data-driven flows. Let’s look at a very common architecture we often see our customers implement: In this architecture we … Continue reading “Activator as an Orchestrator of the Fabric Event Driven flows”  ( 6 min )
    Task flows in Microsoft Fabric (Generally Available)
    Task flows feature is now generally available! Task flows streamline the design of your data solutions and ensure consistency between design and development efforts. It also allows you to navigate items and manage your workspace more easily, even as it becomes more complex over time. Since its preview last May, we have received a great … Continue reading “Task flows in Microsoft Fabric (Generally Available)”  ( 6 min )
  • Open

    AI Agents in Production: From Prototype to Reality - Part 10
    Hi everyone, Shivam Goyal here! This marks the final installment in our AI Agents for Beginners series, based on the awesome repository (link to the repo). I hope you've enjoyed this journey into the world of agentic AI! In previous posts ([links to parts 1-9 at the end]), we've covered the fundamentals and key design patterns. Now, let's explore the practical considerations of deploying AI agents to production, focusing on performance, cost management, and evaluation. As an active member of the AI community, I'm excited to share these insights to help you bring your agentic AI projects to life. From Lab to Production: Key Considerations Successfully deploying AI agents requires careful planning and attention to detail. We need to consider: How to plan the deployment of your AI Agent to p…  ( 26 min )

  • Open

    Granting Azure Resources Access to SharePoint Online Sites Using Managed Identity
    When integrating Azure resources like Logic Apps, Function Apps, or Azure VMs with SharePoint Online, you often need secure and granular access control. Rather than handling credentials manually, Managed Identity is the recommended approach to securely authenticate to Microsoft Graph and access SharePoint resources. High-level steps: Step 1: Enable Managed Identity (or App Registration) Step 2: Grant Sites.Selected Permission in Microsoft Entra ID Step 3: Assign SharePoint Site-Level Permission Step 1: Enable Managed Identity (or App Registration) For your Azure resource (e.g., Logic App): Navigate to the Azure portal. Go to the resource (e.g., Logic App). Under Identity, enable System-assigned Managed Identity. Note the Object ID and Client ID (you’ll need the Client ID later). Alternat…  ( 23 min )
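    To make Step 3 concrete, here is a hedged Python sketch (not from the post) that grants the managed identity write access to a single site via the Microsoft Graph site permissions endpoint. The site ID and client ID are placeholders, and the caller must itself be allowed to manage site permissions.

        # Sketch: grant a managed identity "write" access to one SharePoint site (Sites.Selected model).
        import requests
        from azure.identity import DefaultAzureCredential

        GRAPH = "https://graph.microsoft.com/v1.0"
        site_id = "<tenant>.sharepoint.com,<site-collection-guid>,<site-guid>"   # placeholder
        app_client_id = "00000000-0000-0000-0000-000000000000"                   # managed identity's client ID (placeholder)

        token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
        resp = requests.post(
            f"{GRAPH}/sites/{site_id}/permissions",
            headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
            json={
                "roles": ["write"],
                "grantedToIdentities": [
                    {"application": {"id": app_client_id, "displayName": "my-logic-app"}}  # placeholder name
                ],
            },
        )
        resp.raise_for_status()
        print(resp.json())

    Once this grant exists, the Azure resource can request Graph tokens with the Sites.Selected permission and will only be able to reach the sites it has been explicitly granted.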
  • Open

    Guest Blog: Orchestrating AI Agents with Semantic Kernel Plugins: A Technical Deep Dive
    Today we’re excited to welcome Jarre Nejatyab as a guest blog to highlight a technical deep dive on orchestrating AI Agents with Semantic Kernel Plugins. In the rapidly evolving world of Large Language Models (LLMs), orchestrating specialized AI agents has become crucial for building sophisticated cognitive architectures capable of complex reasoning and task execution. While […] The post Guest Blog: Orchestrating AI Agents with Semantic Kernel Plugins: A Technical Deep Dive appeared first on Semantic Kernel.  ( 28 min )

  • Open

    Real-time Speech Transcription with GPT-4o-transcribe and GPT-4o-mini-transcribe using WebSocket
    Azure OpenAI has expanded its speech recognition capabilities with two powerful models: GPT-4o-transcribe and GPT-4o-mini-transcribe. These models also leverage WebSocket connections to enable real-time transcription of audio streams, providing developers with cutting-edge tools for speech-to-text applications. In this technical blog, we'll explore how these models work and demonstrate a practical implementation using Python. Understanding OpenAI's Realtime Transcription API Unlike the regular REST API for audio transcription, Azure OpenAI's Realtime API enables continuous streaming of audio data through WebSockets or WebRTC connections. This approach is particularly valuable for applications requiring immediate transcription feedback, such as live captioning, meeting transcription, or voi…  ( 31 min )
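    For orientation only: a heavily hedged Python sketch of the WebSocket flow. The endpoint shape, API version, and event names below are assumptions about the Realtime transcription preview and should be checked against the current documentation.

        # Sketch: stream 16-bit PCM audio to an assumed Azure OpenAI Realtime transcription endpoint.
        import asyncio, base64, json, os
        import websockets  # pip install websockets (older versions use extra_headers; newer use additional_headers)

        async def transcribe(pcm_chunks):
            url = (
                f"wss://{os.environ['AZURE_OPENAI_RESOURCE']}.openai.azure.com/openai/realtime"
                "?api-version=2025-04-01-preview&intent=transcription"  # assumed query string
            )
            async with websockets.connect(url, extra_headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]}) as ws:
                # Configure the transcription session (event name assumed).
                await ws.send(json.dumps({
                    "type": "transcription_session.update",
                    "session": {"input_audio_transcription": {"model": "gpt-4o-transcribe"}},
                }))
                for chunk in pcm_chunks:
                    await ws.send(json.dumps({
                        "type": "input_audio_buffer.append",
                        "audio": base64.b64encode(chunk).decode(),
                    }))
                async for message in ws:
                    event = json.loads(message)
                    if event.get("type", "").endswith("transcription.delta"):
                        print(event.get("delta", ""), end="", flush=True)

        # asyncio.run(transcribe(read_pcm_chunks("meeting.wav")))  # read_pcm_chunks is a hypothetical helper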
    A Microsoft Fabric Template for Azure AI Content Understanding is Now Available
    We are excited to share that we have released a new Microsoft Fabric pipeline template that helps you easily send the results from Azure AI Content Understanding into a Fabric Lakehouse! This template makes it easier than ever to harvest information from multimodal content and turn it into structured data using Content Understanding and perform further analysis in Microsoft Fabric. Whether you are looking to extract insights from a contract, call transcript, or video footage, this template simplifies the process and gives you fast access to Fabric’s powerful data tools. Why It Matters Azure AI Content Understanding uses powerful large language models (LLMs) to extract key information from documents, videos, audio, and image files. For example, it can identify key phrases in documents, extract tables from invoices, or generate video chapters and summaries. This template lets you seamlessly feed structured JSON outputs from Content Understanding into a Fabric Lakehouse, where you can immediately use Power BI, Dataflows, and other tools to analyze and make sense of the data.   Key Benefits Quick Setup: Move from unstructured content to structured data in no time—no complicated setup required! Seamless Integration: Connect Azure AI and Microsoft Fabric effortlessly. Secure & Scalable: Every component is built on Microsoft’s cloud, ensuring your data is safe and scalable as your needs grow. Try It Now You can find the template and setup instructions on GitHub. We would love to hear how you are using it! Feel free to leave any questions or feedback in the comments below or send us an email.  Resources & Documentation Explore the following resources to learn more about Azure AI Content Understanding and Microsoft Fabric Azure Content Understanding Overview Microsoft Fabric Overview Azure Content Understanding in Azure AI Foundry Azure Content Understanding FAQs  ( 21 min )
  • Open

    Introducing Azure DevOps ID Token Refresh and Terraform Task Version 5
    We are excited to share some recent updates that improve the experience of using Workload identity federation (OpenID Connect) with Azure DevOps and Terraform on Microsoft Azure. Many working parts have come together to make this possible and we’ll share those here. We are also very pleased to announce version 5 of the Microsoft DevLabs […] The post Introducing Azure DevOps ID Token Refresh and Terraform Task Version 5 appeared first on Azure DevOps Blog.  ( 25 min )
  • Open

    How Networking setting of Batch Account impacts simplified communication mode Batch pool
    As described in our official documentation, the classic communication mode for Batch nodes will be retired on 31 March 2026. Instead, it’s recommended to use the simplified communication mode when creating a Batch pool.   However, after users change their Batch pool communication mode from classic to simplified and apply the necessary network security group changes per the documentation, they may find that the node is still stuck in an unusable state.   A very likely cause of this issue is an incorrect networking setting on the Batch account.   This blog explains why the networking setting can leave a node using simplified communication mode stuck in an unusable state, and how to configure the correct networking setting for different user scenarios.   Cause: As described in this document, the differ…  ( 23 min )
  • Open

    Get ready for Microsoft Build 2025
    Microsoft Build is just a few weeks away. To celebrate, we’re highlighting resources that will help you get ready for the big event. Explore some of the exciting sessions you can join in-person or online, learn new skills before jumping into live deep-dive sessions, brush up on best practices, and get up to speed on the latest developer tools so you can hit the event ready to take your knowledge (and your applications) to the next level. Connect, code, and grow at Build It’s almost time for Microsoft Build! Can’t join the event live in-person? No problem. You can still experience the event streaming live online for free (May 19-22). Watch the keynote, join live sessions, learn new skills, and watch in-depth demos. Join the .NET & C# teams at Microsoft Build 2025 Don’t miss this opportunity t…  ( 28 min )
  • Open

    Introducing Cloud Accelerate Factory: Unlock zero cost deployment assistance for Azure
    As AI reshapes how businesses operate and deliver value, many organizations are seeking ways to modernize their infrastructure — quickly, securely, and at scale. With the right tools and support, cloud adoption becomes an empowering step toward innovation and growth. Azure Innovate & Azure Migrate and Modernize were designed to support your entire cloud journey, providing expert guidance, funding, and comprehensive resources all in one place to help you maximize the value of Azure to boost business growth. We’re continuously evolving these offerings to deliver even more value at scale. That’s why we created Cloud Accelerate Factory, a new benefit of Azure Innovate & Azure Migrate and Modernize, built on the patterns and insights from thousands of customer deployments. The Factory provides …  ( 23 min )
  • Open

    Streamlining data discovery for AI/ML with OpenMetadata on AKS and Azure NetApp Files
    Table of Contents Abstract Introduction Prerequisites Workstation setup Repository directory contents Terraform variables file Credentials Azure settings Instaclustr settings VNet settings AKS cluster settings Azure NetApp Files settings PostgreSQL settings OpenSearch settings Authorized networks Infrastructure deployment Application deployment Using OpenMetadata Adding a service Adding an ingestion Cleanup Summary Additional Information Abstract This article contains a step-by-step guide to deploying OpenMetadata on Azure Kubernetes Service (AKS), using Azure NetApp Files for storage. It also covers the deployment and configuration of PostgreSQL and OpenSearch databases to run externally from the Kubernetes cluster, following OpenMetadata best practices, managed by NetApp® Instaclustr®. T…  ( 61 min )

  • Open

    Authenticate to Fabric data connections using Azure Key Vault stored secrets (Preview)
    Azure Key Vault support in Fabric Data connections is now in preview! With this capability, we are introducing a new concept called ‘Azure Key Vault references’ in Microsoft Fabric, which lets users reuse their existing Azure Key Vault secrets for authentication to data source connections instead of copy-pasting passwords, slashing credential-management effort and audit risk. … Continue reading “Authenticate to Fabric data connections using Azure Key Vault stored secrets (Preview)”  ( 7 min )
    Announcing the winners of “Hack Together: The Microsoft Data + AI Kenya Hack”
    We are excited to announce the winners of Hack Together: The Microsoft Data + AI Kenya Hack! About the Hackathon The HackTogether was an exciting opportunity to bring bold ideas to life by building Data & AI solutions using Microsoft Fabric and Azure AI Services. Organized exclusively for participants from Kenya, the … Continue reading “Announcing the winners of “Hack Together: The Microsoft Data + AI Kenya Hack””  ( 10 min )
    Introducing new OpenAI Plugins for Eventhouse (Preview)
    We are excited to announce the release of two powerful AI plugins for Eventhouse: AI Embed Text Plugin and AI Chat Completion Prompt Plugin. These plugins are designed to enhance your data analysis capabilities and augment your workflows with OpenAI models, providing more granular control to power users who seek precision over model output or wish to fine-tune … Continue reading “Introducing new OpenAI Plugins for Eventhouse (Preview)”  ( 7 min )
  • Open

    Microsoft.Extensions.AI: Integrating AI into your .NET applications
    Artificial Intelligence (AI) is transforming the way we build applications. With the introduction of Microsoft.Extensions.AI, integrating AI services into .NET applications has never been easier. In this blog, we'll explore Microsoft.Extensions.AI, why .NET developers should try it out and how to get started using it to build a simple text generation application. Why Microsoft.Extensions.AI? Microsoft.Extensions.AI provides unified abstractions and middleware for integrating AI services into .NET applications. This means you can work with AI capabilities like chat features, embedding generation, and tool calling without worrying about specific platform implementations. Whether you're using Azure AI, OpenAI, or other AI services, Microsoft.Extensions.AI ensures seamless integration and coll…  ( 36 min )
  • Open

    How to use DefaultAzureCredential across multiple tenants
    If you are using the DefaultAzureCredential class from the Azure Identity SDK while your user account is associated with multiple tenants, you may find yourself frequently running into API authentication errors (such as HTTP 401/Unauthorized). This post is for you! These are your two options for successful authentication from a non-default tenant: Set up your environment precisely to force DefaultAzureCredential to use the desired tenant Use a specific credential class and explicitly pass in the desired tenant ID Option 1: Get DefaultAzureCredential working The DefaultAzureCredential class is a credential chain, which means that it tries a sequence of credential classes until it finds one that can authenticate successfully. The current sequence is: EnvironmentCredential WorkloadIdentityC…  ( 26 min )
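    A minimal azure-identity sketch of both options (illustrative, not the post's exact code); the tenant ID and Key Vault URL are placeholders.

        # Option 1-style: allow DefaultAzureCredential's chain to acquire tokens for an extra tenant when asked.
        # Option 2-style: pin a specific credential to that tenant explicitly.
        from azure.identity import AzureCliCredential, DefaultAzureCredential
        from azure.keyvault.secrets import SecretClient

        TENANT_ID = "00000000-0000-0000-0000-000000000000"  # the non-default tenant (placeholder)

        chained = DefaultAzureCredential(additionally_allowed_tenants=[TENANT_ID])
        pinned = AzureCliCredential(tenant_id=TENANT_ID)  # requires `az login --tenant <id>` beforehand

        client = SecretClient("https://my-vault.vault.azure.net", credential=pinned)
        print(client.get_secret("example-secret").value)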
    Showcasing Phi-4-Reasoning: A Game-Changer for AI Developers
    Introduction Phi-4-Reasoning is a state-of-the-art AI model developed by Microsoft Research, designed to excel in complex reasoning tasks. With its advanced capabilities, Phi-4-Reasoning is a powerful tool for AI developers, enabling them to tackle intricate problems with ease and precision.     What is Phi-4-Reasoning? Phi-4-Reasoning is a 14-billion parameter open-weight reasoning model that has been fine-tuned from the Phi-4 model using supervised fine-tuning on a dataset of chain-of-thought traces.  We are also releasing Phi-4-reasoning-plus, a variant enhanced through a short phase of outcome-based reinforcement learning that offers higher performance by generating longer reasoning traces. This model is designed to handle …  ( 34 min )
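    For a quick local try-out (illustrative; the Hugging Face model ID is an assumption, and a sizeable GPU is needed for a 14B-parameter model):

        # Sketch: run Phi-4-reasoning locally with Hugging Face transformers.
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "microsoft/Phi-4-reasoning"  # assumed Hugging Face model ID
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id, torch_dtype=torch.bfloat16, device_map="auto"
        )

        messages = [
            {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"},
        ]
        inputs = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        outputs = model.generate(inputs, max_new_tokens=512)
        # Print only the newly generated tokens (the model's reasoning and answer).
        print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))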
  • Open

    Feedback Loops in GenAI with Azure Functions, Azure OpenAI and Neon serverless Postgres
    As vector search and Retrieval Augmented Generation (RAG) become mainstream for Generative AI (GenAI) use cases, we’re looking ahead to what’s next. GenAI primarily operates in a one-way direction, generating content based on input data, with no feedback from production data. Generative Feedback Loops (GFL) focus on optimizing and improving the AI’s outputs over time through a cycle of feedback and learning based on production data. In GFL, results generated from Large Language Models (LLMs) like GPT are vectorized, indexed, and saved back into vector storage for better-filtered semantic search operations. This creates a dynamic cycle that adapts LLMs to new and continuously changing data and user needs. GFL offers personalized, up-to-date summaries and suggestions. A good…  ( 50 min )
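    One hedged sketch of a single feedback-loop step (not the post's code): summarize with Azure OpenAI, embed the result, and write it back to a pgvector table in Neon so later semantic searches can pick it up. Deployment names, the table schema, and the connection string are placeholders.

        # Sketch: generate -> embed -> store back into vector storage (one Generative Feedback Loop step).
        import os
        import psycopg2
        from openai import AzureOpenAI

        client = AzureOpenAI(
            azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
            api_key=os.environ["AZURE_OPENAI_API_KEY"],
            api_version="2024-06-01",  # assumed
        )

        summary = client.chat.completions.create(
            model="gpt-4o",  # placeholder deployment name
            messages=[{"role": "user", "content": "Summarize today's top product feedback."}],
        ).choices[0].message.content

        embedding = client.embeddings.create(
            model="text-embedding-3-small",  # placeholder deployment name
            input=summary,
        ).data[0].embedding

        with psycopg2.connect(os.environ["NEON_CONNECTION_STRING"]) as conn:
            with conn.cursor() as cur:
                # Assumes: CREATE TABLE generated_insights (id serial PRIMARY KEY, content text, embedding vector(1536));
                cur.execute(
                    "INSERT INTO generated_insights (content, embedding) VALUES (%s, %s::vector)",
                    (summary, "[" + ",".join(str(x) for x in embedding) + "]"),
                )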
  • Open

    Dev Proxy v0.27 with generating TypeSpec files and configuring using natural language
    Dev Proxy v0.27 is even more developer-friendly, helping you generate API specs faster, improving suggestions while editing, and laying the foundation for more flexible AI integrations. The post Dev Proxy v0.27 with generating TypeSpec files and configuring using natural language appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    General Availability: App Service Webjobs on Linux
    Last year, we introduced Webjobs on Linux as a preview feature. We are now excited to announce General Availability for Webjobs on App Service Linux for both code and container scenarios.  ( 2 min )

  • Open

    Streamline & Modernise ASP.NET Auth: Moving enterprise apps from IIS to App Service with Easy Auth
    Introduction When modernising your enterprise ASP.NET (.NET Framework) or ASP.NET Core applications and moving them from IIS over to Azure App Service, one of the aspects you will have to take into consideration is how you will manage authentication (AuthN) and authorisation (AuthZ). Specifically, for applications that leverage on-premises auth mechanisms such as Integrated Windows Authentication, you will need to start considering more modern auth protocols such as OpenID Connect/OAuth which are more suited to the cloud. Fortunately, App Service includes built-in authentication and authorisation support also known as 'Easy Auth', which requires minimal to zero code changes. This feature is integrated into the platform, includes a built-in token store, and operates as a middleware running …  ( 40 min )
    How to use Azure Table Storage with .NET Aspire and a Minimal API
    Azure Storage is a versatile cloud storage solution that I've used in many projects. In this post, I'll share my experience integrating it into a .NET Aspire project through two perspectives: first, by building a simple demo project to learn the basics, and then by applying those learnings to migrate a real-world application, AzUrlShortener. This post is part of a series about modernizing the AzUrlShortener project: Migrating AzUrlShortener from Azure SWA to Azure Container Apps Converting a Blazor WASM to FluentUI Blazor server Azure Developer CLI (azd) in a real-life scenario How to use Azure Table Storage with .NET Aspire and a Minimal API Part 1: Learning using a simple project For this post we will be using a simpler project instead of the full AzUrlShortener solution to make it eas…  ( 36 min )
    Tracking Kubernetes Updates in AKS Clusters
    When you support Azure Kubernetes Service (AKS) clusters, keeping up with new versions of Kubernetes being released, and ensuring that your clusters are on a supported version can be difficult. If you have one or two clusters it might be OK, but as your estate grows it can be difficult to keep on top of which clusters have which version of Kubernetes and which needs updates. One way of dealing with this could be to implement Azure Kubernetes Fleet Manager (Fleet). Fleet provides a comprehensive solution for monitoring Kubernetes and Node Image versions in your clusters, and rolling out updates across your estate. You can read more details on Fleet for update management here. However, if you're not ready to implement Fleet, or your AKS estate isn't large enough to warrant it, we can build a…  ( 35 min )
  • Open

    Using network troubleshooting tools with Azure Container Apps
    This post will go over using networking troubleshooting tools and which scenarios they may be best for  ( 8 min )
  • Open

    Medallion Architecture in Fabric Real-Time Intelligence
    Introduction Building a multi-layer, medallion architecture using Fabric Real-Time Intelligence (RTI) requires a different approach compared to traditional data warehousing techniques. But even transactional source systems can be effectively processed in RTI. To demonstrate, we’ll look at how sales orders (created in a relational database) can be continuously ingested and transformed through a RTI bronze, … Continue reading “Medallion Architecture in Fabric Real-Time Intelligence”  ( 9 min )
    Fabric April 2025 Feature Summary
    Welcome to the Fabric April 2025 Feature Summary! This update brings exciting advancements across various workloads, including Low-code AI tools to accelerate productivity in notebooks (Preview), session Scoped distributed #temp table in Fabric Data Warehouse (Generally Available) and the Migration assistant for Fabric Data Warehouse (Preview) to simplify your migration experience. Contents Community & Events … Continue reading “Fabric April 2025 Feature Summary”  ( 16 min )
  • Open

    AI Sparks: Unleashing Agents with the AI Toolkit
    The final episode of our "AI Sparks" series delved deep into the exciting world of AI Agents and their practical implementation. We also covered a fair part of MCP with the Microsoft AI Toolkit extension for VS Code.  We kicked off by charting the evolutionary path of intelligent conversational systems. Starting with rudimentary rule-based basic chatbots, we then explored the advancements brought by basic generative AI chatbots, which offered contextually aware interactions. Then we explored Retrieval-Augmented Generation (RAG), highlighting its ability to ground generative models in specific knowledge bases, significantly enhancing accuracy and relevance. We also discussed the limitations of these techniques. The session then centered on the theme of Agents and …  ( 28 min )
  • Open

    Getting Started with Azure MCP Server: A Guide for Developers
    The world of cloud computing is growing rapidly, and Azure is at the forefront of this innovation. If you're a student developer eager to dive into Azure and learn about Model Context Protocol (MCP), the Azure MCP Server is your perfect starting point. This tool, currently in Public Preview, empowers AI agents to seamlessly interact with Azure services like Azure Storage, Cosmos DB, and more. Let's explore how you can get started! 🎯 Why Use the Azure MCP Server? The Azure MCP Server revolutionizes how AI agents and developers interact with Azure services. Here's a glimpse of what it offers: Exploration Made Easy: List storage accounts, databases, resource groups, tables, and more with natural language commands. Advanced Operations: Manage configurations, query analytics, and execute comp…  ( 24 min )
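    As a rough illustration of how an agent or script might talk to it, the sketch below uses the MCP Python SDK to launch the Azure MCP Server over stdio and list the tools it exposes; the npx launch command, package name, and SDK calls are assumptions to verify against the current Azure MCP Server documentation rather than details taken from this post.

```python
# Hedged sketch: connect to the Azure MCP Server over stdio with the MCP Python SDK
# and print the Azure tools it exposes. The npx command and package name are assumed.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="npx",
    args=["-y", "@azure/mcp@latest", "server", "start"],  # assumed launch command
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name, "-", tool.description)

asyncio.run(main())
```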
  • Open

    GenAIOps and Evals Best Practices
    Contributors and Reviewers: Jay Sen (C), Anthony Nevico (C), Chris Kahrs (C), Anurag Karuparti (C), John De Havilland (R) Key drivers behind GenAI Evaluation Risk Mitigation and Reliability: Proactively identifying issues to ensure GenAI models perform safely and consistently in critical environments.  Iterative Improvement: Leveraging continuous feedback loops and tests to refine models and maintain alignment with evolving business objectives.  Transparency and Accountability: Establishing clear, shared metrics that build trust between technical teams and business stakeholders, ensuring AI deployments are safe, ethical, and outcome driven.  Speed to market: Evaluation frameworks allow the adoption of GenAI technologies across business processes in a safe, sane and validated manner, allow…  ( 37 min )
  • Open

    Elevate Your Virtual Machine Management with Multi-Select, Sorting, Grouping, and Tags in Azure DevTest Labs
    We are thrilled to unveil exciting new enhancements in the My Virtual Machine view within Azure DevTest Labs that will revolutionize your VM management experience. With these updates, managing your virtual machines has never been easier or more efficient. Imagine being able to multi-select VMs to start, stop, restart, or delete them all at once […] The post Elevate Your Virtual Machine Management with Multi-Select, Sorting, Grouping, and Tags in Azure DevTest Labs appeared first on Develop from the cloud.  ( 22 min )

  • Open

    FSI Knowledge Mining and Intelligent Document Process Reference Architecture
    FSI customers such as insurance companies and banks rely on their vast amounts of data to provide sometimes hundreds of individual products to their customers. From assessing product suitability, underwriting, fraud investigations, and claims handling, many employees and applications depend on accessing this data to do their jobs efficiently. Since the capabilities of GenAI have been realised, we have been helping our customers in this market transform their business with unified systems that simplify access to this data and speed up the processing times of these core tasks, while remaining compliant with the numerous regulations that govern the FSI space. Combining the use of Knowledge Mining with Intelligent Document processing provides a powerful solution to reduce the manual effort and…  ( 39 min )
    Add-ins and more – WordPress on App Service
    The WordPress on App Service create flow offers a streamlined process to set up your site along with all the necessary Azure resources. Let's learn more about add-ins that can enhance your WordPress experience and help you decide which ones to opt for. Deploying WordPress on App Service is a breeze thanks to the ARM template approach, which ties together Azure applications to ensure a seamless experience for developers. Whether you're a seasoned pro or new to the create flow, this guide will demystify these additional settings and help you make informed choices. Add-ins tab Managed Identity: Say goodbye to managing credentials! Managed Identities provide secure access to Azure resources without storing sensitive credentials. Enabling this option creates a user-assigned managed identity, c…  ( 25 min )
  • Open

    Fabric SQL Database Integration: Unlocking New Possibilities with Power BI desktop
    Introducing Seamless Connectivity for Enhanced Data Analytics and Reporting. We are thrilled to announce a new SQL database integration with Power BI Desktop! This innovative feature is designed to empower users with streamlined access to their SQL databases, providing greater flexibility and precision for building insightful reports and dashboards. With this integration, users can now … Continue reading “Fabric SQL Database Integration: Unlocking New Possibilities with Power BI desktop”  ( 6 min )
  • Open

    AI Agents Readiness and skilling on Demand Events
    2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. On Demand content now available (topic/track): AI Agents Hackathon Kickoff (All); Build your code-first app with Azure AI Agent Service (Python); AI Agents for Java using Azure AI Foundry (Java); Build your code-first app with Azure AI Agent Service (Python); Build and extend agents for Microsoft 365 Copilot (Copilots); Transforming business processes with multi-agent AI using Semantic Kernel (Python); Build your code-first app with Azure AI Agent Service (.NET) (C#); Building custom engine agents with Azure AI Foundry and Visual Studio Code (Copilots); Your first AI Agent in JS with …  ( 24 min )
  • Open

    Routine Planned Maintenance Notifications Improvements for App Service
    As of April 2025, we are happy to announce major improvements to App Service routine maintenance notifications.  ( 3 min )
  • Open

    Advancing Fine-Tuning in Azure AI Foundry: April 2025 Updates
    As organizations increasingly tailor foundation models to meet their domain-specific needs, Azure AI Foundry continues to deliver new capabilities that streamline, scale, and enhance the fine-tuning experience. One such organization, Decagon AI, fine-tuned GPT-4o-mini using Azure OpenAI Service’s supervised fine-tuning for their customer service agents. They were able to improve model accuracy and observed substantially lower latency for inferencing. This is one of my favorite use cases because it combines two cutting edge techniques - agents and fine tuning - for better results! “Fine-tuning GPT-4o-mini on Azure dramatically accelerated our delivery timeline,” said Ashwin Sreenvias, CEO at Decagon AI. “The training performance, simplicity of the pipeline, and integrated tooling gave us a …  ( 26 min )
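    For readers wondering what supervised fine-tuning looks like in code, here is a hedged sketch of submitting a job to Azure OpenAI with the openai Python SDK; the endpoint and key environment variables, API version, JSONL file name, and exact base-model identifier are placeholders, not details of Decagon's setup.

```python
# Sketch of kicking off a supervised fine-tuning job against Azure OpenAI.
# Endpoint/key env vars, API version, file name, and model name are placeholders;
# check the fine-tuning docs for the exact base-model identifier and API version.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-10-21",  # assumed; use a version that supports fine-tuning
)

# Upload chat-formatted JSONL training data, then start the job on the base model.
training_file = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-4o-mini")
print("Fine-tuning job:", job.id, job.status)
```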
  • Open

    Guest Blog: Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser
    Today we’re excited to welcome Jonathan David, as a guest author on the Semantic Kernel blog. We’ll turn it over to Jonathan to dive into Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser.   With the European Accessibility Act and Germany’s Barrierefreiheitsstärkungsgesetz (which translates to Barrier […] The post Guest Blog: Letting AI Help Make the World More Accessible – Analyzing Website Accessibility with Semantic Kernel and OmniParser appeared first on Semantic Kernel.  ( 34 min )
  • Open

    Application Awareness in Azure Migrate
    Shiva Shastri Sr Product Marketing Manager, Azure Migrate—Product & Ecosystem. Intuitive and cost-effective migrations. In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay competitive through innovations while managing costs. By leveraging the power of the cloud, organizations can achieve unparalleled cost-effectiveness and foster sustainable innovation. By transitioning to Azure, any organization can achieve greater financial flexibility, operational efficiency, and access to secure innovations that provide a competitive edge in the marketplace. Collocating application resources and data is essential for optimal performance and return on investment (ROI). Once in Azure, secure and responsible AI can help provide insights and lead to actions with b…  ( 23 min )

  • Open

    Using Azure Machine Learning (AML) for Medical Imaging Vision Model Training and Fine-tuning
    Vision Model Architectures At present, Transformer-based vision model architecture is considered the forefront of advanced vision modeling.  These models are exceptionally versatile, capable of handling a wide range of applications, from object detection and image segmentation to contextual classification. Two popular Transformer-based model implementations are often used in real-world applications.  These are: Masked Autoencoders (MAE) and Vision Transformer (ViT) Masked Autoencoders (MAE) Masked Autoencoders (MAE) are a type of Transformer-based vision model architecture. They are designed to handle large-scale vision tasks by leveraging the power of self-supervised learning. The key idea behind MAE is to mask a portion of the input image and then train the model to reconstruct the mis…  ( 42 min )
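    The masking idea at the heart of MAE is easy to see in a toy example. The sketch below (illustrative only, using numpy and arbitrary sizes) splits an image into non-overlapping patches and randomly hides 75% of them; the encoder would see only the visible patches and the model would be trained to reconstruct the masked ones.

```python
# Toy illustration of MAE-style masking: split an image into patches and mask most of
# them. The patch size and 75% mask ratio are common choices but illustrative here.
import numpy as np

def mask_patches(image: np.ndarray, patch: int = 16, mask_ratio: float = 0.75, seed: int = 0):
    h, w, c = image.shape
    patches = image.reshape(h // patch, patch, w // patch, patch, c)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch, patch, c)

    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(len(patches), size=int(len(patches) * mask_ratio), replace=False)

    visible = np.delete(patches, masked_idx, axis=0)  # what the encoder sees
    targets = patches[masked_idx]                     # what the decoder must reconstruct
    return visible, targets, masked_idx

visible, targets, _ = mask_patches(np.zeros((224, 224, 3), dtype=np.float32))
print(visible.shape, targets.shape)  # (49, 16, 16, 3) (147, 16, 16, 3)
```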
  • Open

    Service principal and private library support for Fabric User data functions
    Using Service Principal and Managed Identity, along with private libraries for Fabric user data functions, makes working with data much easier and more secure. These features let developers customize workflows and use their own code to solve problems, boosting productivity and creativity in teams. As businesses grow and rely more on unique analytics and automation, these tools help simplify data management and improve operations. Check this blog post to learn more  ( 6 min )
  • Open

    AI Agents: Metacognition for Self-Aware Intelligence - Part 9
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with an exciting topic: Metacognition in AI Agents. In previous posts ([links to parts 1-8 at the end]), we've covered fundamental concepts and design patterns. Now, we'll explore how to equip AI agents with the ability to "think about thinking," enabling them to evaluate, adapt, and improve their own cognitive processes. What is Metacognition? Metacognition, often described as "thinking about thinking," refers to higher-order cognitive processes that involve self-awareness and self-regulation of one's cognitive activities. In AI, this means enabling agents to evaluate their actions, identify errors, and adjust strategies based on past experiences. This self-awareness allows…  ( 26 min )
  • Open

    "Appointment Booking Assistant"—an AI-powered voice agent
    Introduction Imagine having an intelligent assistant that can schedule appointments for you over a phone call. The Appointment Booking Assistant is exactly that – a voice-driven AI agent that answers calls, converses naturally with users, and books appointments in a calendar. This solution showcases how modern cloud services and AI can streamline scheduling tasks. It brings together real-time voice interaction with the power of AI and Microsoft 365 integration, allowing users to simply speak with an assistant to set up meetings or appointments. The result is a faster, more accessible way to manage bookings without needing a human receptionist or manual coordination. Technologies Involved Building this assistant required combining several key technologies, each playing a specific role: Az…  ( 59 min )
    "Appointment Booking Assistant"—an AI-powered voice agent
    Introduction Imagine having an intelligent assistant that can schedule appointments for you over a phone call. The Appointment Booking Assistant is exactly that – a voice-driven AI agent that answers calls, converses naturally with users, and books appointments in a calendar. This solution showcasing how modern cloud services and AI can streamline scheduling tasks. It brings together real-time voice interaction with the power of AI and Microsoft 365 integration, allowing users to simply speak with an assistant to set up meetings or appointments. The result is a faster, more accessible way to manage bookings without needing a human receptionist or manual coordination. Technologies Involved Building this assistant required combining several key technologies, each playing a specific role: Az…

  • Open

    Azure Kubernetes Fleet Manager Demo with Terraform Code
    Introduction Azure Kubernetes Fleet Manager (Fleet Manager) simplifies the at-scale management of multiple Azure Kubernetes Service (AKS) clusters by treating them as a coordinated “fleet.” One Fleet Manager hub can manage up to 100 AKS clusters in a single Azure AD tenant and region scope, so you can register, organize, and operate a large number of clusters from a single control plane. In this walkthrough, we’ll explore: The key benefits and considerations of using Fleet Manager A real-world e-commerce use case How to deploy a Fleet Manager hub, AKS clusters, and Azure Front Door with Terraform How everything looks and works in the Azure portal Along the way, you’ll see screenshots from my demo environment to illustrate each feature.   Why Use Fleet Manager? Managing dozens or even hun…  ( 31 min )
  • Open

    Guest Blog: SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel
    Today we’re excited to welcome Vic Perdana as a guest author on the Semantic Kernel blog to cover his work on SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel. We’ll turn it over to Vic to dive in further. Everywhere you look lately, the buzz is about AI agents. But […] The post Guest Blog: SemantiClip: A Practical Guide to Building Your Own AI Agent with Semantic Kernel appeared first on Semantic Kernel.  ( 28 min )
  • Open

    How Xi’an Jiaotong-Liverpool University scaled hands-on learning with Microsoft Dev Box
    As AI and data science rapidly reshape industries, universities worldwide are rethinking how they deliver hands-on learning. At Xi’an Jiaotong-Liverpool University (XJTLU) in China, the School of AI and Advanced Computing embraced Microsoft Dev Box to give students a modern, scalable, and real-world development environment—right from their first year. Here’s how XJTLU transformed their curriculum […] The post How Xi’an Jiaotong-Liverpool University scaled hands-on learning with Microsoft Dev Box appeared first on Develop from the cloud.  ( 23 min )

  • Open

    Understanding Azure OpenAI Service Quotas and Limits: A Beginner-Friendly Guide
    Azure OpenAI Service allows developers, researchers, and students to integrate powerful AI models like GPT-4, GPT-3.5, and DALL·E into their applications. But with great power comes great responsibility and limits. Before you dive into building your next AI-powered solution, it's crucial to understand how quotas and limits work in the Azure OpenAI ecosystem. This guide is designed to help students and beginners easily understand the concept of quotas, limits, and how to manage them effectively. What Are Quotas and Limits? Think of Azure's quotas as your "AI data pack." It defines how much you can use the service. Meanwhile, limits are hard boundaries set by Azure to ensure fair use and system stability. Quota The maximum number of resources (e.g., tokens, requests) allocated to your Az…  ( 24 min )
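    A quick worked example helps make the quota/limit distinction concrete. The helper below assumes the commonly documented proportional allocation of roughly 6 requests per minute for every 1,000 tokens per minute of quota; that ratio and the sample numbers are assumptions to verify against the quota documentation for your model.

```python
# Back-of-the-envelope quota math: given a tokens-per-minute (TPM) quota, estimate how
# many requests per minute you can realistically make. The 6 RPM per 1,000 TPM ratio
# is an assumption based on commonly documented defaults.
def estimate_limits(tpm_quota: int, avg_tokens_per_request: int) -> dict:
    rpm_cap = tpm_quota / 1000 * 6                      # assumed proportional RPM allocation
    rpm_by_tokens = tpm_quota / avg_tokens_per_request  # what the token budget itself allows
    return {
        "tokens_per_minute": tpm_quota,
        "requests_per_minute_cap": rpm_cap,
        "requests_per_minute_by_token_budget": rpm_by_tokens,
        "effective_requests_per_minute": min(rpm_cap, rpm_by_tokens),
    }

print(estimate_limits(tpm_quota=30_000, avg_tokens_per_request=1_200))
# With 30K TPM and ~1,200 tokens per call, the token budget (25 req/min) binds well
# before the assumed 180 RPM cap.
```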
  • Open

    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough
    Accelerating DeepSeek Inference with AMD MI300: A Collaborative Breakthrough. Over the past few months, we’ve been collaborating closely with AMD to deliver a new level of performance for large-scale inference—starting with the DeepSeek-R1 and DeepSeek-V3 models on Azure AI Foundry. Through day-by-day improvements to inference frameworks and major kernels, and shared engineering investment, we’ve significantly accelerated inference on AMD MI300 hardware, reaching competitive performance with traditional NVIDIA alternatives. The result? Faster output and more flexibility for Models-as-a-Service (MaaS) customers using DeepSeek models. Why AMD MI300? While many enterprise workloads are optimized for NVIDIA GPUs, AMD’s MI300 architecture has proven to be a strong contender—especially for la…  ( 31 min )
  • Open

    Migrating Cloud-Based Databases from AWS to Azure: Key Insights and Best Practices
    Migrating your cloud-based databases to Microsoft Azure can be a transformative journey, offering enhanced performance, scalability, and security. I’ve had the opportunity to dive deep into this process, and I’m excited to share the key points that can make your migration smooth and successful. If you're using the Azure Migration Hub as your starting point, you're already ahead. But when migrating to cloud-based databases, a few key details can make or break your deployment. Some key considerations for migrating are infrastructure and configuration, web application firewall configuration, DNS, hostnames, and session management. Databases you can migrate to Azure: PostgreSQL. Advanced Security Features: Integration with Azure Key Vault ensures secure storage and management of encryption keys …  ( 30 min )
  • Open

    Public Preview: Metrics usage insights for Azure Monitor Workspace
    As organizations expand their services and applications, reliability and high availability are a top priority to ensure they provide a high level of quality to their customers. As the complexity of these services and applications grows, organizations continue to collect more telemetry to ensure higher observability. However, many are facing a common challenge: increasing costs driven by the ever-growing volume of telemetry data. Over time, as products grow and evolve, not all telemetry remains valuable. In fact, over-instrumentation can create unnecessary noise, generating data that contributes to higher costs without delivering actionable insights. At a time when every team is being asked to do more with less, identifying which telemetry streams truly matter has become essential. To …  ( 23 min )
  • Open

    AI Resilience: Strategies to Keep Your Intelligent App Running at Peak Performance
    Stay Online. Reliability is one of the five pillars of the Azure Well-Architected Framework. When you start to implement and take to market any new product that integrates with Azure OpenAI Service, you can face spikes of usage in your workload, and even with everything scaling correctly on your side, if you have Azure OpenAI Service deployed using PTU you can reach the PTU threshold and start to experience 429 response codes. The response headers also tell you when you can retry the request, and with this information you can implement a solution in your business logic. In this article I will show how to use an API Management Service policy to handle this and also explore the native cache to save some tokens! Architectur…  ( 25 min )
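    The article focuses on an API Management policy, but the same retry idea can be sketched in application code. The snippet below (an illustration, not the article's policy) retries a request when a 429 comes back, honoring the Retry-After header mentioned above; the URL, headers, and retry counts are placeholders.

```python
# Client-side sketch of handling 429s from a PTU deployment: read Retry-After and back
# off before retrying. The article itself does this centrally with an APIM policy; this
# is the same idea expressed in application code with placeholder values.
import time

import requests

def call_with_retry(url: str, headers: dict, payload: dict, max_attempts: int = 5):
    for attempt in range(1, max_attempts + 1):
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Honor the service's hint; fall back to a small exponential backoff.
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        print(f"429 received (attempt {attempt}), retrying in {wait:.0f}s")
        time.sleep(wait)
    raise RuntimeError("Still throttled after retries")
```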

  • Open

    Customer Case Study: Microsoft Store Assistant — bringing multi expert intelligence to Microsoft Store chat with Semantic Kernel and Azure AI
    Introduction In October 2024 Microsoft replaced a legacy rule‑based chat bot on Microsoft Store with Microsoft Store Assistant, powered by Azure Open AI, Semantic Kernel, and real‑time page context. The transformation changed a scripted, button-driven experience into a conversation that comprehends the entire public Microsoft portfolio, including Surface and Xbox products, Microsoft 365 subscriptions, Azure services, and the Dynamics and Power Platform […] The post Customer Case Study: Microsoft Store Assistant — bringing multi expert intelligence to Microsoft Store chat with Semantic Kernel and Azure AI appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Microsoft 365 Certification control spotlight: HIPAA
    Learn how Microsoft 365 Certification verifies that ISVs have established protocols for managing health information, dealing with emergencies and service disruptions, and complying with key HIPAA regulations. The post Microsoft 365 Certification control spotlight: HIPAA appeared first on Microsoft 365 Developer Blog.  ( 23 min )
    Announcing SharePoint Framework 1.21 with updates on building enterprise extensibility within Microsoft 365
    We are excited to announce general availability for the SharePoint Framework 1.21. This time focus is primarily on technical platform updates and new UX options in SharePoint and in Viva Connections. The post Announcing SharePoint Framework 1.21 with updates on building enterprise extensibility within Microsoft 365 appeared first on Microsoft 365 Developer Blog.  ( 24 min )
  • Open

    Mastering Getting Started with Agents: Your On-Demand Resource Hub
    What’s Included in Your Learning Journey? Explore on-demand sessions broken down by week, topic, and track, providing targeted guidance for developers using Python, Java, C#, JavaScript, and more. Here's a peek at what you can expect. Foundational insights into building agents (topic/track): Build your code-first app with Azure AI Agent Service (Python); AI Agents for Java using Azure AI Foundry (Java); Build your code-first app with Azure AI Agent Service (Python); Build and extend agents for Microsoft 365 Copilot (Copilots); Transforming business processes with multi-agent AI using Semantic Kernel (Python); Build your code-first app with Azure AI Agent Service (.NET) (C#). Build more sophisticated agents and explore advanced capabilities (topic/track): Building custom engine agents with Azure AI Foundry and Vis…  ( 23 min )
  • Open

    Week 3 . Microsoft Agents Hack Online Events and Readiness Resources
    Readiness and skilling events for Week 3: Microsoft AI Agents Hack. Register Now at https://aka.ms/agentshack 2025 is the year of AI agents! But what exactly is an agent, and how can you build one? Whether you're a seasoned developer or just starting out, this FREE three-week virtual hackathon is your chance to dive deep into AI agent development. Register Now: https://aka.ms/agentshack 🔥 Learn from expert-led sessions streamed live on YouTube, covering top frameworks like Semantic Kernel, Autogen, the new Azure AI Agents SDK and the Microsoft 365 Agents SDK. Week 3: April 21st-25th, LIVE & ON DEMAND (day/time, topic, track): 4/21 12:00 PM PT, Knowledge-augmented agents with LlamaIndex.TS (JS); 4/22 06:00 AM PT, Building a AI Agent with Prompty and Azure AI Foundry (Python); 4/22 09:00 AM PT, Real-time Multi-Agent LLM solutions with SignalR, gRPC, and HTTP based on Semantic Kernel (C#); 4/22 10:30 AM PT, Learn Live: Fundamentals of AI agents on Azure; 4/22 12:00 PM PT, Demystifying Agents: Building an AI Agent from Scratch on Your Own Data using Azure SQL (C#); 4/22 03:00 PM PT, VoiceRAG: talk to your data (Python); 4/23 09:00 AM PT, Building Multi-Agent Apps on top of Azure PostgreSQL (Python); 4/23 12:00 PM PT, Agentic RAG with reflection (Python); 4/23 03:00 PM PT, Multi-source data patterns for modern RAG apps (C#); 4/24 06:00 AM PT, Engineering agents that Think, Act, and Govern themselves (C#); 4/24 09:00 AM PT, Extending AI Agents with Azure Functions (Python, C#); 4/24 12:00 PM PT, Build real time voice agents with Azure Communication Services (Python). 🌟 Join the Conversation on Azure AI Foundry Discussions! 🌟 Have ideas, questions, or insights about AI? Don't keep them to yourself! Share your thoughts, engage with experts, and connect with a community that’s shaping the future of artificial intelligence. 🧠✨👉 Click here to join the discussion!  ( 21 min )
    VS Code Live: Agent Mode Day Highlights
    🎙️ Featuring: Olivia McVicker, Cassidy Williams, Burke Holland, Harald Kirschner, Toby Padilla, Rob Lourens, Tim Rogers, James Montemagno, Don Jayamanne, Brigit Murtaugh, Chris Harrison. What is Agent Mode? Agent Mode in VS Code represents a leap beyond traditional AI code completion. Instead of simply suggesting code snippets, Agent Mode empowers the AI to: Write, edit, and iterate on code Run terminal commands autonomously Fix its own mistakes during the workflow Interact with external tools, APIs, and services This creates a more dynamic, "agentic" coding partner that can automate complex tasks, reduce manual intervention, and keep developers in their flow. Agent Mode is accessible directly in VS Code and integrates seamlessly with GitHub Copilot, making advanced AI capabilities avai…  ( 26 min )
  • Open

    Streaming and Analyzing Azure Storage Diagnostic Logs via Event Hub using Service Bus Explorer
    Monitoring Azure Storage operations is crucial for ensuring performance, compliance, and security. Azure provides various options to collect and route diagnostic logs. One powerful option is sending logs to Azure Event Hub, which allows real-time streaming and integration with external tools and analytics platforms. In this blog, we’ll walk through setting up diagnostic logging for an Azure Storage account with Event Hub as the destination, and then demonstrate how to analyse incoming logs using Service Bus Explorer.   Prerequisites Before we begin, make sure you have the following set up: 1. Azure Event Hub Configuration An Event Hub namespace and instance set up in your Azure subscription. 2. Service Bus Explorer Tool We'll use Service Bus Explorer to connect to Event Hub and analyse l…  ( 25 min )
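    If you prefer code over a GUI, the same incoming stream can be read with the azure-eventhub package; the sketch below is a hedged, code-based alternative to Service Bus Explorer, with the connection string and hub name as placeholders (diagnostic events arrive as a JSON envelope containing a "records" array).

```python
# Read Azure Storage diagnostic log events straight from the Event Hub.
# Connection string and hub name are placeholders; "records" is the usual envelope
# for Azure diagnostic logs routed to Event Hubs.
import json

from azure.eventhub import EventHubConsumerClient

client = EventHubConsumerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",
    consumer_group="$Default",
    eventhub_name="<event-hub-name>",
)

def on_event(partition_context, event):
    body = json.loads(event.body_as_str())
    for record in body.get("records", []):
        print(record.get("time"), record.get("operationName"), record.get("statusText"))

with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = read from the start
```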
    Tips for Migrating Azure Event Hub from Standard to Basic Tier Using Scripts
    Introduction What are Event Hubs? Azure Event Hub is a big data streaming platform and event ingestion service by Microsoft Azure. It’s designed to ingest, buffer, store, and process millions of events per second in real time. Feature Comparison: the Standard tier of Azure Event Hubs provides features beyond what is available in the Basic tier. The following features are included with Standard: Capture (not available in Basic, available in Standard); Virtual Network Integration (not available in Basic, available in Standard); Auto-Inflate (not available in Basic, available in Standard); Consumer Groups (limited to 1 group in Basic, up to 20 in Standard); Message Retention (up to 1 day in Basic, up to 7 days in Standard). Many organizations or users choose to downgrade their Event Hu…  ( 24 min )
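    Because an in-place SKU downgrade from Standard to Basic isn't possible, the scripted approach generally means recreating resources. The sketch below is a hedged illustration (assuming azure-mgmt-eventhub and azure-identity, placeholder names and location, and only Basic-compatible settings carried over): it creates a new Basic namespace and recreates each event hub with one-day retention.

```python
# Hedged sketch of a "recreate in a Basic namespace" migration: create a Basic-tier
# namespace, then recreate each event hub from the Standard namespace with settings
# the Basic tier supports (1-day retention, default consumer group only).
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.eventhub import EventHubManagementClient

client = EventHubManagementClient(DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"])
rg, old_ns, new_ns = "my-rg", "standard-namespace", "basic-namespace"  # placeholders

client.namespaces.begin_create_or_update(
    rg, new_ns, {"location": "eastus", "sku": {"name": "Basic", "tier": "Basic"}}
).result()

for hub in client.event_hubs.list_by_namespace(rg, old_ns):
    client.event_hubs.create_or_update(
        rg, new_ns, hub.name,
        {"message_retention_in_days": 1, "partition_count": hub.partition_count},
    )
    print(f"Recreated {hub.name} in {new_ns}")
```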
  • Open

    Using the CUA model in Azure OpenAI for procure to Pay Automation
    Solution Architecture. The solution leverages a comprehensive stack of Azure technologies: Azure OpenAI Service: powers core AI capabilities. Responses API: orchestrates the workflow by calling the tools below and performing actions automatically. Computer Using Agent (CUA) model: enables browser automation; this is called through Function Calling, since there are other steps to be performed between the calls to this model, where the gpt-4o model is used, like reasoning through vision, performing vector search and evaluating business rules for anomaly detection. GPT-4o: processes invoice images with vision capabilities. Vector store: maintains business rules and documentation. Azure Container Apps: hosts procurement web applications. Azure SQL Database: stores contract and procur…  ( 39 min )
    SLM Model Weight Merging for Federated Multi-tenant Requirements
    Model merging is a technique for combining the model parameters of multiple models, specifically fine-tuned variants of a common base model, into a single unified model. In the context of Small Language Models (SLMs), which are lightweight and efficient, merging allows us to have variants of a domain-specialized base model to suit different tenant-specific requirements (such as fine-tuning the base model on a tenant's own data set), and enables transfer of the model parameters to the base model without the need to expose the data used for tenant-specific requirements. Model merging operates at the parameter level, using techniques such as weighted averaging, SLERP (Spherical Linear Interpolation), task arithmetic, or advanced methods like TIES, leading to a model that preserves both the general abiliti…  ( 39 min )
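    To make the parameter-level idea concrete, here is a small, hedged PyTorch sketch of the simplest of those techniques, weighted averaging of two fine-tuned checkpoints; the checkpoint paths and the 50/50 weighting are placeholders, and real merges (SLERP, task arithmetic, TIES) involve more than this.

```python
# Minimal weighted-averaging merge of two fine-tuned variants of the same base model.
# Assumes both checkpoints are PyTorch state dicts with identical keys and shapes.
import torch

def merge_state_dicts(state_a: dict, state_b: dict, weight_a: float = 0.5) -> dict:
    merged = {}
    for name, param_a in state_a.items():
        param_b = state_b[name]
        if param_a.dtype.is_floating_point:
            merged[name] = weight_a * param_a + (1.0 - weight_a) * param_b
        else:
            merged[name] = param_a  # keep integer buffers from one model unchanged
    return merged

tenant_a = torch.load("tenant_a_finetuned.pt")  # placeholder checkpoint paths
tenant_b = torch.load("tenant_b_finetuned.pt")
torch.save(merge_state_dicts(tenant_a, tenant_b), "merged_model.pt")
```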
    Tracing your Semantic Kernel Agents with Azure AI Foundry
    Many of us have encountered questions about monitoring Semantic Kernel Agents. As developers, we want to understand several aspects: the prompts sent by the Kernel to the AI Service, the behind-the-scenes processes when the Kernel calls the functions we added as plugins, and the token usage during the communication between the AI Service and the Kernel. These are all excellent questions that boil down to how we can observe Semantic Kernel. We can start answering these questions with the use of Azure AI Foundry. So let's dive into it! Adding the Azure AI Inference Connector to the Kernel. The key aspect is to replace the chat completion service that we normally add to the Kernel with the Azure AI Inference connector from the Azure Inference Client Library. This connector will automatically…  ( 24 min )
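    In Python terms, swapping the service looks roughly like the sketch below; the module path, class name, and constructor parameters come from the semantic-kernel Python package and may differ from the SDK and version used in the post, and the endpoint, key, and model id are placeholders.

```python
# Hedged sketch: register the Azure AI Inference chat completion connector with the
# Kernel in place of the usual chat completion service, so calls flow through the
# Azure AI Inference client (which is what enables the tracing discussed above).
# Class and parameter names reflect the semantic-kernel Python package and may differ.
import os

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.azure_ai_inference import AzureAIInferenceChatCompletion

kernel = Kernel()
kernel.add_service(
    AzureAIInferenceChatCompletion(
        ai_model_id="gpt-4o",  # placeholder deployment/model id
        endpoint=os.environ["AZURE_AI_INFERENCE_ENDPOINT"],
        api_key=os.environ["AZURE_AI_INFERENCE_API_KEY"],
    )
)
# Plugins and agents added to this kernel now call the model via the inference
# connector, which surfaces prompts, function calls, and token usage for tracing.
```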
  • Open

    Introduction of item limits in a Fabric workspace
    Previously, there were no restrictions on the number of Fabric items that could be created in a workspace, although a limit for Power BI items was already enforced. Even though this allows flexibility for our users, having too many items in workspaces reduces the overall user friendliness and effectiveness of the platform. As of April … Continue reading “Introduction of item limits in a Fabric workspace”  ( 6 min )
    Passing parameter values to refresh a Dataflow Gen2 (Preview)
    Parameters in Dataflow Gen2 enhance flexibility by allowing dynamic adjustments without altering the dataflow itself. They simplify organization, reduce redundancy, and centralize control, making workflows more efficient and adaptable to varying inputs and scenarios. Leveraging query parameters while authoring Dataflows Gen2 has been possible for a long time, however, it was not possible to override … Continue reading “Passing parameter values to refresh a Dataflow Gen2 (Preview)”  ( 7 min )
  • Open

    Important Updates to Container Images of Microsoft Build of OpenJDK
    Mariner Linux 2.0 will reach its End-Of-Life (EOL) in July of 2025 and will be replaced with Azure Linux (version 3.0). To ensure a smooth transition for our customers and partners, the Java Engineering Group (DevDiv JEG) behind the Microsoft Build of OpenJDK has developed a migration aligned with this timeline. This strategy takes effect […] The post Important Updates to Container Images of Microsoft Build of OpenJDK appeared first on Microsoft for Java Developers.  ( 23 min )
  • Open

    Announcing Public Preview of Larger Container Sizes on Azure Container Instances
    ACI provides a fast and simple way to run containers in the cloud. As a serverless solution, ACI eliminates the need to manage underlying infrastructure, automatically scaling to meet application demands. Customers benefit from using ACI because it offers flexible resource allocation, pay-per-use pricing, and rapid deployment, making it easier to focus on development and innovation without worrying about infrastructure management.  Today, we are excited to announce the public preview of larger container sizes on Azure Container Instances (ACI). Customers can now deploy workloads with higher vCPU and memory for standard containers, confidential containers, containers with virtual networks, and containers utilizing virtual nodes to connect to Azure Kubernetes Service (AKS). ACI now supports …  ( 26 min )
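    For orientation, requesting a larger size is just a matter of the CPU and memory values on the container group. The sketch below uses the ACI management SDK with placeholder names, and the 16 vCPU / 64 GB figures are illustrative stand-ins rather than the preview's actual limits (which are truncated out of the summary above).

```python
# Hedged sketch: create an ACI container group with a larger-than-default size.
# Resource group, names, region, and the 16 vCPU / 64 GB request are placeholders.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.containerinstance import ContainerInstanceManagementClient
from azure.mgmt.containerinstance.models import (
    Container, ContainerGroup, ResourceRequests, ResourceRequirements,
)

client = ContainerInstanceManagementClient(
    DefaultAzureCredential(), os.environ["AZURE_SUBSCRIPTION_ID"]
)

container = Container(
    name="big-worker",
    image="mcr.microsoft.com/azuredocs/aci-helloworld",
    resources=ResourceRequirements(requests=ResourceRequests(cpu=16, memory_in_gb=64)),
)
group = ContainerGroup(location="eastus", os_type="Linux", containers=[container])
client.container_groups.begin_create_or_update("my-rg", "big-worker-group", group).result()
```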
  • Open

    Spring Cleaning: A CTA for Azure DevOps OAuth Apps with expired or long-living secrets
    Today, we officially closed the doors on any new Azure DevOps OAuth app registrations. As we prepare for the end-of-life for Azure DevOps OAuth apps in 2026, we’ll begin outreach to engage existing app owners and support them through the migration process to use the Microsoft Identity platform instead for future app development with Azure […] The post Spring Cleaning: A CTA for Azure DevOps OAuth Apps with expired or long-living secrets appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Migrate or modernize your applications using Azure Migrate
    Introduction The journey to the cloud is an essential step for modern enterprises looking to leverage the benefits of security, innovation (AI), scalability, flexibility, and cost-efficiency. To help unlock these benefits, migration or modernization to Azure is critical for reasons such as colocation of IT assets. A crucial part of this transformation is understanding the current state of your IT infrastructure, including workloads, applications, and their interdependencies. Often, organizations aim to set their migration goals based on the applications they want to move to the cloud, rather than focusing on individual servers or databases in isolation. In our endeavour to both simplify and enrich your cloud adoption journey, we are introducing new capabilities in Azure Migrate to help you…  ( 28 min )

  • Open

    Build data-driven agents with curated data from OneLake
    Innovation doesn’t always happen in a straight line. From the invention of the World Wide Web, to the introduction of smartphones, technology often makes massive leaps that transform how we interact with the world almost overnight. Now we’re seeing the next great shift: the era of AI. This shift has been decades in the making, … Continue reading “Build data-driven agents with curated data from OneLake”  ( 9 min )
    Best practices for Microsoft Fabric GraphQL API performance
    Microsoft Fabric’s GraphQL API offers a powerful way to query data efficiently, but performance optimization is key to ensuring smooth and scalable applications. In this blog, we’ll explore best practices to maximize the efficiency of your Fabric GraphQL API. Whether you’re handling complex queries or optimizing response times, these strategies will help you get the best performance out of your GraphQL implementation  ( 7 min )
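    To make the optimization concrete, here is a minimal, hedged sketch (the endpoint URL, token, and schema fields are placeholders, not taken from the post) of calling a Fabric GraphQL endpoint from Python while requesting only the fields the client needs, one of the simplest ways to keep response payloads small:

    ```python
    # Hypothetical endpoint, token, and schema fields -- not from the post.
    import requests

    ENDPOINT = "https://<your-fabric-graphql-endpoint>/graphql"  # placeholder
    TOKEN = "<azure-ad-access-token>"                            # placeholder

    # Ask only for the fields the client actually needs to keep payloads small.
    query = """
    query RecentOrders($first: Int!) {
      orders(first: $first) {
        items { id orderDate totalAmount }
      }
    }
    """

    resp = requests.post(
        ENDPOINT,
        json={"query": query, "variables": {"first": 50}},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["data"]["orders"]["items"][:3])
    ```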
    Develop, test and deploy a user data function in Microsoft Fabric using Visual Studio Code
    In this blog post, we will walk through the process of creating, testing, and deploying a user data function using Visual Studio Code, based on the official Microsoft Fabric documentation. This guide will help you understand the steps involved and provide a practical example to get you started.  ( 7 min )
    Build a Data Warehouse schema with Copilot for Data Warehouse
    As a data engineer, it is important to be able to efficiently organize, analyze and derive insights from your data so that you can drive informed and data-driven decisions across your organization. With a well-set-up Data Warehouse, you can ensure data integrity, improve your query performance, and support advanced analytics. Optimizing a Data … Continue reading “Build a Data Warehouse schema with Copilot for Data Warehouse”  ( 8 min )
    On-premises data gateway April 2025 release
    Here is the April 2025 release of the on-premises data gateway (version 3000.266).  ( 5 min )
  • Open

    Exciting updates coming to the Microsoft 365 Developer Program
    We are excited to share a preview of upcoming updates to the Microsoft 365 Developer Program. The post Exciting updates coming to the Microsoft 365 Developer Program appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Learn Generative AI with JavaScript: Free and Interactive Course! 💡🤖
    In the latest video on my YouTube Channel, the Microsoft JavaScript + AI Advocacy team presents an innovative initiative for developers who want to take their first steps with Artificial Intelligence: the free course Generative AI with JavaScript. Combining technical learning with a gamified experience, the course is an excellent gateway for those who want to explore Generative AI using JavaScript/TypeScript.   Let’s talk a bit more about the course and how it can help you become a more skilled and up-to-date developer with the latest tech trends. About the Course: Generative AI with JavaScript I recorded a video where I explain the main concepts covered in the course, including Generative AI techniques, practical examples, and tips to maximize your learning. If you haven’t watched it ye…  ( 29 min )
  • Open

    Everything You Need to Know About Reasoning Models: o1, o3, o4-mini and Beyond
    Think AI has hit a wall? The latest reasoning models will make you reconsider everything. Contributors: Rafal Rutyna, Brady Leavitt, Julia Heseltine, Tierney Morgan, Liam Cavanagh, Riccardo Chiodaroli  There are new models coming out every week. So why should you care about reasoning models? Unlike previous AI offerings, reasoning models such as o1, o3, and o4-mini mark a fundamental shift in enterprise automation. For the first time, organizations can access AI with PhD-level intelligence—capable of automating business processes that require multi-step reasoning, expert-level analysis, and contextual decision making. Tasks that previously relied on human judgment—such as processing complex cases, analyzing fraud, or generating insights from data—can now be handled transparently, accurately…  ( 64 min )
    Memory Management for AI Agents
    When we think about how humans function daily, memory plays a critical role beyond mere cognition. The brain has two primary types of memory: short-term and long-term. Short-term memory allows us to temporarily hold onto information, such as conversations or names, while long-term memory is where important knowledge and skills—like learning to walk or recalling a conversation from two weeks ago—are stored.   Memory operates by strengthening neural connections between events, facts, or concepts. These connections are reinforced by relevance and frequency of use, making frequently accessed memories easier to recall. Over time, we might forget information we no longer use because the brain prunes unused neural pathways, prioritizing the memories we frequently rely on. This can explain why rec…  ( 35 min )
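    As a purely illustrative sketch (not code from the post), the short-term/long-term split can be modeled as a bounded buffer of recent turns plus a keyed store whose entries are reinforced each time they are recalled and pruned when unused:

    ```python
    # Illustrative sketch (not from the article): a short-term buffer that keeps
    # only the most recent turns, plus a long-term store keyed by topic that is
    # reinforced on recall and pruned when unused.
    from collections import deque

    class AgentMemory:
        def __init__(self, short_term_size: int = 6):
            self.short_term = deque(maxlen=short_term_size)  # recent turns only
            self.long_term = {}                              # topic -> [fact, recall count]

        def remember_turn(self, role: str, text: str) -> None:
            self.short_term.append((role, text))

        def store_fact(self, topic: str, fact: str) -> None:
            self.long_term[topic] = [fact, 0]

        def recall(self, topic: str):
            entry = self.long_term.get(topic)
            if entry:
                entry[1] += 1  # reinforcement: frequently used facts persist
                return entry[0]
            return None

        def prune(self, min_uses: int = 1) -> None:
            # Drop long-term facts that were never recalled, mimicking forgetting.
            self.long_term = {t: e for t, e in self.long_term.items() if e[1] >= min_uses}

    memory = AgentMemory()
    memory.remember_turn("user", "My order number is 1234.")
    memory.store_fact("preferred_channel", "email")
    print(memory.recall("preferred_channel"))  # -> "email"
    ```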
  • Open

    1-Bit Brilliance: BitNet on Azure App Service with Just a CPU
    In a world where running large language models typically demands GPUs and hefty cloud bills, Microsoft Research is reshaping the narrative with BitNet — a compact, 1-bit quantized transformer that delivers surprising capabilities even when deployed on modest hardware.  ( 4 min )
  • Open

    Java OpenJDK April 2025 Patch & Security Update
    Hello Java customers! We are happy to announce the latest April 2025 patch & security update release for the Microsoft Build of OpenJDK. Download and install the binaries today. OpenJDK 21.0.7 OpenJDK 17.0.15 OpenJDK 11.0.27 Check our release notes page for details on fixes and enhancements. The source code of our builds is available now on GitHub for […] The post Java OpenJDK April 2025 Patch & Security Update appeared first on Microsoft for Java Developers.  ( 23 min )

  • Open

    Microsoft Purview protections for Copilot
    Use Microsoft Purview and Microsoft 365 Copilot together to build a secure, enterprise-ready foundation for generative AI. Apply existing data protection and compliance controls, gain visibility into AI usage, and reduce risk from oversharing or insider threats. Classify, restrict, and monitor sensitive data used in Copilot interactions. Investigate risky behavior, enforce dynamic policies, and block inappropriate use — all from within your Microsoft 365 environment. Erica Toelle, Microsoft Purview Senior Product Manager, shares how to implement these controls and proactively manage data risks in Copilot deployments. Control what content can be referenced in generated responses. Check out Microsoft 365 Copilot security and privacy basics. Uncover risky or sensitive interactions. Use DSP…  ( 41 min )
  • Open

    Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)
    Introducing distributed session-scoped temporary (#temp) tables in Fabric Data Warehouse and Fabric Lakehouse SQL Endpoints. #temp tables have been a feature of Microsoft SQL Server (and other database systems) for many years. In the current implementation of Fabric data warehouse, #temp tables are session scoped or local temp tables. Global temp tables are not included … Continue reading “Session-scoped distributed #temp tables in Fabric Data Warehouse (Generally Available)”  ( 7 min )
  • Open

    Combating Digitally Altered Images: Deepfake Detection
    In today's digital age, the rise of deepfake technology poses significant threats to credibility, privacy, and security. This article delves into our Deepfake Detection Project, a robust solution designed to combat the misuse of AI-generated content. Our team, comprising Microsoft Learn Student Ambassadors Saksham Kumar and Rhythm Narang, both from India, embarked on this journey to create a tool that helps users verify the authenticity of digital images. Project Overview The Deepfake Detection Project aims to provide a reliable tool for detecting and classifying images as either real or deepfake. Our primary goal is to reduce the spread of misinformation, protect individuals from identity theft, and prevent the malicious use of AI technologies. By implementing this tool, we hope to safegu…  ( 29 min )
  • Open

    How to get started quickly with setting up the Microsoft Dev Box service
    Last updated: April 21, 2025 🎯 Introduction Are you an IT admin, platform engineer, or a developer looking to explore a developer-centric, cloud-powered workstation experience? In this post, I’ll show you how to start quickly and try out Microsoft Dev Box using either a Microsoft 365 Business Premium or Microsoft 365 E3 plan — both […] The post How to get started quickly with setting up the Microsoft Dev Box service appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Guest Blog: Build an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel — in Just 4 Steps
    Today we’re excited to feature a returning guest author, Akshay Kokane to share his recent Medium article on Building an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel. We’ll turn it over to him to dive in! MCP! It’s the new buzzword in the AI world. So, I thought […] The post Guest Blog: Build an AI App That Can Browse the Internet Using Microsoft’s Playwright MCP Server & Semantic Kernel — in Just 4 Steps appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Introducing the Microsoft Graph API usage report
    Learn more about our new journey to give customers more insight and control over how applications access their data through Microsoft Graph. The post Introducing the Microsoft Graph API usage report appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    AI Agents: The Multi-Agent Design Pattern - Part 8
    Hi everyone, Shivam Goyal here! This blog series exploring AI agents, based on Microsoft's AI Agents for Beginners repository, continues. In previous posts ([links to parts 1-7 at the end]), we've built a solid foundation, exploring agent fundamentals, frameworks, and design principles. Now, we'll delve into the Multi-Agent Design Pattern, a powerful approach for tackling complex tasks by leveraging the collective intelligence of multiple specialized agents. Introduction to Multi-Agent Systems As you progress in building AI agent applications, you'll inevitably encounter scenarios where a single agent isn't enough. This is where the Multi-Agent Design Pattern comes into play. But how do you know when to transition to a multi-agent system and what are the benefits? When to Use Multi-Agent S…  ( 25 min )
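    A minimal illustrative sketch (not from the series) of the pattern: a router hands each task to a specialized agent and collects the result; in a real system each handler would wrap an LLM call instead of a stub:

    ```python
    # Illustrative sketch (not from the article): a tiny multi-agent setup where a
    # router hands each task to a specialist agent. Agent behaviours are stubbed.
    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class Agent:
        name: str
        handle: Callable[[str], str]

    def researcher(task: str) -> str:
        return f"[researcher] collected background notes for: {task}"

    def writer(task: str) -> str:
        return f"[writer] drafted a summary for: {task}"

    AGENTS: Dict[str, Agent] = {
        "research": Agent("researcher", researcher),
        "write": Agent("writer", writer),
    }

    def route(task: str) -> str:
        # Naive routing rule; a real system might let an LLM pick the agent.
        key = "research" if "find" in task.lower() else "write"
        return AGENTS[key].handle(task)

    if __name__ == "__main__":
        for t in ["Find recent papers on agent memory", "Summarize the findings"]:
            print(route(t))
    ```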

  • Open

    Using Azure Functions to read Azure VMware Solution data via Powershell, PowerCLI, and the API
    Gotchas: Functions are region specific. Functions require a small (i.e., /28) subnet. Do not use “-” in Azure Functions; use “_” if you need a spacer. Pay attention to the sections on RBAC and system-managed identities.   Build Order   Create or identify the Log Analytics Workspace that will be used for storage diagnostics.   Create a KeyVault and set the RBAC properties to allow creation of secrets.   Create KeyVault secrets to hold the account IDs and account passwords that will be used to authenticate to the vCenter Server, NSX and HCX appliances.   Create/identify a storage account and an empty subnet that will be used by the function.   Create the Function App   Creating the Functions App common variable retrieval code (shared by all functions)   Example 1: Retrieving vCenter…  ( 60 min )
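    The article's examples are written in PowerShell/PowerCLI; as a hedged Python analogue of one step, the sketch below shows an HTTP-triggered Azure Function using its managed identity to pull the vCenter credentials from Key Vault (the vault URL and secret names are placeholders):

    ```python
    # Hedged sketch in Python (the article itself uses PowerShell/PowerCLI): an
    # HTTP-triggered Azure Function reading vCenter credentials from Key Vault
    # with the function app's managed identity. Vault URL and secret names are
    # placeholders.
    import azure.functions as func
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    VAULT_URL = "https://<your-keyvault>.vault.azure.net"  # placeholder

    def main(req: func.HttpRequest) -> func.HttpResponse:
        credential = DefaultAzureCredential()  # system-assigned managed identity in Azure
        secrets = SecretClient(vault_url=VAULT_URL, credential=credential)

        vcenter_user = secrets.get_secret("vcenter-username").value  # placeholder name
        # The password would be retrieved the same way and passed to the vCenter/NSX/HCX calls.

        return func.HttpResponse(f"Retrieved credentials for {vcenter_user}", status_code=200)
    ```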

  • Open

    Arizona Department of Transportation Innovates with Azure AI Vision
    The Arizona Department of Transportation (ADOT) is committed to providing safe and efficient transportation services to the residents of Arizona. With a focus on innovation and customer service, ADOT’s Motor Vehicle Division (MVD) continually seeks new ways to enhance its services and improve the overall experience for its residents. The challenge ADOT MVD had a tough challenge to ensure the security and authenticity of transactions, especially those involving sensitive information. Every day, the department needs to verify thousands of customers seeking to use its online services to perform activities like updating customer information including addresses, renewing vehicle registrations, ordering replacement driver licenses, and ordering driver and vehicle records. Traditional methods of …  ( 32 min )

  • Open

    Azure Boards + GitHub: Recent Updates
    Over the past several months, we’ve delivered a series of improvements to the Azure Boards + GitHub integration. Whether you’re tracking code, managing pull requests, or connecting pipelines, these updates aim to simplify and strengthen the link between your work items and your GitHub activity. Here’s a recap of everything we’ve released (or are just […] The post Azure Boards + GitHub: Recent Updates appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Why Azure AI Is Retail’s Secret Sauce
    Executive Summary Leading RCG enterprises are standardizing on Azure AI—specifically Azure OpenAI Service, Azure Machine Learning, Azure AI Search, and Azure AI Vision—to increase digital‑channel conversion, sharpen demand forecasts, automate store execution, and accelerate product innovation. Documented results include up to 30 percent uplift in search conversion, 10 percent reduction in stock‑outs, and multimillion‑dollar productivity gains. This roadmap consolidates field data from CarMax, Kroger, Coca‑Cola, Estée Lauder, PepsiCo and Microsoft reference architectures to guide board‑level investment and technology planning. 1 Strategic Value of Azure AI Azure AI delivers state‑of‑the‑art language (GPT‑4o, GPT-4.1), reasoning (o1, o3, o4-mini) and multimodal (Phi‑3 Vision) models through …  ( 25 min )
  • Open

    FabCon Las Vegas keynote recording now available
    Record-Breaking Attendance The Microsoft Fabric Community Conference (FabCon) was a monumental success with over 6,000 attendees, 200+ breakout sessions, 20 workshops, and 70+ sponsors. Day 1 keynotes at the T-Mobile Arena featured a packed auditorium of Fabric enthusiasts ready to discover the future of Microsoft’s unified AI platform. While FabCon was an in-person only event, … Continue reading “FabCon Las Vegas keynote recording now available”  ( 7 min )
  • Open

    Integrating Semantic Kernel Python with Google’s A2A Protocol
    Google’s Agent-to-Agent (A2A) protocol is designed to enable seamless interoperability among diverse AI agents. Microsoft’s Semantic Kernel (SK), an open-source platform for orchestrating intelligent agent interactions, is now being integrated into the A2A ecosystem. In this blog, we demonstrate how Semantic Kernel agents can easily function as an A2A Server, efficiently routing agent calls to […] The post Integrating Semantic Kernel Python with Google’s A2A Protocol appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Introducing MAI-DS-R1
    Authors: Samer Hassan, Doran Chakraborty, Qi Zhang, Yuan Yu. Today we’re releasing MAI-DS-R1, a new open weights DeepSeek R1 model variant, via both Azure AI Foundry and HuggingFace. This new model has been post-trained by the Microsoft AI team to improve its responsiveness on blocked topics and its risk profile, while maintaining its reasoning capabilities and competitive performance.  Key results:  MAI-DS-R1 successfully responds to 99.3% of prompts related to blocked topics, outperforming DeepSeek R1 by 2.2x, and matching Perplexity’s R1-1776.   MAI-DS-R1 also delivers higher satisfaction metrics on internal evals, outperforming DeepSeek R1 and R1-1776 by 2.1x and 1.3x, respectively.   MAI-DS-R1 outperforms both DeepSeek’s R1 and R1-1776 in reducing harmful content in both the “thin…  ( 86 min )

  • Open

    Microsoft 365 Copilot Power User Tips
    Take control of your workday — summarize long emails instantly, turn meeting transcripts into actionable plans, and build strategic documents in seconds using your own data with Microsoft 365 Copilot. Instead of chasing down context, ask natural prompts and get clear, detailed results complete with tone-matched writing, visual recaps, and real-time collaboration. Get up to speed on complex email threads, transform insights from missed meetings into next steps, and pull relevant content from across your calendar, inbox, and docs — all without switching tools or losing momentum. Mary Pasch, Microsoft 365 Principal PM, shows how, whether you’re refining a plan in Word, responding in Outlook, or catching up in Teams, Copilot works behind the scenes to help you move faster and focus on what mat…  ( 43 min )
    New reasoning agents: Researcher and Analyst in Microsoft 365 Copilot
    Analyze data and research with expertise on demand, and automate workflows with intelligent agents in Microsoft 365 Copilot. Analyst thinks like a data scientist and Researcher like an expert, so you can uncover insights, validate logic, and generate expert-level reports in minutes.  And using Microsoft Copilot Studio, build your own autonomous AI agents to streamline multi-step processes with deep reasoning, like responding to RFPs or synthesizing internal knowledge, incorporating Copilot Flows as automated actions. No need for perfect prompts — just describe what you need, and Copilot will reason through the task, surface key insights, and deliver actionable results faster than ever.  Jeremy Chapman, Microsoft 365 Director, walks you through how to use these AI-driven agents step-by-ste…  ( 41 min )
    Microsoft Purview: New data security controls for the browser & network
    Protect your organization’s data with Microsoft Purview. Gain complete visibility into potential data leaks, from AI applications to unmanaged cloud services, and take immediate action to prevent unwanted data sharing. Microsoft Purview unifies data security controls across Microsoft 365 apps, the Edge browser, Windows and macOS endpoints, and even network communications over HTTPS — all in one place. Take control of your data security with automated risk insights, real-time policy enforcement, and seamless management across apps and devices. Strengthen compliance, block unauthorized transfers, and streamline policy creation to stay ahead of evolving threats. Roberto Yglesias, Microsoft Purview Principal GPM, goes beyond Data Loss Prevention  Keep sensitive data secure no matter where it …  ( 42 min )
  • Open

    Microsoft 365 Certification control spotlight: General Data Protection Regulation (GDPR)
    Read how Microsoft 365 Certification helps ISVs validate General Data Protection Regulation (GDPR) compliance. The post Microsoft 365 Certification control spotlight: General Data Protection Regulation (GDPR) appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Semantic Kernel adds Model Context Protocol (MCP) support for Python
    We are excited to announce that Semantic Kernel (SK) now has first-class support for the Model Context Protocol (MCP) — a standard created by Anthropic to enable models, tools, and agents to share context and capabilities seamlessly. With this release, SK can act as both an MCP host (client) and an MCP server, and you […] The post Semantic Kernel adds Model Context Protocol (MCP) support for Python appeared first on Semantic Kernel.  ( 25 min )
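    For context, here is a minimal tool server built with the reference MCP Python SDK (not Semantic Kernel's own API) of the kind an MCP host such as SK could connect to; the tool name and logic are illustrative:

    ```python
    # Minimal sketch using the reference MCP Python SDK (not Semantic Kernel's
    # own API). Assumes the `mcp` package is installed; the tool is illustrative.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default, so an MCP host can attach to it
    ```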
  • Open

    Azure AI Search: Cut Vector Costs Up To 92.5% with New Compression Techniques
    TLDR: Key learnings from our compression technique evaluation Cost savings: Up to 92.5% reduction in monthly costs Storage efficiency: Vector index size reduced by up to 99% Speed improvement: Query response times up to 33% faster with compressed vectors Quality maintained: Many compression configurations maintain 99-100% of baseline relevance quality At scale, the cost of storing and querying large, high-dimensional vector indexes can balloon. The common trade-off? Either pay a premium to maintain top-tier search quality or sacrifice user experience to limit expenses. With Azure AI Search, you no longer have to choose. Through testing, we have identified ways to reduce system costs without compromising retrieval performance quality.   Our experiments show: 92.5% reduction in cost when…  ( 52 min )
    Building an Interactive Feedback Review Agent with Azure AI Search and Haystack
    By Khye Wei (Azure AI Search) & Amna Mubashar (Haystack)   We’re excited to announce the integration of Haystack with Azure AI Search! To demonstrate its capabilities, we’ll walk you through building an interactive review agent to efficiently retrieve and analyze customer reviews. By combining Azure AI Search’s hybrid retrieval with Haystack’s flexible pipeline architecture, this agent provides deeper insights through sentiment analysis and intelligent summarization tools. Why Use Azure AI Search with Haystack? Azure AI Search offers an enterprise-grade retrieval system with battle-tested AI search technology, built for high performance GenAI applications at any scale: Hybrid Search: Combining keyword-based BM25 and vector-based searches with reciprocal rank fusion (RRF). Semantic Ranking…  ( 39 min )
    Bonus RAG Time Journey: Agentic RAG
    This is a bonus post for RAG Time, a 6-part educational series on retrieval-augmented generation (RAG). In this series, we explored topics such as indexing and retrieval techniques for RAG, data ingestion, and storage optimization. The final topic for this series covers agentic RAG, and how to use semi-autonomous agents to make a dynamic and self-refining retrieval system. What we'll cover: Overview and definition of agentic RAG Example of a single-shot RAG flow Two examples of agentic RAG: single-step and multi-step reflection What is agentic RAG? An agent is a component of an AI application that leverages generative models to make decisions and execute actions autonomously. Agentic RAG improves the traditional RAG flow by actively interacting with its environment using tools, memory, a…  ( 37 min )
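    An illustrative single-step reflection loop (stubs stand in for the retriever, the LLM, and the groundedness check; this is not the series' code):

    ```python
    # Illustrative sketch: a single-step reflective RAG loop. retrieve(), generate(),
    # and is_grounded() are stand-ins for a search query, an LLM call, and an
    # LLM-based groundedness check.
    from typing import List

    def retrieve(query: str) -> List[str]:
        return [f"doc snippet about {query}"]                         # stub retriever

    def generate(query: str, context: List[str]) -> str:
        return f"answer to '{query}' using {len(context)} snippet(s)"  # stub LLM

    def is_grounded(answer: str, context: List[str]) -> bool:
        return bool(context)                                           # stub reflection check

    def agentic_rag(query: str) -> str:
        context = retrieve(query)
        answer = generate(query, context)
        if not is_grounded(answer, context):
            # Reflection step: rewrite the query and retrieve again before answering.
            context = retrieve(query + " (rephrased)")
            answer = generate(query, context)
        return answer

    print(agentic_rag("What is agentic RAG?"))
    ```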
  • Open

    Mastering SKU Estimations with the Microsoft Fabric SKU Estimator
    In today’s ever-changing analytics landscape it can be difficult to plan out your next project or your enterprise analytics roadmap. Designed to optimize data infrastructure planning, the Microsoft Fabric SKU Estimator helps customers and partners to accurately estimate capacity requirements and select the most suitable SKU for their workloads, protecting users from under-provisioning and overcommitment. … Continue reading “Mastering SKU Estimations with the Microsoft Fabric SKU Estimator”  ( 9 min )
    BULK INSERT statement is generally available!
    The BULK INSERT statement is generally available in Fabric Data Warehouse. The BULK INSERT statement enables you to ingest parquet or csv data into a table from the specified file stored in Azure Data Lake or Azure Blob storage: The BULK INSERT statement is very similar to the COPY INTO statement and enables you to … Continue reading “BULK INSERT statement is generally available!”  ( 7 min )
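    As a hedged sketch, the statement can be issued from Python over the warehouse SQL endpoint with pyodbc; the server, database, file URL, and WITH options shown are placeholders and the options Fabric accepts may differ from classic SQL Server, so check the documentation before relying on them:

    ```python
    # Placeholders throughout; exact WITH options supported by Fabric may differ.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<your-warehouse-sql-endpoint>;"      # placeholder
        "Database=<your-warehouse>;"                 # placeholder
        "Authentication=ActiveDirectoryInteractive;"
    )

    bulk_insert = """
    BULK INSERT dbo.Sales
    FROM 'https://<account>.blob.core.windows.net/<container>/sales.csv'
    WITH (FORMAT = 'CSV', FIRSTROW = 2);
    """

    cursor = conn.cursor()
    cursor.execute(bulk_insert)   # ingest the CSV file into dbo.Sales
    conn.commit()
    cursor.close()
    conn.close()
    ```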
  • Open

    Step-by-Step Contact Center Chat Analysis with Azure OpenAI & Communication Services
    1. Introduction Contact centers are the front lines of customer interaction, generating vast amounts of valuable data through chat logs, call transcripts, and emails. However, manually sifting through this data to find actionable insights is often a monumental task. Imagine the scenario of a thriving online service, like a food delivery app: as usage climbs, so does the number of customer support chats, making it incredibly difficult to pinpoint recurring problems or gauge overall satisfaction from the sea of text. How can businesses effectively tap into this wealth of information? This post explores a powerful solution: building an automated analytics platform using Azure Communication Services (ACS) combined with the intelligence of Azure OpenAI Service. We'll outline how this integratio…  ( 72 min )
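    A hedged sketch of the analysis step with the Azure OpenAI Python SDK (endpoint, key, and deployment name are placeholders; the full solution in the post also ingests chats from Azure Communication Services):

    ```python
    # Hedged sketch: summarizing a support-chat transcript with the Azure OpenAI
    # Python SDK. Endpoint, key, API version, and deployment name are placeholders.
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
        api_key="<api-key>",                                        # placeholder
        api_version="2024-06-01",
    )

    transcript = (
        "Customer: My order arrived cold.\n"
        "Agent: I'm sorry about that, I can offer a refund or redelivery.\n"
        "Customer: A refund please."
    )

    response = client.chat.completions.create(
        model="<chat-deployment-name>",  # your Azure OpenAI deployment
        messages=[
            {"role": "system",
             "content": "Summarize the chat, label the sentiment, and name the root cause."},
            {"role": "user", "content": transcript},
        ],
    )
    print(response.choices[0].message.content)
    ```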
  • Open

    Host Remote MCP Servers in Azure App Service
    My colleague, Anthony Chu, from Azure Container Apps, recently published an excellent blog post outlining how to get started with MCP servers in Azure Container Apps. I highly recommend reading it, as there are many similarities between hosting MCP servers on Azure Container Apps and Azure App Service. He also provides great background information on remote MCP servers and their future plans. In this article, I will build on that foundation and show you how to run remote MCP servers as web apps in Azure App Service, and how to connect to them with GitHub Copilot in Visual Studio Code. Quick Background on MCP Servers MCP (Model Context Protocol) servers are part of a rapidly evolving technology used for hosting and managing model-based contexts. These servers interact with clients like GitH…  ( 22 min )
  • Open

    Automate creation of work items in ADO and Export/Import workflow packages
    This article shows how to create multiple work items (tasks) for any specific User Story in a particular Backlog with a certain TAG value in Azure DevOps. It also shows the steps to export/import a workflow package. PART 1: Create multiple items for a parent User Story. This article uses Power Automate to do so. There are certain prerequisites that have to be fulfilled: 1. Be part of an Azure DevOps organization, with a project created along with some User Stories. 2. Have access to Power Automate to create workflows, and make sure you are able to add connections from Power Automate to ADO (not to worry, we will check that in the steps below). Now go ahead and follow the steps below to achieve the purpose. Step 1: Open the Power Automate portal and click on "My Flows". Link: https://make.powerauto…  ( 27 min )
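    The article builds the flow in Power Automate; for comparison, here is a hedged sketch of the equivalent Azure DevOps REST call from Python (organization, project, personal access token, and parent work item ID are placeholders):

    ```python
    # The blog uses Power Automate; this is a REST alternative with placeholders.
    import base64
    import json
    import requests

    ORG, PROJECT = "<org>", "<project>"     # placeholders
    PAT = "<personal-access-token>"         # placeholder
    PARENT_ID = 123                         # placeholder User Story ID

    auth = base64.b64encode(f":{PAT}".encode()).decode()
    url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/$Task?api-version=7.1"

    # JSON Patch body: set the title and link the new task to its parent story.
    body = [
        {"op": "add", "path": "/fields/System.Title", "value": "Implement validation"},
        {"op": "add", "path": "/relations/-", "value": {
            "rel": "System.LinkTypes.Hierarchy-Reverse",
            "url": f"https://dev.azure.com/{ORG}/_apis/wit/workItems/{PARENT_ID}",
        }},
    ]

    resp = requests.post(
        url,
        data=json.dumps(body),
        headers={
            "Content-Type": "application/json-patch+json",
            "Authorization": f"Basic {auth}",
        },
    )
    resp.raise_for_status()
    print("Created work item", resp.json()["id"])
    ```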

  • Open

    NodeJs GitHub Action Deployment on App Service Linux Using Publish Profile
    Overview  ( 3 min )
  • Open

    Common use cases for building solutions with Microsoft Fabric User data functions (UDFs)
    Data engineering often presents challenges with data quality or complex data analytics processing that requires custom logic. This is where Fabric User data functions can be used to implement custom logic into your data processes or pipelines. Here are the most common cases where Fabric User Data Functions can be used.  ( 8 min )
  • Open

    Unlocking the Power of Azure: Mastering Resource Management in Kubernetes
    Hi, I’m Pranjal Mishra, a Student Ambassador from Galgotias University, pursuing B.Tech in Computer Science with a specialization in AI & ML. As someone passionate about cloud computing and DevOps, I often explore how platforms like Azure simplify complex infrastructure challenges—especially when working with containerized applications in Kubernetes. In this article, we’ll dive into resource management in Kubernetes, with a focus on implementing resource quotas and limits using Azure Kubernetes Service (AKS). Whether you're optimizing cost, ensuring performance, or avoiding resource contention, this guide is for you. Why Resource Management Matters? In Kubernetes, resource limits and quotas are your best allies in controlling how much CPU and memory workloads consume. Without these contro…  ( 25 min )
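    A hedged sketch with the official Kubernetes Python client, creating a ResourceQuota for a team namespace on an AKS cluster (the namespace name and limits are illustrative; the article also covers per-pod requests and limits):

    ```python
    # Hedged sketch using the official Kubernetes Python client: create a
    # ResourceQuota that caps CPU and memory for a team namespace on AKS.
    from kubernetes import client, config

    config.load_kube_config()  # uses your current kubectl context (e.g., the AKS cluster)

    quota = client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={
                "requests.cpu": "4",
                "requests.memory": "8Gi",
                "limits.cpu": "8",
                "limits.memory": "16Gi",
            }
        ),
    )

    core = client.CoreV1Api()
    core.create_namespaced_resource_quota(namespace="team-a", body=quota)
    print("ResourceQuota created for namespace team-a")
    ```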
  • Open

    Customer Case Study: Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel
    Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel We’re excited to introduce the Neon Serverless Postgres Connector for Microsoft Semantic Kernel, enabling developers to seamlessly integrate Neon’s serverless Postgres capabilities with AI-driven vector search and retrieval workflows. By leveraging the pgvector extension in Neon and the existing Postgres Vector Store connector, this integration […] The post Customer Case Study: Announcing the Neon Serverless Postgres Connector for Microsoft Semantic Kernel appeared first on Semantic Kernel.  ( 26 min )
    Guest Blog: Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel Part 2
    Today we’d like to welcome back a team of internal Microsoft employees for part 2 of their guest blog series focused on Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel. We’ll turn it over to our authors – Samer El Housseini, Riccardo Chiodaroli, Daniel Labbe and Fabrizio Ruocco to dive […] The post Guest Blog: Bridging Business and Technology: Transforming Natural Language Queries into SQL with Semantic Kernel Part 2 appeared first on Semantic Kernel.  ( 31 min )
  • Open

    Azure Monitor Application Insights Auto-Instrumentation for Java and Node Microservices on AKS
    Key Takeaways (TLDR) Monitor Java and Node applications with zero code changes Fast onboarding: just 2 steps Supports distributed tracing, logs, and metrics Correlates application-level telemetry in Application Insights with infrastructure-level telemetry in Container Insights Available today in public preview Introduction Monitoring your applications is now easier than ever with the public preview release of Auto-Instrumentation for Azure Kubernetes Service (AKS). You can now easily monitor your Java and Node deployments without changing your code by leveraging auto-instrumentation that is integrated into the AKS cluster.  This feature is ideal for developers or operators who are... Looking to add monitoring in the easiest way possible, without modifying code and avoiding ongoing SDK u…  ( 31 min )

  • Open

    Major Updates to VS Code Docker: Introducing Container Tools
    The first, most obvious thing is the introduction of the Container Tools extension to broaden our focus and open new extensibility opportunities. The existing extension code (and MIT license) will be migrated to the Container Tools extension, and the Docker extension will become an extension pack that includes the Docker DX and Container Tools extensions. For you, this means the ability to customize the tooling to meet your needs - choose your preferred container runtime and only the functionality that you need in the extension settings. This major update marks a significant step forward in enhancing the development experience when working with containers. Please comment here with any questions or feedback and stay tuned to experiment with the new features!   tl;dr  The Docker extension is becoming the Container Tools extension Still free and open source Podman support is coming No action is required  ( 19 min )
    Getting Started with .NET on Azure Container Apps
    Great news for .NET developers who would like to become familiar with containers and Azure Container Apps (ACA)! We just released a new Getting Started guide for .NET developers on Azure Container Apps. This guide is designed to help you get started with Azure Container Apps and understand how to build and deploy your applications using this service.   In a series of guided lessons, you will learn: All about container services on Azure - and where ACA fits in How to run a monolith on ACA How to add authentication to an app on ACA How to run microservices on ACA How to implement a CI/CD pipeline for ACA How to monitor and optimize your app for cost on ACA How to monitor the performance of your app on ACA How .NET Aspire helps to orchestrate your app on ACA All the code is available on Git…  ( 23 min )
  • Open

    Azure VMware Solution approved DISA Provisional Authorization of Azure Government at IL5
    Today we are pleased to announce that Azure VMware Solution in Microsoft Azure Government was approved and added as a service within the DISA Provisional Authorization of Azure Government at Impact Level 5. Azure VMware Solution (AVS) is a fully managed service in Azure that customers can use to extend their on-premises VMware vSphere workloads more seamlessly to the cloud, while maintaining their existing skills and operational processes. Learn more about how you can streamline your migration efforts with Azure VMware Solution in Azure Government. Azure VMware Solution was already approved at DoD Impact Level 4 in Azure Government. With this latest approval, DoD customers and their partners who require the higher impact level can now meet those requirements. Customers and their partners who require DoD Impact Level 2 can use Azure VMware Solution in Azure Commercial or Azure Government. For details about availability and pricing, please reach out to your Microsoft account team, and to learn more about getting started on Azure VMware Solution you can visit the documentation page. To learn more about DoD Impact Level scope for Azure Commercial and Azure Government, you can visit the Azure compliance documentation.  ( 19 min )
    Azure VMware Solution now available in the new AV48 node size in Japan East.
    Today we're announcing the availability of the Azure VMware Solution AV48 SKU in Japan East.  This SKU modernizes the CPU to the Intel Sapphire Rapids architecture and increases the deployed cores and memory per server to better accommodate today’s workloads. Key features of the new AV48 in Japan East: Dual Intel Xeon Gold 6442Y CPUs (Sapphire Rapids microarchitecture) with 24 cores/CPU @ 2.6 GHz / 3.3 GHz All Core Turbo / 4.0 GHz Max Turbo, Total 48 physical cores (96 logical cores with hyperthreading) 1TB of DRAM Memory 19.2TB storage capacity with all NVMe based SSDs 1.5TB of NVMe Cache For pricing reach out to your Microsoft Account Team, or visit the Azure Portal quota request page. Learn More  ( 18 min )
    Forward Azure VMware Solution logs anywhere using Azure Logic Apps
    Overview  As enterprises scale their infrastructure in Microsoft Azure using Azure VMware Solution, gaining real-time visibility into the operational health of their private cloud environment becomes increasingly critical. Whether troubleshooting deployment issues, monitoring security events, or performing compliance audits, centralized logging is a must-have.  Azure VMware Solution offers flexible options for exporting syslogs from vCenter Server, ESXi Hosts, and NSX components. While many customers already use Log Analytics or third-party log platforms for visibility, some have unique operational or compliance requirements that necessitate forwarding logs to specific destinations outside the Microsoft ecosystem.  With the advent of VMware Cloud Foundation on Azure VMware Solution, custom…  ( 30 min )
    Migrating from EKS to AKS: What Actually Matters
    If you're using the Azure Migration Hub as your starting point, you're already ahead. But when it comes to migrating Kubernetes workloads from EKS to AKS, there are still a few key details that can make or break your deployment. We recently walked through a real-world migration of a typical web app from AWS EKS to Azure AKS, and while the core containers came over cleanly, the supporting architecture required some careful rework. Here’s what stood out during the process—no fluff, just what matters when you're doing the work. Mind the Infrastructure, Not Just the App The app itself (a voting tool using Redis and Postgres) migrated easily. But things got more complex when we looked at the surrounding infrastructure: Ingress and Load Balancing: AWS ALB maps loosely to Azure Application Gatew…  ( 23 min )
  • Open

    Fabric Espresso – Episodes about Performance Optimization & Compute Management in Microsoft Fabric
    Fabric Espresso – Episodes About Performance Optimization & Compute Management in Microsoft Fabric  ( 6 min )

  • Open

    Azure Red Hat OpenShift: April 2025 Update
    Enterprise Kubernetes shouldn't be complicated or insecure. That's why our April 2025 update brings powerful new features to make your Azure Red Hat OpenShift experience even better. Here's what's new, with links to get you started right away!  🔐 Security Enhancements  Managed Identity & Workload Identity → Replace long-lived credentials with short-term tokens for enhanced security. Now in public preview! Only available for new cluster creation on ARO 4.13 and newer. Get implementation details and read the Red Hat blog.  Managed identity workload identity on Azure Red Hat OpenShift value proposition Cluster-Wide Proxy → Enable connectivity from ARO cluster components to external endpoints via corporate proxies. This feature is only for cluster components, not for customer workloads. Pe…  ( 23 min )
  • Open

    Azure Firewall and Service Endpoints
    In my recent blog series Private Link reality bites I briefly mentioned the possibility of inspecting Service Endpoints with Azure Firewall, and many have asked for more details on that configuration. Here we go! First things first: what the heck am I talking about? Most Azure services such as Azure Storage, Azure SQL and many others can be accessed directly over the public Internet. However, there are two alternatives to access those services over Microsoft's backbone: Private Link and VNet Service Endpoints. Microsoft's overall recommendation is using private link, but some organizations prefer leveraging service endpoints. Feel free to read this post on a comparison of the two. You might want to inspect traffic to Azure services with network firewalls, even if that traffic is leveraging…  ( 27 min )
  • Open

    AI Agents: Planning and Orchestration with the Planning Design Pattern - Part 7
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a focus on the Planning Design Pattern. In previous posts (links at the end!), we've built a strong foundation in AI agent concepts. Now, we'll explore how to design agents that can effectively plan and orchestrate complex tasks, breaking them down into manageable subtasks and coordinating their execution. Introduction to Planning Design The Planning Design Pattern helps AI agents tackle complex goals by providing a structured approach to task decomposition and execution. This involves: Defining a clear overall goal. Breaking down the task into smaller, manageable subtasks. Leveraging structured output for easier processing. Using an event-driven approach for dynamic a…  ( 24 min )
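    A minimal, hypothetical sketch of the decomposition idea described above (not code from the AI Agents for Beginners repository): a Pydantic model captures the overall goal plus typed subtasks, so downstream agents receive a structured plan instead of free-form text. The SubTask and TravelPlan names are invented for illustration.

    ```python
    # Hypothetical sketch of the Planning Design Pattern: a goal decomposed into
    # typed subtasks that specialized agents can pick up one by one.
    from typing import List
    from pydantic import BaseModel

    class SubTask(BaseModel):
        assigned_agent: str   # e.g. "flight_booking" or "hotel_booking"
        task_details: str     # what this agent should do

    class TravelPlan(BaseModel):
        main_task: str            # the user's overall goal
        subtasks: List[SubTask]   # ordered, manageable pieces of the goal

    def dispatch(plan: TravelPlan) -> None:
        """Event-driven style: hand each subtask to its agent in turn."""
        for sub in plan.subtasks:
            print(f"[{sub.assigned_agent}] {sub.task_details}")

    plan = TravelPlan(
        main_task="Plan a 3-day trip to Paris",
        subtasks=[
            SubTask(assigned_agent="flight_booking", task_details="Find round-trip flights"),
            SubTask(assigned_agent="hotel_booking", task_details="Book a hotel near the Louvre"),
        ],
    )
    dispatch(plan)
    ```

    In practice the plan object would be produced by an LLM via structured output and consumed by an orchestrator, which is the pattern the post walks through.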
  • Open

    Empowering businesses with smart capacity planning: Introducing the Microsoft Fabric SKU estimator (Preview)
    We’re excited to unveil the Microsoft Fabric SKU estimator, now available in preview—an enhanced version of the previously introduced Microsoft Fabric Capacity Calculator. This advanced tool has been refined based on extensive user feedback to provide tailored capacity estimations for businesses. Designed to optimize data infrastructure planning, the Microsoft Fabric SKU Estimator helps customers and … Continue reading “Empowering businesses with smart capacity planning: Introducing the Microsoft Fabric SKU estimator (Preview)”  ( 6 min )
    Purview DLP Policies with Restrict Access for Fabric Lakehouses (Preview)
    In today’s fast-paced data-driven world, enterprises are building more sophisticated data platforms to gain insights and drive innovation. Microsoft Fabric Lakehouses combine the scale of a data lake with the management finesse of a data warehouse – delivering unified analytics in an ever-evolving business landscape. But with great data comes great responsibility. Protecting sensitive information … Continue reading “Purview DLP Policies with Restrict Access for Fabric Lakehouses (Preview)”  ( 6 min )
    Microsoft Purview Data Loss Prevention policies for Fabric have been extended to KQL and Mirrored Databases (Preview)
    Microsoft Purview’s Data Loss Prevention (DLP) policies for Fabric now supports Fabric KQL and Mirrored DBs! Purview DLP policies help organizations to improve their data security posture and comply with governmental and industry regulations. Security teams use DLP policies to automatically detect upload of sensitive information to Microsoft 365 applications like SharePoint and Exchange, and … Continue reading “Microsoft Purview Data Loss Prevention policies for Fabric have been extended to KQL and Mirrored Databases (Preview)”  ( 6 min )
  • Open

    Evaluating Agentic AI Systems: A Deep Dive into Agentic Metrics
    In this post, we explore the latest Agentic metrics introduced in the Azure AI Evaluation library, a Python library designed to assess generative AI systems with both traditional NLP metrics (like BLEU and ROUGE) and AI-assisted evaluators (such as relevance, coherence, and safety). With the rise of agentic systems, the library now includes purpose-built evaluators for complex agent workflows. We’ll focus on three key metrics: Task Adherence, Tool Call Accuracy, and Intent Resolution—each capturing a critical dimension of an agent’s performance. To help illustrate these evaluation strategies, you can find AgenticEvals, a simple public repo that showcases these metrics in action using Semantic Kernel for the agentic/orchestration layer and Azure AI Evaluation library for the evaluation.   W…  ( 28 min )
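    As a rough, hedged sketch of what calling one of these agentic evaluators can look like (this is not code from the AgenticEvals repo), the snippet below uses the azure-ai-evaluation package. The IntentResolutionEvaluator class name, its keyword arguments, and the model_config fields are assumptions based on the library's evaluator pattern and may differ across preview versions.

    ```python
    # Hedged sketch: scoring intent resolution with azure-ai-evaluation.
    # Class name and call signature are assumptions for preview versions of the package.
    import os
    from azure.ai.evaluation import IntentResolutionEvaluator  # preview evaluator (assumed)

    model_config = {
        "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
        "api_key": os.environ["AZURE_OPENAI_API_KEY"],
        "azure_deployment": "gpt-4o",  # judge-model deployment name (example)
    }

    evaluator = IntentResolutionEvaluator(model_config=model_config)
    result = evaluator(
        query="Book me a table for two tomorrow at 7pm",
        response="I've reserved a table for two at 7pm tomorrow at Bistro 21.",
    )
    print(result)  # typically a score plus a short reasoning string
    ```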

  • Open

    Model Mondays: Bringing AI Home with Local Development
    As generative AI tools become more powerful, developers are looking for faster, more flexible ways to experiment, fine-tune, and deploy models. But not every workflow starts—or needs to stay—in the cloud. In this episode of Model Mondays, we explore how the AI Toolkit for Visual Studio Code is transforming the development experience by enabling local AI workflows that give you more control, faster iteration, and seamless integration with your existing tools. Whether you're working in a constrained environment, or just prefer to prototype locally, this toolkit makes it possible to run and refine AI models right from your own machine. Whether you're tinkering with models on a plane, prototyping in a coffee shop, or just want to test your prompts in peace, this toolkit has your back. It's like taking your favorite AI models on a road trip... and they actually behave. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4 : Visual Generative Models Episode 5 : Fine-Tuning Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub So, grab your laptop, launch VS Code, and let’s bring AI development home!!  ( 22 min )
  • Open

    Host remote MCP servers in Azure Container Apps
    Whether you're building AI agents or using LLM powered tools like GitHub Copilot in Visual Studio Code, you're probably hearing a lot about MCP (Model Context Protocol) lately; maybe you're already using it. It's quickly becoming the standard interoperability layer between different components of the AI stack. In this article, we'll explore how to run remote MCP servers as serverless containers in Azure Container Apps and use them in GitHub Copilot in Visual Studio Code. MCP servers today MCP follows a client-server architecture. It all starts with a client, such as GitHub Copilot in VS Code or Claude Desktop. A client connects to one or more MCP servers. Servers are the main extensibility points in MCP. Each server provides new tools, skills, and capabilities to the client. For example, a…  ( 32 min )
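    Before containerizing anything, it helps to see how small an MCP server can be. Here is a hedged sketch using the Python MCP SDK's FastMCP helper; the import path and the "sse" transport name follow the current SDK but may change between versions, and the add tool is purely illustrative.

    ```python
    # Minimal MCP server sketch with the Python MCP SDK (FastMCP).
    # Import path and transport name may vary by SDK version.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers together."""
        return a + b

    if __name__ == "__main__":
        # An HTTP-based transport (here SSE) is what lets the server run remotely
        # in a container instead of over stdio on the client's machine.
        mcp.run(transport="sse")
    ```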
  • Open

    Guest Blog: Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework
    Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework Step-by-Step guide on creating your first process with AI Microsoft’s AI Framework, Semantic Kernel, is an easy-to-use C#, Java, and Python-based AI framework that helps you quickly build AI solutions or integrate AI capabilities into your existing app. Semantic Kernel provides various […] The post Guest Blog: Revolutionize Business Automation with AI: A Guide to Microsoft’s Semantic Kernel Process Framework appeared first on Semantic Kernel.  ( 26 min )
  • Open

    How to use any Python AI agent framework with free GitHub Models
    I ❤️ when companies offer free tiers for developer services, since it gives everyone a way to learn new technologies without breaking the bank. Free tiers are especially important for students and people between jobs, when the desire to learn is high but the available cash is low. That's why I'm such a fan of GitHub Models: free, high-quality generative AI models available to anyone with a GitHub account. The available models include the latest OpenAI LLMs (like o3-mini), LLMs from the research community (like Phi and Llama), LLMs from other popular providers (like Mistral and Jamba), multimodal models (like gpt-4o and llama-vision-instruct) and even a few embedding models (from OpenAI and Cohere). With access to such a range of models, you can prototype complex multi-model workflows to im…  ( 32 min )
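    To make the free-tier point concrete, here is a small example that points the standard openai Python client at the GitHub Models endpoint. The endpoint URL and model name follow the GitHub Models documentation at the time of writing; treat both as subject to change.

    ```python
    # Call a GitHub Models-hosted model with the standard OpenAI Python client.
    # Requires `pip install openai` and a GitHub personal access token in GITHUB_TOKEN.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://models.inference.ai.azure.com",  # GitHub Models endpoint
        api_key=os.environ["GITHUB_TOKEN"],                # your GitHub token acts as the API key
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Write a haiku about free developer tools."}],
    )
    print(response.choices[0].message.content)
    ```

    Because this is the plain OpenAI SDK with a custom base URL, the same pattern drops into most Python agent frameworks, which is exactly what the post goes on to demonstrate.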
  • Open

    General-Purpose vs Reasoning Models in Azure OpenAI
    1. Introduction Since Large Language Models (LLMs) have become mainstream, a wide range of models have emerged to serve different types of tasks—from casual chatbot interactions to advanced scientific reasoning. If you're familiar with GPT-3.5 and GPT-4, you'll know that these models set a high standard for general-purpose AI. But as the field evolves, the distinction between model types has become more pronounced. In this blog, we'll explore the differences between two major categories of LLMs: General-Purpose Models – Designed for broad tasks like conversation, content generation, and multimodal input processing. Reasoning Models – Optimized for tasks requiring logic, problem-solving, and step-by-step breakdowns. We'll use specific models available in Azure OpenAI as examples to illust…  ( 40 min )
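    As a quick, hedged illustration of how the two families are called differently in Azure OpenAI (deployment names and the API version below are placeholders for your own values): general-purpose models accept sampling controls such as temperature, while reasoning models such as o3-mini reject them and take a max_completion_tokens budget instead.

    ```python
    # Hedged sketch: calling a general-purpose vs a reasoning deployment.
    # Deployment names and api_version are placeholders.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-12-01-preview",  # example; check your resource's supported versions
    )

    # General-purpose model: temperature and other sampling knobs apply.
    chat = client.chat.completions.create(
        model="gpt-4o",  # your chat-model deployment name
        messages=[{"role": "user", "content": "Summarize this release note in one sentence."}],
        temperature=0.7,
    )

    # Reasoning model: no temperature; budget the hidden reasoning with max_completion_tokens.
    reasoning = client.chat.completions.create(
        model="o3-mini",  # your reasoning-model deployment name
        messages=[{"role": "user", "content": "Solve step by step: if 3x + 7 = 22, what is x?"}],
        max_completion_tokens=2000,
    )

    print(chat.choices[0].message.content)
    print(reasoning.choices[0].message.content)
    ```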

  • Open

    Resolving Microsoft Graph PowerShell 2.26+ compatibility issues with Azure Runbooks
    We know how important Azure Automation workflows and appreciate the critical role played by automation runbooks. Some customers have experienced issues with the release of 2.26.1 of the Microsoft Graph PowerShell SDK, particularly when running PowerShell 7.2 runbooks in Azure Automation. The core challenge is a conflict around .NET 6 where fixing Runbooks would break […] The post Resolving Microsoft Graph PowerShell 2.26+ compatibility issues with Azure Runbooks appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    Configure Virtual Applications, Mounted Azure Files, and Static File Access in Azure App Service
    Background: In Azure App Service, developers often need to serve files (images, config files, data files, etc.) that are stored outside the app’s wwwroot folder — such as on an Azure File Share. This is especially useful when: You want to share files across multiple web apps Your files are too large to bundle with the app You need to manage files independently from the app deployment To accomplish this, Azure provides the ability to: Mount external Azure File Shares into your web app's file system Expose those mounted folders via virtual paths Configure directory browsing and MIME types to make files directly accessible over the browser Step-by-Step Configuration: ===Azure Storage Account part=== Create Azure File Share ===Azure App Service part=== 1.Mount Azure File Share Azure Porta…  ( 23 min )
    Announcing the Public Preview of the New Hybrid Connection Manager (HCM)
    Key Features and Improvements The new version of HCM introduces several enhancements aimed at improving usability, performance, and security: Cross-Platform Compatibility: The new HCM is now supported on both Windows and Linux clients, allowing for seamless management of hybrid connections across different platforms, providing users with greater flexibility and control. Enhanced User Interface: We have redesigned the GUI to offer a more intuitive and efficient user experience. In addition to a new and more accessible GUI, we have also introduced a CLI that includes all the functionality needed to manage connections, especially for our Linux customers who may solely use a CLI to manage their workloads. Improved Visibility: The new version offers enhanced logging and connection testing, whi…  ( 27 min )
  • Open

    Best Practices for Mitigating Hallucinations in Large Language Models (LLMs)
    Real-world AI Solutions: Lessons from the Field Overview  This document provides practical guidance for minimizing hallucinations—instances where models produce inaccurate or fabricated content—when building applications with Azure AI services. It targets developers, architects, and MLOps teams working with LLMs in enterprise settings.   Key Outcomes ✅ Reduce hallucinations through retrieval-augmented strategies and prompt engineering✅ Improve model output reliability, grounding, and explainability✅ Enable robust enterprise deployment through layered safety, monitoring, and security   Understanding Hallucinations Hallucinations come in different forms. Here are some realistic examples for each category to help clarify them: Type Description Example Factual Outputs are incorrect or…  ( 31 min )
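    To make the retrieval-grounding idea concrete in the simplest terms (a generic sketch, not the document's reference architecture), the helper below builds a prompt that restricts the model to the retrieved passages and gives it an explicit way to decline when the answer is not present:

    ```python
    # Generic grounding sketch: constrain the model to retrieved sources and
    # allow an explicit "I don't know" answer to reduce fabricated content.
    def build_grounded_prompt(question: str, passages: list) -> list:
        context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        system = (
            "Answer ONLY from the numbered sources below and cite the source number. "
            "If the answer is not in the sources, reply exactly: I don't know."
        )
        return [
            {"role": "system", "content": system},
            {"role": "user", "content": f"Sources:\n{context}\n\nQuestion: {question}"},
        ]

    messages = build_grounded_prompt(
        "What is the refund window?",
        ["Refunds are accepted within 30 days of purchase with a receipt."],
    )
    # `messages` can now be passed to any chat-completions style API.
    print(messages[0]["content"])
    ```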
    New enhanced navigation in Azure AI Search
    Faceted navigation is a key component of search experiences, helping users intuitively drill down through large sets of search results by refining their queries quickly and efficiently.  We are announcing several improvements to facets in preview: Hierarchical facets enable developers to create multi-level navigation trees, offering a more organized view of search categories Facet filtering provides precision by allowing regular expressions to refine the facet values displayed Facet summing introduces the ability to aggregate numeric data within facet Hierarchical Facets Facets in Azure AI Search were previously limited to a flat, one layer model. Consider the following index which models a product catalog: Product ID Name Category Subcategory Price P001 UltraHD Smart TV Elect…  ( 25 min )
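    For readers new to facets, here is a hedged sketch of requesting a flat facet over an index like the product catalog above with the azure-search-documents Python SDK; the index and field names are examples, and the hierarchical, filtered, and summed facet expressions announced here are preview features whose exact syntax should be taken from the preview documentation.

    ```python
    # Hedged sketch: request a flat "Category" facet over an example products index.
    # Endpoint, key, index, and field names are placeholders.
    import os
    from azure.core.credentials import AzureKeyCredential
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint=os.environ["SEARCH_ENDPOINT"],  # e.g. https://<service>.search.windows.net
        index_name="products",
        credential=AzureKeyCredential(os.environ["SEARCH_API_KEY"]),
    )

    results = client.search(search_text="*", facets=["Category,count:10"], top=0)

    for bucket in results.get_facets()["Category"]:
        print(bucket["value"], bucket["count"])
    ```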
  • Open

    Azure Training Maps
    Overview The Azure Training Maps are a comprehensive visual guide to the Azure ecosystem, integrating all the resources, tools, structures, and connections covered in the course into one inclusive diagram. It enables students to map out and understand the elements they've studied, providing a clear picture of their place within the larger Azure ecosystem. It serves as a 1:1 representation of all the topics officially covered in the instructor-led training. Formats available include PDF, Visio, Excel, and Video.     Links: Each icon in the blueprint has a hyperlink to the pertinent document in the learning path on Learn. Layers: You have the capability to filter layers to concentrate on segments of the course by modules. I.E.: Just day 1 of AZ-104, using filters in Visio and selecting modu…  ( 25 min )
    Synthetic Monitoring in Application Insights Using Playwright: A Game-Changer
    Monitoring the availability and performance of web applications is crucial to ensuring a seamless user experience. Azure Application Insights provides powerful synthetic monitoring capabilities to help detect issues proactively. However, Microsoft has deprecated two key features: (Deprecated) Multi-step web tests: Previously, these allowed developers to record and replay a sequence of web requests to test complex workflows. They were created in Visual Studio Enterprise and uploaded to the portal. (Deprecated) URL ping tests: These tests checked if an endpoint was responding and measured performance. They allowed setting custom success criteria, dependent request parsing, and retries. With these features being phased out, we are left without built-in logic to test application health beyon…  ( 26 min )
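    As a hedged sketch of the replacement approach (the URL and title check below are placeholders for your own application and success criteria), a Playwright script can drive a real browser through a flow and fail loudly when the page is unhealthy, which the deprecated URL ping tests could not do:

    ```python
    # Hedged sketch: a minimal Playwright availability check that could run on a
    # schedule (e.g. from an Azure Functions timer) and report to Application Insights.
    # The URL and expected title are placeholders for your own app.
    from playwright.sync_api import sync_playwright

    def check_site(url: str, expected_title: str) -> None:
        with sync_playwright() as pw:
            browser = pw.chromium.launch()
            page = browser.new_page()
            response = page.goto(url, timeout=15_000)
            assert response is not None and response.ok, f"Bad HTTP status for {url}"
            assert expected_title in page.title(), "Unexpected page title"
            browser.close()

    if __name__ == "__main__":
        check_site("https://example.org", "Example Domain")
    ```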
  • Open

    Microsoft at PyTexas 2025: Join Us for a Celebration of Python and Innovation
    Microsoft is thrilled to announce our participation in PyTexas 2025, taking place this year in the vibrant city of Austin, Texas! At this year’s event, Microsoft is proud to contribute to the community’s growth and excitement by hosting a booth and delivering an engaging talk. The post Microsoft at PyTexas 2025: Join Us for a Celebration of Python and Innovation appeared first on Microsoft for Python Developers Blog.  ( 23 min )
  • Open

    New agents and Copilot Chat for frontline staff
    Stay productive and connected with AI-powered experiences for frontline workers using Android and iOS devices to quickly sign in, manage tasks, find information, and collaborate with team members. Easily access essential resources, check inventory, and get instant answers from Copilot Chat and AI agents that you can build in SharePoint and Microsoft Copilot Studio. Microsoft 365’s latest device experiences take frontline productivity and customer interactions to the next level — from improving customer interactions to managing worksites.  Avery Salumbides, Microsoft 365 Frontline Product Manager, demonstrates key updates across Microsoft Teams, Copilot, and AI agents on mobile, plus essential admin setup considerations to equip your frontline. Secure, fast sign-in. Less downtime and more…  ( 44 min )

  • Open

    Microsoft 365 Certification control spotlight: Privacy
    Discover how Microsoft 365 Certification ensures ISVs use the latest privacy and personally identifiable information (PII) management controls to protect customer data. The post Microsoft 365 Certification control spotlight: Privacy appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Boost your development with Microsoft Fabric extensions for Visual Studio Code
    Microsoft Fabric is changing how we handle data engineering and data science. To make things easier, Microsoft added some cool extensions for Visual Studio Code (VS Code) that help you manage Fabric artifacts and build analytical applications. By adding these Microsoft Fabric extensions to VS Code, developers can quickly create Fabric solutions and manage their … Continue reading “Boost your development with Microsoft Fabric extensions for Visual Studio Code”  ( 7 min )
    Recap of Data Factory Announcements at Fabric Conference US 2025
    We had such an exciting week for Fabric during the Fabric Conference US, filled with several product announcements and sneak previews of upcoming new features. Thanks to all of you who participated in the conference, either in person or by being part of the many virtual conversations through blogs, Community forums, social media and other … Continue reading “Recap of Data Factory Announcements at Fabric Conference US 2025”  ( 16 min )
    Use Service Principals to create shortcuts to ADLS Gen2 storage accounts with trusted access
    You now have the capability with service principals to create shortcuts to Azure Data Lake Storage (ADLS) Gen2 storage accounts that have firewall enabled.  Previously, the creation of ADLS Gen2 shortcuts by service principals was restricted when firewall settings were active. However, with the latest changes, service principals will be able to navigate these restrictions … Continue reading “Use Service Principals to create shortcuts to ADLS Gen2 storage accounts with trusted access”  ( 6 min )
  • Open

    Resolving Azure App Service Mount Failures with File Share and Blob Storage
    When using Azure App Service to host web applications, it is common to mount file shares or blob storage hosted in an Azure Storage Account, a configuration also known as "Bring Your Own Storage (BYOS)". While the setup process is seamless, troubleshooting mount issues can be challenging due to different authentication, networking and other configuration aspects. Whether you are encountering errors during the application startup, or viewing any permission denied messages, this guide will help you by going through a step-by-step checklist, to validate the underlying dependencies and settings required for a successful mount: On Azure portal, open the Web App Configuration menu and select the Path Mappings tab. Confirm that the external storage is not being mounted to the unsupported filesys…  ( 27 min )
    How can I hide the Server information in the response headers in PHP?
    In certain scenarios, you might want to remove the server information from your response headers, so we might consider hiding that information. In Azure App Service for PHP, we are using Nginx, and we can modify its configuration files if necessary. First, we need to locate the Nginx configuration file on the Kudu site, which can be found at the path /etc/nginx/nginx.conf. Then, perform cp /etc/nginx/nginx.conf /home/nginx.conf. We modify the configuration file under /home to retain our changes. We open the configuration file and uncomment the server_tokens off; directive in the http section of the Nginx configuration. Then you need to configure the startup command using the Azure Portal from Configuration -> General Settings as below: cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload Checking again, we can see that the Nginx version is hidden. But what if we want to hide all the server information? To do this, follow these steps: (1) Copy the Nginx configuration file to the /home directory as we mentioned earlier. This is necessary because any files outside of /home will not be preserved after a restart. Use the following command: cp /etc/nginx/nginx.conf /home/nginx.conf (2) Open the Nginx configuration file located in /home, and add the following line in the http section: more_clear_headers 'server'; After adding it, save the file. (3) Update the custom startup command using the Azure Portal from Configuration -> General Settings as follows: apt update && apt install -y nginx-extras && cp /home/nginx.conf /etc/nginx/nginx.conf && service nginx reload (4) Once done, the response headers should no longer display the Server information.   Reference: How to set Nginx headers  ( 21 min )
    Connect Azure SQL Server via System Assigned Managed Identity under ASP.NET
    TOC Why we use it Architecture How to use it References   Why we use it This tutorial will introduce how to integrate Microsoft Entra with Azure SQL Server to avoid using fixed usernames and passwords. By utilizing System-assigned managed identities as a programmatic bridge, it becomes easier for Azure-related PaaS services (such as Container Apps) to communicate with the database without storing connection information in plain text.   Architecture I will introduce each service or component and their configurations in subsequent chapters according to the order of A-C: A: The company's account administrator needs to create or designate a user as the database administrator. This role can only be assigned to one person within the database and is responsible for basic configuration and the cr…  ( 33 min )
  • Open

    The Smarter Way to Migrate to Azure
    Cut Through the Chaos If you’ve worked on a cloud migration, you already know—it’s not just about moving workloads. It’s about navigating ambiguity. Mapping services. Aligning teams. Untangling legacy architecture. Making decisions that won’t come back to bite you six months down the line. We built the Azure Migration Hub to bring order to that mess. It’s not a campaign or a marketing layer. It’s a guide—built by architects, engineers, and field teams who’ve done this work at scale—designed to help others do it better. The Migration Hub offers a structured path forward, connecting strategy, architecture patterns, and tooling guidance in the sequence real teams actually need. It also incorporates Azure Essentials best practices, which facilitate a smoother and more efficient transition to t…  ( 24 min )
    Future-proof your workloads with Windows Server updates and Azure migration skilling
    Windows Server has been a trusted solution for businesses for over 30 years, providing reliability, security, and scalability for critical workloads. With the introduction of Windows Server 2025, Microsoft is enhancing security, performance, and cloud integration to help organizations focus on innovation rather than administrative overhead. Designed for seamless connectivity with Azure, Windows Server 2025 enables your business to leverage cloud-native tools and hybrid management capabilities effortlessly.   In this blog we’ll focus on the value of migrating your on-premises Windows Server workloads to Azure and review official Plans on Microsoft Learn—Migrate and Secure Windows Server Workloads on Azure—that will help your team succeed. We’re also excited to invite you to our upcoming Win…  ( 28 min )
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events
    Power your Linux and PostgreSQL innovation with Azure migration skilling and community events  Managing on-prem and hybrid Linux and PostgreSQL workloads presents ongoing challenges, from hardware maintenance and scalability limitations to security risks and operational complexity. As workloads grow, so do costs and administrative burdens of keeping them performant and resilient. Migrating to Azure provides a modern, cloud-based solution that enhances security, scalability, and cost efficiency—freeing teams to focus on innovation rather than infrastructure management.   In this blog, we’ll explore not only the value of migrating Linux and PostgreSQL workloads to Azure, but also some crucial, expert-curated skilling resources we provide to help your team master the process. Plus, we’ll disc…  ( 30 min )
  • Open

    New scale options in Azure AI Search: change your pricing tier and service upgrade
    Introduction Azure AI Search is announcing two new preview features that make it easier to scale your search service to avoid production issues from storage limitations as your needs grow. Available in preview today: Change your pricing tier: Change the tier of your existing Azure AI Search service via the portal or management plane REST API. Self-service upgrade: Upgrade your search service to enable features previously only available in new services, such as the new storage limits released in April 2024. Change your pricing tier to scale up Now you can change your service tier from the Azure portal or management plane API. It's a simple scaling operation like adding partitions or replicas that ensures uninterrupted growth and operational continuity. Before, if you reached the maximum t…  ( 28 min )
  • Open

    Announcing Hybrid Search with Semantic Kernel for .NET
    Today we’re thrilled to announce support for Hybrid search with Semantic Kernel Vector Stores for .NET. What is Hybrid Search? Hybrid search performs two parallel searches on a vector database.  The union of the results of these two searches are then returned to callers with a combined rank, based on the rankings from each of […] The post Announcing Hybrid Search with Semantic Kernel for .NET appeared first on Semantic Kernel.  ( 23 min )

  • Open

    Guest Blog: A Comprehensive Guide to Agentic AI with Semantic Kernel
    Today we’re excited to welcome Arafat Tehsin, who’s a Microsoft Most Valuable Professional (MVP) for AI. back as a guest author on the Semantic Kernel blog today to cover his work on a Comprehensive Guide to Agentic AI with Semantic Kernel. We’ll turn it over to Arafat to dive in further. The world of AI is evolving […] The post Guest Blog: A Comprehensive Guide to Agentic AI with Semantic Kernel appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing the agent debugging experience in Microsoft 365 Copilot
    Learn how to debug your agents within Microsoft 365 Copilot to streamline your workflow using the agent debugging experience, now generally available. The post Introducing the agent debugging experience in Microsoft 365 Copilot appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Lease Management in Azure Storage & Common troubleshooting scenarios
    The blog explains how lease management in Azure Storage works, covering the management of concurrent access to blobs and containers. It discusses key concepts such as acquiring, renewing, changing, releasing, and breaking leases, ensuring only the lease holder can modify or delete a resource for a specified duration. Additionally, it explores common troubleshooting scenarios in Azure Storage Lease Management. Lease management in Azure Storage allows you to create and manage locks on blobs for write and delete operations. This is particularly useful for ensuring that only one client can write to a blob at a time, preventing conflicts and ensuring data consistency. Key Concepts: Lease States: A blob can be in one of several lease states, such as Available, Leased, Expired, Breaking, and Bro…  ( 33 min )
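    A hedged sketch of the acquire, write-under-lease, release cycle with the azure-storage-blob SDK follows; the connection string, container, and blob names are placeholders.

    ```python
    # Hedged sketch: acquire a blob lease, write while holding it, then release it.
    # Connection string, container, and blob names are placeholders.
    import os
    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(
        conn_str=os.environ["AZURE_STORAGE_CONNECTION_STRING"],
        container_name="demo-container",
        blob_name="state.json",
    )

    # Acquire a 15-second lease (15-60 seconds, or infinite, are the allowed durations).
    lease = blob.acquire_lease(lease_duration=15)
    try:
        # While the lease is held, writes and deletes must present it;
        # other writers receive 412 (precondition failed) errors.
        blob.upload_blob(b'{"status": "locked update"}', overwrite=True, lease=lease)
    finally:
        # Release promptly so other clients are not blocked until the lease expires.
        lease.release()
    ```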
    Performing Simple SFTP Operations on Azure Blob Storage using CURL Commands
    Introduction Azure Blob Storage now supports the SFTP protocol, making it easier to interact with blobs using standard tools like curl. This blog guides you through performing simple upload, download, delete, and list operations using curl over SFTP with Azure Blob Storage.   Pre-requisites Azure Blob Storage with SFTP enabled (Storage Account must have hierarchical namespace enabled) Enable SFTP Local user created in Azure Storage Account with SSH Key Pair as Authentication method and appropriate container permissions. Private key (.pem file) for SFTP authentication. Curl tool installed (version 7.55.0+ for SFTP support) Please note that the following tests are inclined towards Curl on Windows. The same can be performed with other OS with appropriate format changes.   Authentication Sup…  ( 25 min )
  • Open

    Understanding Real-Time Intelligence CDC connector for PostgreSQL database
    Coauthor: Aazathraj, Chief Data Architect, Apollo Hospitals. Real-Time Intelligence in Microsoft Fabric provides multiple database change data capture (CDC) connectors, including SQL database, MySQL, PostgreSQL, and Cosmos DB, which allow anyone to easily react and take action on database changes in real time. Each of the databases works differently when it comes to enabling CDC, giving permission … Continue reading “Understanding Real-Time Intelligence CDC connector for PostgreSQL database”  ( 9 min )
  • Open

    Strapi on App Service: Quick start
    Introduction Strapi is an open-source, headless CMS that is highly customizable and developer-friendly, making it a popular choice for content management. When it comes to Strapi hosting, deploying Strapi, or self hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability.  In this quick start guide, you will learn how to create and deploy your first Strapi site on Azure App Service Linux, using Azure Database for MySQL or PostgreSQL, along with other necessary Azure resources. Steps to deploy Strapi on App Service What is Strapi on App Service? App Service is a fully managed platform for building, deploying, and scaling web apps. Deploying…  ( 42 min )
    Strapi on App Service: FAQ
    Where to host Strapi? How to self-host Strapi? When it comes to Strapi hosting, deploying Strapi, or self-hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability. Deploy Strapi on Azure App Service to leverage Strapi's flexible content management capabilities with the robust infrastructure of Microsoft's cloud. With greater customization control, global region availability, pre-built integration with other Azure services, Strapi on Azure App Service simplifies infrastructure management while ensuring high availability, security, and performance. Learn more from documentation below, Strapi on App Service - Overview How to deploy Strapi o…  ( 56 min )
    Strapi on App Service: Overview
    What is Strapi on App Service? Strapi is an open-source, headless CMS that is highly customizable and developer-friendly, making it a popular choice for content management. When it comes to Strapi hosting, deploying Strapi, or self hosting Strapi, Azure App Service stands out as a premier solution. Azure App Service is a fully managed platform for building, deploying, and scaling web apps, offering unparalleled scalability and reliability. Deploy Strapi on Azure App Service to leverage Strapi's flexible content management capabilities with the robust infrastructure of Microsoft's cloud. Whether you're looking to self-host Strapi or find the best hosting options for Strapi, Azure App Service provides the ideal environment for high availability, security, and performance. This offering integ…  ( 36 min )
    Getting Started with Linux WebJobs on App Service - NodeJS
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service web app. If you already have an App Service Web App, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. Follow the create instructions, selecting the Node 20 LTS runtime stack, to create your App Service Web App. The stack must be Node, since we plan on writing our WebJob using Node and a bash startup script. For this example, we’ll use Node 20 LTS. Next, we’ll add a basic Web…  ( 25 min )
    Getting Started with Linux WebJobs on App Service – PHP
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service web app. If you already have an App Service Web App, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. Follow the create instructions, selecting the PHP 8.4 runtime stack, to create your App Service Web App. The stack must be PHP, since we plan on writing our WebJob using PHP and a bash startup script. For this example, we’ll use PHP 8.4. Next, we’ll add a basic WebJob to our…  ( 25 min )
    Getting Started with Linux WebJobs on App Service - .NET 9
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service web app. If you already have an App Service Web App, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. Follow the create instructions, selecting the .NET 9 runtime stack, to create your App Service Web App. The stack must be .NET, since we plan on writing our WebJob using .NET and a bash startup script. For this example, we’ll use .NET 9. Next, we’ll add a basic WebJob to our…  ( 26 min )
  • Open

    Fast Stress Test of DeepSeek 671B on Azure AMD MI300X
    This article builds on the following post, which is worth reading first: https://techcommunity.microsoft.com/blog/azurehighperformancecomputingblog/running-deepseek-r1-on-a-single-ndv5-mi300x-vm/4372726   Azure GPU VM Environment Preparation: Quickly create a Spot VM using password-based authentication: az vm create --name --resource-group --location --image microsoft-dsvm:ubuntu-hpc:2204-rocm:22.04.2025030701 --size Standard_ND96isr_MI300X_v5 --security-type Standard --priority Spot --max-price -1 --eviction-policy Deallocate --os-disk-size-gb 256 --os-disk-delete-option Delete --admin-username azureadmin --authentication-type password --admin-password The CLI command I used to create the VM: xinyu [ ~ ]$ az vm create --name mi300x-x…  ( 27 min )
  • Open

    Microsoft Dev Box subscriptions and licensing requirements demystified: What You Need and Why
    Thinking of deploying Microsoft Dev Box service but want to understand the licensing and subscription requirements first? Then, you have come to the right place. This blog post breaks it all down—clearly and simply—so you know exactly what you need and why, whether you’re an IT admin, platform engineer, developer, or a decision-maker. Microsoft Dev […] The post Microsoft Dev Box subscriptions and licensing requirements demystified: What You Need and Why appeared first on Develop from the cloud.  ( 24 min )
  • Open

    April Patches for Azure DevOps Server and Team Foundation Server
    Today we are releasing patches that impact our self-hosted product, Azure DevOps Server, as well as Team Foundation Server 2018.3.2. We strongly encourage and recommend that all customers use the latest, most secure release of Azure DevOps Server. You can download the latest version of the product, Azure DevOps Server 2022.2 from the Azure DevOps […] The post April Patches for Azure DevOps Server and Team Foundation Server appeared first on Azure DevOps Blog.  ( 23 min )

  • Open

    Announcing CI/CD Enhancements for Azure Load Testing
    We are excited to announce a significant update to our Azure Load Testing service, aimed at enhancing the experience of setting up and running load tests from CI/CD systems, including Azure DevOps and GitHub. This update is a direct response to customer feedback and is designed to streamline the process, making it more efficient and user-friendly. Key Features and Improvements: Enhanced CI/CD Integration: Developers and testers can now configure application components and the metrics to monitor directly from a CI/CD pipeline. This integration allows monitoring the application infrastructure during the test run. You can make the following changes to your load test YAML config. appComponents: - resourceId: "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/samplerg/provid…  ( 23 min )
    Meet your hosts for JDConf 2025!
    JDConf 2025 is right around the corner and is set to be a global gathering for Java developers passionate about Cloud, AI, and the future of Java. With 22+ sessions and 10+ hours of live content streaming from April 9 - 10, plus additional on-demand sessions, this year’s event dives into app modernization, intelligent apps, frameworks, and AI-powered development with tools like Copilot and more. We are excited to invite you to join us with our three distinguished hosts: Bruno Borges, Sandra Ahlgrimm, and Rory Preddy. Meet Bruno Borges - Your host for JDConf 2025 Americas Bruno Borges is a seasoned professional with a rich background in the tech industry. Currently serving as Principal Product Manager at Microsoft, he focuses on Java developers' experience on Azure and beyond. With over tw…  ( 25 min )
  • Open

    Python Vector Store Connectors update: Faiss, Azure SQL Server and Pinecone
    Announcing New Vector Stores: Faiss, SQL Server, and Pinecone We are thrilled to announce the availability of three new Vector Stores and Vector Store Record Collections: Faiss, SQL Server, and Pinecone. These new connectors will enable you to store and retrieve vector data efficiently, making it easier to work with your own data and data […] The post Python Vector Store Connectors update: Faiss, Azure SQL Server and Pinecone appeared first on Semantic Kernel.  ( 23 min )
    Guest Blog: Semantic Kernel and Copilot Studio Usage Series – Part 1
    Today on the Semantic Kernel blog we’re excited to welcome a group of guest authors from Microsoft. We’ll turn it over to Riccardo Chiodaroli, Samar El Housseini, Daniel Lavve and Fabrizio Ruocco to dive into their use cases with Semantic Kernel and Copilot Studio. In today’s fast-paced digital economy, intelligent automation is no longer optional—it’s […] The post Guest Blog: Semantic Kernel and Copilot Studio Usage Series – Part 1 appeared first on Semantic Kernel.  ( 25 min )
  • Open

    Agent mode: available to all users and supports MCP
    Agent mode is rolling out to all VS Code users! The agent acts as an autonomous programmer that performs multi-step coding tasks at your command, such as analyzing your codebase, proposing file edits, and running terminal commands. It responds to compile and lint errors, monitors terminal output, and auto-corrects in a loop until the task is complete. The agent can also use contributed tools, allowing it to interact with external MCP servers or VS Code extensions to carry out a wide variety of tasks.   Available to all users Open the Chat view, sign in to GitHub, set chat.agent.enabled in your settings, and select Agent from the mode dropdown…  ( 36 min )
  • Open

    Announcing permission model changes for OneLake events in Fabric Real-Time Hub
    We are excited to announce the latest update to our permission model for OneLake events in the Fabric Real-Time Hub. Previously, users with the ReadAll permission, such as workspace admins, members, and contributors, could subscribe to OneLake events for items like lakehouses, warehouses, SQL databases, mirrored databases, and KQL databases. To provide more granular control, we … Continue reading “Announcing permission model changes for OneLake events in Fabric Real-Time Hub”  ( 6 min )
    Optimizing for CI/CD in Microsoft Fabric
    For nearly three years, Microsoft’s internal Azure Data team has been developing data engineering solutions using Microsoft Fabric. Throughout this journey, we’ve refined our Continuous Integration and Continuous Deployment (CI/CD) approach by experimenting with various branching models, workspace structures, and parameterization techniques. This article walks you through why we chose our strategy and how to implement it in … Continue reading “Optimizing for CI/CD in Microsoft Fabric”  ( 9 min )
    Implementing proactive monitoring with KQL query alerts with Activator
    Driving actions from real-time organizational data is important for making informed data-driven decisions and improving overall efficiency. By leveraging data effectively, organizations can gain insights into customer behaviour, operational performance, and market trends, enabling them to respond promptly to emerging issues and opportunities. Setting alerts on KQL queries can significantly enhance this proactive approach, especially … Continue reading “Implementing proactive monitoring with KQL query alerts with Activator”  ( 7 min )
    Content Sharing Report (Preview)
    Overview of the Admin monitoring workspace The admin monitoring workspace is an out-of-the-box solution that installs automatically when a Fabric admin accesses it. You can share the entire workspace or individual reports/semantic models with any user or group. Use the reports for insights on user activity, content sharing, capacity performance, and more in your Fabric … Continue reading “Content Sharing Report (Preview)”  ( 9 min )
  • Open

    Getting Results with AI Agents + Bing Grounding
    With the rise of Generative AI technologies, web search has never been more important—or more overwhelming. Modern search engines like Bing excel at delivering vast amounts of information quickly, but quantity alone doesn’t always guarantee the best possible insights. Enter the AI Agent Bing Grounding search tool: a specialized tool that uses Bing to refine, curate, and tailor results for easy consumption. In this post, we’ll explore what makes the Bing Grounding tool special, and how curated results can transform the way you use search results in your application. Taming the Overload with Curated Results It’s no secret that information overload is a real challenge in today’s fast-moving digital landscape. Even casual browsing can feel time-consuming when sifting through pages of search re…  ( 25 min )
    Revolutionizing Retail: Meet the Two-Stage AI-Enhanced Search
    This article was written by the AI GBB Team: Samer El Housseini, Setu Chokshi, Aastha Madaan, and Ali Soliman. If you've ever struggled with a retailer's website search—typing in something simple like "snow boots" and getting random results, e.g. garden hoses—you're not alone. Traditional search engines often miss the mark because they're stuck in an outdated world of keyword matching. Modern shoppers want more. They want searches that understand context, intent, and personal preferences. Enter the game-changer: Two-Stage AI-Enhanced Search, powered by Azure AI Search and Azure OpenAI services. What's the Big Idea? Several retailers and e-commerce giants in the UK and Australia are already looking to transform customer experience using AI-enabled cutting-edge solutions. Customers often wis…  ( 38 min )
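    The article itself does not include code, but the two-stage idea can be sketched roughly in Python: a first pass retrieves candidates from an Azure AI Search index, then a second pass asks an Azure OpenAI model to re-rank those candidates against the shopper's intent. This is my own illustration, not the team's implementation; endpoints, keys, the index name, the "name" field, and the deployment name are all placeholders.

```python
# Rough two-stage sketch: retrieve candidates from Azure AI Search,
# then ask an Azure OpenAI model to pick the best matches for the intent.
# Endpoints, keys, index, field, and deployment names are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from openai import AzureOpenAI

search = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="products",
    credential=AzureKeyCredential("<search-key>"),
)
llm = AzureOpenAI(
    azure_endpoint="https://<aoai-resource>.openai.azure.com",
    api_key="<aoai-key>",
    api_version="2024-06-01",
)

query = "snow boots"
# Stage 1: keyword/hybrid retrieval; assumes the index has a "name" field.
candidates = [doc["name"] for doc in search.search(search_text=query, top=20)]

# Stage 2: LLM re-ranking of the retrieved candidates.
response = llm.chat.completions.create(
    model="<chat-deployment>",
    messages=[
        {"role": "system", "content": "Rank the products most relevant to the shopper's intent."},
        {"role": "user", "content": f"Query: {query}\nCandidates: {candidates}"},
    ],
)
print(response.choices[0].message.content)
```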

  • Open

    OPENROWSET function in Fabric Data Warehouse (Generally Available)
    The OPENROWSET function is generally available in Fabric Data Warehouse and Fabric SQL endpoints for Lakehouse and Mirrored databases. The OPENROWSET function enables you to easily read Parquet and CSV files stored in Azure Data Lake Storage and Azure Blob Storage: With the OPENROWSET function, you can easily browse files before loading them into the … Continue reading “OPENROWSET function in Fabric Data Warehouse (Generally Available)”  ( 8 min )
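    As a rough illustration of the syntax the post describes, the query below (shown submitted from Python via pyodbc, though any T-SQL client works) browses a Parquet file directly from storage before loading it. The connection string and file URL are placeholders, and the exact OPENROWSET options supported in Fabric (for example, CSV header handling) are covered in the post's documentation links.

```python
# Rough sketch: browse a Parquet file with OPENROWSET before loading it.
# The ODBC connection string and the file URL are placeholders.
import pyodbc

conn = pyodbc.connect("<fabric-warehouse-odbc-connection-string>")
cursor = conn.cursor()

cursor.execute("""
    SELECT TOP 10 *
    FROM OPENROWSET(
        -- file format is taken from the .parquet extension in this sketch;
        -- see the documentation for CSV-specific options
        BULK 'https://<account>.dfs.core.windows.net/<container>/sales/*.parquet'
    ) AS rows
""")

for row in cursor.fetchall():
    print(row)
```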
    Utilize User Data Functions in Data pipelines with the Functions activity (Preview)
    User Data Functions are now available in preview within data pipeline’s functions activity! This exciting new feature is designed to significantly enhance your data processing capabilities by allowing you to create and manage custom functions tailored to your specific needs. What is a functions activity? The functions activity in data pipelines is a powerful tool … Continue reading “Utilize User Data Functions in Data pipelines with the Functions activity (Preview)”  ( 9 min )
    Building an analytical web application with Microsoft Fabric
    Imagine a retail company that wants to gain insights into customer sentiment for each of its products. They also want to find their top-selling and least-selling products. Using Microsoft Fabric, they can build a powerful analytical application to transform their raw data into actionable insights. The process starts with ingesting raw data, such as customer reviews and sales figures, and ends with providing refined data through an API for internal use. This helps the company efficiently process customer feedback and make informed decisions to improve their products. In this blog post, we’ll dive into the architecture of an analytical application powered by Microsoft Fabric, as shown in the image, and provide a step-by-step guide on how to build it.  ( 8 min )
  • Open

    AI Agents: Building Trustworthy Agents- Part 6
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a critical topic: building trustworthy AI agents. In previous posts (links at the end!), we explored agent fundamentals, frameworks, design principles, tool usage, and Agentic RAG. Now, we'll focus on ensuring safety, security, and user privacy in your AI agent applications. Building Safe and Effective AI Agents Safety in AI agents means ensuring they behave as intended. A core component of this is a robust system message (or prompt) framework. Building a System Message Framework System messages define the rules, instructions, and guidelines for LLMs within agents. A scalable framework for crafting these messages is crucial: Meta System Message: A template prompt used …  ( 26 min )
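    The "meta system message" idea in the excerpt is essentially prompt templating: one shared template is used to render the concrete system messages handed to each agent. A rough, framework-agnostic Python sketch follows; the agent names, purposes, and rules below are invented purely for illustration.

```python
# Rough sketch of a reusable system-message template for agents.
# The roles, rules, and example values are illustrative only.
META_SYSTEM_MESSAGE = """You are {agent_name}, an AI agent whose job is: {purpose}.
Follow these rules at all times:
{rules}
If a request falls outside these rules, refuse and explain why."""

def build_system_message(agent_name: str, purpose: str, rules: list[str]) -> str:
    """Render a concrete system message from the shared meta template."""
    numbered = "\n".join(f"{i + 1}. {rule}" for i, rule in enumerate(rules))
    return META_SYSTEM_MESSAGE.format(agent_name=agent_name, purpose=purpose, rules=numbered)

print(build_system_message(
    agent_name="TravelAgent",
    purpose="help users plan trips without ever processing payments",
    rules=[
        "Never ask for or store credit card numbers.",
        "Only recommend options the user can verify through official sources.",
        "Escalate to a human when the user asks for legal or medical advice.",
    ],
))
```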

  • Open

    Removal of deprecated DISCO & WSDL aspx pages from SharePoint Online
    We are removing DISCO and WSDL pages from SharePoint Online by mid-September 2025. The post Removal of deprecated DISCO & WSDL aspx pages from SharePoint Online appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Build AI agents with Python in #AgentsHack
    Microsoft is holding an AI Agents Hackathon, and we want to see what you can build with Python! We'll have 20+ live streams showing you how to build AI agents with Python using popular agent frameworks and Microsoft technologies. Then, you can submit your project for a chance to win prizes, including a Best in Python prize! The post Build AI agents with Python in #AgentsHack appeared first on Microsoft for Python Developers Blog.  ( 24 min )
    Python in Visual Studio Code – April 2025 Release
    The April 2025 release of the Python and Jupyter extensions for Visual Studio Code is now available. This update introduces enhancements to the Copilot experience in Notebooks, improved support for editable installs, faster and more reliable diagnostics, the addition of custom Node.js arguments with Pylance, and more! The post Python in Visual Studio Code – April 2025 Release appeared first on Microsoft for Python Developers Blog.  ( 24 min )
  • Open

    Model Mondays: Teaching your model new tricks with fine-tuning
    Whether you're optimizing a model for a specialized customer service bot, adapting tone for brand voice, or turbocharging domain-specific performance, fine-tuning unlocks the next level of precision and relevance. And this week, we're going hands-on with Mistral models, showing you just how simple, powerful, and cost-efficient fine-tuning can be in Azure AI Foundry. When: Monday, April 7. Time: 1:30 PM ET | 10:30 AM PT. Where: Microsoft Reactor Live Show – RSVP Here   Why Fine-Tune Mistral? Mistral models are quickly gaining traction thanks to their speed, efficiency, and open-access architecture. But what makes them shine is how easily they can be adapted for your business context. In this episode, we’ll show you how to: Prepare datasets and configure training parameters Launch a fine-tuning job in Azure AI Foundry (yes, it’s as easy as a few clicks) Deploy your fine-tuned model with confidence And if you're wondering whether it's worth the effort, spoiler alert: early adopters are seeing dramatic boosts in accuracy and latency reduction, especially in niche or sensitive use cases. What’s in it for you? Each Model Mondays episode is a 30-minute boost to your AI skillset: Stay updated – A 5-min recap of the week’s hottest model drops and Azure AI Foundry news Get hands-on – A 15-min walkthrough focused on how to fine-tune Mistral in Azure Ask the experts – Live Q&A with Microsoft and Mistral  And the conversation doesn’t stop there. Join us every Friday for a Model Mondays Watercooler Chat at 1:30 PM ET / 10:30 AM PT in our Discord community, where we recap, react, and nerd out with the broader AI community. In Case You Missed It Episode 1: GitHub Models – Building better dev experiences Episode 2: Reasoning Models  Episode 3: Search & Retrieval Models Episode 4: Visual Generative Models Be Part of the Movement Watch Live on Microsoft Reactor – RSVP Now Join the AI Community – Discord Fridays Explore the Tech – Model Mondays GitHub The future of AI is happening every Monday—don’t miss it.  ( 22 min )
    Serverless Fine Tuning Now In More US-Regions!
    We value your feedback and recognize the demand for fine-tuning to be accessible in more regions. Today, we are excited to announce that serverless fine-tuning for Mistral, Phi, and NTT models is now available across all US regions where base model inferencing is also accessible. This expansion aims to provide greater flexibility and accessibility for users, ensuring that everyone can benefit from the enhanced capabilities of serverless fine-tuning.   Region Availability Cross-region fine-tuning is now enabled in the following regions: EastUS  EastUS2 SouthCentralUS  NorthCentralUS  WestUS  WestUS3 Model Availability Mistral-Nemo  Mistral-Large-2411  Ministral-3B  Phi-3.5-mini-instruct  Phi-3.5-MoE-instruct  Phi-4-mini-instruct  Tsuzumi-7b   Looking Ahead: More Models and Regions As we continue to innovate and expand our model offerings, more models and regions will soon be supported. Our team is working diligently to ensure that users across various locations can benefit from the latest advancements in serverless fine-tuning. Stay tuned for updates as we roll out these enhancements, providing even greater flexibility and accessibility for our global user base. We appreciate your ongoing support and look forward to sharing more details in the near future.   Get started today!  Whether you're a newcomer to fine-tuning or an experienced developer, getting started with Azure AI Foundry is now more accessible than ever. Fine-tuning is available through both Azure AI Foundry and Azure ML Studio, offering a user-friendly interface for those who prefer a graphical user interface (GUI), plus SDKs and a CLI for advanced users. Learn more!  Try it out with Azure AI Foundry Explore documentation for the model catalog in Azure AI Foundry Begin using the fine-tuning SDK in the notebook Learn more about Azure AI Content Safety - Azure AI Content Safety – AI Content Moderation | Microsoft Azure   Get started with fine-tuning on Azure AI Foundry Learn more about region availability  ( 21 min )
  • Open

    Build AI agent tools using remote MCP with Azure Functions
    Model Context Protocol (MCP) is a way for apps to provide capabilities and context to a large language model. A key feature of MCP is the ability to define tools that AI agents can leverage to accomplish whatever tasks they’ve been given. MCP servers can be run locally, but remote MCP servers are important for sharing tools that work at cloud scale.  Today, we’re pleased to share an early experimental preview of triggers and bindings that allow you to build tools using remote MCP with server-sent events (SSE) with Azure Functions. Azure Functions lets you author focused, event-driven logic that scales automatically in response to demand. You just write code reflecting unique requirements of your tools, and Functions will take care of the rest. Remote MCP quickstarts for Azure Functions are…  ( 36 min )
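    For a feel of what the preview looks like from Python, here is a rough sketch in the shape of the experimental samples: a single function exposed as an MCP tool via a generic trigger. Because this is an early experimental preview, the "mcpToolTrigger" binding type and its property names are taken from the preview samples and may change; treat them as assumptions rather than a stable API.

```python
# Rough sketch of an MCP tool on Azure Functions (Python v2 model).
# The "mcpToolTrigger" binding type and its properties come from the
# experimental preview samples and may change; this is not a stable API.
import azure.functions as func

app = func.FunctionApp()

@app.generic_trigger(
    arg_name="context",
    type="mcpToolTrigger",
    toolName="get_greeting",
    description="Returns a friendly greeting for the given name.",
    toolProperties='[{"propertyName": "name", "propertyType": "string", "description": "Who to greet."}]',
)
def get_greeting(context: str) -> str:
    # The trigger payload arrives as JSON; real code would parse the tool
    # arguments out of it rather than returning a fixed string.
    return "Hello from a remote MCP tool running on Azure Functions!"
```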
  • Open

    Boards Integration with GitHub Enterprise Cloud and Data Residency (Public Preview)
    Back in January, we launched a private preview of our Boards integration with GitHub Enterprise Cloud with data residency. If you’re unfamiliar with GitHub’s data residency option and what it means for your organization, you can learn more in the original announcement. Since the private preview launch, we’ve gathered valuable feedback from early adopters, and […] The post Boards Integration with GitHub Enterprise Cloud and Data Residency (Public Preview) appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Semantic Kernel Agents are now Generally Available
    The time is finally here, Semantic Kernel’s Agent framework is now Generally Available! Available today as part of Semantic Kernel 1.45 (.NET) and 1.27 (Python), the Semantic Kernel Agent framework makes it easier for agents to coordinate and dramatically reduces the code developers need to write to build amazing AI applications. What does Generally Available […] The post Semantic Kernel Agents are now Generally Available appeared first on Semantic Kernel.  ( 24 min )

  • Open

    How to set up Windows 365 (2025 tutorial)
    Set up and access your Cloud PCs from anywhere with a full Windows experience on any device using Windows 365. Whether you’re working from a browser, the Windows app, or Windows 365 Link, your desktop, apps, and settings are always available — just like a traditional PC. As an admin, you can quickly provision and manage Cloud PCs for multiple users with Microsoft Intune.  Scott Manchester, Windows Cloud Vice President, shows how easy it is to set up secure, scalable environments, ensure business continuity with built-in restore, and optimize performance with AI-powered insights. Work securely from anywhere.  Windows 365 acts like your personal PC in the cloud — scale CPU, RAM, and storage as needed. See it here. Deploy Cloud PCs in minutes.  Provision Cloud PCs in just a few clicks with…  ( 62 min )
  • Open

    AI-powered development with Data Factory Microsoft Fabric
    In today’s data-driven landscape, organizations are constantly seeking ways to streamline their data integration processes, enhance productivity, and democratize access to powerful data engineering capabilities with Copilot for Data Factory. At Microsoft, we’re committed to empowering data engineers and analysts with intelligent tools that reduce complexity and accelerate development. We’re excited to share the latest … Continue reading “AI-powered development with Data Factory Microsoft Fabric”  ( 7 min )
    DataOps in Fabric Data Factory
    We’ve made several significant updates to our Fabric Data Factory artifacts stories, including Continuous Integration/Continuous Deployment (CI/CD) and APIs support! These updates are designed to automate the integration, testing, and deployment of code changes, ensuring efficient and reliable development. In Fabric Data Factory, we currently support two key features in collaboration with the Application Lifecycle … Continue reading “DataOps in Fabric Data Factory”  ( 6 min )
    Best-in-class connectivity and data movement with Data Factory in Fabric
    In the fast-evolving data integration landscape, Data Factory continues to enhance the existing connectors to provide high-throughput data ingestion experience with no-code, low-code experience. With a focus on improving connector efficiency and expanding capabilities, recent updates bring significant advancements to a number of connectors. These improvements focus on: Latest innovations 1. Lakehouse connector now supports … Continue reading “Best-in-class connectivity and data movement with Data Factory in Fabric”  ( 7 min )
  • Open

    Learn Python + AI from our video series!
    We just wrapped up our first Python + AI series, a six-part series showing how to use generative AI models from Python, with versions in both English and Spanish. We covered multiple kinds of models, like LLMs, embedding models, and multimodal models. We introduced popular approaches like RAG, function calling, and structured outputs. Finally, we discussed AI risk mitigation layers and showed how to evaluate AI quality and safety. To make it easy for everyone to follow along, we made sure all of our code examples work with GitHub Models, a service which provides free models for every GitHub account holder for experimentation and education. Even if you missed the live series, you can still go through all the material from the links below! If you're an instructor yourself, feel free to use t…  ( 38 min )
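    As a small taste of the series' setup: GitHub Models exposes an OpenAI-compatible endpoint, so the examples can run with nothing more than a GitHub token. A minimal sketch follows; the endpoint and model name reflect GitHub Models at the time of writing, so check the series materials for the exact values.

```python
# Minimal sketch: call a free GitHub Models model with the OpenAI SDK.
# Requires a GITHUB_TOKEN environment variable from any GitHub account.
# Endpoint and model name reflect GitHub Models at the time of writing.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",
    api_key=os.environ["GITHUB_TOKEN"],
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain RAG in one sentence."}],
)
print(response.choices[0].message.content)
```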
  • Open

    Code the Future with Java and AI – Join Me at JDConf 2025
    JDConf 2025 is just around the corner, and whether you’re a Java developer, architect, team leader, or decision maker I hope you’ll join me as we explore how Java is evolving with the power of AI and how you can start building the next generation of intelligent applications today.  Why JDConf 2025?  With over 22 expert-led sessions and 10+ hours of live content, JDConf is packed with learning, hands-on demos, and real-world solutions. You’ll hear from Java leaders and engineers on everything from modern application design to bringing AI into your Java stack. It’s free, virtual and your chance to connect from wherever you are. (On-demand sessions will also be available globally from April 9–10, so you can tune in anytime from anywhere.)  Bring AI into Java Apps  At JDConf 2025, we are going…  ( 27 min )
  • Open

    Hola, Spain Central! Microsoft Dev Box Expands in Europe
    You asked, we built it. We’re thrilled to announce that Spain Central is now a supported region for Microsoft Dev Box! 🎉 That’s right — starting today, you can spin up Dev Boxes in Spain Central and get all the benefits of fast, secure, ready-to-code workstations, now closer to your European teams and data. To […] The post Hola, Spain Central! Microsoft Dev Box Expands in Europe appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Using OpenAI’s Audio-Preview Model with Semantic Kernel
    OpenAI’s gpt-4o-audio-preview is a powerful multimodal model that enables audio input and output capabilities, allowing developers to create more natural and accessible AI interactions. This model supports both speech-to-text and text-to-speech functionalities in a single API call through the Chat Completions API, making it suitable for building voice-enabled applications where turn-based interactions are appropriate. In this […] The post Using OpenAI’s Audio-Preview Model with Semantic Kernel appeared first on Semantic Kernel.  ( 23 min )
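    The post shows the Semantic Kernel integration; for orientation, here is a rough sketch of the same turn-based audio pattern using the OpenAI Python SDK directly rather than the Semantic Kernel API the post covers. The voice and output format values are illustrative.

```python
# Rough sketch: text in, audio (plus transcript) out with gpt-4o-audio-preview,
# using the OpenAI SDK directly rather than Semantic Kernel.
# Voice and format values are illustrative.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "alloy", "format": "wav"},
    messages=[{"role": "user", "content": "Give me a one-sentence weather greeting."}],
)

message = response.choices[0].message
print(message.audio.transcript)                       # text transcript of the spoken reply
with open("reply.wav", "wb") as f:
    f.write(base64.b64decode(message.audio.data))     # decoded audio bytes
```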

  • Open

    Folder REST API (Preview)
    Workspace folders are an easy way to efficiently organize and manage items in the workspace. We’re pleased to share that the Folder REST API is now in preview. Create and manage folders in automation scenarios and seamlessly integrate with other systems and tools. Which APIs are new? Updated existing APIs What’s coming soon? While the … Continue reading “Folder REST API (Preview)”  ( 5 min )
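    A rough sketch of calling the new API from Python follows. The endpoint path and request body follow Fabric REST API conventions but should be treated as assumptions and verified against the official reference; the workspace ID and token are placeholders.

```python
# Rough sketch: create a workspace folder via the Fabric REST API.
# The endpoint path and body follow Fabric REST conventions but should be
# verified against the official API reference; IDs and token are placeholders.
import requests

WORKSPACE_ID = "<workspace-id>"
TOKEN = "<azure-ad-access-token-for-fabric>"

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}/folders",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"displayName": "Sales reports"},
)
response.raise_for_status()
print(response.json())     # the created folder, including its id
```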
    New Eventstream sources: MQTT, Solace PubSub+, Azure Data Explorer, Weather & Azure Event Grid
    Eventstream, a data streaming service in Fabric Real-Time Intelligence, enables users to ingest, transform, and route real-time data streams from multiple sources into Fabric. We’re expanding Eventstream’s capabilities, and making real-time data integration even more seamless, by introducing five new source connectors and additional sample data streams.  With these new connectors, you can easily … Continue reading “New Eventstream sources: MQTT, Solace PubSub+, Azure Data Explorer, Weather & Azure Event Grid ”  ( 6 min )
    Workload Development Kit – OneLake support and Developer Experience enhancements
    We are excited to share several updates and enhancements for OneLake integration and the Workload Development Kit (WDK). These improvements aim to provide a smoother and more intuitive user experience, as well as new opportunities for monetization and real-time intelligence integration. OneLake integration All items now support storing data in OneLake. This means folders for … Continue reading “Workload Development Kit – OneLake support and Developer Experience enhancements”  ( 7 min )
  • Open

    Using Azure Monitor Workbook to calculate Azure Storage Used Capacity for all storage accounts
    In this blog, we will explore how to use Azure Monitor Workbook to collect and analyze metrics for all or selected storage accounts within a given subscription. We will walk through the steps to set up the workbook, configure metrics, and visualize storage account data to gain valuable insights into usage. Introduction For a given individual blob storage account, we can calculate the used capacity, transaction count, or blob count using PowerShell, the Metrics available in the Portal, or Blob Inventory reports. However, if we need to perform the same activity on all the storage accounts under a given subscription and create a consolidated report, it becomes a huge task. For such cases, the Blob Inventory reports will not be of much help as it works on individual storage acco…  ( 26 min )
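    The post builds the consolidated view with a workbook in the portal; as a rough programmatic cross-check, the same UsedCapacity metric can also be pulled per storage account with the azure-monitor-query SDK. The resource ID below is a placeholder, and iterating over every account in a subscription would require an additional listing step (for example with azure-mgmt-storage) not shown here.

```python
# Rough sketch: read the UsedCapacity metric for one storage account with
# azure-monitor-query. The resource ID is a placeholder; looping over all
# accounts in a subscription would require listing them first (not shown).
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.Storage/storageAccounts/<account>"
)

client = MetricsQueryClient(DefaultAzureCredential())
result = client.query_resource(
    RESOURCE_ID,
    metric_names=["UsedCapacity"],
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
    aggregations=["Average"],
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)
```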
  • Open

    Microsoft 365 Certification control spotlight: Data access management
    Read how Microsoft 365 Certification ensures data access management best practices for Microsoft 365 apps, add-ins, and Copilot agents. The post Microsoft 365 Certification control spotlight: Data access management appeared first on Microsoft 365 Developer Blog.  ( 23 min )
    Dev Proxy v0.26 with improved mocking, plugin validation, and Docker support
    The latest version of Dev Proxy brings improved validation, plugin reliability, a brand-new Docker image and more. The post Dev Proxy v0.26 with improved mocking, plugin validation, and Docker support appeared first on Microsoft 365 Developer Blog.  ( 23 min )
  • Open

    CDN Domain URL change for Agents in Pipelines
    Introduction We have announced the retirement of Edgio CDN for Azure DevOps and are transitioning to a solution served by Akamai and Azure Front Door CDNs. This change affects Azure DevOps Pipelines customers. This article provides guidance for the Azure DevOps Pipelines customers to check if they are impacted by this change in CDN and […] The post CDN Domain URL change for Agents in Pipelines appeared first on Azure DevOps Blog.  ( 25 min )
    TFVC Policies Storage Updates
    TFVC Check-In Policies TFVC projects can have check-in policies such as Build (Require last build was successful), Work Item (Require associated work item), Changeset comments policy (Require users to add comment to their check-in), etc. We are changing the way we store these policies on the server. This change will slightly affect TFVC users since […] The post TFVC Policies Storage Updates appeared first on Azure DevOps Blog.  ( 22 min )
  • Open

    Get Ready for .NET Conf: Focus on Modernization
    We’re excited to announce the topics and speakers for .NET Conf: Focus on Modernization, our latest virtual event on April 22-23, 2025! This event features live sessions from .NET and cloud computing experts, providing attendees with the latest insights into modernizing .NET applications, including technical upgrades, cloud migration, and tooling advancements. To get ready, visit the .NET Conf: Focus on Modernization home page and click Add to Calendar so you can save the date on your calendar. From this page, on the day of the event you’ll be able to join a live stream on YouTube and Twitch. We will also make the source code for the demos available on GitHub and the on-demand replays will be available on our YouTube channel. Learn more: https://focus.dotnetconf.net/ Why attend? In the fas…  ( 25 min )
  • Open

    Join the UK University Cloud Challenge 2025
    DISCLAIMER: this is for the UK only. What is the UK University Cloud Challenge? The UK University Cloud Challenge 2025 is an exciting initiative aimed at enhancing employability and promoting friendly rivalry among students from various institutions. This challenge focuses on developing AI skills, which are in high demand, and gives participants the opportunity to earn a Microsoft professional certification in AI. Why You Should Join: AI Skills in Demand: Enhance your skillset with AI awareness. Microsoft Certification: Earn a Microsoft AI Fundamentals certification (AI-900). Access to Resources: Gain access to the recording of the kick-off webinar session held on 26th March 2025, and other valuable resources. Friendly Rivalry: Compete against other students and institutions in a fun and engaging way. Calls to Action: Register Now: Sign up for the University Cloud Challenge 2025 at https://aka.ms/UCC25. Important: This is for the UK only. Certification Exams: Take the certification exams between 2nd and 12th May 2025.  ( 19 min )
    Exploring Azure Container Apps for Java developers: a must-watch video series
    Hi all! We are excited to share with you the second video in an ongoing series by Ayan Gupta, introducing Azure Container Apps (ACA) for Java developers. This video is titled "Java in Containers: Introduction to ACA's Architecture and Components" and is packed with valuable insights for anyone looking to take their Java applications to production. In this video, Ayan Gupta explains what ACA is and why it's an ideal platform for Java developers. You'll learn about ACA's architecture and components, and see various ways to quickly and easily deploy your Java apps using ACA. Don't miss out on this opportunity to enhance your skills and knowledge. Click here to subscribe to the Java at Microsoft YouTube channel to be notified of each new video in this series. Happy learning!  ( 19 min )
  • Open

    RAG Time Journey 5: Enterprise-ready RAG
    Introduction Congratulations on making it this far and welcome to RAG Time Journey 5! This is the next step in our multi-format educational series on all things Retrieval Augmented Generation (RAG). Here we will explore how Azure AI Search integrates security measures while following safe AI principles to ensure secure RAG solutions.   Explore additional posts in our RAG Time series: Journey 1, Journey 2, Journey 3, Journey 4.   The development of AI and RAG is leading many companies to incorporate AI-driven solutions into their operations. This transition highlights the importance of embracing best practices for enterprise readiness to ensure long-term success.   But what is enterprise readiness? Enterprise readiness describes the process of being prepared to develop and manage a service,…  ( 46 min )

  • Open

    Announcing Fabric User Data Functions (Preview)
    We are excited to announce the preview of Fabric User Data Functions! This feature empowers developers to create functions that contain their business logic and connect to Fabric data sources, and to invoke them from other Fabric items such as Data pipelines, Notebooks, and Power BI reports. Fabric User Data Functions leverage the … Continue reading “Announcing Fabric User Data Functions (Preview)”  ( 7 min )
    On-premises data gateway February 2025 release
    Here is the February 2025 release of the on-premises data gateway.  ( 5 min )
    What’s new with OneLake shortcuts
    Microsoft Fabric shortcuts enable organizations to unify their data across various domains and clouds by creating a single virtual data lake. These shortcuts act as symbolic links to data in different storage locations, simplifying access and reducing the need for multiple copies. OneLake serves as the central hub for all analytics data. By using OneLake … Continue reading “What’s new with OneLake shortcuts”  ( 7 min )
    Hints in Fabric Data Warehouse
    What are hints? Hints are optional keywords that you can add to your SQL statements to provide additional information or instructions to the query optimizer. Hints can help you improve the performance, scalability, or consistency of your queries by overriding the default behavior of the query optimizer. For example, you can use hints to specify … Continue reading “Hints in Fabric Data Warehouse”  ( 8 min )
    Unlock the Power of Query insights and become a Fabric Data Warehouse performance detective
    In today’s data-driven landscape, optimizing query performance is paramount for organizations relying on data warehouses. Microsoft Fabric’s Query Insights emerges as a powerful tool, enabling data professionals to delve deep into query behaviors and enhance system efficiency. Understanding Query Insights Query Insights in Microsoft Fabric serves as a centralized repository, storing 30 days of historical … Continue reading “Unlock the Power of Query insights and become a Fabric Data Warehouse performance detective”  ( 7 min )
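    As a rough illustration of pulling that history programmatically, the Query Insights views can be queried over the warehouse's SQL endpoint. The connection string details and the queryinsights.exec_requests_history view name below follow the Query Insights documentation and should be treated as assumptions to verify against your own warehouse.
    ```python
    # Hedged sketch: read recent request history from Query Insights with pyodbc.
    import pyodbc

    conn_str = (
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=<warehouse-sql-endpoint>;Database=<warehouse-name>;"
        "Authentication=ActiveDirectoryInteractive;Encrypt=yes;"
    )

    with pyodbc.connect(conn_str) as conn:
        for row in conn.execute(
            "SELECT TOP (10) * FROM queryinsights.exec_requests_history"
        ).fetchall():
            print(row)
    ```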
    SHOWPLAN_XML in Fabric Data Warehouse (Preview)
    We are excited to announce support for SHOWPLAN_XML in Microsoft Fabric Data Warehouse in preview. This capability allows users to generate and view the estimated query execution plan in XML format, a tool for analyzing and optimizing SQL queries. Whether you’re troubleshooting performance bottlenecks or refining query strategies during development, SHOWPLAN_XML offers a granular, detailed … Continue reading “SHOWPLAN_XML in Fabric Data Warehouse (Preview)”  ( 6 min )
    Playbook for metadata driven Lakehouse implementation in Microsoft Fabric
    Co-Author – Gyani Sinha, Abhishek Narain Overview A well-architected lakehouse enables organizations to efficiently manage and process data for analytics, machine learning, and reporting. To achieve governance, scalability, operational excellence, and optimal performance, adopting a structured, metadata-driven approach is crucial for lakehouse implementation. Building on our previous blog, Demystifying Data Ingestion in Fabric, this post … Continue reading “Playbook for metadata driven Lakehouse implementation in Microsoft Fabric”  ( 9 min )
    Secure, comply, collaborate: Item Permissions in Fabric Data Warehouse
    In today’s data-driven world, managing access to data is crucial for maintaining security, ensuring compliance, and optimizing collaboration. Item permissions play a vital role in controlling who can access, modify, and share data within an organization. This blog post will delve into the rationale behind the need for item permissions, what permissions can be assigned … Continue reading “Secure, comply, collaborate: Item Permissions in Fabric Data Warehouse”  ( 7 min )
    Introducing SQL Audit Logs for Fabric Data Warehouse
    Introducing SQL Audit Logs for Fabric Data Warehouse, a powerful new feature designed to enhance security, compliance, and operational insights for our users. The Role of SQL Audit Logs in Fabric Data Warehouse Security SQL Audit Logs in Microsoft Fabric Data Warehouse provide a comprehensive and immutable record of all database activities, capturing critical details … Continue reading “Introducing SQL Audit Logs for Fabric Data Warehouse”  ( 7 min )
    Introducing the Fabric CLI (Preview)
    ⚡️ TL;DR Give it a try. Break things. Tell us what you want next. 👉 Install the CLI and get started We’re excited to announce that the Fabric Command Line Interface (CLI) is now available in public preview — bringing a fast, flexible, and scriptable way to work with Microsoft Fabric from your terminal. What … Continue reading “Introducing the Fabric CLI (Preview)”  ( 8 min )
  • Open

    Understanding 'Always On' vs. Health Check in Azure App Service
    The 'Always On' feature in Azure App Service helps keep your app warm by ensuring it remains running and responsive, even during periods of inactivity with no incoming traffic; it pings the app's root URI every 5 minutes. The Health check feature, on the other hand, pings a configured path every minute to monitor application availability on each instance. What is 'Always On' in Azure App Service? The Always On feature ensures that the host process of your web app stays running continuously. This results in better responsiveness after idle periods since the app doesn’t need to cold boot when a request arrives. How to enable Always On: Navigate to the Azure Portal and open your Web App. Go to Configuration > General Settings. Toggle Always On to On. What is Healt…  ( 23 min )
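    Both settings can also be applied outside the portal. Below is a minimal sketch using the azure-mgmt-web package; the resource names are placeholders and /healthz is just an example probe path.
    ```python
    # Hedged sketch: enable Always On and set a health check path on a web app.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.web.models import SiteConfigResource

    subscription_id = "<subscription-id>"
    resource_group = "<resource-group>"
    app_name = "<web-app-name>"

    client = WebSiteManagementClient(DefaultAzureCredential(), subscription_id)

    # Always On keeps the host process warm; the health check path is probed
    # every minute on each instance.
    client.web_apps.update_configuration(
        resource_group,
        app_name,
        SiteConfigResource(always_on=True, health_check_path="/healthz"),
    )
    ```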
    Discover new tools, skills, and best practices with Microsoft MVPs and Developer Influencers
    For April, we’re highlighting some of the great content being created by and featuring Microsoft MVPs, experts, and developer influencers. Dive deeper into GitHub Copilot, find hidden features in Visual Studio, learn how to bring GenAI into JavaScript apps, and more. We’ve got how-to videos, live events, demos, hackathons, and other learning resources that will help you level up your dev skills. You’ll find opportunities to connect with Microsoft experts, learn new skills, and explore the latest tools and feature updates for developers.     AI Agents Hackathon 2025 is here It’s time for AI Agents Hackathon 2025! Join this free hackathon to learn new skills, get hands-on experience, build an agent, and maybe even win a prize. Event runs April 8–30, 2025. Get details and register.    New Mic…  ( 35 min )
  • Open

    Fast deploy and evaluate AI model performance on AML/AI Foundry
    The source code for this article: https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git Please refer to my repo for more AI resources; you are welcome to star it: https://github.com/xinyuwei-david/david-share.git Note: This repository is designed to test the performance of open-source models from the Azure Machine Learning Model Catalog on Managed Compute. I tested the performance of nearly 20 AI models in my repository. Due to space limitations, this article only shows the testing of two models to help readers understand how to use my script for testing. For more detailed info, please refer to https://github.com/xinyuwei-david/AI-Foundry-Model-Performance.git Deploying models Methods https://learn.microsoft.com/en-us/azure/ai-foundry/concepts/deployments-overview NameAzure OpenA…  ( 41 min )
  • Open

    Automating PowerPoint Generation with AI: A Learn Live Series Case Study
    Introduction A Learn Live is a series of events where, over a period of 45 to 60 minutes, a presenter walks attendees through a learning module or pathway. The show/series takes you through a Microsoft Learn Module, Challenge, or a particular sample. Between April 15 and May 13, we will be hosting a Learn Live series on "Master the Skills to Create AI Agents." This premise is necessary for the blog because I was tasked with generating slides for the different presenters. Challenge: generation of the slides The series is based on the learning path Develop AI agents on Azure, and each session tackles one of the learn modules in the path. In addition, Learn Live series usually have a presentation template each speaker is provided with to help run their sessions. Each session has the same forma…  ( 31 min )
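    The approach can be approximated in a few lines: ask the model for the bullet content, then let python-pptx drop it into the template. This is a hypothetical sketch rather than the exact script used for the series; the deployment name, template file, and layout index are assumptions.
    ```python
    # Hedged sketch: generate bullet points with Azure OpenAI and place them on a
    # slide created from a speaker template.
    from openai import AzureOpenAI
    from pptx import Presentation

    client = AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com",
        api_key="<api-key>",
        api_version="2024-06-01",
    )

    completion = client.chat.completions.create(
        model="<deployment-name>",
        messages=[{"role": "user",
                   "content": "Summarize the 'Develop AI agents on Azure' module "
                              "as five short bullet points."}],
    )
    bullets = completion.choices[0].message.content.splitlines()

    prs = Presentation("learn-live-template.pptx")      # speaker template (assumed)
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content layout
    slide.shapes.title.text = "Develop AI agents on Azure"
    slide.placeholders[1].text = "\n".join(bullets)
    prs.save("session-deck.pptx")
    ```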
  • Open

    Cut Costs and Speed Up AI API Responses with Semantic Caching in Azure API Management
    This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models, make them more secure and easier to manage. We previously covered the hidden risks of AI APIs in today's AI-driven technological landscape. In this article, we dive deeper into one of the supported Gen AI policies in API Management, which allows you to minimize Azure OpenAI costs and make your applications more performant by reducing the number of calls sent to your LLM service. How does it currently work without the semantic caching policy? For simplicity, let's look at a scenario where we only have a single client app, a single user, and a single model deployment. This of course does not represent most real-wo…  ( 29 min )
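    The policy itself is configured in API Management, but the idea behind semantic caching can be sketched conceptually: embed the incoming prompt, reuse the stored answer of a previously seen prompt whose embedding is close enough, and call the model only on a cache miss. The sketch below illustrates the concept and is not the APIM policy; the similarity threshold and deployment names are assumptions.
    ```python
    # Conceptual sketch of semantic caching: answers are reused when a new prompt
    # is semantically close to one already answered.
    import math
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com",
        api_key="<api-key>",
        api_version="2024-06-01",
    )

    cache: list[tuple[list[float], str]] = []   # (embedding, answer) pairs
    THRESHOLD = 0.95                            # assumed similarity cut-off


    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))


    def answer(prompt: str) -> str:
        emb = client.embeddings.create(model="<embedding-deployment>", input=prompt).data[0].embedding
        for cached_emb, cached_answer in cache:
            if cosine(emb, cached_emb) >= THRESHOLD:
                return cached_answer                  # cache hit: no LLM call
        reply = client.chat.completions.create(
            model="<chat-deployment>",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content
        cache.append((emb, reply))
        return reply
    ```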
  • Open

    What’s new with Microsoft in open-source and Kubernetes at KubeCon + CloudNativeCon Europe 2025
    We are thrilled to join the community at this year’s KubeCon + CloudNativeCon Europe 2025 in London! The post What’s new with Microsoft in open-source and Kubernetes at KubeCon + CloudNativeCon Europe 2025 appeared first on Microsoft Open Source Blog.  ( 14 min )
  • Open

    Important Update: Server Name Indication (SNI) Now Mandatory for Azure DevOps Services
    Earlier this year, we announced an upgrade to our network infrastructure and the new IP addresses you need to allow list in your firewall – Update to Azure DevOps Allowed IP addresses – Azure DevOps Blog. This is our second blog post to inform you that starting from April 23rd, 2025, we will be requiring […] The post Important Update: Server Name Indication (SNI) Now Mandatory for Azure DevOps Services appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    Microsoft Graph APIs for permanent deletion of mailbox items now available
    We’re happy to announce the general availability (GA) of the permanent delete APIs for contacts, messages, and events as well as for contact folders, mail folders, and calendars. The post Microsoft Graph APIs for permanent deletion of mailbox items now available appeared first on Microsoft 365 Developer Blog.  ( 22 min )
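    As a hedged sketch of what calling the new action can look like, the snippet below posts to the permanentDelete path named in the announcement; the message ID is a placeholder, /me requires a delegated token, and the app needs the appropriate Mail permissions.
    ```python
    # Hedged sketch: permanently delete a message through Microsoft Graph.
    import requests
    from azure.identity import DefaultAzureCredential

    token = DefaultAzureCredential().get_token("https://graph.microsoft.com/.default").token
    message_id = "<message-id>"  # placeholder

    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/me/messages/{message_id}/permanentDelete",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()  # 204 No Content on success
    ```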

  • Open

    Announcing the General Availability of CI/CD and REST APIs for Fabric Eventstream
    Collaborating on data streaming solutions can be challenging, especially when multiple developers work on the same Eventstream item. Version control challenges, deployment inefficiencies, and conflicts often slow down development. Since introducing Fabric CI/CD tools for Eventstream last year, many customers have streamlined their workflows, ensuring better source control and seamless versioning. Now, we’re excited to … Continue reading “Announcing the General Availability of CI/CD and REST APIs for Fabric Eventstream”  ( 6 min )
    Build event-driven workflows with Azure and Fabric Events (Generally Available)
    Business environments are more dynamic than ever, demanding real-time insights and automated responses to stay ahead. Organizations rely on event-driven solutions to detect changes, automate workflows, and drive intelligent actions as soon as events occur. Today, we’re excited to announce the general availability of Azure and Fabric Events, a powerful capability that allows organizations to … Continue reading “Build event-driven workflows with Azure and Fabric Events (Generally Available)”  ( 7 min )
    Seamlessly connect Azure Logic Apps to Fabric Eventstream using Managed Identity
    Eventstream’s Custom Endpoint is a powerful feature that allows users to send data to and fetch data from Eventstream. It provides two authentication methods for integrating external applications: Microsoft Entra ID and shared access signature (SAS) keys. While SAS keys provide quick integration, they require users to store, rotate, and manage secrets manually, increasing security risks. On the … Continue reading “Seamlessly connect Azure Logic Apps to Fabric Eventstream using Managed Identity”  ( 7 min )
    Another dimension of Functions in Data Warehouse
    Today, we are announcing new types of Functions in Fabric Data Warehouse and Lakehouse SQL endpoint. Continue reading to find out more and if interested refer to sign up form for Functions preview in Fabric Data Warehouse. About functions Functions in SQL encapsulates specific logic that can be executed by invoking the function within queries, … Continue reading “Another dimension of Functions in Data Warehouse”  ( 8 min )
    Exciting New Features for Mirroring for Azure SQL in Fabric
    Attention data engineers, database developers, and data analysts! We’re pumped to reveal exciting upgrades to Mirroring for Azure SQL Database in Fabric today at the Fabric Conference in Las Vegas 2025. Since it became Generally Available, Mirroring for Azure SQL Database has been a game-changer, letting you replicate data seamlessly and integrate it within the … Continue reading “Exciting New Features for Mirroring for Azure SQL in Fabric”  ( 6 min )
    Revolutionizing Enterprise Network Security: support for VNET Data Gateway in Data pipeline and more (Preview)
    Virtual Network Data Gateway Support for Fabric Pipeline, Fast Copy in Dataflow Gen2, and Copy Job in Preview Unlocking seamless and Secure Data Integration for Enterprises In today’s data-driven world, network security and secure data transmission are paramount concerns for enterprises handling sensitive information. At Microsoft, we are committed to empowering businesses with the tools … Continue reading “Revolutionizing Enterprise Network Security: support for VNET Data Gateway in Data pipeline and more (Preview)”  ( 6 min )
    Running Apache Airflow jobs seamlessly in Microsoft Fabric
    Apache Airflow is a powerful platform to programmatically author, schedule, and monitor workflows. It is widely adopted for its flexibility, scalability, and ability to handle complex workflows with ease. With Apache Airflow, you can orchestrate your data pipelines, ensuring they run smoothly and efficiently. In May 2024, we launched the preview of Apache Airflow job … Continue reading “Running Apache Airflow jobs seamlessly in Microsoft Fabric”  ( 6 min )
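    For orientation, a minimal DAG of the kind such a job runs might look like the sketch below; the schedule and task body are illustrative only.
    ```python
    # Minimal Airflow DAG sketch: one Python task on a daily schedule.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def say_hello():
        print("Hello from an Apache Airflow job in Microsoft Fabric")


    with DAG(
        dag_id="fabric_hello_world",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="say_hello", python_callable=say_hello)
    ```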
    High Concurrency mode for notebooks in pipelines (Generally Available)
    High Concurrency mode for notebooks in pipelines is now generally available (GA)! This powerful feature enhances enterprise data ingestion and transformation by optimizing session sharing within one of the most widely used orchestration mechanisms. With this release, we’re also introducing Comprehensive Monitoring for High-Concurrency Spark Applications, bringing deeper visibility and control to your workloads. Key … Continue reading “High Concurrency mode for notebooks in pipelines (Generally Available)”  ( 6 min )
    Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric
    Introducing predefined Spark resource profiles in Microsoft Fabric—making it easier than ever for data engineers to optimize their compute configurations based on workload needs. Whether you’re handling read-heavy, write-heavy, or mixed workloads, Fabric now provides a property bag-based approach that streamlines Spark tuning with just a simple setting. With these new configurations, users can effortlessly … Continue reading “Supercharge your workloads: write-optimized default Spark configurations in Microsoft Fabric”  ( 6 min )
    Supporting Database Mirroring sources behind a firewall
    Database mirroring is a powerful feature in Microsoft Fabric, enabling seamless data replication and high availability for critical workloads (learn more about mirroring). However, connecting to mirrored databases behind a firewall requires the right integration approach. Database Mirroring now supports firewall connectivity for Azure SQL Database, with Snowflake and Azure SQL Managed Instance coming soon, ensuring … Continue reading “Supporting Database Mirroring sources behind a firewall”  ( 6 min )
    Open Mirroring UI enhancements and CSV support to help you get started today
    Open Mirroring: Mirroring is one of the easiest ways to get data into Fabric, it creates a copy of your data in OneLake and keeps it up to date; no ETL required. Open Mirroring empowers everyone to create their own Mirroring Source using the publicly available API that allows you to replicate data from anywhere. … Continue reading “Open Mirroring UI enhancements and CSV support to help you get started today”  ( 6 min )
    What’s new for SQL database in Fabric?
    Spring 2025 Round up: Performance, Developer Experience, and Data Management! Co-author:  Idris Motiwala This week, at the 2025 Fabric Conference in Las Vegas, we are unveiling a series of features for the SQL database in Fabric, including: performance enhancements, streamlined developer workflows, and improved data pipeline management. Here are the high-level features you can look … Continue reading “What’s new for SQL database in Fabric?”  ( 6 min )
  • Open

    Enabling e2e TLS with Azure Container Apps
    This post will cover how to enable end-to-end TLS on Azure Container Apps.  ( 4 min )
  • Open

    Unlock the Power of Azure Container Apps for Java Developers
    Are you ready to dive into the world of Azure Container Apps and take your Java development skills to the next level? We have an exciting new video series just for you! 🎉 Check out the first video in our series, where we introduce Azure Container Apps for Java developers. This video is packed with valuable insights and practical tips to help you get started with Azure Container Apps.  But that's not all! This is just the beginning. We have more videos lined up to guide you through the journey of mastering Azure Container Apps. Stay tuned for upcoming videos that will cover advanced topics and best practices. Don't miss out on updates! Subscribe to the Java at Microsoft YouTube channel to be notified of each new video as soon as it's published. Click here and hit the subscribe button to be at the forefront of Java at Microsoft. Happy coding! 🚀  ( 19 min )
  • Open

    Updates in Visual Studio Code 1.98
    Visual Studio Code 1.98 is now available, bringing a set of new features that will take your development experience to the next level. Highlights include new, advanced integrations with GitHub Copilot's AI, such as Agent Mode (in preview), Copilot Edits for notebooks, and the innovative Copilot Vision, which lets you interact with images directly in chat conversations. To review all the updates in detail, visit the Release Notes page on the official site. Insiders: Want to try these features before anyone else? Download the Insiders build and get the latest capabilities as soon as they are available. Copilot Agent Mode (Preview) Copilot Agent Mode (v…  ( 34 min )
  • Open

    Configure time-based scaling in Azure Container Apps
    Azure Container Apps leverages cron-type KEDA scaling rules to schedule autoscaling actions at specific times. This feature is ideal for applications with predictable workload fluctuations (e.g., batch jobs, reporting systems) that require scaling based on time-of-day or day-of-week patterns. This guide walks you through configuring and optimizing time-based scaling. Prerequisites An active Azure subscription with access to Azure Container Apps. Basic understanding of KEDA (Kubernetes Event-driven Autoscaling) concepts. A deployed application in Azure Container Apps (see Quickstart Guide). How Time-Based Scaling Works Time-based scaling in Azure Container Apps is achieved by defining cron-type scale rules (https://keda.sh/docs/2.15/scalers/cron/). It uses cron expressions to define start …  ( 26 min )
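    A time-based rule is simply a custom scale rule of type cron. The sketch below builds the JSON fragment such a rule would carry, using the KEDA cron scaler's metadata fields (timezone, start, end, desiredReplicas); the schedule values are examples only.
    ```python
    # Sketch of the scale section of a Container App with a cron scale rule.
    import json

    scale = {
        "minReplicas": 1,
        "maxReplicas": 10,
        "rules": [
            {
                "name": "business-hours",
                "custom": {
                    "type": "cron",
                    "metadata": {
                        "timezone": "Europe/London",
                        "start": "0 8 * * 1-5",    # scale out at 08:00, Mon-Fri
                        "end": "0 18 * * 1-5",     # scale back in at 18:00
                        "desiredReplicas": "5",
                    },
                },
            }
        ],
    }

    print(json.dumps(scale, indent=2))  # paste into the app's scale configuration
    ```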
    Getting Started with Python WebJobs on App Service Linux
    WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs, and there's no extra cost to use them. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service web app. If you already have an App Service Web App, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. Follow the create instructions and select one of the Python runtime stacks to create your App Service Web App. The stack must be Python, since we plan on writing our WebJob using Python and a bash startup script. For this example, we’ll use Python 3.13. Next, we’l…  ( 25 min )
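    The triggered WebJob described here boils down to a tiny script plus a schedule. A minimal sketch of the script is below; the accompanying settings.job would carry a six-field CRON expression such as {"schedule": "0 */15 * * * *"} to run it every 15 minutes.
    ```python
    # run.py - minimal triggered WebJob body: print the current time and exit.
    from datetime import datetime, timezone

    print(f"WebJob ran at {datetime.now(timezone.utc).isoformat()}")
    ```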
  • Open

    Migrating your Docker Compose applications to the Sidecar feature
    As we continue to enhance the developer experience on Azure App Service, we’re announcing the retirement of the Docker Compose feature on March 31, 2027. If you’re currently using Docker Compose to deploy and manage multi-container applications on App Service, now is the time to start planning your transition to the new Sidecar feature.  ( 8 min )
  • Open

    Kickstarting AI Agent Development with Synthetic Data: A GenAI Approach on Azure
    Introduction When building AI agents—especially for internal enterprise use cases—one of the biggest challenges is data access. Real organizational data may be: Disorganized or poorly labeled Stored across legacy systems Gatekept due to security or compliance Unavailable for a variety of reasons during early PoC stages Instead of waiting months for a data-wrangling project before testing AI Agent feasibility, you can bootstrap your efforts with synthetic data using Azure OpenAI. This lets you validate functionality, test LLM capabilities, and build a working prototype before touching production data. In this post, we’ll walk through how to use Azure OpenAI to generate realistic, structured synthetic data to power early-stage AI agents for internal tools such as CRM bots, HR assistants, a…  ( 31 min )
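    A minimal sketch of the idea using the openai package's AzureOpenAI client follows; the deployment name and the ticket schema are assumptions, and JSON mode (response_format) requires a model and API version that support it.
    ```python
    # Hedged sketch: generate structured synthetic CRM tickets with Azure OpenAI.
    import json
    from openai import AzureOpenAI

    client = AzureOpenAI(
        azure_endpoint="https://<resource>.openai.azure.com",
        api_key="<api-key>",
        api_version="2024-06-01",
    )

    prompt = (
        "Generate 5 fictional CRM support tickets as a JSON object with a "
        "'tickets' array. Each ticket needs id, customer_name, product, "
        "severity (low/medium/high) and a one-sentence description."
    )

    completion = client.chat.completions.create(
        model="<deployment-name>",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # keeps the output parseable
    )

    tickets = json.loads(completion.choices[0].message.content)["tickets"]
    for ticket in tickets:
        print(ticket["id"], ticket["severity"], ticket["description"])
    ```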
    Best Practices for Kickstarting AI Agents with Azure OpenAI and Synthetic Data
    Introduction When building AI agents—especially for internal enterprise use cases—one of the biggest challenges is data access. Real organizational data may be: Disorganized or poorly labeled Stored across legacy systems Gatekept due to security or compliance Unavailable for a variety of reasons during early PoC stages Instead of waiting months for a data-wrangling project before testing AI Agent feasibility, you can bootstrap your efforts with synthetic data using Azure OpenAI. This lets you validate functionality, test LLM capabilities, and build a working prototype before touching production data. In this post, we’ll walk through how to use Azure OpenAI to generate realistic, structured synthetic data to power early-stage AI agents for internal tools such as CRM bots, HR assistants, a…

  • Open

    Improve the security of Generation 2 VMs via Trusted Launch in Azure DevTest Labs
    We’re thrilled to announce the public preview of the Trusted Launch feature for Generation 2 (Gen2) Virtual machines (VMs) in Azure DevTest Labs! 🌟 This game-changing feature is designed to enhance security of virtual machines (VMs), protecting against advanced and persistent attack techniques. Here are the key benefits: Securely deploy VMs with verified boot loaders, OS kernels, […] The post Improve the security of Generation 2 VMs via Trusted Launch in Azure DevTest Labs appeared first on Develop from the cloud.  ( 23 min )
  • Open

    Expand Azure AI Agent with New Knowledge Tools: Microsoft Fabric and Tripadvisor
    To help AI Agents make well-informed decisions with confidence, knowledge serves as the foundation for generating accurate and grounded responses. By integrating comprehensive and precise data, Azure AI Agent Service enhances accuracy and delivers effective solutions, thereby improving the overall customer experience. Azure AI Agent Service aims to provide a wide range of knowledge tools to address various customer use cases, encompassing unstructured text data, structured data, private data, licensed data, public web data, and more.  Today, we are thrilled to announce the public preview of two new knowledge tools - Microsoft Fabric and Tripadvisor – designed to further empower your AI agents. Alongside existing capabilities such as Azure AI Search, File Search, and Grounding with Bing Se…  ( 29 min )
    Best Practices for Using Generative AI in Automated Response Generation for Complex Decision Making
    Real-world AI Solutions: Lessons from the Field Overview Generative AI offers significant potential to streamline processes in domains with complex regulatory or clinical documentation. For example, in the context of prior authorization for surgical procedures, automated response generation can help parse detailed guidelines—such as eligibility criteria based on patient age, BMI thresholds, comorbid conditions, and documented behavioral interventions—to produce accurate and consistent outputs. The following document outlines best practices along with recommended architecture and process breakdown approaches to ensure that GenAI-powered responses are accurate, compliant, and reliable. 1. Understanding the Use Case Recognize the complexity of policy and clinical documents. Use cases like pr…  ( 34 min )
    March 2025: Azure AI Speech’s HD voices are generally available and more
    Authors: Yufei Liu, Lihui Wang, Yao Qian, Yang Zheng, Jiajun Zhang, Bing Liu, Yang Cui, Peter Pan, Yan Deng, Songrui Wu, Gang Wang, Xi Wang, Shaofei Zhang, Sheng Zhao   We are pleased to announce that our Azure AI Speech’s Dragon HD neural text to speech (language model-based TTS, similar to model design for text LLM) voices, which have been available to users for some time, are now moving to general availability (GA). These voices have gained significant traction across various scenarios and have received valuable feedback from our users. This milestone is a testament to the extensive feedback and growing popularity of Azure AI Speech’s HD voices. As we continue to enhance the user experience, we remain committed to exploring and experimenting with new voices and advanced models to push t…  ( 32 min )
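    For context, synthesizing with one of the HD voices from Python uses the regular Speech SDK; only the voice name changes. The voice name below follows the Dragon HD naming pattern and should be treated as an assumption to check against the voice gallery.
    ```python
    # Hedged sketch: synthesize speech with an HD voice via the Speech SDK.
    import azure.cognitiveservices.speech as speechsdk

    speech_config = speechsdk.SpeechConfig(subscription="<speech-key>", region="<region>")
    speech_config.speech_synthesis_voice_name = "en-US-Ava:DragonHDLatestNeural"  # assumed HD voice name

    synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
    result = synthesizer.speak_text_async("Hello from an Azure AI Speech HD voice.").get()

    if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
        print("Synthesis completed")
    ```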
  • Open

    Terraform Provider for Microsoft Fabric (Generally Available)
    Unlocking the full potential of Microsoft Fabric with Terraform Provider Terraform Provider for Microsoft Fabric is now generally available (GA)! The first version of the Terraform Provider for Fabric was released six months ago, enabling engineers to automate key aspects of their Fabric Data Platform. Since then, adoption has grown significantly, now even more customers … Continue reading “Terraform Provider for Microsoft Fabric (Generally Available)”  ( 7 min )
    Simplify Your Data Ingestion with Copy Job (Generally Available)
    Copy Job is now generally available, bringing you a simpler, faster, and more intuitive way to move data! Whether you’re handling batch transfers or need the efficiency of incremental data movement, Copy Job gives you the flexibility and reliability to get the job done.  Since it’s preview last September, we’ve received incredible feedback from you. … Continue reading “Simplify Your Data Ingestion with Copy Job (Generally Available)”  ( 6 min )
    Simplify your Warehouse ALM with DacFx integration in Git and Deployment pipelines for Fabric Warehouse
    Managing data warehouse changes and automating deployments is now simpler than ever with the integration of DacFx with Git and Deployment Pipelines for Fabric Warehouse. This integration enables seamless export and import of your data warehouses, efficient schema change management, and deployment through Git-connected workflows. Whether you’re collaborating with a team using tools like VS … Continue reading “Simplify your Warehouse ALM with DacFx integration in Git and Deployment pipelines for Fabric Warehouse”  ( 7 min )
    Easily load Fabric OneLake data into Excel — OneLake catalog and Get Data are integrated into Excel for Windows (Preview)
    We are excited to announce that the Get Data experience, along with the OneLake catalog, is now integrated into Excel for Windows (Preview). Like OneDrive, OneLake is a single, unified, logical data lake for your whole organization analytics data. This makes it crucial to have a streamlined method for loading OneLake data into Excel, enabling … Continue reading “Easily load Fabric OneLake data into Excel — OneLake catalog and Get Data are integrated into Excel for Windows (Preview)”  ( 5 min )
    New Solace PubSub+ Connector: seamlessly connect Fabric Eventstream with Solace PubSub+ (Preview)
    Real-time data is crucial for enterprises to stay competitive, enabling instant decision-making, enhanced customer experiences, and operational efficiency. It helps detect fraud, optimize supply chains, and personalize interactions. By leveraging continuous data streams, businesses can unlock new opportunities, improve resilience, and drive smarter automation in a data-driven world. What are Fabric Event Streams and Solace … Continue reading “New Solace PubSub+ Connector: seamlessly connect Fabric Eventstream with Solace PubSub+ (Preview)”  ( 7 min )
    AI Ready Apps: build RAG Data pipeline from Azure Blob Storage to SQL Database in Microsoft Fabric within minutes
    Microsoft Fabric is a unified, secure, and user-friendly data platform equipped with features necessary for developing enterprise-grade applications with minimal or no coding required. Last year, the platform was enhanced by introducing SQL Database in Fabric, facilitating AI application development within Microsoft Fabric. In a previous blog post, we discussed how to build a chatbot … Continue reading “AI Ready Apps: build RAG Data pipeline from Azure Blob Storage to SQL Database in Microsoft Fabric within minutes”  ( 13 min )
    Mirroring in Fabric – What’s new
    Mirroring is a powerful feature in Microsoft Fabric, enabling you to seamlessly reflect your existing data estate continuously from any database or data warehouse into OneLake in Fabric. Once Mirroring starts the replication process, the mirrored data is automatically kept up to date at near real-time in Fabric OneLake. With your data estates landed into … Continue reading “Mirroring in Fabric – What’s new”  ( 9 min )
  • Open

    AI Agents: Mastering Agentic RAG - Part 5
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into Agentic RAG (Retrieval-Augmented Generation). In previous posts (links at the end!), we've explored the foundations of AI agents. Now, we'll explore how Agentic RAG elevates traditional RAG by empowering LLMs to autonomously plan, retrieve information, and refine their reasoning process. I've even created some code samples demonstrating Agentic RAG with different tools and frameworks, which we'll explore below. What is Agentic RAG? Agentic RAG represents a significant evolution in how LLMs interact with external data. Unlike traditional RAG, which follows a linear "retrieve-then-read" approach, Agentic RAG empowers the LLM to act as an agent, autonomous…  ( 27 min )
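    The loop that makes RAG "agentic" can be sketched conceptually: let the model decide whether to call a retrieval tool, feed the results back, and repeat until it answers. The sketch below is illustrative rather than code from the repository; the search_docs function and deployment name are placeholders.
    ```python
    # Conceptual sketch of an agentic RAG loop with tool calling.
    import json
    from openai import AzureOpenAI

    client = AzureOpenAI(azure_endpoint="https://<resource>.openai.azure.com",
                         api_key="<api-key>", api_version="2024-06-01")


    def search_docs(query: str) -> str:
        """Placeholder retriever; a real agent would call a search index here."""
        return f"(no documents found for '{query}')"


    tools = [{
        "type": "function",
        "function": {
            "name": "search_docs",
            "description": "Search the knowledge base for passages relevant to a query.",
            "parameters": {"type": "object",
                           "properties": {"query": {"type": "string"}},
                           "required": ["query"]},
        },
    }]

    messages = [{"role": "user", "content": "What does our travel policy say about rail upgrades?"}]
    for _ in range(5):  # cap the number of plan/retrieve rounds
        reply = client.chat.completions.create(model="<deployment-name>",
                                               messages=messages, tools=tools).choices[0].message
        if not reply.tool_calls:
            print(reply.content)       # the agent is confident enough to answer
            break
        messages.append(reply)
        for call in reply.tool_calls:  # the agent chose to retrieve first
            result = search_docs(**json.loads(call.function.arguments))
            messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
    ```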
  • Open

    Lifecycle Management of Azure storage blobs using Azure Data Factory (ADF)
    Background: Many times, we have a requirement to automatically delete page blobs from a storage account after a certain period of time, and lifecycle management currently does not support page blob deletion. Note: all blob types (page/block/append blobs) can be deleted from ADF. Deletion of page blobs (or any blob type) from a storage account can be achieved using Azure Storage Explorer, the REST API, SDKs, PowerShell, Azure Data Factory, Azure Logic Apps, Azure Functions, Azure Storage Actions (preview), etc. This blog shows how to use ADF to delete blobs. Step 1: Create an Azure Data Factory resource from the Azure portal. If you are new to ADF, please refer to this link on how to create one: https://docs.microsoft.com/en-us/azure/data-factory/quickstart-create-data-factory-portal Step …  ( 22 min )
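    Since the post notes that the same cleanup can also be done with the SDKs, here is a hedged sketch with azure-storage-blob that removes page blobs older than a retention window; the connection string, container name, and 30-day cutoff are placeholders.
    ```python
    # Hedged sketch: delete page blobs older than 30 days from one container.
    from datetime import datetime, timedelta, timezone

    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    container = service.get_container_client("<container-name>")
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)

    for blob in container.list_blobs():
        if blob.blob_type == "PageBlob" and blob.last_modified < cutoff:
            container.delete_blob(blob.name)
            print(f"Deleted page blob: {blob.name}")
    ```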

  • Open

    Superfast Installing Code Push Server in a Windows Web App
    TOC Introduction Setup Debugging References 1. Introduction CodePush Server is a self-hosted backend for Microsoft CodePush, allowing you to manage and deploy over-the-air updates for React Native and Cordova apps. It provides update versioning, deployment history, and authentication controls. It is typically designed to run on Linux-based Node environments. If you want to deploy it on Azure Windows Web App, you can follow this tutorial to apply the necessary modifications. 2. Setup 1. Create a Windows Node.js Web App. In this example, we use Node.js 20 LTS.     2. After the Web App is created, go to the Overview tab and copy its FQDN. You'll need this in later steps.     3. Create a standard Storage Account.     4. Once created, go to Access keys and copy the Storage Account’s name a…  ( 29 min )
  • Open

    Model Mondays: Lights, Prompts, Action!
    In the world of visual generative AI—whether you're crafting marketing visuals, building creative tools, or enhancing user experiences with rich media—imagination is the new interface. And while traditional content creation tools require time, talent, and iterations, there’s now a faster, smarter way to bring ideas to life: visual generative models. From turning plain text into photorealistic imagery to transforming rough sketches into refined art, these models are redefining what’s possible in content creation. They don’t just generate visuals—they unlock creativity at scale. On Monday March 31, grab a front-row seat to the future of creative AI and get ready for a visually mind-blowing episode, where we’ll dive into the dazzling world of Visual Generative AI. We’re talking next-gen text…  ( 22 min )
  • Open

    Essentials of Azure and AI project performance and security | New training!
    Are you ready to elevate your cloud skills and master the essentials of reliability, security, and performance of Azure and AI project? Join us for comprehensive training in Microsoft Azure Virtual Training Day events, where you'll gain the knowledge and tools to adopt the cloud at scale and optimize your cloud spend. Event Highlights: Two-Day Agenda: Dive deep into how-to learning on cloud and AI adoption, financial best practices, workload design, environment management, and more. Expert Guidance: Learn from industry experts and gain insights into designing with optimization in mind with the Azure Well-Architected Framework and the Cloud Adoption Framework for Azure. Hands-On Learning: Participate in interactive sessions and case studies to apply essentials of Azure and AI best practices in real-world scenarios, like reviewing and remediating workloads. FinOps in the Era of AI: Discover how to build a culture of cost efficiency and maximize the business value of the cloud with the FinOps Framework, including principles, phases, domains, and capabilities. Why Attend? Build Reliable and Secure Systems: Understand the shared responsibility between Microsoft and its customers to build resilient and secure systems. Optimize Cloud Spend: Learn best practices for cloud spend optimization and drive market differentiation through savings. Enhance Productivity: Improve productivity, customer experience, and competitive advantage by elevating the resiliency and security of your critical workloads. Don't miss the opportunity to transform your cloud strategy and take your skills to the next level. Register now and join us for an insightful and engaging virtual training experience! Register today!  Aka.ms/AzureEssentialsVTD  Eager to learn before the next event? Dive into our free self-paced training modules: Cost efficiency of Azure and AI Projects | on Microsoft Learn Resiliency and security of Azure and AI Projects | on Microsoft Learn   Overview of essential skilling for Azure and AI workloads | on Microsoft Learn  ( 21 min )
    Monitoring Azure VMware Solution Basics
    The focus is on what to monitor, with guidance based on Microsoft and VMware native tools; however, third-party tools can be used as alternatives to maintain the infrastructure and health of the environment. Solution components that write to the VMware syslog include VMware ESXi, VMware vSAN, VMware NSX-T Data Center, and VMware vCenter Server. When Diagnostics is enabled, those logs are written to the designated Log Analytics Workspace. Basic health Impact: Operational Excellence Host Operations: Ensure you are aware of pending host operations by setting up the notifications for host remediation activities (chan…  ( 52 min )

  • Open

    Building a Model Context Protocol Server with Semantic Kernel
    This is second MCP related blog post that is part of a series of blog posts that will cover how to use Semantic Kernel (SK) with the Model Context Protocol (MCP). This blog post demonstrates how to build an MCP server using MCP C# SDK and SK, expose SK plugins as MCP tools and call […] The post Building a Model Context Protocol Server with Semantic Kernel appeared first on Semantic Kernel.  ( 24 min )
  • Open

    Scaling Cloud ETL: Optimizing Performance and Resolving Azure Data Factory Copy Bottlenecks
    Optimizing ETL Data Pipelines When building ETL data pipelines using Azure Data Factory (ADF) to process huge amounts of data from different sources, you may often run into performance and design-related challenges. This article will serve as a guide in building high-performance ETL pipelines that are both efficient and scalable. Below are the major guidelines to consider when building optimized ETL data pipelines in ADF: Linked Services and Datasets A linked service is a connection to a data source that can be created once and reused across multiple pipelines within the same ADF. It is efficient to create one linked service per source for easy maintenance. Similarly, datasets are derived from the linked services to fetch the data from the source. These should ideally be a single dataset f…  ( 35 min )
  • Open

    Keep Your Azure Functions Up to Date: Identify Apps Running on Retired Versions
    Running Azure Functions on retired language versions can lead to security risks, performance issues, and potential service disruptions. While the Azure Functions team notifies users about upcoming retirements through the portal, emails, and warnings, identifying affected Function Apps across multiple subscriptions can be challenging. To simplify this, we’ve provided Azure CLI scripts to help you: ✅ Identify all Function Apps using a specific runtime version ✅ Find apps running on unsupported or soon-to-be-retired versions ✅ Take proactive steps to upgrade and maintain a secure, supported environment Read on for the full set of Azure CLI scripts and instructions on how to upgrade your apps today! Why Upgrading Your Azure Functions Matters Azure Functions supports six different programming language…  ( 32 min )
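
    For readers who want to script this before opening the full post, here is a minimal sketch (not the scripts from the article) that shells out to the Azure CLI from Python to list Function Apps and report each app's configured runtime stack; the target value `PYTHON|3.8` is only an illustrative example of a retired version.

```python
# Sketch: enumerate Function Apps in the current subscription and flag a
# specific (illustrative) runtime stack. Assumes the Azure CLI is installed
# and you are already signed in with `az login`.
import json
import subprocess

TARGET_STACK = "PYTHON|3.8"  # hypothetical example of a retired version

def az(*args: str):
    """Run an az CLI command and return its parsed JSON output."""
    out = subprocess.run(["az", *args, "--output", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

apps = az("functionapp", "list")
for app in apps:
    config = az("functionapp", "config", "show",
                "--name", app["name"],
                "--resource-group", app["resourceGroup"])
    stack = config.get("linuxFxVersion") or "(windows / not set)"
    flag = "  <-- retired?" if stack == TARGET_STACK else ""
    print(f"{app['name']}: {stack}{flag}")
```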

  • Open

    Azure DevTest Labs’ feedback forum has a new home!
    We’re thrilled to announce that Azure DevTest Labs has joined the Visual Studio Developer Community to collect valuable feedback and suggestions from our customers. 🌟 This fantastic feedback portal is here to make your experience smoother and more engaging. It’s your go-to place to connect with the Azure DevTest Labs product and engineering teams to […] The post Azure DevTest Labs’ feedback forum has a new home! appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Microsoft 365 Certification control spotlight: Data retention, back-up, and disposal
    Learn how Microsoft 365 Certification validates data retention, back-up, and disposal controls for Microsoft 365 apps. The post Microsoft 365 Certification control spotlight: Data retention, back-up, and disposal appeared first on Microsoft 365 Developer Blog.  ( 22 min )
  • Open

    Recovering Large Number of Soft-deleted Blobs Using Storage Actions
    Blob soft delete is a crucial feature that protects your data from accidental deletions or overwrites. By preserving deleted data for a defined period, it helps maintain data integrity and availability, even in cases of human error. However, restoring soft-deleted data can be time-consuming, as each deleted blob must be individually restored using the undelete API. At present, there is no option to bulk restore all soft-deleted blobs.   In this blog, we present a no-code solution for efficiently restoring soft-deleted data using Azure Storage tasks. This approach is especially useful when dealing with a large number of blobs, eliminating the need for custom scripts. Additionally, it allows you to apply multiple filters to restore only the necessary blobs.   Note: This feature is currently …  ( 24 min )
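
    The post's approach is no-code via storage tasks; as a point of comparison, a minimal scripted sketch of the per-blob undelete pattern it replaces might look like the following, using the azure-storage-blob SDK (the connection string and container name are placeholders).

```python
# Sketch: restore every soft-deleted blob in one container by calling the
# undelete API per blob -- the manual pattern that Storage Actions replaces.
# CONNECTION_STRING and the container name are placeholders.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("mycontainer")

restored = 0
# include=["deleted"] makes the listing also return soft-deleted blobs
for blob in container.list_blobs(include=["deleted"]):
    if blob.deleted:
        container.get_blob_client(blob.name).undelete_blob()
        restored += 1

print(f"Restored {restored} soft-deleted blobs")
```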
    [AI Search] Troubleshooting OneLake Files Connection via Wizard
    Are you unable to connect to your OneLake files? This documentation will help you troubleshoot and find the solution for each issue. *This feature is in preview (as of 2025.03.26). If you are looking for how to integrate AI Search with Fabric OneLake, please see this article, which gives an overview of the objective and the instructions. (article link) Also make sure that you have all the prerequisites checked before working through the article. You can find the prerequisites here. If you are still reading this article, that means you are having an issue with the Connect to your data section, as below. [Error 1 - The workspace or the lakehouse specified cannot be found] This error can occur if AI Search and the Lakehouse are not in the same tenant. Please make sure that both services are in the same tenant. You can check this article to find out more about how to find the Tenant ID. [Error 2 - Unable to list items within the lakehouse using the specified identity as access to the workspace was denied] This error occurs for two reasons. In this article, we will demonstrate a situation with a system-assigned identity. Make sure your AI Search identity is either system-assigned or user-assigned. You can find your configuration under AI Search Services > Settings > Identity. Make sure that Status is ON if using system-assigned. Go to your Fabric OneLake workspace and grant your AI Search service the Contributor role on the workspace. The process may take 5-15 minutes, so please allow some time for it to be processed. If you would like to use user-assigned, here is an example that you can refer to. However, if you are unable to find your AI Search in your OneLake workspace, make sure to enable the configuration below from Fabric OneLake: Go to "app.powerbi.com" and click the configuration > Governance and Insights > Admin Portal. From the Admin Portal > Tenant Settings > search “API”. Make sure to enable Service Principals can use Fabric APIs.  ( 22 min )
  • Open

    Observe Quarkus Apps with Azure Application Insights using OpenTelemetry
    Overview This blog shows you how to observe Red Hat Quarkus applications with Azure Application Insights using OpenTelemetry. The application is a "to do list" with a JavaScript front end and a REST endpoint. Azure Database for PostgreSQL Flexible Server provides the persistence layer for the app. The app utilizes OpenTelemetry to instrument, generate, collect, and export telemetry data for observability. The blog guides you to test your app locally, deploy it to Azure Container Apps, and observe its telemetry data with Azure Application Insights. Prerequisites An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Prepare a local machine with a Unix-like operating system installed - for example, Ubuntu, macOS, or Windows Subsystem for Linux. …  ( 47 min )
    Getting Started with Java WebJobs on Azure App Service
    Getting Started with Linux WebJobs on App Service - Java   WebJobs Intro WebJobs is a feature of Azure App Service that enables you to run a program or script in the same instance as a web app. All App Service plans support WebJobs. There's no extra cost to use WebJobs. This sample uses a Triggered (scheduled) WebJob to output the system time once every 15 minutes. Create Web App Before creating our WebJobs, we need to create an App Service web app. If you already have an App Service Web App, skip to the next step. Otherwise, in the portal, select App Services > Create > Web App. After following the create instructions and selecting one of the Java runtime stacks, create your App Service Web App. The stack must be Java, since we plan on writing our WebJob using Java and a bash startup script…  ( 26 min )
  • Open

    Announcing backup storage billing for SQL database in Microsoft Fabric: what you need to know
    Ensuring data protection with automated backups SQL database in Microsoft Fabric provides automatic backups from the moment a database is created, ensuring seamless data protection and recovery. The system follows a robust backup strategy: This approach lets users restore their database to any point in the past seven days, making data recovery simple and efficient. … Continue reading “Announcing backup storage billing for SQL database in Microsoft Fabric: what you need to know”  ( 5 min )
  • Open

    Unleashing the Power of Model Context Protocol (MCP): A Game-Changer in AI Integration
    Artificial Intelligence is evolving rapidly, and one of the most pressing challenges is enabling AI models to interact effectively with external tools, data sources, and APIs. The Model Context Protocol (MCP) solves this problem by acting as a bridge between AI models and external services, creating a standardized communication framework that enhances tool integration, accessibility, and AI reasoning capabilities. What is Model Context Protocol (MCP)? MCP is a protocol designed to enable AI models, such as Azure OpenAI models, to interact seamlessly with external tools and services. Think of MCP as a universal USB-C connector for AI, allowing language models to fetch information, interact with APIs, and execute tasks beyond their built-in knowledge.  Key Features of MCP Standardized Comm…  ( 32 min )
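
    To make the "bridge" idea concrete, here is a minimal sketch of an MCP server exposing one tool via the official Python SDK's FastMCP helper; the tool name and its logic are illustrative placeholders, not code from the article.

```python
# Sketch: a tiny MCP server exposing one tool. The tool itself is a
# placeholder; any function the model should be able to call would go here.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def get_order_status(order_id: str) -> str:
    """Look up the status of an order (stubbed for illustration)."""
    return f"Order {order_id}: shipped"

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable client can connect to it.
    mcp.run()
```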
  • Open

    New Overlapping Secrets on Azure DevOps OAuth
    As you may have read, Azure DevOps OAuth apps are due for deprecation in 2026. All developers are encouraged to migrate their applications to use Microsoft Entra ID OAuth, which can access all Azure DevOps APIs and has the added benefit of enhanced security features and long-term investment. Although we are nearing Azure DevOps OAuth’s […] The post New Overlapping Secrets on Azure DevOps OAuth appeared first on Azure DevOps Blog.  ( 23 min )
  • Open

    RAG Time Journey 4: Advanced Multimodal Indexing
    Introduction Welcome to RAG Time Journey 4, the next step in our deep dive into Retrieval-Augmented Generation (RAG). If you’ve been following along, you might remember that in Journey 2 , we explored data ingestion, hybrid search, and semantic reranking—key concepts that laid the groundwork for effective search and retrieval. Now, we’re moving beyond text and into the multimodal world, where text, images, audio, and video coexist in search environments that demand more sophisticated retrieval capabilities. Modern AI-powered applications require more than just keyword matching. They need to understand relationships across multiple data types, extract meaning from diverse formats, and provide accurate, context-rich results. That’s where multimodal indexing in Azure AI Search comes into play…  ( 37 min )
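
    As a reminder of the hybrid retrieval building block this journey builds on, a minimal sketch of a hybrid (keyword + vector) query against Azure AI Search might look like this; the endpoint, index name, field names, and query vector are placeholders rather than anything from the post.

```python
# Sketch: hybrid query (full-text + vector) against an Azure AI Search index.
# Endpoint, key, index name, field names, and the query vector are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

client = SearchClient(
    endpoint="https://<search-service>.search.windows.net",
    index_name="docs-index",
    credential=AzureKeyCredential("<api-key>"),
)

query_vector = [0.0] * 1536  # in practice: the embedding of the user question

results = client.search(
    search_text="warranty period for model X",               # keyword part
    vector_queries=[VectorizedQuery(vector=query_vector,
                                    k_nearest_neighbors=5,
                                    fields="contentVector")],  # vector part
    select=["title", "chunk"],
    top=5,
)
for doc in results:
    print(doc["title"])
```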

  • Open

    Improve performance and security using Standard Load Balancer and Standard SKU public IP addresses in Azure DevTest Labs
    We are excited to announce preview of enhancements in Azure DevTest Labs designed to accommodate two upcoming retirements in Azure: Retirement Date Details Azure Basic Load Balancer September 30, 2025 The Azure Basic Load Balancer will be retired. You can continue using your existing Basic Load Balancers until this date, but you will not be […] The post Improve performance and security using Standard Load Balancer and Standard SKU public IP addresses in Azure DevTest Labs appeared first on Develop from the cloud.  ( 22 min )
  • Open

    Hyperlight Wasm: Fast, secure, and OS-free
    We're announcing the release of Hyperlight Wasm: a Hyperlight virtual machine “micro-guest” that can run wasm component workloads written in many programming languages. The post Hyperlight Wasm: Fast, secure, and OS-free appeared first on Microsoft Open Source Blog.  ( 16 min )
  • Open

    Best Practices for Requesting Quota Increase for Azure OpenAI Models
    Introduction This document outlines a set of best practices to guide users in submitting quota increase requests for Azure OpenAI models. Following these recommendations will help streamline the process, ensure proper documentation, and improve the likelihood of a successful request.  Understand the Quota and Limitations Before submitting a quota increase request, make sure you have a clear understanding of: The current quota and limits for your Azure OpenAI instance. Your specific use case requirements, including estimated daily/weekly/monthly usage. The rate limits for API calls and how they affect your solution's performance. Use the Azure portal or CLI to monitor your current usage and identify patterns that justify the need for a quota increase. Provide a Clear and Detailed Justific…  ( 26 min )
    Best Practices for Structured Extraction from Documents Using Azure OpenAI
    Introduction In a recent project, a customer needed to extract structured data from legal documents to populate a standardized form. The legal documents varied in length and structure, and the customer required consistent and accurate outputs that mapped directly to the expected form schema. The implemented solution leveraged Azure OpenAI to iteratively process document chunks and update the form output dynamically. A key component to successfully extract the correct output was using Structured Outputs to enforce the desired output fields to populate the form. This article outlines best practices derived from this project, with a focus on reliable structure enforcement and iterative processing of unstructured legal data. These lessons learned can be leveraged for additional scenarios and d…  ( 27 min )
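
    A minimal sketch of the Structured Outputs technique described here, using a Pydantic model as the enforced schema with the Azure OpenAI client; the endpoint, deployment name, form fields, and document text are all hypothetical placeholders.

```python
# Sketch: enforce a form schema on extraction output with Structured Outputs.
# Endpoint, deployment name, and the form fields are hypothetical placeholders.
from pydantic import BaseModel
from openai import AzureOpenAI

class ContractForm(BaseModel):
    party_names: list[str]
    effective_date: str
    governing_law: str

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-10-21",
)

chunk = "…text of one document chunk…"
completion = client.beta.chat.completions.parse(
    model="<gpt-4o-deployment>",
    messages=[
        {"role": "system", "content": "Extract the requested contract fields."},
        {"role": "user", "content": chunk},
    ],
    response_format=ContractForm,   # the schema the output must satisfy
)
form = completion.choices[0].message.parsed
print(form)
```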
  • Open

    Elevate Your AI Expertise with Microsoft Azure: Learn Live Series for Developers
    Unlock the power of Azure AI and master the art of creating advanced AI agents. Starting from April 15th, embark on a comprehensive learning journey designed specifically for professional developers like you. This series will guide you through the official Microsoft Learn Plan, focused on the latest agentic AI technologies and innovations. Generative AI has evolved to become an essential tool for crafting intelligent applications, and AI agents are leading the charge. Here's your opportunity to deepen your expertise in building powerful, scalable agent-based solutions using the Azure AI Foundry, Azure AI Agent Service, and the Semantic Kernel Framework. Why Attend? This Learn Live series will provide you with: In-depth Knowledge: Understand when to use AI agents, how they function, and th…  ( 24 min )
  • Open

    Kickstart Your Web Development: VS Code Basics & GitHub Integration
    As students, we get access to an amazing set of developer resources for FREE! Microsoft offers all registered students worldwide $100 of Azure credit and over 25 FREE services with Microsoft Azure for Students. GitHub offers all students FREE Codespaces and GitHub Copilot with the GitHub Education Pack, so let's walk through how you can get started with these resources. 1. Setting Up VS Code Installing VS Code Download VS Code from the official site. Install it on your system (Windows, macOS, or Linux). Open VS Code and customize your settings. Essential Extensions Extensions enhance productivity by adding new functionalities. Some must-have extensions are: GitHub Copilot – AI-powered code suggestions. Prettier – Code formatter for clean and consistent code. Live Server – For real-time web dev…  ( 27 min )

  • Open

    Announcing the Extension of Some QnA Maker Functionality
    In 2022, we announced the deprecation of QnA Maker by March 31, 2025, with a recommendation to migrate to Custom Question Answering (CQA). In response to feedback from our valued customers, we have decided to extend the availability of certain functionalities in QnA Maker until October 31, 2025. This extension aims to support our customers in their smooth migration to Custom Question Answering (CQA), ensuring minimal disruption to their operations. Extension Details Here are some details on how the QnA Maker functionality will change: Inference Support: You will be able to continue using your existing QnA Maker bots for query inferencing. This ensures that the QnA Maker bots remain operational and can be used as they are currently configured until October 31, 2025. Portal Shutdown: The QnA Maker portal will no longer be available after March 31, 2025. You will not be able to make any edits or changes to your QnA Maker bots through the online QnA Maker portal. Programmatic Bot Changes: You will be able to make changes to your QnA Maker bots programmatically via the QnA Maker API. In preparation for this change, we recommend that you migrate all of your knowledge bases to your offline storage before the portal is shut down on March 31, 2025. Looking Ahead After October 31, 2025, the QnA Maker service will be fully deprecated, and any query inferencing requests will return an error message. We encourage all our customers to complete their migration to CQA as soon as possible to avoid any disruptions. We appreciate your understanding and cooperation as we work together to ensure a smooth migration. Thank you for your continued support and trust in our services.  ( 21 min )
    Agentic P2P Automation: Harnessing the Power of OpenAI's Responses API
    The Procure-to-Pay (P2P) process is traditionally error-prone and labor-intensive, requiring someone to manually open each purchase invoice, look up contract details in a separate system, and painstakingly compare the two to identify anomalies—a task prone to oversight and inconsistency. About the sample application The 'Agentic' characteristics demonstrated here using the Responses API are: The client application makes a single call to the Responses API, which internally handles all the actions autonomously, processes the information, and returns the response. In other words, the client application does not have to perform those actions itself. The actions that the Responses API uses are hosted tools (file search, vision-based reasoning). Function calling is used to invoke custom ac…  ( 40 min )
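
    The shape of such a single Responses API call, combining a hosted file_search tool with one custom function, might look roughly like this sketch; the vector store ID, function name, and parameters are placeholders, not the sample app's code.

```python
# Sketch: one Responses API call that combines a hosted tool (file_search)
# with a custom function the client can fulfil. IDs and names are placeholders.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o",
    input="Compare invoice INV-1042 against its contract and flag anomalies.",
    tools=[
        {   # hosted tool: search the uploaded contract documents
            "type": "file_search",
            "vector_store_ids": ["vs_contracts_placeholder"],
        },
        {   # custom function: fetch invoice data from an internal system
            "type": "function",
            "name": "get_invoice",
            "description": "Fetch a purchase invoice by its identifier.",
            "parameters": {
                "type": "object",
                "properties": {"invoice_id": {"type": "string"}},
                "required": ["invoice_id"],
            },
        },
    ],
)
print(response.output_text)
```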
  • Open

    AI Toolkit for Visual Studio Code Now Supports NVIDIA NIM Microservices for RTX AI PCs
    AI Toolkit now supports NVIDIA NIM™ microservice-based foundation models for inference testing in the model playground and advanced features like bulk run, evaluation and building prompts.  This collaboration helps AI Engineers streamline development processes with foundational AI models.  About AI Toolkit AI Toolkit is a VS Code extension for AI engineers to build, deploy, and manage AI solutions. It includes model and prompt-centric features that allow users to explore and test different AI models, create and evaluate prompts, and perform model finetuning, all from within VS Code. Since its preview launch in 2024, AI Toolkit has helped developers worldwide learn about generative AI models and start building AI solutions.  NVIDIA NIM Microservices This January, NVIDIA announced that state…  ( 23 min )
    Essential Microsoft Resources for MVPs & the Tech Community from the AI Tour
    Did you attend a Microsoft AI Tour stop? Did you enjoy the content delivered? Did you know that the same technical presentations, open-source curriculum, and hands-on workshops you experienced are now available for you to redeliver and share with your community? Whether you're a Microsoft MVP, Developer, or IT Professional, these resources make it easier than ever to equip fellow professionals with the skills to successfully adopt Microsoft AI services. From expert-led skilling sessions to interactive networking opportunities, the Microsoft AI Tour offers more than just knowledge—it fosters collaboration and real-world impact. By delivering these sessions, you can help audiences simplify AI adoption, accelerate digital transformation, and drive innovation within their organizations. Whethe…  ( 45 min )
    Global AI Bootcamp Bari – in person and online
    Following the success of the Global AI Bootcamp Milan on March 12, we are excited to announce another unmissable event in the south of the peninsula, organized as part of the Global AI Bootcamp 2025 initiative by the Data Masters community. Data Masters is the Italian AI academy that offers training paths in Data Science, Machine Learning, and Artificial Intelligence, guiding companies and professionals through upskilling and reskilling programs. 🗓️ When? April 11, 5:00 PM CET 📍 Where? In person at Data Masters and online. Join us by registering on the official site 👉🏼 Global AI Bootcamp - Italy - Bari - Global AI Community   Watch the event on demand: What is the Global AI Bootcamp? The Global AI Bootcamp is an annual, global event organized by the larg…  ( 23 min )
  • Open

    Announcing: Azure API Center Hands-on Workshop 🚀
    What is the Azure API Center Workshop? The Azure API Center flash workshop is a resource designed to expand your understanding of how organizations can enhance and streamline their API management and governance strategies using Azure API Center. With this practical knowledge and insights, you will be able to streamline secure API integration and enforce security and compliance with tools that evolve to meet your growing business needs. GIF showing Contoso Airlines API Center Azure API Center is a centralized inventory designed to track all your APIs, regardless of their type, lifecycle stage, or deployment location. It enhances discoverability, development, and reuse of APIs. While Azure API Management focuses on API deployment and runtime governance, Azure API Center complements it by cen…  ( 26 min )
  • Open

    Fabric Espresso – Episodes about Data Warehousing & Storage Solutions in Microsoft Fabric
    For the past 1.5 years, the Microsoft Fabric Product Group Product Managers have been publishing a YouTube series featuring deep dives into Microsoft Fabric’s features. These episodes cover both technical functionalities and real-world scenarios, providing insights into the product roadmap and the people driving innovation.  ( 5 min )
  • Open

    Microsoft AI Agents Learn Live Starting 15th April
    Join us for an exciting Learn Live webinar where we dive into the fundamentals of using Azure AI Foundry and AI agents. The series will help you build powerful agent applications. This Learn Live series will help you understand AI agents, including when to use them and how to build them, using the Azure AI Agent Service and the Semantic Kernel Agent Framework. By the end of this learning series, you will have the skills needed to develop AI agents on Azure. These sessions will introduce you to AI agents, the next frontier in intelligent applications, and explore how they can be developed and deployed on Microsoft Azure. Through this webinar, you'll gain essential skills to begin creating agents with the Azure AI Agent Service. We'll also discuss how to take your agents to the next level by integr…  ( 27 min )
  • Open

    Skill your team to increase performance efficiency of Azure and AI projects
    The cost and performance benefits of moving your workload to the cloud are clear — reduced latency, improved elasticity, and greater agility of resources — but it’s also critical to learn to manage ongoing performance efficiency beyond the initial migration to see optimal results. Best practices in performance efficiency go beyond designing your workloads so you only pay for what you need; it’s building the best of cloud computing into every design choice. Ideally, a workload should meet performance targets without overprovisioning, which makes the resources, skilling, and how-to guidance offered by Azure Essentials crucial considerations for any team looking to scale efficiently. Built to provide help at your point of need, the resources available in Azure Essentials have helped clients complete…  ( 31 min )
    Cross-Region Resiliency for Ecommerce Reference Application
    Authors: Radu Dilirici (radudilirici@microsoft.com) Ioan Dragan (ioan.dragan@microsoft.com) Ciprian Amzuloiu (camzuloiu@microsoft.com) Introduction The initial Resilient Ecommerce Reference Application demonstrated the best practices to achieve regional resiliency using Azure’s availability zones. Expanding on this foundation, in the current article we aim to achieve cross-region resiliency, ensuring high availability and disaster recovery capabilities across multiple geographic regions. This article outlines the enhancements made to extend the application into a cross-region resilient architecture. The app is publicly available on GitHub and can be used for educational purposes or as a starting point for developing cross-region resilient applications.  Overview of Cross-Region Enhanceme…  ( 25 min )
  • Open

    Semantic Kernel Agent Framework RC2
    Three weeks ago we released the Release the Agents! SK Agents Framework RC1 | Semantic Kernel and we’ve been thrilled to see the momentum grow. Thank you to everyone who has shared feedback, filed issues, and started building with agents in Semantic Kernel—we’re seeing more developers try agents than ever before. Today, we’re declaring build […] The post Semantic Kernel Agent Framework RC2 appeared first on Semantic Kernel.  ( 23 min )
  • Open

    Introducing Microsoft Purview Data Security Investigations
    Investigate data security, risk, and leak cases faster by leveraging AI-driven insights with Microsoft Purview Data Security Investigations. This goes beyond the superficial metadata and activity-only signals found in incident management and SIEM tools by analyzing the content itself within compromised files, emails, messages, and Microsoft Copilot interactions. Data Security Investigations allows you to pinpoint sensitive data and assess risks at a deeper level — quickly understanding the value of what’s been exposed. Then, by mapping connections between compromised data and activities, you can easily find the source of the security risk or exposure. And using real-time risk insights, you can also apply the right protections to minimize future vulnerabilities. Data Security Investigations is also integrated with Microsoft Defender incident management as part of your broader SOC toolset. Nick Robinson, Microsoft Purview Principal Product Manager, joins Jeremy Chapman to share how to enhance your ability to safeguard critical information. Find the source of a data leak fast.  ( 19 min )

  • Open

    Teams Toolkit for Visual Studio Code update – March 2025
    We’re excited to announce the latest updates for Teams Toolkit for Visual Studio Code featuring tenant switching, new capabilities for declarative agents, and more. The post Teams Toolkit for Visual Studio Code update – March 2025 appeared first on Microsoft 365 Developer Blog.  ( 25 min )
  • Open

    Simplify data transformation and management with Copilot for Data Factory
    The process of extracting, transforming, and loading (ETL) data is important for turning raw data into actionable insights. ETL allows data to be collected from various sources, cleansed and formatted into a standard structure, and then loaded into a data warehouse for analysis. This process ensures that data is accurate, consistent, and ready for business … Continue reading “Simplify data transformation and management with Copilot for Data Factory”  ( 7 min )
  • Open

    Comment l’IA générative impacte-t-elle l’expérience développeur?
    Adlene Sifi explores the impact of generative AI on the developer experience. In this article, we will try to determine whether there is a link between the use of generative AI (e.g., GitHub Copilot) and developer experience (DevEx). More specifically, we want to verify whether the use of generative AI has a positive impact on the developer experience. We will even try […] The post Comment l’IA générative impacte-t-elle l’expérience développeur? appeared first on Developer Support.  ( 30 min )
    How does generative AI impact Developer Experience?
    Adlene Sifi explores the impact of generative AI on developer experience. In this article, we will try to determine if there is a link between the use of generative AI (e.g., GitHub Copilot) and developer experience (DevEx). Specifically, we aim to verify whether the use of generative AI has a positive impact on developer experience. […] The post How does generative AI impact Developer Experience? appeared first on Developer Support.  ( 29 min )
  • Open

    AI Agents: Mastering the Tool Use Design Pattern - Part 4
    Hi everyone, Shivam Goyal here! This blog series, based on Microsoft's AI Agents for Beginners repository, continues with a deep dive into the Tool Use Design Pattern. In previous posts (links at the end!), we covered agent fundamentals, frameworks, and design principles. Now, we'll explore how tools empower agents to interact with the world, expanding their capabilities and enabling them to perform a wider range of tasks. What is the Tool Use Design Pattern? The Tool Use Design Pattern enables Large Language Models (LLMs) within AI agents to leverage external tools. These tools are essentially code blocks, ranging from simple functions like calculators to complex API calls, that agents execute to perform actions, access information, and achieve goals. Crucially, these tools are invoked th…  ( 26 min )
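    As a rough illustration of the pattern (not code from the AI Agents for Beginners repository), the sketch below registers a simple calculator tool and dispatches a call to it by name, the way an agent runtime might act on a model's tool request; the tool name, argument shape, and example output format are hypothetical:

        from typing import Callable, Dict

        # A "tool" is just a callable the agent runtime can invoke on the model's behalf.
        def calculator(expression: str) -> str:
            # Toy evaluator: supports only "+" between integers to keep the sketch safe.
            total = sum(int(part) for part in expression.split("+"))
            return str(total)

        # Registry mapping tool names to implementations. The LLM chooses a tool name
        # and arguments; the runtime looks the tool up and executes it.
        TOOLS: Dict[str, Callable[[str], str]] = {"calculator": calculator}

        def invoke_tool(name: str, argument: str) -> str:
            if name not in TOOLS:
                return f"Unknown tool: {name}"
            return TOOLS[name](argument)

        # Example: the model might return {"tool": "calculator", "input": "2 + 3 + 4"}.
        print(invoke_tool("calculator", "2 + 3 + 4"))  # prints 9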
  • Open

    GitHub Copilot for Azure: Deploy an AI RAG App to ACA using AZD
    Recently, I had to develop a Retrieval-Augmented Generation (RAG) prototype for an internal project. Since I enjoy working with LlamaIndex, I decided to use GitHub Copilot for Azure to quickly find an existing sample that I could use as a starting point and deploy it to Azure Container Apps. Getting Started with GitHub Copilot for Azure To begin, I installed the GitHub Copilot for Azure extension in VS Code. This extension allows me to interact with Azure directly using the azure command. I used this feature to ask my Copilot to help me locate a relevant sample to use as a foundation for my project. After querying available Azure resources, the extension found a LlamaIndex JavaScript sample, which was ideal for my needs. I then copied the Azure Developer CLI (azd) command to initialize my…  ( 23 min )
  • Open

    Rehost mainframe applications by using NTT DATA UniKix
    UniKix is a mainframe-rehosting software suite from NTT DATA. This suite provides a way to run migrated legacy assets on Azure. Example assets include IBM CICS transactions, IBM IMS applications, batch workloads, and JCL workloads. This article outlines a solution for rehosting mainframe applications on Azure. Besides UniKix, the solution's core components include Azure ExpressRoute, Azure Site Recovery, and Azure storage and database services. Mainframe architecture: The following diagram shows a legacy mainframe system before it's rehosted to the cloud. Workflow: On-premises users interact with the mainframe by using TCP/IP (A): Admin users interact through a TN3270 terminal emulator. Web interface users interact via a web browser over TLS 1.3 port 443. Mainframes use communication…  ( 37 min )
    Refactor mainframe applications with Amdocs
    Amdocs' automated COBOL refactoring solution delivers cloud-enabled applications and databases that do the same things as their legacy counterparts. The refactored applications run as Azure applications in virtual machines provided by Azure Virtual Machines. Azure ExpressRoute makes them available to users, and Azure Load Balancer distributes the load. Mainframe architecture: Here's a mainframe architecture that represents the kind of system that's suitable for the Amdocs refactoring solution. Dataflow: TN3270 and HTTP(S) user input arrives over TCP/IP. Mainframe input uses standard mainframe protocols. There are batch and online applications. Applications written in COBOL, PL/I, Assembler, and other languages run in an enabled environment. Data is held in files and in hierarchical,…  ( 43 min )
    Migrate IBM mainframe applications to Azure with TmaxSoft OpenFrame
    Lift and shift, also known as rehosting, is the process of mainframe migration that produces an exact copy of an application, workload, and all associated data from one environment to another. Mainframe applications can be migrated from on-premises to a public or private cloud. TmaxSoft OpenFrame is a rehosting solution that makes it easy to lift and shift existing IBM zSeries mainframe applications to Microsoft Azure, using a no-code approach. TmaxSoft quickly migrates an existing application, as is, to a zSeries mainframe emulation environment on Azure. This article illustrates how the TmaxSoft OpenFrame solution runs on Azure. The approach consists of two virtual machines (VMs) running Linux in an active-active configurat…  ( 36 min )
2025-08-20T01:39:00.677Z osmosfeed 1.15.1