This week in Azure

John Savill shared two shorter videos this week. One walks through your first hour with M365 Copilot, covering useful prompts, built-in agents, and creating your first agent. The other is a quick 30-second guide to removing those fake malware alert overlays you get from clicking something you shouldn’t have.

The weekly update is heavy on AI models. There’s a real avalanche this week, and John makes a point I agree with: don’t build your strategy around any single model. They change too fast, and the “best” model is a moving target. What actually matters is your ability to integrate AI with your business, your data readiness, governance, guardrails, and adoption. The models themselves will become commodities.

Outside of AI, the two updates I find most interesting are the v6 confidential VMs going GA and the Azure Arc Gateway.

Category     Update                                            Status
Compute      DCe/ECe v6 confidential VMs (Intel TDX)           GA
Networking   Azure Firewall draft and deploy                   GA
Database     Azure Databricks workspace network config update  GA
Database     Azure Databricks Lakebase                         GA
AI           Grok 4.0, Qwen 3.5 medium, GPT-5.3 Chat, GPT-5.4  GA
AI           Phi-4-Reasoning-Vision in Foundry                 GA
Management   Azure Arc Gateway                                 GA
Management   Azure Policy faster enforcement                   GA

DCe and ECe v6 confidential VMs (GA)

The DCe and ECe v6 confidential virtual machines are now generally available. These use Intel TDX (Trust Domain Extensions) to encrypt the entire VM while it’s running.

The important distinction: this is whole-VM encryption, not the enclave-based approach where you have to modify your application. You deploy your existing workload, and the VM itself is encrypted in use. No code changes required. The DCe series is general purpose, the ECe series is memory optimized.

Data protection lifecycle:
At rest ──→ encrypted (storage encryption, disk encryption)
In transit ──→ encrypted (TLS, VPN, ExpressRoute)
In use ──→ encrypted (confidential VMs) ← this is the piece that completes it

The v6 SKUs bring higher throughput, lower latency, and a “D” variant of each size that includes local ephemeral storage for when you need fast temporary disk. They all support Azure Boost for high-throughput remote storage and networking, plus Intel AMX acceleration for confidential AI workloads.

If you’ve been waiting for confidential compute to mature before using it in production, this generation removes a lot of the earlier friction.

Azure Firewall draft and deploy (GA)

Azure Firewall policy updates now support a two-stage draft-and-deploy workflow.

The old problem: you update one policy rule, it starts propagating. You update another rule, it starts propagating separately. Each incremental change triggers its own rollout, and you get delays stacking on top of each other.

Now you can draft multiple policy changes without affecting the live environment. Once all your changes are ready, you deploy them as a single unit that replaces the current policy.

Before:
  Change A ──→ propagate
  Change B ──→ propagate (delayed)
  Change C ──→ propagate (delayed)

After:
  Draft A ─┐
  Draft B ─┤──→ Deploy all at once
  Draft C ─┘

It’s a straightforward improvement. Batch your changes, review them together, deploy once. Less risk of inconsistent intermediate states during complex rule updates.
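The difference between the two workflows can be sketched as a toy model (plain Python, not Azure SDK code; the `FirewallPolicy` class and its propagation counter are my own illustration):

```python
# Toy model of incremental vs. draft-and-deploy policy updates.
# Illustrative only; these are not real Azure API names.

class FirewallPolicy:
    def __init__(self):
        self.rules = {}          # live rules
        self.draft = None        # staged changes, not yet live
        self.propagations = 0    # how many rollouts were triggered

    # Old behavior: every change triggers its own rollout.
    def update_rule(self, name, action):
        self.rules[name] = action
        self.propagations += 1

    # New behavior: stage changes without touching the live policy...
    def stage(self, name, action):
        if self.draft is None:
            self.draft = dict(self.rules)
        self.draft[name] = action

    # ...then deploy the draft as a single unit that replaces the policy.
    def deploy(self):
        self.rules = self.draft
        self.draft = None
        self.propagations += 1

incremental = FirewallPolicy()
for name in ("A", "B", "C"):
    incremental.update_rule(name, "allow")

batched = FirewallPolicy()
for name in ("A", "B", "C"):
    batched.stage(name, "allow")
batched.deploy()

assert incremental.rules == batched.rules          # same end state
print(incremental.propagations, batched.propagations)  # 3 vs. 1
```

Same final rule set either way; the batched path just triggers one rollout instead of three, and the live policy never passes through the intermediate states.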

Azure Databricks workspace network config (GA)

Two updates to how Databricks workspaces handle networking.

First, if you’re currently using a managed virtual network with your Databricks workspace, you can now move it into your own VNet using VNet injection. That gives you on-premises connectivity via ExpressRoute or peering, your own NSGs, and full control over the network topology.

Second, if you already have VNet injection set up, you can now update the VNet configuration of your existing workspace without having to recreate it. Previously, changing the VNet config meant tearing down and rebuilding.

Both of these remove friction for teams that started with managed networking and later need more control.

Azure Databricks Lakebase (GA)

Lakebase adds an operational database layer to Azure Databricks. It’s a managed PostgreSQL environment, but the data gets written directly into lakehouse storage.

The practical benefit: you get a transactional relational database for your application, and that data is instantly available for Databricks analytics without building your own ETL pipelines or transformations. No separate sync process to maintain.

If you’ve been running a separate PostgreSQL instance and writing pipelines to move data into your lakehouse for analytics, Lakebase collapses those two steps into one.
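As a toy contrast of the two architectures (plain Python, not Databricks or Postgres APIs; the lists stand in for storage layers): with separate systems, analytics only sees what a sync pipeline has copied over, while with shared lakehouse storage the transactional write itself is enough.

```python
# Toy contrast: separate OLTP database + ETL vs. shared lakehouse storage.
# Illustrative only; nothing here is a real Lakebase or Databricks API.

# Separate systems: analytics only sees what ETL has copied over.
oltp_db, lakehouse = [], []

def write_separate(row):
    oltp_db.append(row)                         # transactional write

def etl_sync():
    lakehouse.extend(oltp_db[len(lakehouse):])  # pipeline you must maintain

write_separate({"order": 1})
analytics_view_before_sync = list(lakehouse)    # still [] until ETL runs
etl_sync()

# Lakebase-style: one storage layer serves both workloads.
shared_storage = []

def write_shared(row):
    shared_storage.append(row)   # write lands directly in lakehouse storage

write_shared({"order": 1})
analytics_view = list(shared_storage)           # visible immediately, no ETL
```

The second half is the whole pitch: the sync step and its failure modes simply stop existing.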

New AI models in Azure AI Foundry

This was a busy week for model releases. Here’s what landed:

Grok 4.0 is now available, along with Grok 4.1 Fast (non-reasoning); the reasoning version is coming soon.

The Qwen 3.5 medium model series is available in Foundry in three sizes. Its strength is image-and-text input to text output.

GPT-5.3 Chat brings more accurate safety filtering. That doesn’t mean relaxed safety; it means fewer false positives. You won’t get blocked as often for harmless requests. It also combines trained knowledge and web information more effectively, and follows instructions better. OpenAI described it as “more accurate, less cringe.”

GPT-5.4 is available in both Foundry and GitHub Copilot. This one is built for long multi-turn reasoning sessions that can run for hours. It includes integrated computer use (for interacting with things that don’t have an API), better tool invocation, artifact generation (spreadsheets, presentations, documents), all the code generation from GPT-5.3 Codex, a 1 million token context window, and 128K output tokens.

Phi-4-Reasoning-Vision is a 15 billion parameter model with high-resolution visual perception. Developers can toggle reasoning on or off depending on whether they need accuracy or low latency. It’s good at interpreting diagrams, documents, charts, and tables, and could work as the vision layer for a computer-use agent that needs to understand graphical interfaces.

A note on model strategy: there is no single "best" model anymore. Different models suit different purposes: long reasoning, quick responses, image understanding, code generation. As an organization, build your strategy around integration, governance, data readiness, and adoption, not around picking one winner. The models will keep changing; your ability to use them effectively is what matters.
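One concrete way to keep a strategy model-agnostic is to route by task type rather than hard-coding a model into your application. A minimal sketch (plain Python; the model names come from this week's list, but the routing table itself is my own illustration, not an Azure AI Foundry API):

```python
# Minimal task-based model router. The mapping is illustrative and meant
# to be swapped out as models change; nothing here is a real Azure API.

ROUTING_TABLE = {
    "long_reasoning": "gpt-5.4",          # multi-hour agent sessions
    "chat": "gpt-5.3-chat",               # general assistant traffic
    "vision": "phi-4-reasoning-vision",   # diagrams, documents, charts
    "fast": "grok-4.1-fast",              # low-latency, non-reasoning
}

def pick_model(task: str) -> str:
    # Fall back to the general chat model for unknown task types.
    return ROUTING_TABLE.get(task, ROUTING_TABLE["chat"])

assert pick_model("vision") == "phi-4-reasoning-vision"
assert pick_model("unknown") == "gpt-5.3-chat"
```

With this shape, swapping next month's model in is a configuration change, not an architecture change.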

Azure Arc Gateway (GA)

Azure Arc extends the Azure control plane to operating systems and Kubernetes environments outside of Azure. One persistent problem: the number of network endpoints that needed to be reachable kept growing with each Arc-enabled service you added. It could scale past 100 endpoints, which made firewall and proxy configuration a real headache.

Arc Gateway reduces that to 7 endpoints. That’s it.

Before Arc Gateway:
  Arc Agent ──→ 100+ endpoints

With Arc Gateway:
  Arc Agent
    └── Arc Proxy
          └── Enterprise Proxy
                └── Arc Gateway (Azure)
                      └── All services

The gateway is an Azure-based resource. On your Arc-managed machines, the connected machine agent now includes an Arc Proxy that routes through your enterprise proxy to the gateway. Instead of opening firewall rules for every individual service endpoint, you point everything at the gateway.

If the endpoint sprawl has been blocking your Arc adoption, this removes that barrier.
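The firewall impact can be sketched as a toy allowlist comparison (plain Python; the endpoint names and counts per service are invented for illustration, only the 100+ vs. 7 shape comes from the announcement):

```python
# Toy comparison of firewall allowlists before and after Arc Gateway.
# Endpoint names are made up; only the rough counts mirror the announcement.

# Before: one allowlist entry per Arc-enabled service endpoint, and the
# list grows with every service you onboard.
per_service_endpoints = {f"service{i}.azure.example" for i in range(120)}

# After: the Arc Proxy on each machine sends all Arc traffic through the
# enterprise proxy to the gateway, so only gateway endpoints need opening.
gateway_endpoints = {f"gateway{i}.azure.example" for i in range(7)}

def firewall_rules_needed(endpoints):
    return len(endpoints)

print(firewall_rules_needed(per_service_endpoints))  # 120
print(firewall_rules_needed(gateway_endpoints))      # 7
```

The set of Azure services you use no longer drives the firewall configuration; the gateway does.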

Azure Policy faster enforcement

Azure Policy changes now propagate within 5 minutes. The old workaround of logging out and back in to force faster policy evaluation is being retired at the end of April 2026 since it’s no longer needed.

Not a flashy update, but if you’ve been frustrated by the delay between making a policy change and seeing it take effect, this is a welcome fix.

Final thoughts

The confidential VMs going GA completes a story that’s been building for a while. Data encrypted at rest, in transit, and now in use, with no application changes needed. The v6 SKUs are performant enough that the “confidential” part doesn’t have to mean “slower.”

Arc Gateway solving the endpoint problem is the kind of pragmatic fix that actually unblocks adoption. Going from 100+ endpoints to 7 changes the conversation from “we can’t open that many firewall rules” to “okay, let’s set it up.”

On the AI model front, I’d echo John’s advice: don’t anchor your strategy to any specific model. The pace of releases makes that impossible anyway. Invest in the integration layer, the governance, the evaluation, and the adoption. That’s the durable part.


Sources

  1. John Savill, “Azure Update - 6th March 2026,” YouTube, https://www.youtube.com/watch?v=VuXRLdt5dIc