From Ops to C-Level – AI Adoption in Practice – Festive Tech Calendar 2025

Introduction

It’s great to be back for another year of the Festive Tech Calendar. It’s always an excellent event across the month of December, and looking through the schedule, this is an amazing year of content. Kudos to the team for all the work they put in, and thanks for having me again!

This year, I’m continuing my AI adoption theme, but expanding it slightly beyond the scope of Azure and taking a bit of a ‘business via tech’ approach. This is for a couple of reasons: first, I think AI has changed in the last 12 months, and second, I think the way everyone approaches it has too. This might seem simple, but in practice, for an IT decision maker, it really isn’t. So I thought I would share some learnings from where I have seen things work well, and not so well!

AI Opinion

This of course all starts with AI. If you don’t know what that is, I would be slightly worried as to how you found yourself reading this… but AI isn’t just one thing anymore, and to be honest, it never was to begin with.

The GPT models have become for everyday users what ‘Google’ is for search – ‘just ask ChatGPT’ etc. This is great for awareness, but also no doubt frustrating for the teams creating Copilot, Gemini etc. Even more frustrating for the teams responsible for the original Azure AI Services (Cognitive Services – remember that!?). That brings me to my next point, and one of the key challenges with AI adoption I have seen – AI isn’t just generative, and a perception gap opens up when people assume it is.

AI ‘Types’ I Think are Relevant

For most users, working with an AI assistant of some sort is the majority of the AI interaction they will have: general and/or company-based knowledge helping them find or create content that assists with work. I genuinely haven’t seen any work setting where stupid images etc. are being created, so don’t believe the hype or prevalence of that based on social media.

Next, a subset of your user base may use more tuned AI assistance. Think GitHub Copilot as the example here: chat-based, powerful, and specific AI assistance. This is often seen as a more impactful ‘upgrade’ to a user’s or team’s skill set, but it is much less clear in terms of adoption requirements.

Then we move into the one-to-many AI – agents. A huge range of options here, and with the capabilities in Azure AI Foundry, a near endless set of use cases. From experience, I’ve seen these used to take on one information source, or one task. These work well, roll out rapidly, and require little to no real guidance. I have also seen attempts at multi-agent frameworks/workflows with less success, and finally very few agents that take action without explicit supervision. Take an over-simplified example – “Write a reminder email for the upcoming meeting and send it to all attendees, cc my boss” – you need serious confidence in that agent to handle information as you need it to.

Finally, there has been large-scale adoption of AI within systems or existing applications. However, don’t mistake large-scale adoption for actual success of AI. This is easily the example where I have seen the most ‘AI washing’ – reinventing some existing feature by calling it AI. This part really bugs me, as I believe it is driving up the costs of already expensive systems, while also interrupting roadmaps of upgrades, fixes, and new features that could have more impact.

Ok – let’s get into some points around adoption in practice. Ultimately, I’ve seen it boil down to a balance of use case vs blockers. If you can outdo the list of blockers, and the use case is valid – success. I have drastically simplified that, but let’s start with the blockers, so we can get to the meat of the issue.

Blockers

When the C-level looks at AI adoption, they think of productivity return on investment, people management, time to market, and competition. This is no different from any other type of adoption, but I think AI carries a much tougher perception problem. In a presentation back in the summer, and in every AI discussion since, I have named this the ‘surely gap’.

Without exception, this is the number one issue derailing AI adoption. If you cannot address the perception issue early, you are doomed to have a project that will be viewed as not meeting expectations, disappointing, or even a failure – even though AI might be delivering 50-80% increases in productivity, output, or accuracy. The second you hear “Surely AI can solve this, surely AI can create that, surely AI can remove that team” – you are, surely, in trouble.

Flip the view to the ops team, or IT people, and I see two equal-priority issues:

  • Bad data projects – “I’d love to use AI, but our data isn’t ready” – this is either a security issue, a compliance issue, or both. Often it can be as simple as the permissions and structure of data, commonly an issue in any SharePoint Online environment that has been around a while. Plenty of simple work can address this, but the fear of sensitive data being exposed will put the brakes on any AI work. A positive swing here is that perhaps you can now get that data project budget approved on the back of AI, but it’s still a significant challenge.
  • Sprawling use cases – this is causing more of a compliance and regulatory issue, with no real resolution I have seen, only mitigation via policy. Take a system that can do five things, but your users only need two of them. So you risk-assess and secure those two. However, if you can’t disable the other three features, users can simply use them if they wish. And it might not be as simple as features; it becomes more complex with generative AI. I expect changes in monitoring, analytics, and feature decoupling to come as the EU AI Act takes hold.

Lessons Learned

The first challenge with any blocker is knowing about it. Once you know about a problem, you can start to formulate a plan to solve it. And with the blockers I’ve outlined, you can probably already guess some solutions.

First and most important in my experience is dealing with the perception issue. I view AI as the least ‘IT’ project a company can take on at present. Something like Copilot has little to no real IT or Ops work to configure it. License the user – milestone complete. But if an Ops team is given this as a project, it can miss the real beginning and end of the project – perception and adoption.

Address the perception up front – why do we need AI, what will it do, what does good look like? Work backwards, and pick use cases that have simple, measurable outcomes.

Plan the adoption the second you have defined use cases – targeted users, a timeframe, and a cadence of revisits. Most AI usage requires habit forming of some sort, and adoption plans need to promote and push that.

In terms of Ops challenges, the most important lesson I have learned is to get the team on board, and then get them out of the way. AI has the worst risk of shadow IT and data leakage I have ever seen. Users will want to try and use it. Give them something early, give them something decent, and give them guidance. Then and only then – block everything else you can.

My Wishlist

This is tricky, but maybe not too complex. Instead of a list, I think I would push for one change and one change only – greater awareness or understanding of how to align a use case with a capability. Nobody thinks email is the entire answer to their comms requirement, and they aren’t disappointed when it can’t do instant messaging or video calling. I know that isn’t necessarily a fair comparison, but if we can get AI understanding closer to that, I think it will greatly improve not only the adoption rates, but the success of AI projects.

I have another point on cost, and probably several points on environmental impact, but they are at least a blog post each. Perhaps something on that in Azure Spring Clean 2026…

In Closing

To close, I will distil my opinion into a simple list:

  • Speak to your stakeholders, gather opinions and use cases. Identify enablers and detractors, plan for both.
  • Pick your use cases; always start with the simple ones that have measurable, confirmed outcomes.
  • Address Ops concerns, and get the team on board for rollout. Create your plan for enablement and adoption.
  • Meet your stakeholders again, and get the use case and outcome crystal clear. Leave no room for the ‘surely gap’.
  • Roll out and continuously adopt. Revisit usage and the use case. Evolve the plan as the use case does.

Joepilot or Copilot? Networking edition | Azure Back to School 2025

Introduction

Firstly, it’s great to be featured for Azure Back to School in 2025. It is an excellent event every year, and the people involved run it brilliantly. Thank you once again for having me!

I have posted about Copilot in Azure previously, including a network-focussed post for this event last year. After last year’s event, Microsoft announced networking-specific features that really caught my attention. And finally, Copilot in Azure hit GA earlier this year. All of this together made me think I should write a follow-up and try to get to an updated answer to the question – Joepilot or Copilot?

So – what’s different to last year? At a basic level, you should see improvements essentially everywhere: performance of responses, accuracy of output, capability expansion, and just general usability. Oh, and also, it’s still ‘free’! However, that’s all a bit broad, so I decided to focus on three aspects that Copilot promises to deliver – Specialist, Architect, Engineer.


This is how Microsoft frame this:

Think of Copilot as an all-encompassing AI-Powered Azure Networking Assistant. It acts as:

  • Your Cloud Networking Specialist by quickly answering questions about Azure networking services, and providing product guidance and configuration suggestions.
  • Your Cloud Network Architect by helping you select the right network services, architectures, and patterns to connect, secure, and scale your workloads in Azure.
  • Your Cloud Network Engineer by helping you diagnose and troubleshoot network connectivity issues with step-by-step guidance.

The Specialist – Deep Knowledge on Demand

  • What I Expected:
    • Copilot as a subject matter expert: quick answers, best practices, and troubleshooting tips.
  • What I Tested:
    • Several real-world scenarios: e.g. configuring VNet peering, diagnosing NSG rules, or BGP route issues.
  • What I Found:
    • Strengths: A big jump in speed, accuracy, and contextual awareness.
    • Weaknesses: gaps in nuanced scenarios, but that’s me being pushy/specific (still can’t answer my effective routes on a gateway subnet question 🙂), and the odd hallucination.
  • Verdict:
    • Does it replace a human specialist, or just accelerate one? I think it now truly accelerates a specialist. And if you are not a specialist, I think you finally have a legitimate assistant to answer questions, or even sanity check approaches with.

I’ve screen-grabbed my favourite example below: VNet peering with a twist. I like that it asked for context on the peering type. It does get the scenario correct in terms of advice at line 4, but I would have preferred it a bit earlier, as line 1 is potentially confusing otherwise. It did just make up the different-regions bit – a classic hallucination. Follow-on prompts were both useful and accurate.
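As a point of reference – my own illustration, not Copilot’s output – one side of a VNet peering in Bicep looks something like the below (names are hypothetical). The same resource shape covers the cross-region (global) case, since the remote VNet is referenced by ID regardless of region.

// One side of a VNet peering - my own sketch, not Copilot output.
// Names are hypothetical. Peering is directional, so the spoke needs a
// matching peering resource pointing back at the hub.
resource hubVnet 'Microsoft.Network/virtualNetworks@2020-06-01' existing = {
  name: 'hub-vnet'
}

resource hubToSpoke 'Microsoft.Network/virtualNetworks/virtualNetworkPeerings@2020-06-01' = {
  parent: hubVnet
  name: 'peer-hub-to-spoke'
  properties: {
    remoteVirtualNetwork: {
      // Works for same-region and cross-region (global) peering alike
      id: resourceId('Microsoft.Network/virtualNetworks', 'spoke-vnet')
    }
    allowVirtualNetworkAccess: true // permit traffic between the peered VNets
    allowForwardedTraffic: false // no transit traffic from the spoke
    allowGatewayTransit: false // set true on the hub side if spokes use its gateway
    useRemoteGateways: false
  }
}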

The Architect – Designing the Big Picture

  • What I Expected:
    • Copilot helping with architecture scenarios, design patterns, and compliance considerations.
  • What I Tested:
    • Example: designing a hub-and-spoke network with security and governance baked in. Complex routing leads to service/resource recommendations. Design vs service scenarios.
  • What I Found:
    • Pros: quick generation of templates, reference architectures.
    • Cons: lacks business context, sometimes over-simplifies.
  • Verdict:
    • Can Copilot think like an architect, or is it just a pattern matcher? I think it’s both. It is accurate enough that you can sanity check your own ideas, or it can suggest ideas for areas you are not familiar with. However, it still lacks the deep nuanced guidance an expert offers.

Again, I have screen-grabbed my best example below. I felt this was a good test scenario: a unique requirement, such as branch-to-branch, and specific additions, like secure internet access. It gets this nearly perfect; however, item 3 needs refinement relative to Virtual WAN, or it’s simply wrong! I like the addition of an optional ExpressRoute circuit, and the control and governance of NSGs and Network Watcher are a welcome inclusion. Being transparent, I had mixed success with direct follow-ups on routing design specifics.

I did have a follow-up I was impressed with though, and it’s also a nice segue into the next section! For the sake of article length and readability, I am not going to paste all of the code, but it worked and was accurate for what I asked for – an example build. Think about just how handy that is! It generated it in less than 60 seconds.
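Without reproducing Copilot’s output (deliberately omitted above), here is my own minimal sketch of the starting point for a build like that – assuming, hypothetically, a Standard Virtual WAN with a single hub and branch-to-branch enabled:

// My own minimal sketch of the starting point - not the generated code.
// Names and address ranges are hypothetical.
param location string = resourceGroup().location

resource vwan 'Microsoft.Network/virtualWans@2020-06-01' = {
  name: 'vwan-demo'
  location: location
  properties: {
    type: 'Standard'
    allowBranchToBranchTraffic: true // the branch-to-branch requirement from the scenario
  }
}

resource hub 'Microsoft.Network/virtualHubs@2020-06-01' = {
  name: 'hub-demo'
  location: location
  properties: {
    virtualWan: {
      id: vwan.id
    }
    addressPrefix: '10.100.0.0/23' // each hub needs its own dedicated prefix
  }
}

The secure internet piece would typically layer Azure Firewall into the hub (a ‘secured hub’) on top of this.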

The Engineer – Hands on the Console

  • What I Expected:
    • Copilot writing scripts, ARM/Bicep templates, and CLI commands.
  • What I Tested:
    • Deploying a secure VNet with subnets, NSGs, and routing. Adding a VNG with specific SKU requirements.
  • What I Found:
    • Great at the basics, decent at custom tweaks, but watch for syntax quirks.
  • Verdict:
    • Does it save time or create rework? It saves a lot of time getting you past a blank page or from zero knowledge to working. However, once you start layering complexity, you enter a world of rework. Stick to what it is good at and I have no complaints.

This is where I see the most challenging task for Copilot. We move away from advice, greenfield approaches etc. into messy, complicated existing Azure environments (I would never have a messy environment, you say – I have seen enough of them to know we are all guilty here!). I started by testing some of the default prompts – “How do I create a new resource group and move resources into it with Azure CLI?” – and not only is the output good and accurate, the instructions around it are too, with advice and usage flags.

Next, a greenfield deployment of a VNet with complex/specific requirements, trying to include nuanced items that can limit you later, like the size of the GatewaySubnet.

I want to build a secure VNet with subnets, NSGs, and routing tables. NSGS and route tables can be default for now. It must include a gateway subnet capable of accommodating two VNGs, it must have 4 other equally sized subnets named - app, dev, infra, dmz. It should use the address space 10.10.155.0/23

The code here was good, but not 100% accurate, as it included no route tables (I am putting this down to my wording around default routing, as repeating this with a few tweaks did include them). I wouldn’t have made any edits myself, except for personal-preference formatting etc.
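As a reference point, this is roughly the shape of template I was expecting back – my own sketch, not Copilot’s output. One nuance worth flagging: 10.10.155.0/23 is not actually a valid /23 network address, so I’ve assumed the normalised 10.10.154.0/23, split into four /26 workload subnets plus a /27 GatewaySubnet (the commonly cited minimum for a VPN and ExpressRoute gateway to coexist):

// My own sketch of roughly what I expected back - not Copilot output.
// Assumes the normalised space 10.10.154.0/23, as 10.10.155.0/23 is not a
// valid /23 network address.
param location string = resourceGroup().location

resource defaultNsg 'Microsoft.Network/networkSecurityGroups@2020-06-01' = {
  name: 'nsg-default'
  location: location
}

resource defaultRouteTable 'Microsoft.Network/routeTables@2020-06-01' = {
  name: 'rt-default'
  location: location
}

resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: 'vnet-secure'
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        '10.10.154.0/23'
      ]
    }
    subnets: [
      {
        name: 'GatewaySubnet'
        properties: {
          // /27 is the commonly cited minimum for VPN + ExpressRoute coexistence
          addressPrefix: '10.10.155.0/27'
        }
      }
      {
        name: 'app'
        properties: {
          addressPrefix: '10.10.154.0/26'
          networkSecurityGroup: {
            id: defaultNsg.id
          }
          routeTable: {
            id: defaultRouteTable.id
          }
        }
      }
      {
        name: 'dev'
        properties: {
          addressPrefix: '10.10.154.64/26'
          networkSecurityGroup: {
            id: defaultNsg.id
          }
          routeTable: {
            id: defaultRouteTable.id
          }
        }
      }
      {
        name: 'infra'
        properties: {
          addressPrefix: '10.10.154.128/26'
          networkSecurityGroup: {
            id: defaultNsg.id
          }
          routeTable: {
            id: defaultRouteTable.id
          }
        }
      }
      {
        name: 'dmz'
        properties: {
          addressPrefix: '10.10.154.192/26'
          networkSecurityGroup: {
            id: defaultNsg.id
          }
          routeTable: {
            id: defaultRouteTable.id
          }
        }
      }
    ]
  }
}

With that baseline in mind, I moved on to something more complex: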

Update this to include a VNG to support a resilient VPN with a bandwidth of at least 1Gbps

This time I will include the code… My hope here was that it would simply add the required aspects to include a VNG of the correct SKU within the existing code of a single VNet. However, for some reason, I think it took ‘resilient’ to mean ‘duplicate’! I now have code for two VNets, it no longer adheres to my address space requirements, and it has connected the two VNets via VPN! (After the code, I’ve sketched what I was actually after.)

// The code was generated by AI which may result in unintended or inaccurate code. A human must validate that it accomplishes the desired goal before deploying the template.

@description('The shared key used to establish connection between the two vNet Gateways.')
@secure()
param sharedKey string

@description('The SKU for the VPN Gateway. Must support at least 1Gbps bandwidth.')
@allowed([
  'VpnGw2'
  'VpnGw3'
])
param gatewaySku string = 'VpnGw2'

@description('Location of the resources')
param location string = resourceGroup().location

var vnet1cfg = {
  name: 'vNet1-${location}'
  addressSpacePrefix: '10.0.0.0/23'
  subnetName: 'subnet1'
  subnetPrefix: '10.0.0.0/24'
  gatewayName: 'vNet1-Gateway'
  gatewaySubnetPrefix: '10.0.1.224/27'
  gatewayPublicIPName: 'gw1pip${uniqueString(resourceGroup().id)}'
  connectionName: 'vNet1-to-vNet2'
  asn: 65010
}
var vnet2cfg = {
  name: 'vnet2-${location}'
  addressSpacePrefix: '10.0.2.0/23'
  subnetName: 'subnet1'
  subnetPrefix: '10.0.2.0/24'
  gatewayName: 'vnet2-Gateway'
  gatewaySubnetPrefix: '10.0.3.224/27'
  gatewayPublicIPName: 'gw2pip${uniqueString(resourceGroup().id)}'
  connectionName: 'vnet2-to-vnet1'
  asn: 65050
}

resource vnet1 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: vnet1cfg.name
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        vnet1cfg.addressSpacePrefix
      ]
    }
    subnets: [
      {
        name: vnet1cfg.subnetName
        properties: {
          addressPrefix: vnet1cfg.subnetPrefix
        }
      }
      {
        name: 'GatewaySubnet'
        properties: {
          addressPrefix: vnet1cfg.gatewaySubnetPrefix
        }
      }
    ]
  }
}

resource vnet2 'Microsoft.Network/virtualNetworks@2020-06-01' = {
  name: vnet2cfg.name
  location: location
  properties: {
    addressSpace: {
      addressPrefixes: [
        vnet2cfg.addressSpacePrefix
      ]
    }
    subnets: [
      {
        name: vnet2cfg.subnetName
        properties: {
          addressPrefix: vnet2cfg.subnetPrefix
        }
      }
      {
        name: 'GatewaySubnet'
        properties: {
          addressPrefix: vnet2cfg.gatewaySubnetPrefix
        }
      }
    ]
  }
}

resource gw1pip 'Microsoft.Network/publicIPAddresses@2020-06-01' = {
  name: vnet1cfg.gatewayPublicIPName
  location: location
  properties: {
    publicIPAllocationMethod: 'Dynamic'
  }
}

resource gw2pip 'Microsoft.Network/publicIPAddresses@2020-06-01' = {
  name: vnet2cfg.gatewayPublicIPName
  location: location
  properties: {
    publicIPAllocationMethod: 'Dynamic'
  }
}

resource vnet1Gateway 'Microsoft.Network/virtualNetworkGateways@2020-06-01' = {
  name: vnet1cfg.gatewayName
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'vnet1GatewayConfig'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: resourceId('Microsoft.Network/virtualNetworks/subnets', vnet1.name, 'GatewaySubnet')
          }
          publicIPAddress: {
            id: gw1pip.id
          }
        }
      }
    ]
    gatewayType: 'Vpn'
    sku: {
      name: gatewaySku
      tier: gatewaySku
    }
    vpnType: 'RouteBased'
    enableBgp: true
    bgpSettings: {
      asn: vnet1cfg.asn
    }
  }
}

resource vnet2Gateway 'Microsoft.Network/virtualNetworkGateways@2020-06-01' = {
  name: vnet2cfg.gatewayName
  location: location
  properties: {
    ipConfigurations: [
      {
        name: 'vNet2GatewayConfig'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: resourceId('Microsoft.Network/virtualNetworks/subnets', vnet2.name, 'GatewaySubnet')
          }
          publicIPAddress: {
            id: gw2pip.id
          }
        }
      }
    ]
    gatewayType: 'Vpn'
    sku: {
      name: gatewaySku
      tier: gatewaySku
    }
    vpnType: 'RouteBased'
    enableBgp: true
    bgpSettings: {
      asn: vnet2cfg.asn
    }
  }
}

resource vpn1to2Connection 'Microsoft.Network/connections@2020-06-01' = {
  name: vnet1cfg.connectionName
  location: location
  properties: {
    virtualNetworkGateway1: {
      id: vnet1Gateway.id
      properties: {}
    }
    virtualNetworkGateway2: {
      id: vnet2Gateway.id
      properties: {}
    }
    connectionType: 'Vnet2Vnet'
    routingWeight: 3
    sharedKey: sharedKey
    enableBgp: true
  }
}

resource vpn2to1Connection 'Microsoft.Network/connections@2020-06-01' = {
  name: vnet2cfg.connectionName
  location: location
  properties: {
    virtualNetworkGateway1: {
      id: vnet2Gateway.id
      properties: {}
    }
    virtualNetworkGateway2: {
      id: vnet1Gateway.id
      properties: {}
    }
    connectionType: 'Vnet2Vnet'
    routingWeight: 3
    sharedKey: sharedKey
    enableBgp: true
  }
}
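For contrast, here is a minimal sketch of what I was actually after – my own illustration, not Copilot output – a single active-active VPN gateway added to the existing VNet, sticking with the VpnGw2 SKU that Copilot’s own @allowed list picked for the 1Gbps requirement. Active-active means two gateway instances with two public IPs, which is where the resilience comes from:

// My own sketch of the intended change - one resilient (active-active) VPN
// gateway in the existing VNet, rather than a second VNet. Names hypothetical.
param location string = resourceGroup().location

// The original VNet from the first prompt, with its GatewaySubnet already in place
resource vnet 'Microsoft.Network/virtualNetworks@2020-06-01' existing = {
  name: 'vnet-secure'
}

resource gwPip1 'Microsoft.Network/publicIPAddresses@2020-06-01' = {
  name: 'gw-pip-1'
  location: location
  properties: {
    publicIPAllocationMethod: 'Dynamic'
  }
}

resource gwPip2 'Microsoft.Network/publicIPAddresses@2020-06-01' = {
  name: 'gw-pip-2'
  location: location
  properties: {
    publicIPAllocationMethod: 'Dynamic'
  }
}

resource vpnGateway 'Microsoft.Network/virtualNetworkGateways@2020-06-01' = {
  name: 'vng-vpn'
  location: location
  properties: {
    gatewayType: 'Vpn'
    vpnType: 'RouteBased'
    activeActive: true // two instances for resilience
    sku: {
      name: 'VpnGw2' // meets the 'at least 1Gbps' ask, per Copilot's own @allowed list
      tier: 'VpnGw2'
    }
    ipConfigurations: [
      {
        name: 'ipcfg-1'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: resourceId('Microsoft.Network/virtualNetworks/subnets', vnet.name, 'GatewaySubnet')
          }
          publicIPAddress: {
            id: gwPip1.id
          }
        }
      }
      {
        name: 'ipcfg-2'
        properties: {
          privateIPAllocationMethod: 'Dynamic'
          subnet: {
            id: resourceId('Microsoft.Network/virtualNetworks/subnets', vnet.name, 'GatewaySubnet')
          }
          publicIPAddress: {
            id: gwPip2.id
          }
        }
      }
    ]
  }
}

Same ask, far smaller blast radius – no second VNet, no new address space, and no VNet-to-VNet connection.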

The Reality Check – Am I Smarter Than Copilot?

  • Across all three roles, the findings land in the same place.
  • Copilot shines on speed, breadth, and getting you started; human expertise remains irreplaceable for nuance and context.
  • The “partnership” model holds: Copilot is an accelerator, not a replacement.

This is one of those articles that I really enjoy writing. You sit down to research, plan, and test with a genuine spark of interest but no clear outcome. To start, I have to say I am impressed with the progress. It’s a very different tool to last year’s. In my view, it is definitively capable across all three roles; I don’t think that can be debated any longer.

The improvements in performance and accuracy, the addition of skills, and potentially the increased user familiarity all combine to make an impressive assistant for your day-to-day work in Azure. And as it’s ‘free’, why would you not make use of the quick wins and detail it can offer?

However, with my head held high I can confidently say – I am still better than it! Having said that, I now find myself thinking that maybe the more important question, and perhaps the one that matters, is – am I better with it or without it? Perhaps that’s a follow-up post…

wedoAI 2025

Our AI-focussed event is back for another year. We’ve had a 20% increase in sessions, and an excellent view of the breadth of impact that AI can offer across the Microsoft Cloud. We’re very proud of the collection of content, and think you will be too!

All of the content is fully available on the event site – https://wedoai.ie

Azure Spring Clean 2025

And that’s it for another year! Azure Spring Clean finished yesterday, with over 50 contributors across the week.

We had articles ranging from landing zone acceleration, to AI agents, to migration patterns, all the way to cost management.

If you haven’t had a chance, everything will remain live on the site to browse through.

Thank you again to our contributors and participants!

Festive Tech Calendar: Adopt AI like a PRO with Azure Essentials

Another year, and another fantastic Festive Tech Calendar. While it wasn’t the first event I participated in, I do think it is my longest-running annual event. I have been a fan since its inception and am delighted to see it continue. This year, the team are raising funds for Beatson Cancer Charity. Donations are appreciated via the Just Giving page.

Now, this post is all about Azure AI adoption via the new offering of Azure Essentials. So, we will start off by explaining what that is and why it matters. Over the years, Microsoft has introduced new ways of doing things, new approaches or methods; sometimes these have been simple renames, and sometimes they have been a completely different vision of Azure. Often, they can be confusing regardless. This post aims to help you understand Azure Essentials better, using the ‘tech of the moment’, Azure AI.

So – let’s get started. What exactly is Azure Essentials? As we’re working with AI, let’s set the scene using Copilot…

Copilot for M365 in Teams (please don’t @ me about the structure of that name, I cannot keep up with how to reference Copilot!) was helpful:

Copilot in Azure…not so much:

What you need to take away at this stage is that, rather than being an entirely new thing, it consolidates existing good work so that consuming it is simpler and more refined. In theory this makes sense; however, we all know the implementation of these things can be very tricky.

With this in mind, how to use or approach Azure Essentials shifts a bit. The first point that struck me is that it is most useful for people new to Azure, though that is not to say it isn’t useful for the experienced. We make a lot of assumptions that newcomers will know about and make use of offerings like CAF and WAF, or will reference the Architecture Center for design guidance, when that is likely not the case.

Centralising core guidance as Azure Essentials is a great idea in my opinion. However, it hasn’t just centralised guidance. I should disclose at this point that I work for a Microsoft Partner: Essentials also includes recommendations for finding a partner, leveraging funding programs, which products are useful, and customer testimonials. This is nice for companies like mine as a marketing/contact channel, but I am not sure I would define it as “essential”.

What is essential, though, is how it frames guidance and aligns customers to the right approach. The site is a touch confusing on this point. The new resource kit is right at the top – it’s the first link on the page – but scenario or use case guidance is further down and brings you elsewhere. Sticking with our original idea of AI adoption, there is a use case listed, and it brings you to an Azure Architecture blog from July – this is not what we want…

Whereas if we open the Resource Kit and check its contents, we get a ‘common scenario’ with click-through links.

Now, before we dig in there, one item I noted when researching this was that some messaging implies, or potentially confuses, elements of this with changes to, or improvements upon, the Cloud Adoption Framework (CAF). In my opinion, Azure Essentials doesn’t change CAF – it’s not even listed on the What’s New page. However, it is an improvement to how people may be guided to CAF. And anything that brings more people to CAF and allows for efficient, well-governed deployments is a positive to me!

So, what exactly does Essentials recommend as its ideal path for AI adoption? Six steps and some learning material. I am delighted to see the inclusion of learning material; it’s becoming more and more important as the rate of change increases. Let’s have a look at the six steps:

  1. Assess your Azure AI readiness
  2. Explore Azure AI pricing
  3. Prepare your AI environment
  4. Design your AI workloads
  5. Develop Well-Architected AI workloads
  6. Deploy, Manage, and operate your AI workloads

At first glance this looks like a good set to me. I don’t think I would have ranked pricing as high in the sequence, but perhaps it’s important to get that out of the way early! 🙂

The first ask here is to take an assessment. The Azure AI readiness assessment focusses on core areas of adoption strategy within your business. It can be a lengthy process – it notes 45 minutes – but if you choose all of the areas available, it will give you 100+ pages of questions to complete to attain your score. Anyone familiar with Azure Well-Architected Reviews, or the old Governance Assessment, will see the immediate similarities here and understand the usefulness of having something that asks people to think about things in the correct way and offers a score to guide expectations.

Next, it’s pricing. Again, this is tricky for me. To be remotely accurate with pricing, I think you need some form of design to dictate resources, which then leads to a price. You are then happy, or shocked, and rework your design. Rinse and repeat until you get where you need to be. Unfortunately, the link in the resource kit lands you on the default pricing page for Azure, nothing AI-specific, so you really are starting at the bottom. Some more AI-specific guidance here would be a great inclusion for the next version. For example, this placement link brings you to the menu item for AI pricing on this page – a small but helpful direction.

Next, we’re onto preparation. There is a good note on a Landing Zone, but as this is Azure Essentials, I would have expected it to link through to some guidance on Landing Zones. We then get two links to design architectures for Azure AI in the Architecture Center. This could be more confusing than helpful, and it’s not the preparation guidance I would expect. This is Azure Essentials, and here is the first AI architecture Visio you see…

My concern here is complexity. I know people may have more interest in using OpenAI models and the whole chat functionality, but I would have gone a different route here: most likely document-based, something that uses one of the more mature services, like Document Intelligence, with a simpler architecture for guidance. Make it easier to see the objective rather than the mountain presented above. I don’t think there is actually a perfect set of links here – too many variables, and too much depends on where the user’s perception of AI is. It will be very interesting to see how this progresses, and it may always require further expertise and information to be properly impactful.

Next, design – one of my favourite areas. No other aspect of Azure excites me like creating the solution design. With a vast platform, you start with everything and toil away until you have what works for what is needed. Here we get a note to choose from reference architectures – good point, but which ones? No links are provided; having said that, there is no single link that works here, as the reference architectures are spread out amongst the different products. Next, we get a great link to the AI architecture design overview page. I think I might actually have switched steps 3 and 4 here. Doing this first, I believe, gives people a much better starting point to learn from, and then to understand step 3 more comprehensively. Bookmark this page for your AI adoption journey – it’s like a TOC of what to read for each service/product.

The penultimate step guides us to well-architected workloads. The note is simply a note; the point is valid, but I think it should have included this link as the starting point for this step. It’s really useful and helps you quickly jump to where you need to be within the Well-Architected Framework (can anyone else just not call it WAF? Too confusing for me with Web Application Firewall). However, the included link, which focusses on Azure OpenAI, is good. It has the expected pillar guidance for Well-Architected, and a comprehensive set of accurate click-through links. I think this step is important and correctly placed too, so it flows well at this point of the resource kit.

Finally, we have a deploy and manage step. This feels like the weakest of the six. First of all, the title is repeated as the first bullet point – not great.

Then it notes we should use best practice – again, with no guidance as to what that means, or how it applies practically to deployment and management. Finally, it links to a guide page regarding responsible use of AI. Responsible use is incredibly important, and it is valid when operating AI workloads, but it is useless as the single link for this step. There is a literal AI management page in CAF already that could be used. I have waited until this step to link to this area of CAF, as it hasn’t been updated since the start of 2024, but it has a lot of detail this kit should include and, with an update, would make much more sense than some of the links included.

In conclusion, I think the kit needs some work, a revision so to speak. First, I would tweak the steps to be as follows:

  1. Assess your Azure AI readiness
  2. Develop Well-Architected AI workloads
  3. Design your AI workloads
  4. Prepare your AI environment
  5. Deploy, Manage, and operate your AI workloads
  6. Explore Azure AI pricing

Next, I would rely more heavily on CAF and the Architecture Center, with context for the links, or link to overview pages with a note to use the links within – a ‘further reading’ note or similar. I know it is meant to be Essentials, but let’s give essential guidance rather than the minimum, perhaps?

Finally, if you want to adopt AI like a pro – I think Essentials is useful as a sanity check, but you are better off investing your time in the existing material on Learn, CAF and WAF.