
Azure managed Prometheus for existing AKS clusters

  • 04/11/2024 (updated 07/01/2025)
  • by Martin Ehrnst

In the company I work for, we have multiple AKS clusters. They have different purposes and are split between environments, like dev and prod. Nothing special at all. We recently decided to enable the clusters for Prometheus and build our application dashboards using Grafana. A lot has changed since my last look at Azure Managed Prometheus. This time around I encountered a few challenges; below you’ll find a summary.

Our clusters are deployed using Terraform, and the monitoring “stack” with Bicep. Apart from a small difference in language, we have also decided that Prometheus and Grafana should exist in one region only, and that we would only split the data source between dev and production environments. The Grafana instance is the same for both.

Enable monitoring

Azure portal button to enable managed Prometheus.

The button above makes it pretty simple to enable Azure Managed Prometheus for that specific cluster – but since we want to do this using code, we need to modify our modules. And what exactly does this Configure button do? It creates a deployment which consists of a data collection rule, a data collection endpoint, and a few Prometheus recording rules. During the process it also allows you to specify an existing managed Prometheus (Azure Monitor workspace) and managed Grafana instance.

Deployments created by the automatic onboarding to Prometheus

The data collection rule and association are similar to what we already have with Log Analytics and container insights. That would mean a quick change to our existing Terraform code, adding a new collection rule. Or so I thought…

All my issues are explained in various Microsoft docs and GitHub repositories. However, piecing everything together took a bit of time.

  • With clusters in multiple regions, the data collection rule needs to exist in the same location as the Azure Monitor workspace (Prometheus), unless you want the collection endpoint to also be in that region. This means you will need to create two endpoints: one in the cluster region, and one in the monitor workspace region. I used this example as an inspiration, and this doc as a deployment reference guide.
  • The automatic onboarding process deploys the recording rules in a 1:1 relationship with the clusters. I did not want to manage the recording rules together with our clusters, and ended up creating them alongside Prometheus. By only specifying the prometheusWorkspaceId in the scope, these rules are applied to all clusters sending data to that specific workspace. An example Bicep module here. You will also find them here, but without the UX rules.
  • We did not want to keep performance metrics sent to Log Analytics. If you don’t want that either, you’ll need to modify the data collection rule by specifying the streams you want. Specifically, remove Microsoft-Perf and Microsoft-InsightsMetrics (see the Bicep sketch below this list).
Portal experience with UX rules.
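
To make the region and stream bullets concrete, here is a minimal Bicep sketch of the data collection endpoint and data collection rule, with Microsoft-Perf and Microsoft-InsightsMetrics left out. Parameter names, locations, and API versions are my assumptions; the linked example and reference guide are more complete.

param clusterName string
param clusterLocation string = 'westeurope'   // assumption: the AKS cluster region
param workspaceLocation string = 'norwayeast' // assumption: the Azure Monitor workspace region
param prometheusWorkspaceId string

// The endpoint lives in the cluster region
resource dce 'Microsoft.Insights/dataCollectionEndpoints@2022-06-01' = {
  name: 'dce-prometheus-${clusterName}'
  location: clusterLocation
  kind: 'Linux'
  properties: {}
}

// The rule lives in the same region as the Azure Monitor workspace,
// and only forwards the Prometheus stream
resource dcr 'Microsoft.Insights/dataCollectionRules@2022-06-01' = {
  name: 'dcr-prometheus-${clusterName}'
  location: workspaceLocation
  kind: 'Linux'
  properties: {
    dataCollectionEndpointId: dce.id
    dataSources: {
      prometheusForwarder: [
        {
          name: 'PrometheusDataSource'
          streams: [ 'Microsoft-PrometheusMetrics' ]
          labelIncludeFilter: {}
        }
      ]
    }
    destinations: {
      monitoringAccounts: [
        {
          name: 'MonitoringAccount1'
          accountResourceId: prometheusWorkspaceId
        }
      ]
    }
    dataFlows: [
      {
        streams: [ 'Microsoft-PrometheusMetrics' ]
        destinations: [ 'MonitoringAccount1' ]
      }
    ]
  }
}

A Microsoft.Insights/dataCollectionRuleAssociations resource then ties the rule to each cluster, just like the container insights association.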


Migrate from Azure DevOps to GitHub – what you…

  • 16/02/2024 (updated 07/01/2025)
  • by Martin Ehrnst

Developers love GitHub, and I do too! However, migrating from one platform to another is akin to orchestrating a complex dance. The rhythm of code, the choreography of pipelines, and the harmony of collaboration: all must align seamlessly. But beneath the surface lie the intricate steps: data mapping, permissions, and legacy dependencies. As the curtain rises, let us delve into the intricacies of this migration journey, where every line of code carries the weight of history, and every commit echoes the promise of a new beginning.

I’ve been a part of this kind of project before, and now I find myself in the same situation. The goal is the same, but not all solutions are identical. You will need to adjust your tasks to fit your company’s situation. But in this blog post, I will outline what you need to know when migrating from Azure DevOps to GitHub.

GitHub Organizations and GitHub Enterprise

GitHub Enterprise is probably what you’re looking for. You can get some things done with an organization only, but if you want to use certain security features, say branch policies for internal repositories, you’re forced to go the Enterprise route.

GitHub Organizations are well-suited for open-source projects, small teams, or individual developers.

  • They provide a collaborative space for managing repositories, teams, and access permissions.
  • Features include team-based access control, issue tracking, and project management.
  • Ideal for community-driven development and public repositories.

GitHub Enterprise caters to larger organizations and enterprises.

  • It offers enhanced security, scalability, and administrative controls.
  • Features include advanced auditing, single sign-on (which you want), and enterprise-grade support.
  • Perfect for companies with (any) compliance requirements and a need for robust infrastructure.

GitHub Enterprise – account types

Time to choose how accounts are managed. In DevOps you probably used your company’s existing accounts, synced with Entra ID. For GitHub you have two options.

Individual accounts using SAML: these are user accounts linked to an identity provider (IdP) using Security Assertion Markup Language (SAML). Users can sign in to GitHub Enterprise Cloud with their existing credentials through a linked identity provider, most likely Entra ID.

The other option is Enterprise managed users: here the user accounts are provisioned and managed through the IdP. Of course this is what you want, right? Full control! However, in both projects we ended up with option 1, individual accounts with SAML. What you do comes down to whether you want to favor developer experience a bit more than full control and central management.

Enterprise managed users are completely blocked from any collaboration outside your enterprise, such as creating issues on a public repo. I really hope GitHub will change this, because what we actually want is both!

Migrating repositories

Let’s delve into the more technical side of things. You want your repositories moved from DevOps to GitHub. That is pretty damn simple, possibly the easiest part of the whole project, as both use Git as the underlying technology.

If you only have a handful of repos, a simple git clone can do the job. But most likely you want to do a bit more, and if you are like me, working as a platform engineer or similar, you probably want to streamline the process and have each repository set up with some baseline settings. All this will require some scripting.
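
For a single repository, a mirror clone brings over all branches and tags. The organization, project, and repository names below are placeholders:

git clone --mirror https://dev.azure.com/Adatum/MyProject/_git/my-repo
cd my-repo.git
git push --mirror https://github.com/Adatum/my-repo.git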

Enter the GitHub CLI and the ado2gh extension. Despite a super annoying limitation requiring personal access tokens (PATs) for both DevOps and GitHub, I still think this is your best option. I spent a few hours trying to find out how to use the GH CLI with an application, but without luck. Considering this is a time-limited project, using a PAT from a service account (which will consume a license) is acceptable.

Our solution for migrating repositories is a workflow in GitHub that developers can run. Below are the PowerShell migration script and an example of the workflow.

param(
[Parameter(HelpMessage="The Azure DevOps organization.")]
[string]$adoOrg = "Adatum",
[Parameter(Mandatory=$true, HelpMessage="The Azure DevOps team project.")]
[string]$adoTeamProject,
[Parameter(Mandatory=$true, HelpMessage="The Azure DevOps repository.")]
[string]$adoRepo,
[Parameter(HelpMessage="The GitHub organization.")]
[string]$githubOrg = "Adatum",
[Parameter(HelpMessage="The GitHub repository.")]
[bool]$lockAdoRepo = $false,
[Parameter(HelpMessage="Repository owner.", Mandatory=$true)]
[string]$repoOwner
)
# Use the Azure DevOps repository name as the GitHub repository name
[string]$githubRepo = $adoRepo
# gh auth login reads the token from stdin; GH_PAT is supplied by the workflow
$env:GH_PAT | gh auth login --with-token
$repoExists = $null
$repoExists = gh repo view $githubOrg/$githubRepo
if ($null -eq $repoExists) {
# Use the custom extension to migrate the repository
try {
gh ado2gh migrate-repo --ado-org $adoOrg --ado-team-project $adoTeamProject --ado-repo $adoRepo --github-org $githubOrg --github-repo $githubRepo --target-repo-visibility 'internal'
# get default branch and set branch protection
Write-Output "Setting branch protection…"
$defaultBranch = gh repo view "${githubOrg}/${githubRepo}" —json defaultBranchRef —jq '.defaultBranchRef.name'
gh api repos/$githubOrg/$githubRepo/branches/$defaultBranch/protection —method PUT `
–H "Accept: application/vnd.github+json" `
-F "required_pull_request_reviews[required_approving_review_count]=1" `
-F "required_status_checks=null" `
-F "restrictions=null" `
-F "enforce_admins=true" `
# setting the repo admin
gh api repos/$githubOrg/$githubRepo/collaborators/$repoOwner --method PUT -F permission=admin
Write-Output "Creating environments..."
gh api repos/$githubOrg/$githubRepo/environments/production --method PUT `
-H "Accept: application/vnd.github+json" `
-F "deployment_branch_policy[protected_branches]=true" `
-F "deployment_branch_policy[custom_branch_policies]=false"
gh api repos/$githubOrg/$githubRepo/environments/dev --method PUT `
-H "Accept: application/vnd.github+json"
gh api repos/$githubOrg/$githubRepo/environments/qa --method PUT `
-H "Accept: application/vnd.github+json"
}
catch {
if ($LASTEXITCODE -eq 1) {
Write-Output "Migration failed. Aborting…"
gh ado2gh abort–migration —ado–org $adoOrg —ado–team–project $adoTeamProject —ado–repo $adoRepo —github–org $githubOrg —github–repo $githubRepo
break
}
}
if ($lockAdoRepo) {
Write-Output "Disabling Azure DevOps repository…"
gh ado2gh disable-ado–repo —ado–org $adoOrg —ado–team–project $adoTeamProject —ado–repo $adoRepo
}
} else {
Write-Output "Repository already exists. Migration skipped."
}
migrate-devops-repo.ps1
name: Migrate DevOps Repo
on:
  workflow_dispatch:
    inputs:
      adoTeamProject:
        description: 'Azure DevOps team project'
        required: true
      adoRepo:
        description: 'Azure DevOps repository'
        required: true
      lockAdoRepo:
        description: 'Lock Azure DevOps repository'
        required: false
        type: boolean
        default: false
jobs:
  migrate:
    name: Migrate repo, ${{ github.event.inputs.adoRepo }}
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
        with:
          persist-credentials: false
          ref: ${{ github.head_ref }}
      - name: Install GH CLI Extension
        env:
          GH_TOKEN: ${{ secrets.GH_PAT }} # gh refuses to run unauthenticated, even for extension install
        run: |
          gh extension install github/gh-ado2gh
      - name: Run PowerShell script
        shell: pwsh
        run: |
          ./scripts/migrate-devops-repo.ps1 -adoTeamProject "${{ github.event.inputs.adoTeamProject }}" -adoRepo "${{ github.event.inputs.adoRepo }}" -repoOwner "${{ github.triggering_actor }}" -lockAdoRepo ([System.Convert]::ToBoolean("${{ github.event.inputs.lockAdoRepo }}"))
        env:
          GH_PAT: ${{ secrets.GH_PAT }}
          ADO_PAT: ${{ secrets.ADO_PAT }}
run-repo-migration.yaml

From Azure Pipelines to GitHub workflows

Next up in your migration is CI/CD. What do you do? In our case, we have also discussed whether it’s time to ditch our deployment pipelines in favor of GitOps, using Flux or ArgoCD. All our applications run on Kubernetes (AKS), which makes this a viable option. However, it is a broader discussion, and most likely some developers will want to move and others will not. It’s reasonable to think deployment pipelines will be a part of our setup for a long time.

The question is: should you try the GitHub Actions importer, or is a refactor about time anyway? Given that the importer tool has some limitations, and you probably have wanted to make some adjustments to your existing pipelines already, I believe this project will force some refactoring anyway.

As a platform engineer, I always strive to create self-service options. For pipelines, I can create custom starter workflows. I really like this approach, as it gives the DevOps/platform team a way to streamline and scaffold the bare minimum of what’s required, and developers can adjust to their application-specific needs. The example in the image above is nothing but a slightly modified standard workflow. However, with the starter workflow we can add references to our container registries, use organization secrets, and pre-populate the connection to our Azure environment. As I mentioned above with the user accounts: we want both freedom and control!
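
As an illustration, such a starter workflow could look something like the sketch below, placed in the organization’s .github repository under workflow-templates/. The registry name and secrets are assumptions on my part:

name: Build and push container
on:
  push:
    branches: [ $default-branch ]   # starter-workflow placeholder, replaced on use
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to container registry
        uses: docker/login-action@v3
        with:
          registry: adatum.azurecr.io             # assumed registry
          username: ${{ secrets.ACR_USERNAME }}   # assumed organization secret
          password: ${{ secrets.ACR_PASSWORD }}   # assumed organization secret
      - name: Build and push image
        run: |
          docker build -t adatum.azurecr.io/${{ github.event.repository.name }}:${{ github.sha }} .
          docker push adatum.azurecr.io/${{ github.event.repository.name }}:${{ github.sha }}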

Azure Work Items and GitHub issues

Azure work items translate (almost) to GitHub issues. Work items are part of Azure Boards, and boards are roughly similar to GitHub projects. With some careful consideration and new thinking, I believe it is possible to ditch Azure Boards and work items in favor of issues with projects. If not, you can connect Azure Boards to GitHub. As you probably have noticed, I haven’t solved this yet, but I will do my best to make it happen.

The biggest difference between work items and issues is that work items are linked to the board, whereas issues are tied to one repository. After the repo migration, you will have to create a script that pulls work items from Azure DevOps and creates them as issues in the correct repository on GitHub. After that, we can re-create the boards in GitHub projects. There are a few options/scripts for doing the first task, but I believe every organization uses these features differently, so customization is needed. This solution by Josh Johanning is where I will start.
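
As a starting point, a rough sketch of that script could look like this. The WIQL query, field names, and target repository are assumptions, and Josh Johanning’s solution handles far more (comments, state, attachments):

# Sketch: copy open work items from an Azure DevOps project to GitHub issues.
# Requires the Azure CLI with the azure-devops extension, and an authenticated gh.
$adoOrgUrl  = "https://dev.azure.com/Adatum"   # assumed organization URL
$adoProject = "MyProject"                      # assumed team project
$githubRepo = "Adatum/my-repo"                 # assumed target repository

$wiql = "SELECT [System.Id] FROM WorkItems WHERE [System.TeamProject] = '$adoProject' AND [System.State] <> 'Closed'"
$workItems = az boards query --wiql $wiql --organization $adoOrgUrl --project $adoProject | ConvertFrom-Json

foreach ($item in $workItems) {
    # Fetch full details per work item, then create a matching issue
    $detail = az boards work-item show --id $item.id --organization $adoOrgUrl | ConvertFrom-Json
    $title  = $detail.fields.'System.Title'
    $body   = "Migrated from Azure DevOps work item $($item.id)`n`n$($detail.fields.'System.Description')"
    gh issue create --repo $githubRepo --title $title --body $body
}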

I will update this post when I hit a wall or find different solutions. Until then, happy migrating!


Azure Infrastructure as code – Pulumi

  • 10/12/2020 (updated 07/01/2025)
  • by Martin Ehrnst

Infrastructure as code is here to stay, and all companies work with this in a variety of ways. Recently I changed jobs, and with that comes new challenges. The team I joined is highly skilled and is responsible for a very complex and large infrastructure in Azure. A great part of this infrastructure is deployed and maintained using a tool called Pulumi.

My new role does not require me to become a developer creating apps. But I have been advocating and teaching fellow IT pros the importance of embracing developer tools and processes for our infrastructure management tasks. My knowledge around infrastructure as code is with PowerShell, Terraform, and ARM. My C# skills are very limited, although I have some experience.
Pulumi is definitely putting developers first, and I need to step up my game.

What is Pulumi

Azure Resource Manager (ARM) templates and Azure Bicep are both domain-specific languages, meaning they only work with Azure. Terraform is another popular tool (almost a standard) which also has its own language, HCL. HCL differs from ARM in that it works with more than Azure.

Create, deploy, and manage infrastructure on any cloud using familiar programming languages and tools.

Pulumi

Pulumi, on the other hand, uses general-purpose programming languages. This means you can deploy and maintain your infrastructure with ‘real programming languages’, like C#, Java, TypeScript, and Go.

How does Pulumi work

Pulumi is a declarative infrastructure as code tool, and its core engine will ‘build’ your desired infrastructure and keep track of its state.

Projects and stacks

You start with something called a project. The project folder is controlled via a Pulumi.yaml file looking something like this, where name and runtime are mandatory.

name: core-infra
runtime: dotnet
description: my very first pulumi project

After creating the project you will need to create a stack. The stack is an instance of your project. For example, staging and production of the core-infra project would be two separate stacks.
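
Creating the stacks is just a couple of commands from the project folder; the stack names here are examples:

pulumi stack init staging
pulumi stack init production
pulumi stack ls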

State management

You might be familiar with this concept already, but if not, here’s what’s what:
Pulumi keeps a snapshot of your infrastructure, referred to as ‘state’. This allows Pulumi to delete, create, and change your infrastructure components. But it also means you have to think about where you perform edits (only within the Pulumi stack/project), and where to store your state files.

By default, Pulumi stores and manages state with its online service, Pulumi Console.
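
If you would rather keep state yourself, Pulumi can log in to other backends, for example an Azure blob container. The container name below is a placeholder, and the storage account is supplied through environment variables:

# assumes AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_KEY are set
pulumi login azblob://pulumi-state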

Getting started with Pulumi for Azure

My short-term goal for self-learning Pulumi is to replicate what I demoed in Marcel Zehner’s and my live streams on Azure Resource Manager and infrastructure as code.
For Pulumi, I am using this repository.

For this walkthrough, I assume you run Windows and C#, but if you fancy any of the other options, they are documented as well.

To run Pulumi against Azure you will need to install Pulumi, log in/sign up, and install .NET Core 3.1 and the Azure CLI (if you don’t have them already). The process is documented on the getting started page.
I tried to run with .NET 5.0 without any luck, but that might be solved soon.
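
In practice, the Windows setup boils down to something like this, assuming you use Chocolatey (other install options are on the getting started page):

choco install pulumi
az login       # authenticate the Azure CLI
pulumi login   # sign in to the Pulumi Console, the default state backend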

Your next task is to create your project. In essence, you run a few commands against an empty folder. This will generate the Pulumi program files and your project metadata files. Below is my configuration:

cd C:\users\MartinEhrnst\repos\Pulumi\
mkdir 1.ResourceGroup-storageAccount
cd 1.ResourceGroup-storageAccount
pulumi new azure-csharp

After filling in your mandatory project parameters, getting-started code will be generated for you. This code creates an Azure resource group and a storage account.
I have changed this slightly to include a storage container and to change some of the default parameters. You can find my latest Pulumi code in this GitHub repo.

For those experienced with C#, you can see that Pulumi has classes for the Azure resources. And since this is C#, we can use common coding techniques, like iteration (for-each), to deploy our infrastructure.
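
As a sketch of what that can look like with the classic Azure provider used here, creating one container per name in a list (the container names are made up):

using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.Storage;

// Invoked via Deployment.RunAsync<MyStack>() in the program entry point
class MyStack : Stack
{
    public MyStack()
    {
        var resourceGroup = new ResourceGroup("resourceGroup", new ResourceGroupArgs
        {
            Location = "norwayeast",
        });

        var storageAccount = new Account("storage", new AccountArgs
        {
            ResourceGroupName = resourceGroup.Name,
            Location = resourceGroup.Location,
            AccountTier = "Standard",
            AccountReplicationType = "LRS",
        });

        // Plain C# iteration: one private blob container per name
        foreach (var containerName in new[] { "images", "videos", "documents" })
        {
            new Container(containerName, new ContainerArgs
            {
                StorageAccountName = storageAccount.Name,
                ContainerAccessType = "private",
            });
        }
    }
}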

Pulumi deployments

If I now want to deploy my infrastructure, I will need to run Pulumi, which translates this code into something Azure Resource Manager can understand. To my knowledge, Pulumi uses the Azure Resource Manager REST APIs to run the deployment.

To deploy the resources, you can follow this guide. For my environment, below are the command and output from the preview.

PS C:\Users\MartinEhrnst\repos\Pulumi\1.ResourceGroup-storageAccount> pulumi up
Previewing update (dev)

View Live:

     Type                         Name                Plan
 +   pulumi:pulumi:Stack          rg-and-storage-dev  create
 +   ├─ azure:core:ResourceGroup  resourceGroup       create
 +   ├─ azure:storage:Account     storage             create
 +   └─ azure:storage:Container   container           create
 
Resources:
    + 4 to create

Do you want to perform this update? details
+ pulumi:pulumi:Stack: (create)
    [urn=urn:pulumi:dev::rg-and-storage::pulumi:pulumi:Stack::rg-and-storage-dev]
    + azure:core/resourceGroup:ResourceGroup: (create)
        [urn=urn:pulumi:dev::rg-and-storage::azure:core/resourceGroup:ResourceGroup::resourceGroup]
        [provider=urn:pulumi:dev::rg-and-storage::pulumi:providers:azure::default_3_33_2::04da6b54-80e4-46f7-96ec-]
        location  : "norwayeast"
        name      : "rg-PulumiStorage"
    + azure:storage/account:Account: (create)
        [urn=urn:pulumi:dev::rg-and-storage::azure:storage/account:Account::storage]
        [provider=urn:pulumi:dev::rg-and-storage::pulumi:providers:azure::default_3_33_2::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
        accountKind           : "StorageV2"
        accountReplicationType: "LRS"
        accountTier           : "Standard"
        allowBlobPublicAccess : false
        enableHttpsTrafficOnly: true
        isHnsEnabled          : false
        location              : output<string>
        minTlsVersion         : "TLS1_0"
        name                  : "storage2966fa9"
        resourceGroupName     : "rg-PulumiStorage"
    + azure:storage/container:Container: (create)
        [urn=urn:pulumi:dev::rg-and-storage::azure:storage/container:Container::container]
        [provider=urn:pulumi:dev::rg-and-storage::pulumi:providers:azure::default_3_33_2::04da6b54-80e4-46f7-96ec-b56ff0331ba9]
        containerAccessType: "private"
        name               : "images"
        storageAccountName : "storageab46f04"

In Azure, I can now see that the storage account and resource group are created. But I cannot find this under deployments. I suspect this has to do with how Pulumi interacts with Azure Resource Manager. This might not be an issue for you, but if you rely on the deployment plane, you should give it some thought.

Should you use Pulumi for Azure?

Given my very limited knowledge of the product, that is hard for me to answer. But there are things you should consider.
As I said, I have advocated for a few years about the ‘modern IT pro’, meaning we need to adopt and use more developer-oriented software and processes, like Git for example.

By using Pulumi you are not only adopting processes, you also assume your team knows C# or any of the other supported languages. If your team consists of IT pros who are just beginning to explore the dev side of the DevOps circle, Pulumi will give you some rough weeks ahead.

On the other hand, if your team is developer-heavy and looking into the operations side, Pulumi might be your best choice. As a developer, it must seem alluring to be able to provision infrastructure together with your application code.
However, responsibility for correct configuration, governance, and security is still the most important thing for your infrastructure. If this can be done with the same team and codebase, you can definitely consider using Pulumi.

Pulumi ARM template converter

A tool to convert ARM templates to Pulumi already exists. During my initial testing, I had success converting less complex templates, but when I tried to convert a nested template with a copy loop, the tool failed.

I suggest you try it out with your own templates, and since it’s open source, you could always try to improve it yourself. If not, the community will at some point.
