Azure Bicep

Speaking at Nordic Infrastructure Conference

  • 18/05/2022
  • by Martin Ehrnst

Nordic Infrastructure Conference is back! This is NIC's tenth anniversary, and I am glad to say I am once again able to speak at this conference.

This year I have one session on Azure Bicep, where I will go through (almost) everything you need to know in order to be productive, plus some bonus tricks and real-world scenarios from working with Bicep.
As always with NIC, there are fewer slides and more demos!

Azure

Track changes to Azure resources

  • 23/02/2022
  • by Martin Ehrnst

Changes to your Azure resources are quite common, and they have been difficult to identify. Until now. Microsoft recently released (in preview) the ability to detect change events using Azure Resource Graph, much like the change tracking solution for Azure VMs. This means you no longer have to decipher the administrative events in your resources' activity logs.

How do resource changes work in Resource Graph?

Resource changes are stored in Resource Graph in a new table called resourcechanges. This table is populated from the activity log and removes the complexity we previously had with matching correlationId, changeId, and so on.

Note:

Resource changes only reflect data in the Resources table in resource graph
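
To get a feel for what the table contains, you can query it with the Resource Graph module for Azure PowerShell. Below is a minimal sketch, not from the original post, assuming the Az.ResourceGraph module is installed; the property names follow the documented resourcechanges schema.

# List changes recorded during the last day
$query = @"
resourcechanges
| extend changeTime = todatetime(properties.changeAttributes.timestamp)
| where changeTime > ago(1d)
| project changeTime,
          targetResourceId = tostring(properties.targetResourceId),
          changeType = tostring(properties.changeType),
          changes = properties.changes
| order by changeTime desc
"@
Search-AzGraph -Query $query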

What can I do with this?

There are a lot of possibilities that come with this feature. Microsoft themselves mention:

  • Incident handling – what happened prior to the incident
  • CMDB update on changes
  • Azure policy initiated changes

The first reason is probably a good place to start. In my next post, I will show you how to add an Azure Monitor alert rule, an action group, and an Azure Function that can pull relevant change information from the impacted resource.

Azure

How to move Azure blobs up the path

  • 25/01/2022
  • by Martin Ehrnst

This is the short, but for me pretty intense, story from when I uploaded 900 blobs of one GB each to the wrong path in a storage container. Eventually I was able to move the files using AzCopy and PowerShell.

Thanos persistent Prometheus metrics

In our Azure Kubernetes Service (AKS) environment we use Prometheus and Thanos for application metrics. Thanos allows us to use Azure Storage for long-term retention and a highly available Prometheus setup. The other day I was challenged with deleting a series of metrics causing high cardinality, meaning that a lot of new series were being written because a parameter was inserted during scraping.

The way Thanos works is that it takes raw Prometheus data, downsamples it, and uploads it to Azure Storage for long-term retention. Each time this process runs, it creates a new blob. In our production environment we had around 900 blobs and 900 GB of data.

Thanos has a built-in tool to rewrite these blobs and remove the metric we wanted gone, which seemed easy enough, but we had no idea when the problem first started, so I had to analyze, rewrite, and upload all the data. It all seemed to work fine, until I discovered that no metrics were available. It turned out that the tool I used inherited my local path and uploaded all the modified data to <guid>/chunks/c:/users/appdata/local/[...]/00001.blob

So no matter how satisfied I was, all the data was useless, as Thanos expected the files to be under <guid>/chunks/00001. On the bright side, all the data was there, so the challenge was to move the files from <guid>/chunks/c:/users/appdata/local/[...] to <guid>/chunks/. From the two pictures below you can see the folder structure. Going through a download-and-upload approach was the last thing I wanted to do.

Azure storage explorer

AzCopy and PowerShell to the rescue

I already knew my way around AzCopy. But I did not know that the copy process actually runs on the Azure backbone if you copy within or between storage accounts. Luckily my dear Twitter friends were there to help where I had failed to read the documentation.

To perform the copy operation I used a combination of Azure Powershell and AzCopy.

  • Connect
  • Get all current blobs
  • Filter them
  • Actually copy
  • Second loop to delete

Below is my complete script. It could be way smarter, but I quickly put it together to get the job done.

## connect to storage using SAS (requires the Az.Storage module)
$storageName = ""
$sasToken = ""
$container = ""
$ctx = New-AzStorageContext -StorageAccountName $storageName -SasToken $sasToken

# get all the blobs
$blobs = Get-AzStorageBlob -Container $container -Context $ctx

# a date to filter on
$date = Get-Date -Date "2022-01-20" -AsUTC

# filter the blobs for date and where the name contains /c:/..
$blobsToModify = $blobs | Where-Object { ($_.LastModified.DateTime -ge $date) -and ($_.LastModified.DateTime -le $date.AddHours(24)) -and ($_.Name -like "*/chunks/C:/Users/*") }

# loop through the blobs
# get the original folder name and the blob name with some splitting
foreach ($blob in $blobsToModify) {
    $blobToMove = $blob.Name
    $original = $blob.Name.Split("/", 2)[0] # trim to the original folder (guid)
    $newBlob = $blob.Name.Split("/")[-1]    # trim to the original chunk name
    # the actual server-side copy
    ./azcopy.exe copy "https://$storageName.blob.core.windows.net/thanos/$blobToMove$sasToken" "https://$storageName.blob.core.windows.net/thanos/$original/chunks/$newBlob$sasToken" --overwrite=prompt --s2s-preserve-access-tier=false --include-directory-stub=false --recursive --log-level=INFO
}

# another loop to delete the whole c:/ folder after the chunks of data are moved.
# a separate loop is used as there might be multiple chunks in the folder.
foreach ($blob in $blobsToModify) {
    $original = $blob.Name.Split("/", 2)[0] # trim to the original folder (guid)
    ./azcopy.exe remove "https://$storageName.blob.core.windows.net/thanos/$original/chunks/C%3A/$sasToken" --from-to=BlobTrash --recursive --log-level=INFO
}

Summary

I hope this helps someone else who accidentally uploads a lot of data to the wrong place. If you by any chance are using Thanos, I filed this as a bug.

Azure Active Directory

Azure token from a custom app registration

  • 20/01/2022
  • by Martin Ehrnst

It's no secret that you can get an Azure AD token and access API resources like Microsoft Graph, Azure Resource Manager (ARM), etc. It's also pretty straightforward to authenticate against a custom API using client credentials. In fact, I have written about how to do that previously, where we accessed a custom API built on Azure Functions. Authentication-wise, I also wrote a post on how to access the Azure Monitor REST APIs using client credentials (app registration).

Get an Azure token with delegated user credentials from a custom API

The above examples are fine, but they both use a separate app registration for authenticating against our custom API, the Azure Function, and against ARM to access Azure Monitor. But what if I want to use my own, personal credentials instead of client credentials? For ARM resources, like Azure Monitor, Resource Graph, etc., you can do that already using Azure CLI, or with the PowerShell example below.

Connect-AzAccount
Get-AzAccessToken -ResourceUrl "https://management.azure.com"

Whether you use Azure CLI or PowerShell, the output is similar to the above. You can decode the token using jwt.io and get human-readable output.

Always be careful when using services like jwt.io. Your token is, after all, your credentials, and can give access to your resources.


App registration: Expose an API

Instead of specifying ARM as we did above, you can also generate a token against your custom app registration using delegated permissions from Azure CLI or PowerShell. The secret lies in “Expose an API”, or more specifically, “Authorized client applications”.

To allow delegated access and the ability to receive a token from your custom app registration, do the following:

  • Make sure your user is allowed to access the app; you can add that in the enterprise application blade.
  • Create a scope under “Expose an API”
  • Add client application(s) to the scope
    • Azure CLI client application ID: 04b07795-8ddb-461a-bbee-02f9e1bf7b46
    • Azure PowerShell client application ID: 1950a258-227b-4e31-a9cf-717495945fc2

Get access token from custom API using Azure CLI or PowerShell

Pull out your favorite shell and change your ResourceUrl from management.azure.com to your app ID or URI. In my case, this is api://adatum-auth-test-app.
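
With Azure PowerShell, that could look like the minimal sketch below (the resource URI is the example app above; replace it with your own application ID URI).

# Request a delegated token for the custom API instead of ARM
Connect-AzAccount
Get-AzAccessToken -ResourceUrl "api://adatum-auth-test-app"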

After getting the token you can again use jwt.io to see the details. Pay attention to the appId and aud claims. The appId in this case is Azure PowerShell.

Final words

This post has been lying around in my drafts for more than a year, but yesterday I got a question from a colleague about this and figured it was time to release it to the masses.

The reason I left it in drafts is that I am unsure of the supportability from Microsoft and the potential security vulnerabilities it may add to your services. Keep that in mind and use the feature only when needed.
If you want to learn more about application registrations, enterprise apps, and managed identities in general, please read my other post about the topic.

Azure Bicep

Share Bicep modules with private registry

  • 08/11/2021
  • by Martin Ehrnst

A common problem for many organizations is sharing and consuming infrastructure templates. Many ended up with a storage account in Azure, but that has some limitations around versioning and sharing of secrets. Directly consuming templates from a Git repository is another option. However, that's not exactly problem-free either. For example, what happens if a colleague makes changes and you reference the template without knowing what has changed? In the best case, a missing parameter fails your pipeline. In the worst case, you have downtime.

The key concept of having Bicep modules in a common store is for everyone inside your organization to use these modules when they provision infrastructure. Picture this scenario: multiple development teams often use the same types of resources, like Azure SQL, Azure Functions, storage accounts, etc. Your organization likely has a few governance rules applied, like a tagging strategy, allowed SKUs, different configurations for test and production, and so on. Pre-created and easily consumable modules taking care of this is what you need, and Azure Container Registry for Bicep files is now available.

Azure Bicep private registry

With Azure Bicep version 0.4.1008 you have a built-in option to publish your Bicep modules to a private registry. The private registry is not a new resource type; in fact, you are uploading your Bicep files to an Azure Container Registry, which lets you leverage versioning and makes sure you do not break templates for everyone each time someone makes a change. Once your Bicep module is uploaded to ACR, everyone with permission to pull images can use your modules.

Azure bicep private registry

Upload Bicep modules to ACR

A shared Bicep module is used in the same way as a local module, but instead of the local path, you specify the URL and version of the file within the registry. More on that later.

Assuming you already have a Bicep module or a set of them, you only need to provision a container registry. To push the “image” you need AcrPush permissions, and to consume you need AcrPull. Further below is the syntax used for uploading a Bicep file to ACR.
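
If you do not have a registry yet, a minimal Azure PowerShell sketch for creating one could look like this (the resource group name, registry name, and location are placeholders, not from the original post):

# Requires the Az.ContainerRegistry module; all names below are placeholders
New-AzResourceGroup -Name 'rg-bicep-modules' -Location 'westeurope'
New-AzContainerRegistry -ResourceGroupName 'rg-bicep-modules' -Name 'acrbicepexample' -Sku 'Basic' -Location 'westeurope'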

az bicep publish --file storage.bicep --target br:exampleregistry.azurecr.io/bicep/modules/storage:v1

I am using this code base against my existing registry, so my command to upload is as follows:

bicep publish .\Bicep\3.modules\sql.bicep --target "br:acrbicepehrnst.azurecr.io/modules/azuresql/sql:v0.1"

No response is given on a successful upload, but to make sure everything is alright, we can confirm with the following PowerShell command, which lists all repositories in your registry.

Get-AzContainerRegistryRepository -RegistryName acrbicepehrnst

Using modules from the registry

Including modules from a private registry is as easy as using local modules. With the Bicep extension enabled in VS Code, you also get validation of the remote modules.

pssst… if you by any chance use Emacs, you can have the same through LSP and the Bicep language server

upload Bicep module to Private registry - syntax highlight
var tags = {
  'owner': 'Martin Ehrnst'
  'purpose': 'Bicep demo'
}

module SQL 'br:acrbicepehrnst.azurecr.io/modules/azuresql/sql:v0.1' = {
  name: 'sqlDeploy'
  params: {
    databaseName: 'moduletest'
    dbAdId: '8776fb6e-5de0-408c-be03-c17a67b079d0'
    dbAdLoginName: 'name@company.com'
    env: 'prod'
    tags: tags
  }
}
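
Deploying the file that consumes the module works like any other Bicep deployment. A hedged example, assuming the snippet above is saved as main.bicep and deployed with Azure PowerShell (the resource group name is a placeholder):

# Deploy the template that consumes the shared module
New-AzResourceGroupDeployment -ResourceGroupName 'rg-bicep-demo' -TemplateFile './main.bicep'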

Summary

In this post I have shown you the core concept: how you upload and how you consume the modules. To me, this is only half the story. In my next post I will go through how we can put everything inside a pipeline and add better versioning to the modules.
Azure Bicep private registry is probably here to stay. Up until now, it is the best solution for sharing infrastructure templates within an organization.
