DevOps

Moving to Azure and team requirements

  • 08/12/2019 (updated 07/01/2025)
  • by Martin Ehrnst

Around ten years ago, traditional IT professionals started to talk about losing their jobs to the public cloud.
It's no secret that people actually have lost their jobs in recent years. But I'm pretty confident that those who managed to make some changes kept their jobs or got a better position in another company.

The specialists

In my job, working for a service provider, I have seen the change from many angles. Ten years ago, we helped companies with traditional IT infrastructure: smaller companies running physical servers on-premises, or larger companies with an ESX stack in their office. They were all managed by our specialists. The network specialists knew everything about the hardware and how to place it in the racks.
The server team were Windows Server specialists, but also knew 'everything' about storage: disks, SAN, etc.

Cultural changes

Many companies hire new people to handle the infrastructure on Azure or other public and private clouds. But I think that is a bad idea. Investment in making cultural changes should be the main priority. Moving to Azure introduces a new platform and products to your ecosystem, but the knowledge from traditional infrastructure is still very valid. Subnetting and firewalls still exist, as do operating system patches and failed backups. Your existing team already knows your infrastructure, making them the best fit to manage it on Azure as well.

Moving to Azure or any other cloud will require a code-first approach to managing and deploying the infrastructure. Git, pipelines, and pull requests are now as normal as calculating subnets.

Team requirements

The team required to move your infrastructure and successfully manage it should be familiar with the DevOps methodology and with defining infrastructure as code.

Another thing to keep in mind is that platform knowledge is very important for everyone managing workloads running in Azure and other cloud platforms.
Let's take a look at monitoring. A good monitoring platform requires a few basics, like alerting. Alerting is built into every monitoring platform out there and is the foundation of monitoring. But alerts need to be handled. Hopefully, you have killed email alerts already and have other routines in place, like an ITSM platform, or maybe your alert remediation is a PowerShell script.
To have an alert automatically remediated in Azure Monitor, you will need to learn a new product like Azure Functions to run your PowerShell script.
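To make that concrete, here is a minimal sketch of what such a remediation function could look like: an HTTP-triggered Azure Function running PowerShell that receives an Azure Monitor alert (common alert schema) and restarts the affected VM. The trigger type, the restart action, and the names are illustrative assumptions, not a prescribed pattern, and the function's identity would need the appropriate RBAC role on the target resource.

# run.ps1 in an HTTP-triggered Azure Function (PowerShell worker)
using namespace System.Net

param($Request, $TriggerMetadata)

# The common alert schema carries the interesting bits under data.essentials
$essentials = $Request.Body.data.essentials
Write-Host "Received alert '$($essentials.alertRule)' with severity $($essentials.severity)"

# Illustrative remediation: restart the first affected resource if it is a VM
$targetId = $essentials.alertTargetIDs[0]
if ($targetId -match '/virtualMachines/') {
    Restart-AzVM -Id $targetId -NoWait
}

# Acknowledge the webhook call from the action group
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = 'Alert processed'
})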

PS: Microsoft itself replaced SCOM with Azure Monitor, a move driven by cultural changes and DevOps.

Modern IT pros

At its core, IT hasn't changed that much. It is still zeros and ones. The cultural changes and the desire to learn new technology and ways of working are, in my opinion, the biggest challenge.
The amount of pressure to adopt a new way of working and to learn new technology is not a joke.
If you can call yourself a modern IT pro, you deserve many high fives, and you can be very proud of what you have achieved.

Azure

Azure Lighthouse: why is it so important

  • 15/08/2019 (updated 07/01/2025)
  • by Martin Ehrnst

Working for a Managed Service Provider (MSP), I have many times faced the challenge of managing multiple separate customers from a single pane of glass, whether it is a multi-tenant Active Directory, a single AD, or a vanilla Azure tenant. An MSP is only as good as the tools it can build to manage all customers in a streamlined fashion.

In the Microsoft sphere, partners and large enterprises have faced many of the same challenges. If you are a large enterprise, you might be eligible for an Enterprise Agreement.
As a partner, you can apply to become a (tier 1) Cloud Solution Provider (CSP). The tools provided are far from good enough. The challenge is that you are still bound by tenant isolation: if you want a view of all alerts in Azure Monitor for all your customers, you need to create a tool that authenticates against each individual tenant and retrieves this information, similar to what I did with SCOM.

Project Towboat

Last year I attended a side meeting for MSPs at Ignite, where we discussed at-scale management in the Azure portal. We were promised that something called Project Towboat was planned. Since then, it has been dead silent.
Then, out of the blue, Microsoft announced Azure Lighthouse, promising simplified cross-tenant resource management. So what makes it so great?

Delegated resource access

Azure Lighthouse uses delegated resource access. In essence, the customer establishes a trust with your (management/master) tenant. This allows users in the management directory (tenant) to manage resources on behalf of the customer. Many use Azure AD B2B to manage resources across multiple tenants; with Azure Lighthouse, you can do that without changing the context of the user.
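To give a feel for how that trust is established, here is a minimal sketch of the customer-side onboarding, assuming a delegation ARM template along the lines of the samples in the Azure Lighthouse GitHub repository (the tenant, subscription, and file names are illustrative).

# Run in the *customer* tenant against the subscription that should be delegated
Connect-AzAccount -Tenant 'customer.onmicrosoft.com'
Set-AzContext -Subscription '<customer-subscription-id>'

# Deploy the registration definition and assignment at subscription scope
New-AzDeployment -Name 'lighthouse-onboarding' `
    -Location 'westeurope' `
    -TemplateFile './delegatedResourceManagement.json' `
    -TemplateParameterFile './delegatedResourceManagement.parameters.json'

# The delegation can afterwards be verified with
Get-AzManagedServicesAssignment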

In my opinion, here are some of the features that make Azure Lighthouse so important to MSPs and others managing multiple tenants.

Cross tenant monitoring in Azure

Azure Monitor is now multi-tenant. As long as the resource group or subscription is available to the person using Azure Monitor, application and infrastructure monitoring is available from a single pane of glass.

Multi-tenant Log Analytics queries

Log Analytics is part of Azure Monitor and is now called Azure Monitor Logs, but the engine behind it is still Log Analytics.
Log Analytics is already capable of searching across multiple workspaces. Since Azure Lighthouse surfaces your customers' workspaces, you can run cross-tenant queries. How cool is that?
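As an illustration, here is a minimal sketch of such a cross-workspace query run from PowerShell, assuming two delegated customer workspaces named 'contoso-ws' and 'fabrikam-ws' (the workspace names and the table are illustrative).

# Union the same table from several delegated workspaces in one query
$query = @'
union
    workspace("contoso-ws").Heartbeat,
    workspace("fabrikam-ws").Heartbeat
| summarize LastSeen = max(TimeGenerated) by Computer, TenantId
'@

# Run the query against one of the workspaces you have access to
Invoke-AzOperationalInsightsQuery -WorkspaceId '<workspace-guid>' -Query $query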

Azure Security Center for all customers

The beauty of delegated resource management just continues. Apart from Log Analytics, another great thing for your security team is that Azure Security Center is also available through Azure Lighthouse. This means that the team (or that one person) can look at one single dashboard, or write the integration against one tenant.

Summary

Azure Lighthouse greatly simplifies at-scale and cross-tenant management. It is tightly integrated with Azure Resource Manager for deployment, as well as with Azure Monitor and Security Center for monitoring infrastructure and security.

I am really looking forward to creating solutions and working more with Azure Lighthouse. It is a long-awaited product, and with this launch, Microsoft is way ahead of its competitors.
Expect more dedicated posts on how to manage and automate using Lighthouse in the future.

You can read more and find examples in the official Azure Lighthouse documentation and the Azure Lighthouse GitHub examples.

Azure Monitor

Metric alerts for Azure Monitor logs

  • 12/06/2019 (updated 07/01/2025)
  • by Martin Ehrnst

A common thing for traditional companies is to have one team responsible for monitoring. A few years ago, this team was close friends with the team provisioning infrastructure. Now, more and more companies are shifting to the "DevOps" world. Even Microsoft has killed SCOM and is only using Azure Monitor, meaning that the ones who deployed the code (and the infrastructure) should be responsible for monitoring it. In essence, this is great. But this transition takes time, and one should not underestimate the knowledge of the team who has been responsible for monitoring your entire infrastructure for decades.

If you are familiar with SCOM, you know that rules and monitors are targeted at a class of objects, for example the Windows Server 2016 operating system. When we move our workloads to Azure, we want to use Azure Monitor to monitor our workloads and VMs.

Enter Log alerts

Log alerts have been around for quite some time and are commonly used to alert on actual log data: custom application logs, the Windows event log, and so on. But log alerts have a "hidden" feature, especially useful for monitoring teams not wanting to manage hundreds of duplicate rules.

By using log alerts with metric measurements, you can almost replicate what discoveries in SCOM do: find resources of a specific type and attach some kind of monitoring to them. For example, you can create a search query for all your IaaS VMs and alert on their CPU counter.

This will let your monitoring team recreate all their logic and keep control over the entire infrastructure, almost as they had on-premises. At the same time, you can adopt more DevOps practices and in the end have every team responsible for their own work.

Kusto example

Below is a simple example that will list all VMs and their processor time. You can create an alert straight from Azure Monitor Logs (formerly Log Analytics) or start from a new alert rule.

Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer

Summary

You already have the option to monitor multiple VMs using one alert rule in Azure Monitor. But one limitation is that this solution will not add new VMs to the alert rule, and for the time being, it only supports virtual machines.
Log alerts depend on your query, so as long as your data is available, you can alert on it, whether it is a web app, a SQL server, or a custom log.

With Log Alerts, the transition to a public cloud-based infrastructure might be easier. Your operations teams can use their knowledge and re-create their on-premises monitoring logic as searches.
Application alerts could still be handled by the developers, and you can provision those using ARM templates or similar.

PS: I was going to write a longer post on how to manage and programmatically create log alerts, but with these great examples in Microsoft docs, there’s no need to re-invent the wheel.

