Azure Monitor Managed Prometheus
Azure Monitor managed service for Prometheus was released in preview in the fall of 2022. Let's get the terminology right first. The Prometheus offering is part of Azure Monitor Metrics: Azure Monitor has two stores, Logs and Metrics. Prometheus scraping has been available for some time, but the previous implementation used the Logs part of Azure Monitor. This new offering uses the metrics store, which is a far better solution, and that is partly why I was interested in checking it out.
At the time of writing, Azure Monitor Managed Prometheus was in preview. During Microsoft Build 2023 the service became generally available; see the Build Book of News.
Why choose a managed Prometheus offering
Generally speaking, if there's a managed offering, I am keen to see whether it's better than what you can run yourself. There are currently multiple managed Prometheus offerings out there, Grafana Cloud being one, and Microsoft recently jumped in with Azure Managed Grafana and now Azure Monitor Managed Prometheus.
In our environment, we introduced Prometheus before any large players had a solid offering, and for a long time it did not have persistent storage. If the service was restarted, metrics were lost, which is not a good solution in the long run. After a while, we jumped in with both feet and set up persistent storage using Thanos, backed by Azure Storage for long-term retention. However, this comes with a cost: as with any other system you host yourself, you need to maintain it, and sometimes things go south, like when I had to clean up our storage. Since we are 100 percent in Azure, it makes total sense to play around with the new managed Prometheus offering.
Adding managed Prometheus to an existing AKS cluster
To enable managed Prometheus you need an Azure Monitor workspace. An Azure Monitor workspace is like a Log Analytics workspace, but for the Azure Monitor metrics store. This is confusing to many (myself included), and I hope Microsoft cleans up the terminology once things are released. I am not going to dive into how you can start from scratch, or how you can create any of this using Azure Bicep. Stanislav Zhelyazkov already went through this in an excellent post with a lot of good comments. Instead, let's take a look at how Prometheus can be enabled on an existing cluster, and how you can tune your setup.
The backdrop here is that we already had a Prometheus installation in our cluster, and we also had it connected to Log Analytics using Azure Monitor for Containers. I did not want to interfere with any of the existing setup, so I manually added the preview extension. This deploys the metrics agent (`ama-metrics`) pods in your `kube-system` namespace, along with a default scraping configuration.
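For reference, enabling the metrics add-on on an existing cluster looks roughly like the sketch below, based on the current Azure CLI. Flag names differed while the feature was in preview, and the cluster, resource group, and workspace values are placeholders:

```bash
# Enable the Azure Monitor metrics add-on (managed Prometheus) on an existing AKS cluster.
# Flag names changed between the preview and GA CLI; adjust to your az version.
az aks update \
  --name myAksCluster \
  --resource-group myResourceGroup \
  --enable-azure-monitor-metrics \
  --azure-monitor-workspace-resource-id "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/microsoft.monitor/accounts/<workspace>"
```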
You can now port-forward to any of the `ama-metrics` pods and inspect the configuration and scrape targets. If you are familiar with Prometheus, you would probably expect to find the basic, built-in dashboard, but that is not available here. This installation runs in agent mode, using remote write (or Microsoft's own variant of it) to ship the time series to Azure Monitor, which means Grafana is your simplest option for querying the data.
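If you want to poke around yourself, a simple port-forward is enough to see what the agent is doing. Something like the sketch below works, where the pod name is a placeholder you replace with one from your own cluster:

```bash
# Find one of the metrics agent pods.
kubectl get pods -n kube-system | grep ama-metrics

# Forward the agent's web port locally (replace with an actual pod name).
kubectl port-forward ama-metrics-<hash> -n kube-system 9090

# Then browse to http://127.0.0.1:9090/config and http://127.0.0.1:9090/targets
```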
Tuning the default Prometheus configuration
cAdvisor metrics are good to have. However, the default configuration does not scrape any of our custom application metrics. For that, you need to write a custom Prometheus scrape config.
Many existing Prometheus setups rely on either a ServiceMonitor or a PodMonitor. However, these CRD types are not supported.
Microsoft has provided documentation on how to create a Prometheus scrape config, so if you know your way around Prometheus, this is familiar stuff. However, there are some quirks. The files you create are merged with the default config, and if you do not follow the documentation point by point, you are bound to create something that does not work.
There are two config maps to be created: one to enable or disable "system" scraping, and one for your custom scrape configuration. The documentation states you should do a `kubectl create [...] --from-file prometheus-config` and that the name of the file is very important. The name is indeed very important if you follow the Microsoft docs. But if you find your way to the GitHub repository with the example files, a keen eye will notice that these are Kubernetes manifest files.
If you (like I did) try to `kubectl create` any of these files with that very special name, you'll end up with a broken ConfigMap, as the command wraps the entire manifest file inside a new manifest for you.
After some support from a Microsoft representative, I understood what they were trying to explain, so I expect the documentation to change pretty soon.
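To make the difference concrete, the two approaches look roughly like this (the manifest file name below is just a placeholder for the example files in the GitHub repository):

```bash
# Documented approach: create the ConfigMap from a raw scrape-config file.
# The local file must be named exactly "prometheus-config".
kubectl create configmap ama-metrics-prometheus-config \
  --from-file=prometheus-config \
  --namespace=kube-system

# Manifest approach: apply the full ConfigMap manifest from the GitHub examples.
kubectl apply -f ama-metrics-prometheus-config.yaml
```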
Anyway, instead of following the documentation, I modified the Kubernetes manifest files and did a `kubectl apply`, which creates the manifest exactly as you wrote it (and matches how the GitHub examples look). Below is a custom scraping configuration. This example scrapes all pods with the annotation `prometheus.io/scrape: "true"`.
```yaml
kind: ConfigMap
apiVersion: v1
data:
  prometheus-config: |-
    global:
      scrape_interval: 30s
      scrape_timeout: 10s
    scrape_configs:
      - job_name: testjob
        honor_labels: true
        scrape_interval: 30s
        scheme: http
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - target_label: pod_name
            source_labels: [__meta_kubernetes_pod_name]
  debug-mode: |-
    enabled = true
metadata:
  name: ama-metrics-prometheus-config
  namespace: kube-system
```
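Once the ConfigMap is applied with `kubectl apply -f <file>`, any pod carrying the usual prometheus.io annotations is picked up by the relabel rules above. A minimal sketch of what that looks like in a deployment's pod template (the port and path values are made up for illustration):

```yaml
# Hypothetical pod template snippet; the annotations are what the relabel rules key on.
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
```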
We got metrics!
During Christmas 2022, Microsoft released a fix for the content length issue mentioned below. I have confirmed the fix in our clusters, and everything works as expected.
In my Azure-hosted Grafana I am now able to query and display our custom metrics, as well as the default cAdvisor data. However, after some time I noticed drops in the collected metrics. I tried to find the same drops in our other Prometheus/Thanos setup, but there everything looked normal. Checking the logs gave me the answer:
"ContentLengthLimitExceeded\",\"Message\":\"Maximum allowed content length: 1048576 bytes (1 MB). Provided content length: 25849524
Our scrape payloads are too large, so the data is not shipped to Azure Monitor. Because of this, we also see pretty high memory usage for the `ama-metrics` pod, which I am able to verify using our other Prometheus instance.
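If you want to check for the same symptoms in your own cluster, something along these lines should do (the pod name is a placeholder, and the exact wording of the log line may differ):

```bash
# Look for content length errors in the metrics agent logs.
kubectl logs ama-metrics-<hash> -n kube-system --all-containers | grep -i contentlength

# Rough check of the agent's memory usage (requires metrics-server).
kubectl top pods -n kube-system | grep ama-metrics
```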
Azure Monitor Prometheus pricing
During the preview the service is free of charge, but the planned pricing is already publicly available.
| Feature | Price |
|---|---|
| Metrics ingestion (preview) | $0.16 per 10 million samples ingested |
| Metrics queries (preview) | $0.10 per 1 billion samples processed |
USD 0.16 per 10 million metric samples is good compared to the other options out there, but you need to factor in the queries as well. As a rough example, 1,000 time series scraped every 30 seconds produce around 86 million samples per month, which is only about $1.40 of ingestion, while every auto-refreshing dashboard keeps adding to the query side. I suspect that in larger environments, with dashboards running auto-refresh and so on, queries will be a significant part of the total bill. Not to mention that you also need some kind of graphical interface to query your metrics: you could host your own Grafana or check out Azure Managed Grafana. And remember to calculate the total cost.
Grafana Cloud is another option; they offer Prometheus and log ingestion on a per-user basis in their Pro plan.
I have yet to sit down and compare pricing between options and will make sure to update or create a new post when we have conducted that work. At first glance, Azure seems favorable, but we will find out once the numbers are crunched.
Summary
Due to the previous limitation in Azure Managed Prometheus, we ended up tuning our existing scraping configuration. By dropping unused time series and labels, we reduced memory consumption by 1 GB. Unfortunately, the tuning was not enough to get the payload under the 1 MB limit.
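For anyone wanting to do the same kind of tuning, the usual place to do it is `metric_relabel_configs` on the scrape job. The sketch below uses hypothetical metric and label names, not our actual ones:

```yaml
# Example metric_relabel_configs for a scrape job (hypothetical names).
metric_relabel_configs:
  # Drop histogram buckets we never query.
  - source_labels: [__name__]
    regex: myapp_http_request_duration_seconds_bucket
    action: drop
  # Remove a high-cardinality label we do not use.
  - regex: client_session_id
    action: labeldrop
```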
Since a fix was released in December 2022, I will continue my testing in our pre-production cluster. I am really keen to replace the current Thanos setup, and Azure Monitor managed Prometheus looks like a promising option.