We’ve had a few questions from customers on how to easily build better dashboards around Azure Monitor Alerts, particularly around log-based alerts, where some of the information, such as the affected resource, may not be exposed easily.
When you create log-based alerts, the affected resource identified in the alert will always be the Log Analytics workspace that the query runs against.
The machine name, instance name and other details are exposed as extended information, which is not available in the Alerts table in Logs, nor exposed through the Alerts connector in workbooks.
In the email notification, this information is exposed as insights:
In the raw alert output, this is exposed in the SearchResults.
If you want to build a dashboard that displays the number of alerts per machine for log-based alerts, you have two options:
· Use Logic Apps to write the Alert information into Logs with the Log Analytics Data Collector API, and then use queries to create your dashboard
· Use the Scheduled Query Rules Alerts REST API
In this blog post, we will look at the first option, which has three components:
We will use a log search alert rule with an action group that triggers a Logic App; the Logic App will then use the Log Analytics Data Collector API to write the alert information into a custom log in Log Analytics.
Create a Log Search Alert rule
For this example, we will create a log alert rule that fires when processor utilisation exceeds a threshold, using the following query:
Perf
| where ObjectName == "Processor" and CounterName == "% Processor Time"
| where TimeGenerated >= ago(30m)
| summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 10m), Computer, InstanceName
This query averages the % Processor Time CounterValue over 10-minute bins, per computer and instance, looking at values from the last 30 minutes.
You can select the Metric Measurement logic, and then set your threshold.
Set your trigger options, as well as the evaluation periods as appropriate.
Create a logic app
To allow the Logic App to receive the alert data, you will need to select "When a HTTP request is received" as your starting activity.
You can then use the Log Alert for Log Analytics Sample Payload to generate the schema. This will expose the alert components to the next steps.
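For reference, the payload nests the query results under SearchResults, with a tables array that in turn holds columns and rows. A trimmed, illustrative sketch of the shape is below; the field values are made up, and the exact casing and property names depend on the alert API version, so confirm against your own run history:

```json
{
  "data": {
    "SearchResults": {
      "tables": [
        {
          "name": "PrimaryResult",
          "columns": [
            { "name": "TimeGenerated", "type": "datetime" },
            { "name": "Computer", "type": "string" },
            { "name": "InstanceName", "type": "string" },
            { "name": "AggregatedValue", "type": "real" }
          ],
          "rows": [
            [ "2021-06-01T10:00:00Z", "VM01", "_Total", 92.5 ]
          ]
        }
      ]
    }
  }
}
```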
The extended alert information is written into a nested array under Tables and exposed in Rows. For this reason, you will need to add two nested For Each loops.
The first loop will iterate over the tables output, while the second will iterate over the rows output.
As you will need to reference the loops in later expressions, we renamed the activities to make them easier to reference.
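In the Logic App code view, the two nested loops would look roughly like the sketch below. The activity names (For_each_table, For_each_row) are the hypothetical renamed ones, and the foreach expressions are illustrative; match them to your own payload:

```json
"For_each_table": {
  "type": "Foreach",
  "foreach": "@triggerBody()?['data']?['SearchResults']?['tables']",
  "actions": {
    "For_each_row": {
      "type": "Foreach",
      "foreach": "@items('For_each_table')?['rows']",
      "actions": { }
    }
  }
}
```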
Now you can add the Send Data activity and populate it with content from the request received. The Request Body must be valid JSON, including the field names you want written to the custom log.
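As an illustration, a Request Body that writes the computer, instance and aggregated value for each row could look like the following, assuming the inner loop has been renamed For_each_row; the column indexes are hypothetical and must match the column order in your own SearchResults output:

```json
{
  "AlertTimeGenerated": "@{items('For_each_row')[0]}",
  "Computer": "@{items('For_each_row')[1]}",
  "InstanceName": "@{items('For_each_row')[2]}",
  "AggregatedValue": "@{items('For_each_row')[3]}"
}
```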
To reference the content from the rows, you will need to map out the order of the values, starting from 0, and then reference them using the items() function along with the value's index.
Note: You may have to create the Logic App with only the first step, plus the action group, before you populate the Send Data activity. That will allow you to view the raw output of the HTTP request received activity (after an alert has fired), which you will need in order to map the field names for the items() dynamic content.
You can view this in the SearchResults area of the raw output. In this example, the computer name is the second column in the results, so it is referenced as follows:
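With the computer name at index 1 (the second column, since indexing starts at 0), the expression would be something like the following, assuming the inner loop was renamed For_each_row:

```
items('For_each_row')[1]
```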
You can add additional steps here to extract information from other sources to enrich the alert, such as the service owner from a CMDB. You can also use the same Logic App to send out notifications that include this enriched information.
Create an action group to trigger the Logic App
You can now create an action group to trigger the Logic App you have created.
You can then edit the Logic App, if required, to complete the remaining steps.
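Once alerts are flowing into the workspace, a simple dashboard query over the custom log could look like the sketch below. The log name ProcessorAlerts_CL and the Computer_s field are assumptions: they depend on the custom log name you chose in the Send Data activity, and Log Analytics appends type suffixes such as _s to string fields ingested via the Data Collector API:

```
ProcessorAlerts_CL
| where TimeGenerated > ago(24h)
| summarize AlertCount = count() by Computer_s
| order by AlertCount desc
```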