Modern observability demands not only the collection of metrics and logs but also proactive alerting. When working with Grafana Loki for log aggregation in Kubernetes, combining it with PrometheusRule alerting provides real-time notifications based on log patterns. With Grafana Alloy (the successor to Grafana Agent), we gain a unified agent that can scrape metrics, forward logs, and manage alerting rules. In this article, we’ll explore how to configure PrometheusRule alerts based on Loki logs in Kubernetes using Grafana Alloy and Helm.
Overview Of The Architecture
Before diving into the setup, let’s understand the key components involved:
- Loki: A log aggregation system designed by Grafana Labs for storing and querying logs.
- Grafana Alloy: A telemetry agent that can replace Prometheus, Promtail, and more, enabling log collection, metrics scraping, and rule evaluation.
- PrometheusRule: A custom resource in Kubernetes that allows you to define alerting rules evaluated by Prometheus or Alloy.
- Helm: A package manager for Kubernetes used to install and manage applications.
Here’s what we aim to achieve:
- Loki collects logs from Kubernetes pods.
- Grafana Alloy evaluates Prometheus-style alert rules that use LogQL queries.
- Alerts are triggered when specific log patterns (e.g., “error”, “panic”) are detected.
- Alertmanager receives alerts and routes them to your preferred notification channels.
Prerequisites
Before starting, ensure the following are installed and configured:
- A Kubernetes cluster (Minikube, Kind, EKS, etc.)
- `kubectl` configured to access the cluster
- `helm` CLI installed
- Grafana Helm repositories added
- Basic familiarity with Loki and Prometheus
Install Loki via Helm
First, add the official Grafana Helm chart repository. Then install Loki with persistence disabled for simplicity (enable it in production):
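A minimal sketch, assuming the `grafana/loki-stack` chart (which bundles Loki and Promtail); the release name and namespace are placeholders you can change:

```bash
# Add the Grafana Helm repository and refresh the local chart index
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Loki and Promtail; persistence is disabled here purely for a quick demo
helm install loki grafana/loki-stack \
  --namespace monitoring --create-namespace \
  --set loki.persistence.enabled=false \
  --set promtail.enabled=true
```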
This will deploy:
- Loki for log aggregation
- Promtail to ship pod logs to Loki
Install Grafana Alloy
Grafana Alloy can be installed via the `grafana/alloy` Helm chart:
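For example (a sketch: the release name, namespace, and the `alloy.configMap.content` value path are assumptions based on the chart’s default values layout; check your chart version):

```bash
# Install Alloy and pass in the configuration file created in the next step
helm install grafana-alloy grafana/alloy \
  --namespace monitoring \
  --set-file alloy.configMap.content=config.alloy
```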
You’ll need to create the `config.alloy` file next, which defines how logs are scraped and how alerts are evaluated.
Configure Alloy For Log Alerting
Grafana Alloy uses its own configuration syntax, which evolved from Grafana Agent’s Flow mode. Here’s a basic `config.alloy` file to collect logs, forward them to Loki, and evaluate alerting rules.
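The sketch below is one way to wire this up. It assumes Loki is reachable at the in-cluster URL shown (adjust the service name, namespace, and port for your install) and uses the `loki.rules.kubernetes` component to sync PrometheusRule resources containing LogQL expressions into Loki’s ruler, which performs the actual rule evaluation and sends alerts to Alertmanager (the ruler itself must be configured with an Alertmanager URL on the Loki side):

```alloy
// Discover all pods in the cluster
discovery.kubernetes "pods" {
  role = "pod"
}

// Tail logs from the discovered pods and forward them to Loki
loki.source.kubernetes "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [loki.write.default.receiver]
}

// Push collected logs to Loki (URL is an assumption; adjust to your service)
loki.write "default" {
  endpoint {
    url = "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/push"
  }
}

// Load PrometheusRule resources from the cluster into Loki's ruler so that
// LogQL-based alert rules are evaluated against the ingested logs
loki.rules.kubernetes "log_alerts" {
  address = "http://loki.monitoring.svc.cluster.local:3100"
}
```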
Apply this configuration using the Helm `--set-file` parameter or mount it via a ConfigMap and volume in the Alloy pod.
Create PrometheusRule For Log Alerts
Now let’s create a PrometheusRule custom resource that defines a LogQL-based alert rule.
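Here’s a sketch of such a rule, matching the threshold described below. The metadata name and namespace are illustrative, and depending on how your rule discovery is configured you may need additional labels:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: loki-error-log-alerts       # illustrative name
  namespace: monitoring
spec:
  groups:
    - name: loki-log-alerts
      rules:
        - alert: HighErrorLogRate
          # LogQL: per-second rate of log lines containing "error"
          # in the default namespace, averaged over the last 5 minutes
          expr: |
            sum(rate({namespace="default"} |= "error" [5m])) > 0.1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: High rate of error logs in the default namespace
            description: More than 0.1 error log lines per second for over 2 minutes.
```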
Apply it using:
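Assuming the manifest above was saved as `loki-error-log-alerts.yaml`:

```bash
kubectl apply -f loki-error-log-alerts.yaml

# Confirm the resource was created
kubectl get prometheusrules -n monitoring
```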
This rule will trigger an alert if more than 0.1 “error” log entries per second appear in a pod in the `default` namespace for over 2 minutes.
Deploy Alertmanager (Optional)
If you don’t already have Alertmanager running, install it with:
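For example, using the prometheus-community chart (release name and namespace are placeholders):

```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

helm install alertmanager prometheus-community/alertmanager --namespace monitoring
```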
You can configure Alertmanager to route alerts to Slack, PagerDuty, email, etc. Here’s a simple Alertmanager config snippet:
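For instance, a minimal sketch that routes everything to a Slack webhook (the webhook URL and channel are placeholders):

```yaml
route:
  receiver: slack-notifications
  group_by: ["alertname", "namespace"]
receivers:
  - name: slack-notifications
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ   # placeholder webhook
        channel: "#alerts"
        send_resolved: true
```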
Apply it via a Secret or Helm values override.
Verify Alert Triggering
To test the alert:
- Deploy a pod that logs the word “error” frequently (example commands follow this list).
- Monitor the logs in Loki via Grafana (if installed) or query Loki directly via its HTTP API.
- Wait for 2 minutes to see if the alert triggers.
- Query the Loki ruler’s alerts endpoint (shown below) to check alerts.
- Check active alerts (via Alertmanager or Grafana UI).
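The following commands sketch these steps; the pod name, service name, and port are assumptions, so adjust them to your install:

```bash
# Deploy a throwaway pod that prints an "error" line every second
kubectl run error-logger --namespace default --image=busybox --restart=Never -- \
  sh -c 'while true; do echo "error: something went wrong"; sleep 1; done'

# Make Loki reachable locally (service name and namespace are assumptions)
kubectl port-forward -n monitoring svc/loki 3100:3100 &

# Query Loki's HTTP API with the same expression used in the alert rule
curl -G -s "http://localhost:3100/loki/api/v1/query" \
  --data-urlencode 'query=sum(rate({namespace="default"} |= "error" [5m]))'

# Ask the Loki ruler which alerts are currently firing
curl -s "http://localhost:3100/prometheus/api/v1/alerts"
```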
View Alerts In Grafana (Optional)
If Grafana is installed, you can connect it to Loki and Alertmanager to view and manage alerts:
- Add Loki as a data source
- Add Alertmanager as a data source
- Go to the Alerting → Alerts page to see active alerts
Best Practices
- Namespace your PrometheusRules logically to avoid clutter.
- Use recording rules for complex queries to reduce computation overhead.
- Store alert history in Alertmanager or Grafana for auditing.
- Always validate rule expressions using Grafana Explore or the Loki API.
- Secure Alertmanager with RBAC and authentication if exposed.
Troubleshooting Tips
- If alerts don’t trigger, verify:
  - Logs are reaching Loki (`kubectl logs` on the Promtail or Alloy pods)
  - Alloy is evaluating the rules (`kubectl logs` on the Grafana Alloy pod)
  - The PrometheusRule is applied and valid (`kubectl describe`)
  - Alertmanager is reachable and healthy
- Confirm your `expr` works by testing in Grafana’s Explore tab.
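A few example checks (the label selectors, resource names, and the port-forwarded Loki address are assumptions; adapt them to your install):

```bash
# Are logs being shipped? Inspect the Promtail or Alloy pods
kubectl logs -n monitoring -l app.kubernetes.io/name=promtail --tail=50
kubectl logs -n monitoring -l app.kubernetes.io/name=alloy --tail=50

# Is the PrometheusRule applied and valid?
kubectl describe prometheusrule loki-error-log-alerts -n monitoring

# Did the rule reach the Loki ruler? (assumes Loki port-forwarded to localhost:3100)
curl -s "http://localhost:3100/prometheus/api/v1/rules"
```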
Conclusion
Enabling PrometheusRule alerts for Loki logs in a Kubernetes environment using Grafana Alloy and Helm is more than just a technical configuration—it’s a strategic step toward building a proactive, intelligent observability system tailored for modern, cloud-native applications.
By integrating these components, you unlock a highly modular and scalable architecture where logs, metrics, and alerts work together. Grafana Loki allows you to aggregate logs efficiently with minimal resource overhead, while Grafana Alloy acts as a powerful multipurpose agent that brings together log forwarding, rule evaluation, and alert dispatching—all in one place. PrometheusRules, traditionally associated with metric-based alerting, now extend their power into log-based observability through LogQL expressions, enabling alerts that are far more reflective of actual application behavior and runtime issues.
This system offers several operational benefits:
- Real-time awareness: Developers and operators get notified within minutes of suspicious log patterns, such as surges in `error`, `panic`, or specific trace messages.
- Centralized control: With Alloy managing the rules and forwarding alerts to Alertmanager, you have a centralized point of control for all telemetry signals.
- Scalable configuration: Using Helm makes the deployment repeatable and scalable, fitting seamlessly into GitOps pipelines and CI/CD workflows.
- Customizable and secure: You can fine-tune alerts per namespace, label them with severities, and route them to appropriate teams or systems using Alertmanager routing.
Beyond the basics covered in this tutorial, there’s room for substantial enhancement and extension:
- Multi-cluster observability: Alloy and Loki can be configured to aggregate logs from multiple clusters into a central observability platform.
- Correlation of logs and metrics: Alerts based on logs can be cross-referenced with Prometheus metrics for richer diagnostics and root cause analysis.
- Integration with incident response tools: Alerts can be routed to systems like PagerDuty, Opsgenie, or Jira, tying directly into your incident response workflows.
- High availability and redundancy: In production environments, both Loki and Alertmanager should be configured for high availability with persistent storage and replicas.
- Security and compliance: Protect log data with RBAC, TLS, and audit logging to meet enterprise and regulatory requirements.
Looking forward, the ability to write and manage PrometheusRule-based log alerts declaratively via Kubernetes resources positions your team to adopt Infrastructure as Code (IaC) practices fully for observability. It allows SRE and DevOps teams to codify alert logic, version-control it, and review it just like any other mission-critical configuration.
Furthermore, Grafana Alloy’s modular and extensible nature means you are not locked into a single observability vendor. It supports OpenTelemetry pipelines, multiple receivers, remote writes, and integrates well with distributed tracing backends—making it a future-proof choice for evolving observability strategies.
In conclusion, implementing PrometheusRule alerts for Loki logs using Grafana Alloy and Helm empowers your Kubernetes platform with high-fidelity, log-aware alerting, reduces your Mean Time to Detect (MTTD), and increases overall service reliability. It’s a move that not only strengthens operational excellence today but also lays the foundation for more intelligent, automated observability systems tomorrow.
Whether you’re just getting started with Kubernetes monitoring or looking to enhance your existing observability stack, this approach is a scalable, modular, and enterprise-grade solution you can build on with confidence.