Modern observability demands not only the collection of metrics and logs but also proactive alerting. When working with Grafana Loki for log aggregation in Kubernetes, combining it with PrometheusRule-based alerting provides near real-time notifications driven by log patterns. Grafana Alloy (the successor to Grafana Agent) gives us a unified agent for collecting logs, scraping metrics, and managing alert rules. In this article, we’ll explore how to configure PrometheusRule alerts based on Loki logs in Kubernetes using Grafana Alloy and Helm.

Overview Of The Architecture

Before diving into the setup, let’s understand the key components involved:

  • Loki: A log aggregation system designed by Grafana Labs for storing and querying logs.

  • Grafana Alloy: A unified telemetry collector that can replace Promtail, Grafana Agent, and more, handling log collection, metrics scraping, and alert rule management.

  • PrometheusRule: A Kubernetes custom resource (defined by the Prometheus Operator) for declaring alerting and recording rules; here it carries LogQL-based rules.

  • Helm: A package manager for Kubernetes used to install and manage applications.

Here’s what we aim to achieve:

  1. Loki collects logs from Kubernetes pods.

  2. Grafana Alloy discovers PrometheusRule resources whose expressions are LogQL queries and syncs them to Loki’s ruler for evaluation.

  3. Alerts are triggered when specific log patterns (e.g., “error”, “panic”) are detected.

  4. Alertmanager receives alerts and routes them to your preferred notification channels.

Prerequisites

Before starting, ensure the following are installed and configured:

  • A Kubernetes cluster (Minikube, Kind, EKS, etc.)

  • kubectl configured to access the cluster

  • helm CLI installed

  • Grafana Helm repositories added

  • Basic familiarity with Loki and Prometheus

  • The PrometheusRule custom resource definition (CRD) available in the cluster (it ships with the Prometheus Operator or kube-prometheus-stack)

Install Loki via Helm

First, install Loki using the official Grafana Helm chart.

bash
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
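
To confirm the repository is available, a quick search should list the charts used in this article (exact versions will vary):

bash
helm search repo grafana | grep -E 'loki-stack|alloy'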

Then install Loki with persistence disabled for simplicity (enable it in production). Promtail is enabled here to ship pod logs; if you would rather let Alloy collect logs (as configured later), set promtail.enabled=false so the same logs are not ingested twice:

bash
helm upgrade --install loki grafana/loki-stack \
--namespace observability --create-namespace \
--set promtail.enabled=true \
--set loki.persistence.enabled=false \
--set grafana.enabled=false \
--set prometheus.enabled=false \
--set alertmanager.enabled=false

This will deploy:

  • Loki for log aggregation

  • Promtail to ship pod logs to Loki
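
Before moving on, it’s worth confirming that the release came up cleanly. A minimal check, assuming the observability namespace used above:

bash
# Loki and Promtail pods should reach Running/Ready state
kubectl get pods,svc -n observability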

Install Grafana Alloy

Grafana Alloy can be installed via the grafana/alloy Helm chart:

bash
helm upgrade --install grafana-alloy grafana/alloy \
--namespace observability \
--set-file alloy.configMap.content=config.alloy

You’ll need to create the config.alloy file next; it defines how pod logs are collected and shipped to Loki, and how PrometheusRule resources are picked up for rule evaluation.

Configure Alloy For Log Alerting

Grafana Alloy uses its own declarative configuration syntax, the successor to Grafana Agent’s Flow mode. Here’s a basic config.alloy file that collects pod logs, forwards them to Loki, and mirrors PrometheusRule resources into Loki’s ruler so their LogQL alert rules are evaluated.

hcl

// config.alloy

logging {
  level  = "info"
  format = "logfmt"
}

// Discover Kubernetes pods so their logs can be collected.
discovery.kubernetes "pods" {
  role = "pod"
}

// Map discovery metadata to the namespace and pod labels used by the alert rule.
discovery.relabel "pod_logs" {
  targets = discovery.kubernetes.pods.targets

  rule {
    source_labels = ["__meta_kubernetes_namespace"]
    target_label  = "namespace"
  }

  rule {
    source_labels = ["__meta_kubernetes_pod_name"]
    target_label  = "pod"
  }
}

// Tail pod logs through the Kubernetes API and forward them to Loki.
// Note: when Alloy runs as a DaemonSet, enable clustering or restrict the
// targets so the same logs are not collected by every replica.
loki.source.kubernetes "pods" {
  targets    = discovery.relabel.pod_logs.output
  forward_to = [loki.write.default.receiver]
}

// Push collected logs to Loki.
loki.write "default" {
  endpoint {
    url = "http://loki.observability.svc.cluster.local:3100/loki/api/v1/push"
  }
}

// Mirror PrometheusRule resources into Loki's ruler, which evaluates the
// LogQL expressions and sends firing alerts to the Alertmanager configured
// in Loki's ruler settings.
loki.rules.kubernetes "rules" {
  address = "http://loki.observability.svc.cluster.local:3100"
}

Apply this configuration using the Helm --set-file alloy.configMap.content flag shown earlier, or mount it via your own ConfigMap referenced from the chart values. Two things need to be in place for the rule pipeline to work: Loki’s ruler must be enabled and configured with an Alertmanager URL (ruler.alertmanager_url) so evaluated alerts are dispatched, and Alloy’s service account needs read access to PrometheusRule resources (API group monitoring.coreos.com).
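
After installing (or upgrading) the release with --set-file as shown earlier, it’s worth confirming that Alloy rolled out and loaded the configuration without component errors. A minimal sketch, assuming the chart’s default DaemonSet controller and standard labels (adjust if you changed controller.type or the release name):

bash
# Wait for the Alloy pods to come up
kubectl rollout status daemonset/grafana-alloy -n observability

# Look for configuration or component errors in Alloy's own logs
kubectl logs -n observability -l app.kubernetes.io/name=alloy --tail=50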

Create PrometheusRule For Log Alerts

Now let’s create a PrometheusRule custom resource that defines a LogQL-based alert rule.

yaml
# loki-log-alert.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: loki-log-alert
  namespace: observability
spec:
  groups:
    - name: loki-log-errors
      rules:
        - alert: HighErrorRate
          expr: |
            sum by (pod) (
              rate({namespace="default"} |= "error" [1m])
            ) > 0.1
          for: 2m
          labels:
            severity: warning
          annotations:
            summary: "High error rate in pod {{ $labels.pod }}"
            description: "There are more than 0.1 error logs per second in pod {{ $labels.pod }}."

Apply it using:

bash
kubectl apply -f loki-log-alert.yaml

This rule will trigger an alert if more than 0.1 “error” log entries per second appear in a pod in the default namespace for over 2 minutes.
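
Once Alloy has mirrored the rule, Loki’s ruler API should list it. A quick check, assuming the ruler API is enabled and the loki service name from the earlier install:

bash
# Forward the Loki port locally, then list the rule groups known to the ruler
kubectl port-forward svc/loki 3100:3100 -n observability &
sleep 2
curl -s http://localhost:3100/loki/api/v1/rules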

Deploy Alertmanager (Optional)

If you don’t already have Alertmanager running, install it with:

bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install alertmanager prometheus-community/alertmanager \
--namespace observability

You can configure Alertmanager to route alerts to Slack, PagerDuty, email, etc. Here’s a simple Alertmanager config snippet:

yaml
# alertmanager-config.yaml
global:
  resolve_timeout: 5m
route:
  receiver: default-receiver
  group_wait: 10s
  group_interval: 5m
  repeat_interval: 3h
receivers:
  - name: default-receiver
    email_configs:
      - to: your-email@example.com

Apply it via a Secret or Helm values override.
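
As a hedged sketch of the Helm values route, the snippet above can be nested under a top-level config: key and passed to the release; the config: key reflects my reading of the prometheus-community/alertmanager chart’s values layout, so verify it against the chart’s values.yaml for your version:

bash
# Wrap alertmanager-config.yaml under the chart's `config:` values key
# (key name is an assumption about the chart's values layout)
{ echo "config:"; sed 's/^/  /' alertmanager-config.yaml; } > alertmanager-values.yaml

helm upgrade --install alertmanager prometheus-community/alertmanager \
--namespace observability \
-f alertmanager-values.yaml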

Verify Alert Triggering

To test the alert:

  1. Deploy a pod that logs the word “error” frequently:

yaml
apiVersion: v1
kind: Pod
metadata:
  name: error-logger
  namespace: default
  labels:
    app: logger
spec:
  containers:
    - name: logger
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - while true; do echo "error: something failed"; sleep 1; done
  2. Monitor the logs in Loki via Grafana (if installed) or query Loki directly via its HTTP API:

bash
kubectl port-forward svc/loki 3100:3100 -n observability
curl -G http://localhost:3100/loki/api/v1/query \
--data-urlencode 'query={namespace="default"} |= "error"'
  3. Wait at least 2 minutes (the for: duration in the rule) for the alert to start firing.

  4. Confirm the PrometheusRule resource is applied and valid:

bash
kubectl get prometheusrules -n observability
kubectl describe prometheusrule loki-log-alert -n observability
  5. Check active alerts via Alertmanager or the Grafana UI; a quick API check is sketched below.
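
For the last step, you can query Alertmanager’s API directly instead of using a UI. This is a minimal sketch assuming the Alertmanager service is named alertmanager and listens on port 9093 in the observability namespace:

bash
# Forward the Alertmanager port locally, then list currently active alerts
kubectl port-forward svc/alertmanager 9093:9093 -n observability &
sleep 2
curl -s http://localhost:9093/api/v2/alerts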

View Alerts In Grafana (Optional)

If Grafana is installed, you can connect it to Loki and Alertmanager to view and manage alerts:

  • Add Loki as a data source

  • Add Alertmanager as a data source

  • Go to the Alerting → Alerts page to see active alerts

Best Practices

  • Namespace your PrometheusRules logically to avoid clutter.

  • Use recording rules for complex queries to reduce computation overhead.

  • Store alert history in Alertmanager or Grafana for auditing.

  • Always validate rule expressions using Grafana Explore or Loki API.

  • Secure Alertmanager with RBAC and authentication if exposed.

Troubleshooting Tips

  • If alerts don’t trigger, verify:

    • Logs are reaching Loki (kubectl logs promtail or Alloy logs)

    • Alloy has synced the rules and Loki’s ruler is evaluating them (kubectl logs grafana-alloy, plus the ruler’s /loki/api/v1/rules endpoint)

    • PrometheusRule is applied and valid (kubectl describe)

    • Alertmanager is reachable and healthy

  • Confirm your expr works by testing it in Grafana’s Explore tab. A few commands covering these checks are sketched below.
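
The commands below cover these checks; they reuse the release names and namespace from earlier sections, so adjust names and label selectors to your cluster:

bash
# Is Loki up and ready to serve queries and ruler requests?
kubectl port-forward svc/loki 3100:3100 -n observability &
sleep 2
curl -s http://localhost:3100/ready

# Is the PrometheusRule resource present and well-formed?
kubectl get prometheusrule loki-log-alert -n observability -o yaml

# Is Alertmanager reachable (does its Service have endpoints)?
kubectl get endpoints alertmanager -n observability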

Conclusion

Enabling PrometheusRule alerts for Loki logs in a Kubernetes environment using Grafana Alloy and Helm is more than just a technical configuration—it’s a strategic step toward building a proactive, intelligent observability system tailored for modern, cloud-native applications.

By integrating these components, you unlock a highly modular and scalable architecture where logs, metrics, and alerts work together. Grafana Loki aggregates logs efficiently with minimal resource overhead, while Grafana Alloy acts as a multipurpose agent that brings log collection and rule management together in one place, leaving evaluation to Loki’s ruler and alert dispatch to Alertmanager. PrometheusRules, traditionally associated with metric-based alerting, now extend into log-based observability through LogQL expressions, enabling alerts that reflect actual application behavior and runtime issues.

This system offers several operational benefits:

  • Real-time awareness: Developers and operators get notified within minutes of suspicious log patterns, such as surges in error, panic, or specific trace messages.

  • Centralized control: With Alloy managing rule distribution and Alertmanager handling routing, you have a centralized point of control for all telemetry signals.

  • Scalable configuration: Using Helm makes the deployment repeatable and scalable, fitting seamlessly into GitOps pipelines and CI/CD workflows.

  • Customizable and secure: You can fine-tune alerts per namespace, label them with severities, and route them to appropriate teams or systems using Alertmanager routing.

Beyond the basics covered in this tutorial, there’s room for substantial enhancement and extension:

  • Multi-cluster observability: Alloy and Loki can be configured to aggregate logs from multiple clusters into a central observability platform.

  • Correlation of logs and metrics: Alerts based on logs can be cross-referenced with Prometheus metrics for richer diagnostics and root cause analysis.

  • Integration with incident response tools: Alerts can be routed to systems like PagerDuty, Opsgenie, or Jira, tying directly into your incident response workflows.

  • High availability and redundancy: In production environments, both Loki and Alertmanager should be configured for high availability with persistent storage and replicas.

  • Security and compliance: Protect log data with RBAC, TLS, and audit logging to meet enterprise and regulatory requirements.

Looking forward, the ability to write and manage PrometheusRule-based log alerts declaratively via Kubernetes resources positions your team to adopt Infrastructure as Code (IaC) practices fully for observability. It allows SRE and DevOps teams to codify alert logic, version-control it, and review it just like any other mission-critical configuration.

Furthermore, Grafana Alloy’s modular and extensible nature means you are not locked into a single observability vendor. It supports OpenTelemetry pipelines, multiple receivers, remote writes, and integrates well with distributed tracing backends—making it a future-proof choice for evolving observability strategies.

In conclusion, implementing PrometheusRule alerts for Loki logs using Grafana Alloy and Helm empowers your Kubernetes platform with high-fidelity, log-aware alerting, reduces your Mean Time to Detect (MTTD), and increases overall service reliability. It’s a move that not only strengthens operational excellence today but also lays the foundation for more intelligent, automated observability systems tomorrow.

Whether you’re just getting started with Kubernetes monitoring or looking to enhance your existing observability stack, this approach is a scalable, modular, and enterprise-grade solution you can build on with confidence.