Pi Cluster Documentation

Observability Visualization (Grafana Operator)

Grafana is the visualization layer of the observability platform used in Pi Cluster. It connects to the telemetry backends deployed in the cluster: Prometheus for metrics, Loki for logs, and Tempo for traces.

This document explains how to deploy and configure Grafana with Grafana Operator using step-by-step Kubernetes manifests.

Why Grafana Operator

Grafana Operator makes Grafana configuration Kubernetes-native.

Main benefits:

  • Grafana itself is managed as a Grafana custom resource.
  • Datasources, folders, and dashboards are managed as first-class custom resources.
  • Changes are continuously reconciled by the operator.
  • Configuration becomes easier to split into small manifests instead of one large Helm values file.
  • Dashboards can be imported from Grafana.com, raw JSON URLs, inline JSON payloads, or existing ConfigMaps generated by third-party charts.
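
As a sketch of the small-manifests benefit, the split resources can be aggregated with a Kustomization. The file names and layout below are hypothetical; adapt them to your repository structure:

```yaml
# Hypothetical kustomization aggregating the split Grafana manifests.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: grafana
resources:
  - grafana.yaml                  # Grafana instance
  - datasources/prometheus.yaml
  - datasources/loki.yaml
  - folders/kubernetes.yaml
  - dashboards/fluent-bit.yaml
```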

If you are still using the classic Helm subchart-based deployment model, see Observability Visualization (Grafana). This page describes the operator-based deployment flow.

Installation

The installation phase creates the operator, the bootstrap credentials, and the Grafana instance.

Install the Operator

Grafana Operator is distributed as a Helm chart.

  • Create the namespace:

    kubectl create namespace grafana
    
  • Install the operator:

    helm upgrade --install grafana-operator \
      oci://ghcr.io/grafana/helm-charts/grafana-operator \
      --namespace grafana \
      --version 5.22.2
    
  • Optionally enable operator metrics and the operator dashboard by passing these values to the helm command with -f values.yaml:

    serviceMonitor:
      enabled: true
      interval: 30s
    dashboard:
      enabled: true
    
  • Verify the installation:

    kubectl -n grafana get pods
    kubectl get crd | grep grafana.integreatly.org
    

You should see the operator pod running and the Grafana CRDs installed.

Create the Admin Credentials Secret

The simplest way to bootstrap Grafana is to provide the admin username and password through a Secret.

apiVersion: v1
kind: Secret
metadata:
  name: grafana-admin-credentials
  namespace: grafana
type: Opaque
stringData:
  admin-user: admin
  admin-password: change-me

Apply it:

kubectl apply -f grafana-admin-secret.yaml

If you already use a secret manager, this Secret can also be created indirectly with tools such as External Secrets Operator or Sealed Secrets.
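
For example, a hypothetical ExternalSecret could generate the same Secret from an external backend. This sketch assumes External Secrets Operator is installed and that a ClusterSecretStore named vault-backend and the remote key grafana/admin exist:

```yaml
# Hypothetical ExternalSecret producing grafana-admin-credentials.
# Store name and remote keys depend on your secret backend.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: grafana-admin-credentials
  namespace: grafana
spec:
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: grafana-admin-credentials
  data:
    - secretKey: admin-user
      remoteRef:
        key: grafana/admin
        property: user
    - secretKey: admin-password
      remoteRef:
        key: grafana/admin
        property: password
```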

Create the Grafana Instance

Once the operator is installed, create a Grafana custom resource.

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
  labels:
    dashboards: grafana
spec:
  version: 13.0.1
  disableDefaultAdminSecret: true
  config:
    analytics:
      check_for_updates: "false"
      check_for_plugin_updates: "false"
      feedback_links_enabled: "false"
      reporting_enabled: "false"
    log:
      mode: console
    metrics:
      enabled: "true"
  deployment:
    spec:
      template:
        spec:
          containers:
            - name: grafana
              env:
                - name: GF_SECURITY_ADMIN_USER
                  valueFrom:
                    secretKeyRef:
                      name: grafana-admin-credentials
                      key: admin-user
                - name: GF_SECURITY_ADMIN_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: grafana-admin-credentials
                      key: admin-password
                - name: GF_PLUGINS_PREINSTALL_SYNC
                  value: elasticsearch,grafana-piechart-panel
                - name: GF_PLUGINS_PREINSTALL_AUTO_UPDATE
                  value: "false"

Starting with Grafana v13, Elasticsearch is a standalone datasource plugin. In operator-managed deployments that run Grafana from a read-only image, disable preinstall auto-updates so Grafana does not try to rewrite the bundled Elasticsearch plugin during startup.

Apply it:

kubectl apply -f grafana.yaml

Verify that the instance becomes ready:

kubectl -n grafana get grafana
kubectl -n grafana get pods
kubectl -n grafana get svc

Important points:

  • the label dashboards: grafana is used later by datasources, folders, and dashboards through instanceSelector
  • disableDefaultAdminSecret: true tells the operator to use your own credentials Secret instead of generating one
  • container environment variables are the most direct way to pass credentials into Grafana runtime configuration

Configuration

After the Grafana instance exists, configure how it is exposed and what resources it manages.

Gateway API Configuration

Grafana is exposed in Pi Cluster through Kubernetes Gateway API and Envoy Gateway using a dedicated hostname.

See also: Expose Grafana using Kubernetes Gateway API and Serving Grafana with Envoy Gateway.

With Grafana Operator, the route is configured directly in the Grafana resource using spec.httpRoute:

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  httpRoute:
    spec:
      hostnames:
        - grafana.${CLUSTER_DOMAIN}
      parentRefs:
        - name: public-gateway
          namespace: envoy-gateway-system
      rules:
        - backendRefs:
            - name: grafana-service
              port: 3000
          matches:
            - path:
                type: PathPrefix
                value: /
  config:
    server:
      domain: grafana.${CLUSTER_DOMAIN}
      root_url: https://%(domain)s/

With this configuration:

  • Grafana public URL is https://grafana.${CLUSTER_DOMAIN}/
  • Envoy Gateway routes traffic through HTTPRoute
  • Grafana keeps its own server configuration aligned with the public hostname
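
The manifests above use a ${CLUSTER_DOMAIN} placeholder. How it gets rendered depends on your GitOps tooling (Flux or Kustomize variable substitution, for instance); as a minimal standalone sketch, it can be substituted with sed before applying:

```shell
# Minimal sketch: substitute the ${CLUSTER_DOMAIN} placeholder in a manifest
# fragment before applying it. The domain value is a placeholder.
CLUSTER_DOMAIN=example.com
rendered=$(printf 'hostnames:\n  - grafana.${CLUSTER_DOMAIN}\n' \
  | sed "s/\${CLUSTER_DOMAIN}/${CLUSTER_DOMAIN}/g")
echo "$rendered"
```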

Provisioning Data Sources

With Grafana Operator, datasources are provisioned using GrafanaDatasource custom resources instead of Helm provisioning files or Grafana sidecars.

This approach has two main advantages over the Helm-sidecar model:

  • datasources are declarative Kubernetes resources
  • reconciliation is handled by the operator instead of by filesystem provisioning inside the Grafana pod

Prometheus

Prometheus is the default metrics datasource used by dashboards, Explore, and trace-to-metrics links.

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: prometheus
  namespace: grafana
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
    name: Prometheus
    uid: prometheus
    type: prometheus
    access: proxy
    url: http://kube-prometheus-stack-prometheus.kube-prom-stack.svc.cluster.local:9090
    isDefault: true

Alertmanager

Alertmanager is configured as a separate datasource so Grafana Alerting can browse Prometheus Alertmanager resources, especially silences.

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: alertmanager
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
    name: Alertmanager
    uid: alertmanager
    type: alertmanager
    access: proxy
    url: http://kube-prometheus-stack-alertmanager.kube-prom-stack.svc.cluster.local:9093
    jsonData:
      implementation: prometheus

Loki

Loki is the logs datasource. In addition to the service URL, the datasource config defines a derived field so log entries with a trace_id label can deep-link into Tempo.

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: loki
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
    name: Loki
    uid: loki
    type: loki
    access: proxy
    url: http://loki-read-headless.loki.svc.cluster.local:3100
    jsonData:
      derivedFields:
        - datasourceUid: tempo
          matcherRegex: trace_id
          matcherType: label
          name: TraceID
          url: $${__value.raw}

Tempo

Tempo is the tracing datasource. Its configuration also wires Tempo to Loki for trace-to-logs, to Prometheus for trace-to-metrics and service maps, and enables node graph and streaming features.

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: tempo
spec:
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
    name: Tempo
    uid: tempo
    type: tempo
    access: proxy
    url: http://tempo-query-frontend.tempo.svc.cluster.local:3200
    basicAuth: false
    jsonData:
      tracesToLogsV2:
        datasourceUid: loki
        spanStartTimeShift: -1h
        spanEndTimeShift: 1h
        filterByTraceID: false
        filterBySpanID: false
        customQuery: true
        query: '{$${__tags}} | trace_id="$${__span.traceId}"'
      tracesToMetrics:
        datasourceUid: prometheus
        spanStartTimeShift: -1h
        spanEndTimeShift: 1h
        tags:
          - key: service.name
            value: service
          - key: job
        queries:
          - name: Sample query
            query: sum(rate(traces_spanmetrics_latency_bucket{$__tags}[5m]))
      serviceMap:
        datasourceUid: prometheus
      nodeGraph:
        enabled: true
      search:
        hide: false
      traceQuery:
        timeShiftEnabled: true
        spanStartTimeShift: -1h
        spanEndTimeShift: 1h
      spanBar:
        type: Tag
        tag: http.path
      streamingEnabled:
        search: true

Elasticsearch

Elasticsearch is used as the logs backend for OpenTelemetry log records. In Pi Cluster, credentials are injected from a Kubernetes Secret through valuesFrom, while the datasource itself targets the logs-*.otel-* data streams.

Starting with Grafana v13, the Elasticsearch datasource is a standalone plugin. When Grafana is deployed from a read-only container image, keep the plugin preinstalled but disable preinstall auto-update to avoid startup failures caused by attempts to rewrite the read-only plugins-bundled directory.

A dedicated grafana user with read-only permissions to the relevant indices/data streams is recommended for production environments.

Enable ES|QL for Elasticsearch

Grafana must enable the ES|QL feature toggle before dashboards can run datasource queries with queryType: esql.

In Grafana Operator deployments, enable it by adding the GF_FEATURE_TOGGLES_ENABLE environment variable with the value elasticsearchESQLQuery to the Grafana container:

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  labels:
    dashboards: grafana
spec:
  deployment:
    spec:
      template:
        spec:
          containers:
            - name: grafana
              env:
                - name: GF_FEATURE_TOGGLES_ENABLE
                  value: elasticsearchESQLQuery

Prerequisites: create Elasticsearch user and role

Create a read-only user and role in Elasticsearch for Grafana to use. The role should have permissions to read from the relevant indices or data streams, such as logs-*.otel-*.

The role must include at least:

  • Cluster privilege: monitor
  • Index patterns: logs-*.otel-*
  • Index privileges: read

Example role definition:

{
  "cluster": ["monitor"],
  "indices": [
    {
      "names": ["logs-*.otel-*"],
      "privileges": ["read", "view_index_metadata"]
    }
  ]
}

Example user definition:

{
  "username": "grafana",
  "password": "change-me",
  "roles": ["grafana-readonly"]
}



The datasource manifest, with credentials injected through valuesFrom:

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDatasource
metadata:
  name: elasticsearch
spec:
  valuesFrom:
    - targetPath: basicAuthUser
      valueFrom:
        secretKeyRef:
          name: grafana
          key: elasticsearch-username
    - targetPath: secureJsonData.basicAuthPassword
      valueFrom:
        secretKeyRef:
          name: grafana
          key: elasticsearch-password
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasource:
    name: Elasticsearch
    uid: elasticsearch
    type: elasticsearch
    access: proxy
    basicAuth: true
    basicAuthUser: ${elasticsearch-username}
    url: http://efk-es-http.elastic.svc.cluster.local:9200
    secureJsonData:
      basicAuthPassword: ${elasticsearch-password}
    jsonData:
      index: logs-*.otel-*
      timeField: '@timestamp'

valuesFrom performs substitution into an existing ${...} placeholder in the datasource model. The placeholder name must match the referenced Secret key.
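
A minimal sketch of the referenced Secret, using the names expected by the valuesFrom entries above (actual values should come from your secret manager):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana
  namespace: grafana
type: Opaque
stringData:
  elasticsearch-username: grafana
  elasticsearch-password: change-me
```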

Provisioning Dashboards

Provisioning patterns

Grafana Operator supports several dashboard provisioning patterns.

Pattern 1: Import a dashboard from Grafana.com

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: grafana-operator-dashboard
  namespace: grafana
spec:
  folder: Grafana
  instanceSelector:
    matchLabels:
      dashboards: grafana
  grafanaCom:
    id: 22785
    revision: 2

Pattern 2: Import a dashboard from a raw JSON URL

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: custom-dashboard
  namespace: grafana
spec:
  folder: Infrastructure
  instanceSelector:
    matchLabels:
      dashboards: grafana
  url: https://raw.githubusercontent.com/example-org/example-repo/main/dashboards/example.json

Pattern 3: Import a dashboard from an existing ConfigMap

This is useful when another Helm chart already publishes dashboard JSON in a ConfigMap.

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaDashboard
metadata:
  name: fluent-bit
  namespace: grafana
spec:
  allowCrossNamespaceImport: true
  instanceSelector:
    matchLabels:
      dashboards: grafana
  datasources:
    - datasourceName: Prometheus
      inputName: DS_PROMETHEUS
  folder: Logging
  configMapRef:
    name: fluent-bit-dashboard-fluent-bit
    key: fluent-bit-fluent-bit.json

This pattern is used for applications whose Helm charts already publish dashboards as ConfigMaps, such as Fluent Bit, Fluentd, Cilium, Strimzi, and CloudNativePG.

Creating K3s-specific Grafana dashboards

For the K3s-specific monitoring stack, this repository also generates dashboards and Prometheus rules from the same monitoring mixins consumed by kube-prometheus-stack, as described in /docs/monitoring/#k3s-duplicate-metrics-issue.

The main difference from the legacy Helm-sidecar approach is that dashboards are generated directly as GrafanaDashboard resources instead of ConfigMap objects labeled for Grafana discovery.

The build files live under kubernetes/platform/kube-prometheus-stack/k3s-mixins/build and produce two kinds of output:

  • GrafanaDashboard resources for Grafana Operator
  • Prometheus rule manifests generated from the same mixins

The Jsonnet entrypoint defines the mixins to render and converts each generated dashboard into a GrafanaDashboard custom resource:

# We use helper functions from kube-prometheus to generate dashboards and alerts for Kubernetes.
local addMixin = (import 'kube-prometheus/lib/mixin.libsonnet');

local kubernetesMixin = addMixin({
  name: 'kubernetes',
  dashboardFolder: 'Kubernetes',
  mixin: (import 'kubernetes-mixin/mixin.libsonnet') + {
    _config+:: {
      cadvisorSelector: 'job="kubelet"',
      kubeletSelector: 'job="kubelet"',
      kubeSchedulerSelector: 'job="kubelet"',
      kubeControllerManagerSelector: 'job="kubelet"',
      kubeApiserverSelector: 'job="kubelet"',
      kubeProxySelector: 'job="kubelet"',
      showMultiCluster: false,
    },
  },
});

local nodeExporterMixin = addMixin({
  name: 'node-exporter',
  dashboardFolder: 'General',
  mixin: (import 'node-mixin/mixin.libsonnet') + {
    _config+:: {
      nodeExporterSelector: 'job="node-exporter"',
      showMultiCluster: false,
    },
  },
});

local corednsMixin = addMixin({
  name: 'coredns',
  dashboardFolder: 'DNS',
  mixin: (import 'coredns-mixin/mixin.libsonnet') + {
    _config+:: {
      corednsSelector: 'job="coredns"',
    },
  },
});

local etcdMixin = addMixin({
  name: 'etcd',
  dashboardFolder: 'Kubernetes',
  mixin: (import 'github.com/etcd-io/etcd/contrib/mixin/mixin.libsonnet') + {
    _config+:: {
      clusterLabel: 'cluster',
    },
  },
});

local grafanaMixin = addMixin({
  name: 'grafana',
  dashboardFolder: 'Grafana',
  mixin: (import 'grafana-mixin/mixin.libsonnet') + {
    _config+:: {},
  },
});

local prometheusMixin = addMixin({
  name: 'prometheus',
  dashboardFolder: 'Prometheus',
  mixin: (import 'prometheus/mixin.libsonnet') + {
    _config+:: {
      showMultiCluster: false,
    },
  },
});

local prometheusOperatorMixin = addMixin({
  name: 'prometheus-operator',
  dashboardFolder: 'Prometheus Operator',
  mixin: (import 'prometheus-operator-mixin/mixin.libsonnet') + {
    _config+:: {},
  },
});

local stripJsonExtension(name) =
  local extensionIndex = std.findSubstr('.json', name);
  local n = if std.length(extensionIndex) < 1 then name else std.substr(name, 0, extensionIndex[0]);
  n;

local grafanaDashboardResource(folder, name, json) = {
  apiVersion: 'grafana.integreatly.org/v1beta1',
  kind: 'GrafanaDashboard',
  metadata: {
    name: 'grafana-dashboard-%s' % stripJsonExtension(name),
  },
  spec: {
    allowCrossNamespaceImport: true,
    folder: folder,
    instanceSelector: {
      matchLabels: {
        dashboards: 'grafana',
      },
    },
    json: std.manifestJsonEx(json, '    '),
  },
};

local generateGrafanaDashboardResources(mixin) = if std.objectHas(mixin, 'grafanaDashboards') && mixin.grafanaDashboards != null then {
  ['grafana-dashboard-' + stripJsonExtension(name)]: grafanaDashboardResource(folder, name, mixin.grafanaDashboards[folder][name])
  for folder in std.objectFields(mixin.grafanaDashboards)
  for name in std.objectFields(mixin.grafanaDashboards[folder])
} else {};

local nodeExporterMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(nodeExporterMixin);
local kubernetesMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(kubernetesMixin);
local corednsMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(corednsMixin);
local etcdMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(etcdMixin);
local grafanaMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(grafanaMixin);
local prometheusMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(prometheusMixin);
local prometheusOperatorMixinHelmGrafanaDashboards = generateGrafanaDashboardResources(prometheusOperatorMixin);

local grafanaDashboards =
  kubernetesMixinHelmGrafanaDashboards +
  nodeExporterMixinHelmGrafanaDashboards +
  corednsMixinHelmGrafanaDashboards +
  etcdMixinHelmGrafanaDashboards +
  grafanaMixinHelmGrafanaDashboards +
  prometheusMixinHelmGrafanaDashboards +
  prometheusOperatorMixinHelmGrafanaDashboards;

local prometheusAlerts = {
  'kubernetes-mixin-rules': kubernetesMixin.prometheusRules,
  'node-exporter-mixin-rules': nodeExporterMixin.prometheusRules,
  'coredns-mixin-rules': corednsMixin.prometheusRules,
  'etcd-mixin-rules': etcdMixin.prometheusRules,
  'grafana-mixin-rules': grafanaMixin.prometheusRules,
  'prometheus-mixin-rules': prometheusMixin.prometheusRules,
  'prometheus-operator-mixin-rules': prometheusOperatorMixin.prometheusRules,
};

grafanaDashboards + prometheusAlerts

The generated dashboards are written as standalone YAML manifests, not embedded into ConfigMaps. The generation script is still responsible for converting Jsonnet output to YAML and escaping only the Prometheus rule files so Helm-style template markers remain valid where needed:

#!/bin/sh

set -e # Exit on any error
set -u # Treat unset variables as an error

# Define paths
MIXINS_DIR="./templates"

# Function to escape YAML content
escape_yaml() {
  local file_path="$1"
  echo "Escaping $file_path..."
  sed -i \
    -e 's/{{/{{`{{/g' \
    -e 's/}}/}}`}}/g' \
    -e 's/{{`{{/{{`{{`}}/g' \
    -e 's/}}`}}/{{`}}`}}/g' \
    "$file_path"
  echo "Escaped $file_path."
}

echo "Cleaning templates directory..."
rm -rf ${MIXINS_DIR}/*
echo "Templates directory cleaned."

echo "Converting Jsonnet to YAML..."
jsonnet main.jsonnet -J vendor -m ${MIXINS_DIR} | xargs -I{} sh -c 'cat {} | gojsontoyaml > {}.yaml' -- {}
echo "Jsonnet conversion completed."

echo "Removing non-YAML files..."
find ${MIXINS_DIR} -type f ! -name "*.yaml" -exec rm {} +
echo "Non-YAML files removed."

echo "Escaping YAML files..."
find ${MIXINS_DIR} -name '*-rules.yaml' | while read -r file; do
  escape_yaml "$file"
done
echo "YAML files escaped."

echo "Processing completed successfully!"
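
To illustrate the escaping, here is what the same sed pipeline does to one sample line containing a Helm-style template marker (the input line is hypothetical):

```shell
# Sketch: apply the same substitutions as escape_yaml to a sample alert line.
input='summary: Pod {{ $labels.pod }} is restarting'
escaped=$(printf '%s\n' "$input" | sed \
  -e 's/{{/{{`{{/g' \
  -e 's/}}/}}`}}/g' \
  -e 's/{{`{{/{{`{{`}}/g' \
  -e 's/}}`}}/{{`}}`}}/g')
# Helm renders {{`{{`}} as a literal "{{" and {{`}}`}} as a literal "}}",
# so the marker survives templating unchanged.
echo "$escaped"
```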

The Docker build environment and local target used in this repository are:

FROM golang:1.26.1-alpine AS build
LABEL stage=builder

WORKDIR /k3s-mixins

COPY src/ .

RUN apk add git
RUN go install github.com/google/go-jsonnet/cmd/jsonnet@latest
RUN go install github.com/jsonnet-bundler/jsonnet-bundler/cmd/jb@latest
RUN go install github.com/brancz/gojsontoyaml@latest

RUN jb init
RUN jb install github.com/kubernetes-monitoring/kubernetes-mixin@master
RUN jb install github.com/prometheus-operator/kube-prometheus/jsonnet/kube-prometheus@main
RUN jb install github.com/povilasv/coredns-mixin@master

RUN mkdir templates
RUN chmod +x generate.sh
RUN ./generate.sh

FROM scratch AS mixins
COPY --from=build /k3s-mixins/templates /

The Makefile target that drives the build and moves the generated manifests into place:

.PHONY: k3s-mixins

k3s-mixins:
	docker build --no-cache --target mixins --output out/ .
	mv out/*-rules.yaml ../base/rules/.
	mv out/*.yaml ../base/dashboards/.

Run the build from kubernetes/platform/kube-prometheus-stack/k3s-mixins/build:

make k3s-mixins

With this workflow:

  • dashboards generated from mixins land in kubernetes/platform/kube-prometheus-stack/k3s-mixins/base/dashboards as GrafanaDashboard manifests
  • Prometheus rules land in kubernetes/platform/kube-prometheus-stack/k3s-mixins/base/rules
  • the resulting dashboards are reconciled natively by Grafana Operator instead of being discovered by Grafana sidecars through labeled ConfigMaps

Provisioning Folders

Grafana folders are also managed declaratively using GrafanaFolder resources.

Example:

apiVersion: grafana.integreatly.org/v1beta1
kind: GrafanaFolder
metadata:
  name: kubernetes-folder
spec:
  title: Kubernetes
  instanceSelector:
    matchLabels:
      dashboards: grafana

Apply the folder manifests:

kubectl apply -f grafana-folders.yaml

Persistence

If Grafana should keep plugins, dashboards written through the UI, alert history, or local state across pod restarts, add a persistent volume claim.

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  persistentVolumeClaim:
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: longhorn
      resources:
        requests:
          storage: 5Gi

For a single replica with a single-writer volume, Recreate is the safest rollout strategy:

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  deployment:
    spec:
      strategy:
        type: Recreate

Single Sign-On - IAM Integration

Grafana can be integrated with the cluster IAM solution to delegate authentication and enable SSO. Grafana OSS supports OpenID Connect / OAuth 2.0.

In Pi Cluster, SSO is implemented with Keycloak.

See details about Keycloak installation in “SSO with Keycloak”.

Keycloak Configuration: Configure Grafana Client

The same Keycloak client configuration principles described in Observability Visualization (Grafana) apply here as well:

  • Create OIDC client grafana
  • Configure redirect URI https://grafana.${CLUSTER_DOMAIN}/login/generic_oauth
  • Create client roles admin, editor, and viewer
  • Make roles available in ID token / user info

Grafana SSO Configuration

With Grafana Operator, the SSO configuration is patched directly into the Grafana resource instead of being written to Helm values.

SSO patch used in this repository:

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  config:
    auth.generic_oauth:
      enabled: "true"
      name: Keycloak-OAuth
      allow_sign_up: "true"
      client_id: $${GF_AUTH_GENERIC_OAUTH_CLIENT_ID}
      client_secret: $${GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET}
      scopes: openid email profile offline_access roles
      email_attribute_path: email
      login_attribute_path: username
      name_attribute_path: full_name
      auth_url: https://iam.${CLUSTER_DOMAIN}/realms/picluster/protocol/openid-connect/auth
      token_url: https://iam.${CLUSTER_DOMAIN}/realms/picluster/protocol/openid-connect/token
      api_url: https://iam.${CLUSTER_DOMAIN}/realms/picluster/protocol/openid-connect/userinfo
      role_attribute_path: contains(resource_access.grafana.roles[*], 'admin') && 'Admin' || contains(resource_access.grafana.roles[*], 'editor') && 'Editor' || (contains(resource_access.grafana.roles[*], 'viewer') && 'Viewer')
      signout_redirect_url: https://iam.${CLUSTER_DOMAIN}/realms/picluster/protocol/openid-connect/logout?client_id=grafana&post_logout_redirect_uri=https%3A%2F%2Fgrafana.${CLUSTER_DOMAIN}%2Flogin%2Fgeneric_oauth

Client credentials are loaded from a Secret through environment variables injected into the Grafana container:

- name: GF_AUTH_GENERIC_OAUTH_CLIENT_ID
  valueFrom:
    secretKeyRef:
      name: grafana-env-secret
      key: GF_AUTH_GENERIC_OAUTH_CLIENT_ID
- name: GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET
  valueFrom:
    secretKeyRef:
      name: grafana-env-secret
      key: GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET

This keeps OAuth credentials out of the Grafana manifest while still allowing the operator to reconcile the full runtime configuration.
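
A minimal sketch of that Secret, using the names referenced above (the client secret value comes from the Keycloak client configuration):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: grafana-env-secret
  namespace: grafana
type: Opaque
stringData:
  GF_AUTH_GENERIC_OAUTH_CLIENT_ID: grafana
  GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: change-me
```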

Observability

Metrics

Grafana metrics can be enabled directly in the Grafana spec:

apiVersion: grafana.integreatly.org/v1beta1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  config:
    metrics:
      enabled: "true"

If Prometheus Operator is installed, scrape Grafana with a ServiceMonitor:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: grafana
  namespace: grafana
spec:
  endpoints:
    - interval: 30s
      path: /metrics
      port: grafana
  selector:
    matchLabels:
      app.kubernetes.io/managed-by: grafana-operator
      dashboards: grafana

This ServiceMonitor matches the Grafana service created by the operator with the label dashboards: grafana and scrapes the /metrics endpoint every 30 seconds.

Verify the metric endpoint:

kubectl -n grafana port-forward svc/grafana-service 3000:3000
curl http://127.0.0.1:3000/metrics

Last Update: Apr 05, 2026
