Closed
Labels: bug (Something isn't working), receiver/prometheus (Prometheus receiver)
Description
Component(s)
receiver/prometheus
What happened?
Description
The Prometheus receiver fails to scrape the federate endpoint when honor_labels: true is set and the target metrics do not have instance and job labels, which can be the case for aggregated metrics.
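For illustration, a federated series produced by an aggregating recording rule might be exposed roughly like this (the metric name comes from the match[] expression in the config below; the prometheus label, sample value, and timestamp are invented for this sketch). Note that neither an instance nor a job label is present:
cluster:usage:containers:sum{prometheus="openshift-monitoring/k8s"} 42 1713518789233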
Log message:
2024-04-19T09:26:29.254Z warn internal/transaction.go:123 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}
Steps to Reproduce
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'federate'
          scrape_interval: 5s
          scheme: https
          tls_config:
            ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
          bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
          honor_labels: true
          params:
            'match[]':
              - '{__name__=~"cluster:usage:containers:sum"}'
Expected Result
Scraping of the federate endpoint succeeds with honor_labels: true for metrics that don't have instance and job labels.
Actual Result
2024-04-19T09:26:29.254Z warn internal/transaction.go:123 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}
From opentelemetry-collector-contrib/receiver/prometheusreceiver/internal/transaction.go, line 129 at commit 13fca79:
t.logger.Warn("Failed to scrape Prometheus endpoint",
Collector version
0.93.0
Environment information
Environment: Kubernetes
OpenTelemetry Collector configuration
prometheus:
  config:
    scrape_configs:
      - job_name: 'federate'
        scrape_interval: 5s
        honor_labels: true
        params:
          'match[]':
            - '{__name__=~"cluster:usage:containers:sum"}'
            - '{__name__=~"cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum"}'
            - '{__name__=~"workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum"}'
            - '{__name__="namespace_memory:kube_pod_container_resource_limits:sum|namespace_memory:kube_pod_container_resource_requests:sum|namespace_cpu:kube_pod_container_resource_limits:sum|namespace_cpu:kube_pod_container_resource_requests:sum|"}'
        metrics_path: '/federate'
        static_configs:
          - targets:
            - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"
Log output
2024-04-19T09:26:29.254Z warn internal/transaction.go:123 Failed to scrape Prometheus endpoint {"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}
Additional context
- Prometheus Receiver - honor_labels set to true doesnt work with federation #5757
- The federation docs suggest setting honor_labels: true (see the sketch after this list): https://prometheus.io/docs/prometheus/latest/federation/
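For reference, the federation pattern recommended on that page looks roughly like the sketch below; the job name, interval, match[] selectors, and target address are placeholders rather than the exact values from the docs:
scrape_configs:
  - job_name: 'federate'
    scrape_interval: 15s
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="prometheus"}'
        - '{__name__=~"job:.*"}'
    static_configs:
      - targets:
        - 'source-prometheus-1:9090'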