
Prometheus receiver fails on federate endpoint when job and instance labels are missing #32555

@pavolloffay


Component(s)

receiver/prometheus

What happened?

Description

The Prometheus receiver fails to scrape the federate endpoint when honor_labels: true is set and the scraped metric does not carry instance and job labels, which can be the case for aggregated metrics (for example, the output of recording rules).

Log message:

2024-04-19T09:26:29.254Z	warn	internal/transaction.go:123	Failed to scrape Prometheus endpoint	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}
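For context, the failure appears to stem from the receiver requiring every scraped series to resolve a job and instance label pair. A minimal sketch of the assumed check follows; the function name and error text are illustrative, not copied from the receiver's actual code in internal/transaction.go:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoJobInstance mimics the kind of error the receiver presumably
// returns when a series cannot be attributed to a target (assumed name).
var errNoJobInstance = errors.New("job or instance cannot be found from labels")

// getJobInstance sketches the assumed check: with honor_labels: true the
// metric's own labels take precedence, so an aggregated series federated
// without job/instance labels fails here instead of inheriting the
// target's labels.
func getJobInstance(labels map[string]string) (string, string, error) {
	job, instance := labels["job"], labels["instance"]
	if job == "" || instance == "" {
		return "", "", errNoJobInstance
	}
	return job, instance, nil
}

func main() {
	// An aggregated recording-rule series from /federate: no job/instance.
	_, _, err := getJobInstance(map[string]string{"__name__": "cluster:usage:containers:sum"})
	fmt.Println(err) // job or instance cannot be found from labels
}
```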

Steps to Reproduce

    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: 'federate'
              scrape_interval: 5s
              scheme: https
              tls_config:
                ca_file: /etc/pki/ca-trust/source/service-ca/service-ca.crt
              bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
              honor_labels: true
              params:
                'match[]':
                  - '{__name__=~"cluster:usage:containers:sum"}'

Expected Result

Scraping the federate endpoint with honor_labels: true should succeed for metrics that do not have instance and job labels.
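One way this expectation could be satisfied is to fall back to the scrape target's own labels when honor_labels: true leaves job or instance unset on a series. The sketch below is illustrative only, under that assumption; it is not a proposed patch and the names are hypothetical:

```go
package main

import "fmt"

// resolveJobInstance sketches a possible fallback: prefer the metric's own
// labels (honor_labels: true semantics), but use the scrape target's
// labels instead of failing when they are absent. Names are hypothetical.
func resolveJobInstance(metricLabels, targetLabels map[string]string) (string, string) {
	job, instance := metricLabels["job"], metricLabels["instance"]
	if job == "" {
		job = targetLabels["job"]
	}
	if instance == "" {
		instance = targetLabels["instance"]
	}
	return job, instance
}

func main() {
	target := map[string]string{
		"job":      "federate",
		"instance": "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091",
	}
	// Aggregated series federated without job/instance labels.
	aggregated := map[string]string{"__name__": "cluster:usage:containers:sum"}
	job, instance := resolveJobInstance(aggregated, target)
	fmt.Println(job, instance) // federate prometheus-k8s.openshift-monitoring.svc.cluster.local:9091
}
```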

Actual Result

2024-04-19T09:26:29.254Z	warn	internal/transaction.go:123	Failed to scrape Prometheus endpoint	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}

The warning is emitted from internal/transaction.go in the receiver:

t.logger.Warn("Failed to scrape Prometheus endpoint",

Collector version

0.93.0

Environment information

Environment

Kubernetes

OpenTelemetry Collector configuration

prometheus:
  config:
    scrape_configs:
      - job_name: 'federate'
        scrape_interval: 5s
        honor_labels: true
        params:
          'match[]':
            - '{__name__=~"cluster:usage:containers:sum"}'
            - '{__name__=~"cluster:cpu_usage_cores:sum|cluster:memory_usage_bytes:sum"}'
            - '{__name__=~"workload:cpu_usage_cores:sum|workload:memory_usage_bytes:sum"}'
            - '{__name__="namespace_memory:kube_pod_container_resource_limits:sum|namespace_memory:kube_pod_container_resource_requests:sum|namespace_cpu:kube_pod_container_resource_limits:sum|namespace_cpu:kube_pod_container_resource_requests:sum|"}'
        metrics_path: '/federate'
        static_configs:
          - targets:
              - "prometheus-k8s.openshift-monitoring.svc.cluster.local:9091"

Log output

2024-04-19T09:26:29.254Z	warn	internal/transaction.go:123	Failed to scrape Prometheus endpoint	{"kind": "receiver", "name": "prometheus", "data_type": "metrics", "scrape_timestamp": 1713518789233, "target_labels": "{__name__=\"up\", instance=\"prometheus-k8s.openshift-monitoring.svc.cluster.local:9091\", job=\"federate\"}"}

Additional context
