Closed
Description
Component(s)
receiver/mongodb
Describe the issue you're reporting
I have a central collector running on a Kubernetes cluster. I have configured the mongodb receiver in the collector, and the collector exports metrics to Google Managed Prometheus. I can see MongoDB metrics such as mongo_cache_operations and mongo_collection_count, but the metrics do not carry any labels identifying which MongoDB instance is reporting them. My collector configuration is below. Can you let me know how I can get MongoDB instance details into the metric labels?
Collector version: 0.97.0
receivers:
  otlp:
    protocols:
      grpc:
      http:
  mongodb:
    hosts:
      - endpoint: 10.224.10.25:27017
        transport: "tcp"
      - endpoint: 10.224.10.28:27017
        transport: "tcp"
      - endpoint: 10.224.10.31:27017
        transport: "tcp"
    collection_interval: 60s
    initial_delay: 1s
    replica_set: rs0
    tls:
      insecure: true
      insecure_skip_verify: true
processors:
  resourcedetection:
    detectors: [gcp]
    timeout: 10s
  transform:
    # "location", "cluster", "namespace", "job", "instance", and "project_id" are reserved, and
    # metrics containing these labels will be rejected. Prefix them with exported_ to prevent this.
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["exported_location"], attributes["location"])
          - delete_key(attributes, "location")
          - set(attributes["exported_cluster"], attributes["cluster"])
          - delete_key(attributes, "cluster")
          - set(attributes["exported_namespace"], attributes["namespace"])
          - delete_key(attributes, "namespace")
          - set(attributes["exported_job"], attributes["job"])
          - delete_key(attributes, "job")
          - set(attributes["exported_instance"], attributes["instance"])
          - delete_key(attributes, "instance")
          - set(attributes["exported_project_id"], attributes["project_id"])
          - delete_key(attributes, "project_id")
          - set(attributes["host_name"], resource.attributes["host.name"])
  batch:
    # batch metrics before sending to reduce API usage
    send_batch_max_size: 200
    send_batch_size: 200
    timeout: 5s
  memory_limiter:
    # drop metrics if memory usage gets too high
    check_interval: 1s
    limit_percentage: 65
    spike_limit_percentage: 20
  probabilistic_sampler:
    hash_seed: 22
    sampling_percentage: 50
extensions:
  health_check:
    endpoint: 0.0.0.0:13133
exporters:
  googlecloud:
    project: test
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 2500
  googlemanagedprometheus:
  logging:
connectors:
  spanmetrics:
    resource_metrics_key_attributes:
      - service.name
      - telemetry.sdk.language
      - telemetry.sdk.name
service:
  telemetry:
    logs:
      level: "debug"
  extensions: [health_check]
  pipelines:
    metrics:
      receivers: [otlp, spanmetrics, mongodb]
      processors: [transform, batch, memory_limiter, resourcedetection]
      exporters: [googlemanagedprometheus]
    traces:
      receivers: [otlp]
      processors: [filter/ottl, probabilistic_sampler]
      exporters: [googlecloud, spanmetrics]
    logs:
      receivers: [otlp]
      processors: []
      exporters: [logging]
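For illustration, the only workaround I can think of is the sketch below: splitting the single mongodb receiver into one named receiver per host and adding a per-pipeline transform that stamps the scraped endpoint onto every datapoint. The mongodb_instance attribute name is something I made up for this example and I have not verified this approach; I would much prefer the receiver to attach the host information itself.

# Hypothetical workaround (not verified): one receiver and one pipeline per host.
receivers:
  mongodb/node1:
    hosts:
      - endpoint: 10.224.10.25:27017
        transport: "tcp"
    collection_interval: 60s
    replica_set: rs0
    tls:
      insecure: true
      insecure_skip_verify: true
  # mongodb/node2 and mongodb/node3 would repeat this for
  # 10.224.10.28:27017 and 10.224.10.31:27017.
processors:
  transform/node1:
    metric_statements:
      - context: datapoint
        statements:
          # Label every datapoint with the endpoint this pipeline scrapes.
          # "mongodb_instance" is an invented label name, not an official one.
          - set(attributes["mongodb_instance"], "10.224.10.25:27017")
service:
  pipelines:
    metrics/node1:
      receivers: [mongodb/node1]
      processors: [transform/node1, transform, batch, memory_limiter, resourcedetection]
      exporters: [googlemanagedprometheus]
    # metrics/node2 and metrics/node3 would mirror this pipeline for the other hosts.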