-
Hi, is there a way to collect metrics from the OTel collector components themselves, e.g. to monitor the agent's health and data flow?
-
At this time, it is not possible to collect metrics from the OTEL components. Below are links to documentation you may find useful: Troubleshoot the Splunk OpenTelemetry Collector, Troubleshooting.
-
Okay, I understand. Could you please add a feature like that? I think it would be quite useful. I know you could probably set something up on the Splunk Cloud side to get alerted, but it would be nice to have it on the agent side as well, i.e. if the volume of data/logs suddenly drops, or timeouts, bottlenecks, or errors occur that impact data shipment, it would be easy to build monitoring & alerting on such metrics using the well-known Prometheus stack.
-
The best thing to do is to submit the idea at https://ideas.splunk.com. Thank you.
-
@hgus-gushernandez I also found this: https://github.com/open-telemetry/opentelemetry-collector/blob/main/docs/monitoring.md
-
The metrics in this document are specific to Splunk Observability, not Splunk Cloud. You can read more about each product in the links I have added for each.
-
Do you have any better links/docs? Those links seem to me like high-level marketing material. Isn't this repo based on the OpenTelemetry Collector? It's even in the name. It looks to me like these may be two different cloud backends with different capabilities, but the agent/client side should be pretty much common and based on the OpenTelemetry Collector, which is OSS.
-
Unfortunately, no.
-
@hgus-gushernandez So I actually logged into one of the DaemonSet splunk-otel-collector-agent pods and found that the otelcol process actually has port 8889 open and appears to be speaking HTTP, and guess what? It even responds with metrics. BTW, port 24321 is not there, even though it is in the container spec and named http-metrics...
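For context, the collector's internal telemetry section is what exposes an endpoint like this. A minimal sketch in upstream OpenTelemetry Collector syntax (the 0.0.0.0:8889 address is assumed from what I saw in the pod, the upstream default is 8888, and the chart's rendered config may differ):

```yaml
# Sketch of the collector's internal telemetry settings (upstream OpenTelemetry
# Collector syntax). The 8889 address is assumed from the port observed in the
# pod; the upstream default is 8888.
service:
  telemetry:
    metrics:
      level: detailed
      address: "0.0.0.0:8889"
```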
-
We scrape this endpoint as part of our deployment; please see here:
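Roughly, the self-scraping looks like the sketch below (illustrative only; the actual receiver name, interval, and relabeling in the chart differ, so treat the job name and target as assumptions):

```yaml
# Illustrative only: a prometheus receiver scraping the collector's own
# internal metrics endpoint. Job name, interval, and target are assumptions;
# see the chart's rendered config for the real values.
receivers:
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: otel-collector-internal
          scrape_interval: 10s
          static_configs:
            - targets: ["localhost:8889"]
```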
-
@atoulme Okay, but where is this port enabled on the agent DaemonSet? In my Helm-inflated manifests I don't see 8889 opened on any container, and I can't find 8889 in the chart values file either (splunk-otel-collector-chart/helm-charts/splunk-otel-collector/values.yaml, lines 313 to 356 at ea3c4f9). I'm going to patch it to add something on top like this:
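Something like this fragment (a hypothetical sketch, not my exact patch; the port name is an assumption and 8889 comes from the endpoint observed above):

```yaml
# Hypothetical patch fragment for the agent container spec (not the exact
# patch): expose the internal-metrics port so it can be scraped in-cluster.
ports:
  - name: otel-metrics
    containerPort: 8889
    protocol: TCP
```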
-
The port wasn't added to the DaemonSet spec. I believe this was intentional because we only wanted the agent collecting its own metrics.
-
@jvoravong But how do I then get those metrics into my Prometheus in the cluster, so it can alert me on, say, the kinds of conditions I mentioned earlier (sudden drops in log volume, export errors, timeouts)?
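Something along the lines of this rule is the kind of thing I have in mind (a hypothetical sketch; the metric name comes from the upstream collector monitoring docs linked above and may differ per build):

```yaml
# Hypothetical PrometheusRule fragment; the metric name comes from the upstream
# collector monitoring docs, and the threshold/durations are assumptions.
groups:
  - name: otel-collector-agent
    rules:
      - alert: OtelExporterSendFailures
        expr: rate(otelcol_exporter_send_failed_log_records[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "splunk-otel-collector agent is failing to export log records"
```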
-
We do not support exporting to Prometheus at this time. You can send to Splunk Enterprise or Splunk Observability Cloud.