Component(s)
exporter/azuremonitor
What happened?
Description
When using Istio with the OpenTelemetry Collector's Azure Monitor exporter, traces generated by the mesh never reach Azure Monitor (Application Insights).
Steps to Reproduce
- Set up an Istio service mesh with tracing enabled.
- Deploy the OpenTelemetry Collector with the azuremonitor exporter to send trace data to Application Insights.
- Generate traffic within the mesh and observe the trace data in Application Insights.
Expected Result
Traces generated by Istio should appear correctly in Application Insights with proper formatting and trace context propagation using W3C Trace Context headers.
Actual Result
Traces are not appearing in Application Insights, which suggests a failure to convert B3 headers into the W3C Trace Context format. No log output shows the trace data being rejected by Application Insights.
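For reference, B3 and W3C Trace Context carry the same identifiers in different header shapes; the sketch below is only illustrative, reusing the trace/span IDs and URL from the log output further down (not headers actually captured from the mesh):
curl -s http://details:9080/details/0 \
  -H "traceparent: 00-e7e12ca17cf7655ff46ecd79cec9451c-05226d559db0c33d-01"
# The same context expressed as B3 headers would be:
#   X-B3-TraceId: e7e12ca17cf7655ff46ecd79cec9451c
#   X-B3-SpanId:  05226d559db0c33d
#   X-B3-Sampled: 1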
Collector version
923eb1cf
Environment information
Environment
OS: Ubuntu 22.04.4
Istio installed with Helm
helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
helm install istio-base istio/base -n istio-system --create-namespace
helm install istiod istio/istiod -n istio-system --wait
helm status istiod -n istio-system
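Optionally, confirm the control plane pods are running before continuing (namespace name taken from the install commands above):
kubectl get pods -n istio-system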
Configure Providers
kubectl get configmap istio -n istio-system -o yaml > configmap.yaml
Update the ConfigMap so traces are exported in OTLP format over gRPC
mesh: |-
  defaultConfig:
    discoveryAddress: istiod.istio-system.svc:15012
    tracing: {}
  defaultProviders:
    metrics:
    - prometheus
  enablePrometheusMerge: true
  rootNamespace: istio-system
  trustDomain: cluster.local
  enableTracing: true
  extensionProviders:
  - name: otel-tracing
    opentelemetry:
      port: 4317
      service: opentelemetry-collector.otel.svc.cluster.local
      grpc: {}
kubectl apply -f configmap.yaml
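To confirm the edit took effect, the provider block can be read back from the ConfigMap (the grep context length is arbitrary):
kubectl get configmap istio -n istio-system -o yaml | grep -A 5 extensionProviders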
Install Otel Collector
kubectl create namespace otel
kubectl label namespace otel istio-injection=enabled
cat <<EOF > otel-collector-contrib.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: opentelemetry-collector-conf
  labels:
    app: opentelemetry-collector
data:
  opentelemetry-collector-config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: debug
      azuremonitor:
        connection_string: "InstrumentationKey="
        spaneventsenabled: true
        maxbatchinterval: .05s
        sending_queue:
          enabled: true
          num_consumers: 10
          queue_size: 2
    extensions:
      health_check:
        port: 13133
    service:
      extensions:
        - health_check
      telemetry:
        logs:
          debug:
            verbosity: detailed
      pipelines:
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, azuremonitor]
        traces:
          receivers: [otlp]
          exporters: [logging, azuremonitor]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetry-collector
spec:
  selector:
    matchLabels:
      app: opentelemetry-collector
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: opentelemetry-collector
    spec:
      containers:
        - name: opentelemetry-collector
          image: otel/opentelemetry-collector-contrib:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4317
              protocol: TCP
            - containerPort: 4318
              protocol: TCP
          resources:
            limits:
              cpu: "2"
              memory: 4Gi
            requests:
              cpu: 200m
              memory: 400Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - name: opentelemetry-collector-config-vol
              mountPath: /etc/otel
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
      volumes:
        - configMap:
            defaultMode: 420
            items:
              - key: opentelemetry-collector-config
                path: opentelemetry-collector-config.yaml
            name: opentelemetry-collector-conf
          name: opentelemetry-collector-config-vol
---
apiVersion: v1
kind: Service
metadata:
  name: opentelemetry-collector
  labels:
    app: opentelemetry-collector
spec:
  ports:
    - name: grpc-otlp # Default endpoint for OpenTelemetry receiver.
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: http-otlp # HTTP endpoint for OpenTelemetry receiver.
      port: 4318
      protocol: TCP
      targetPort: 4318
  selector:
    app: opentelemetry-collector
EOF
kubectl apply -f otel-collector-contrib.yaml -n otel
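Note that the Deployment above mounts the config at /etc/otel but does not set the container args, so it may help to confirm the pod is healthy and which configuration file the collector actually loads; a quick check using the names from the manifest:
kubectl -n otel get pods -l app=opentelemetry-collector
kubectl -n otel get deploy opentelemetry-collector -o jsonpath='{.spec.template.spec.containers[0].args}'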
Set up demo
kubectl create namespace demo
kubectl label namespace demo istio-injection=enabled
Create Telemetry Rule
cat <<EOF > tel-rule-otel-tracing.yaml
apiVersion: telemetry.istio.io/v1
kind: Telemetry
metadata:
  name: otel-tracing
  namespace: demo
spec:
  tracing:
    - providers:
        - name: otel-tracing
      randomSamplingPercentage: 100
      customTags:
        "app-insights":
          literal:
            value: "from-otel-collector"
EOF
kubectl apply -f tel-rule-otel-tracing.yaml
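The applied resource can be verified with (name and namespace taken from the YAML above):
kubectl -n demo get telemetry otel-tracing -o yaml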
Generate and view traces
kubectl apply -f https://raw.githubusercontent.com/istio/istio/master/samples/bookinfo/platform/kube/bookinfo.yaml -n demo
kubectl get pods -n demo
Generate traces
for i in
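One way to drive traffic against the Bookinfo productpage is the loop below; this is a sketch following the Istio Bookinfo docs, and the request count and use of the ratings container for curl are assumptions:
for i in $(seq 1 20); do
  kubectl -n demo exec deploy/ratings-v1 -c ratings -- \
    curl -sS -o /dev/null productpage:9080/productpage
done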
Verify traces on console
kubectl logs -n otel "$(kubectl get pods -n otel -l app=opentelemetry-collector -o jsonpath='{.items[0].metadata.name}')" | grep "app-insights"
Verify traces on Application Insights
OpenTelemetry Collector configuration
receivers:
  otlp:
    protocols:
      grpc:
      http:
processors:
  batch:
exporters:
  logging:
    loglevel: debug
  azuremonitor:
    connection_string: "InstrumentationKey="
    spaneventsenabled: true
    maxbatchinterval: .05s
    sending_queue:
      enabled: true
      num_consumers: 10
      queue_size: 2
extensions:
  health_check:
    port: 13133
service:
  extensions:
    - health_check
  telemetry:
    logs:
      debug:
        verbosity: detailed
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, azuremonitor]
    traces:
      receivers: [otlp]
      exporters: [logging, azuremonitor]
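Since no rejection logging was observed, collector-internal logging can be raised to surface exporter failures; a minimal sketch, assuming the standard service::telemetry::logs::level setting rather than the debug/verbosity block used above:
service:
  telemetry:
    logs:
      level: debug   # assumption: surfaces exporter/queue errors in the collector's own logs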
Log output
2024-09-04T15:46:16.212Z info TracesExporter {"kind": "exporter", "data_type": "traces", "name": "debug", "resource spans": 1, "spans": 1}
2024-09-04T15:46:16.212Z info ResourceSpans #0
Resource SchemaURL:
Resource attributes:
-> service.name: Str(details.demo)
ScopeSpans #0
ScopeSpans SchemaURL:
InstrumentationScope
Span #0
Trace ID : e7e12ca17cf7655ff46ecd79cec9451c
Parent ID : 9e8b852dba31ab84
ID : 05226d559db0c33d
Name : details.demo.svc.cluster.local:9080/*
Kind : Server
Start time : 2024-09-04 15:46:14.652164 +0000 UTC
End time : 2024-09-04 15:46:14.655316 +0000 UTC
Status code : Unset
Status message :
Attributes:
-> node_id: Str(sidecar~10.244.0.13~details-v1-79dfbd6fff-j7jjs.demo~demo.svc.cluster.local)
-> zone: Str()
-> guid:x-request-id: Str(d17c4164-5765-9003-a163-04039b0b98e0)
-> http.url: Str(http://details:9080/details/0)
-> http.method: Str(GET)
-> downstream_cluster: Str(-)
-> user_agent: Str(curl/7.88.1)
-> http.protocol: Str(HTTP/1.1)
-> peer.address: Str(10.244.0.18)
-> request_size: Str(0)
-> response_size: Str(178)
-> component: Str(proxy)
-> upstream_cluster: Str(inbound|9080||)
-> upstream_cluster.name: Str(inbound|9080||;)
-> http.status_code: Str(200)
-> response_flags: Str(-)
-> istio.mesh_id: Str(cluster.local)
-> istio.canonical_revision: Str(v1)
-> istio.canonical_service: Str(details)
-> app-insights: Str(otel)
-> istio.cluster_id: Str(Kubernetes)
-> istio.namespace: Str(demo)
{"kind": "exporter", "data_type": "traces", "name": "debug"}
Additional context
No response