# Advanced Monitoring

## Observability Stack Architecture

### Three Pillars of Observability

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│     Metrics     │    │      Logs        │    │      Traces     │
│                 │    │                  │    │                 │
│ • Prometheus    │    │ • ELK/Loki Stack │    │ • Jaeger        │
│ • Grafana       │    │ • Fluent Bit     │    │ • Zipkin        │
│ • AlertManager  │    │ • Vector         │    │ • OpenTelemetry │
│ • Custom Metrics│    │ • Log Aggregation│    │ • Distributed   │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

### Full Observability Stack

#### Prometheus Advanced Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
      external_labels:
        cluster: 'production'
        region: 'us-west-2'
        environment: 'prod'

    rule_files:
      - "/etc/prometheus/rules/*.yml"

    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              - alertmanager:9093

    remote_write:
      - url: "https://prometheus-remote-write.example.com/api/v1/write"
        queue_config:
          max_samples_per_send: 1000
          max_shards: 200
          capacity: 2500
        write_relabel_configs:
          - source_labels: [__name__]
            regex: 'go_.*|process_.*|prometheus_.*'
            action: drop

    scrape_configs:
      # Kubernetes API Server
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https

      # Node metrics (kubelet, proxied through the API server)
      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
          - role: node
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)
          - target_label: __address__
            replacement: kubernetes.default.svc:443
          - source_labels: [__meta_kubernetes_node_name]
            regex: (.+)
            target_label: __metrics_path__
            replacement: /api/v1/nodes/${1}/proxy/metrics

      # Kubelet
      - job_name: 'kubernetes-kubelet'
        kubernetes_sd_configs:
          - role: node
        scheme: https
        metrics_path: /metrics
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
          insecure_skip_verify: true
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - action: labelmap
            regex: __meta_kubernetes_node_label_(.+)

      # Kube State Metrics
      - job_name: 'kube-state-metrics'
        kubernetes_sd_configs:
          - role: endpoints
            namespaces:
              names:
                - monitoring
        relabel_configs:
          - source_labels: [__meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: kube-state-metrics;http

      # Application Metrics
      - job_name: 'application-metrics'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
          - action: labelmap
            regex: __meta_kubernetes_pod_label_(.+)
          - source_labels: [__meta_kubernetes_namespace]
            action: replace
            target_label: kubernetes_namespace
          - source_labels: [__meta_kubernetes_pod_name]
            action: replace
            target_label: kubernetes_pod_name
```
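
The `application-metrics` job above only scrapes pods that opt in through annotations, which keeps accidental targets out of Prometheus. A minimal sketch of a pod that would be discovered by those relabel rules (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                       # illustrative
  namespace: production
  labels:
    app: my-app                      # surfaced by the labelmap rule
  annotations:
    prometheus.io/scrape: "true"     # required by the keep rule
    prometheus.io/path: "/metrics"   # optional; overrides __metrics_path__
    prometheus.io/port: "8080"       # rewritten into __address__
spec:
  containers:
    - name: app
      image: my-app:1.0.0
      ports:
        - containerPort: 8080
```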

#### Prometheus Recording Rules

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-rules
  namespace: monitoring
data:
  kubernetes-recording-rules.yml: |
    groups:
    - name: kubernetes-apps
      rules:
      # CPU Usage Rate
      - record: kubernetes:container_cpu_usage_seconds_total:sum_rate
        expr: sum(rate(container_cpu_usage_seconds_total{container!="",container!="POD"}[5m])) by (namespace, pod, container)

      # Memory Usage
      - record: kubernetes:container_memory_working_set_bytes
        expr: sum(container_memory_working_set_bytes{container!="",container!="POD"}) by (namespace, pod, container)

      # Network I/O
      - record: kubernetes:pod_network_receive_bytes:sum_rate
        expr: sum(rate(container_network_receive_bytes_total[5m])) by (namespace, pod)

      - record: kubernetes:pod_network_transmit_bytes:sum_rate
        expr: sum(rate(container_network_transmit_bytes_total[5m])) by (namespace, pod)

      # Filesystem Usage
      - record: kubernetes:container_fs_usage_bytes
        expr: sum(container_fs_usage_bytes{container!="",container!="POD"}) by (namespace, pod, container, device)

    - name: kubernetes-resources
      rules:
      # Pod Count by Status
      - record: kubernetes:pod_count:by_phase
        expr: sum by (phase) (kube_pod_status_phase{phase=~"Running|Pending|Failed|Succeeded"})

      # Node Status
      - record: kubernetes:node_count:by_status
        expr: sum by (condition) (kube_node_status_condition{condition="Ready", status="true"})

      # PVC Usage
      - record: kubernetes:pvc_usage_percentage
        expr: (kubelet_volume_stats_used_bytes / kubelet_volume_stats_capacity_bytes) * 100

    - name: kubernetes-performance
      rules:
      # High CPU Usage Pods
      - record: kubernetes:pods_high_cpu_usage
        expr: sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{container!="",container!="POD"}[5m])) > 0.8

      # High Memory Usage Pods
      - record: kubernetes:pods_high_memory_usage
        expr: sum by (namespace, pod) (container_memory_working_set_bytes{container!="",container!="POD"}) / sum by (namespace, pod) (kube_pod_container_resource_limits{resource="memory"}) > 0.8

      # Restarts Count
      - record: kubernetes:pod_restart_count
        expr: sum by (namespace, pod) (kube_pod_container_status_restarts_total)
```
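
Recording rules pay off when alerts and dashboards reuse the precomputed series instead of re-evaluating raw rates at query time. As a sketch, the `HighCPUUsage` alert in the next section could be expressed against the first recorded series above:

```yaml
- alert: HighCPUUsage
  expr: sum by (namespace, pod) (kubernetes:container_cpu_usage_seconds_total:sum_rate) > 0.8
  for: 5m
  labels:
    severity: warning
```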

#### Prometheus Alerting Rules

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-alerts
  namespace: monitoring
data:
  kubernetes-alerts.yml: |
    groups:
    - name: kubernetes-apps
      rules:
      # Pod Crashing
      - alert: PodCrashing
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is crashing"
          description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} has been restarting {{ $value }} times in the last 15 minutes."

      # Pod Not Ready
      - alert: PodNotReady
        expr: kube_pod_status_ready{condition="true"} == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} not ready"
          description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} has been not ready for more than 10 minutes."

      # Container OOMKilled
      - alert: ContainerOOMKilled
        expr: kube_pod_container_status_terminated_reason{reason="OOMKilled"} == 1
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} was OOMKilled"
          description: "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} was terminated due to Out Of Memory."

      # High CPU Usage
      - alert: HighCPUUsage
        expr: sum by (pod, namespace) (rate(container_cpu_usage_seconds_total{container!="",container!="POD"}[5m])) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on pod {{ $labels.namespace }}/{{ $labels.pod }}"
          description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} CPU usage is above 80% for more than 5 minutes."

      # High Memory Usage
      - alert: HighMemoryUsage
        expr: sum by (pod, namespace) (container_memory_working_set_bytes{container!="",container!="POD"}) / sum by (pod, namespace) (kube_pod_container_resource_limits{resource="memory"}) > 0.9
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "High memory usage on pod {{ $labels.namespace }}/{{ $labels.pod }}"
          description: "Pod {{ $labels.namespace }}/{{ $labels.pod }} memory usage is above 90% for more than 5 minutes."

    - name: kubernetes-cluster
      rules:
      # Node Down
      - alert: NodeDown
        expr: kube_node_status_condition{condition="Ready",status="true"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Node {{ $labels.node }} is down"
          description: "Node {{ $labels.node }} has been down for more than 5 minutes."

      # High Node Memory Usage
      - alert: HighNodeMemoryUsage
        expr: (node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes > 0.9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage on node {{ $labels.node }}"
          description: "Node {{ $labels.node }} memory usage is above 90%."

      # High Node CPU Usage
      - alert: HighNodeCPUUsage
        expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on node {{ $labels.node }}"
          description: "Node {{ $labels.node }} CPU usage is above 90%."

      # Disk Space Low
      - alert: DiskSpaceLow
        expr: (node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"} / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"}) * 100 < 10
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Disk space low on {{ $labels.node }}"
          description: "Disk space on {{ $labels.device }} at {{ $labels.node }} is below 10%."

    - name: kubernetes-api
      rules:
      # API Server Down
      - alert: KubernetesAPIServerDown
        expr: up{job="kubernetes-apiservers"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Kubernetes API server is down"
          description: "Kubernetes API server has been down for more than 5 minutes."

      # High API Server Latency
      - alert: KubernetesAPIServerHighLatency
        expr: histogram_quantile(0.99, sum(rate(apiserver_request_duration_seconds_bucket{verb!="WATCH"}[5m])) by (verb, resource, le)) > 1
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High API server latency"
          description: "99th percentile latency for {{ $labels.verb }} {{ $labels.resource }} is {{ $value }}s."
```
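
Alerting rules are easy to get subtly wrong, so it is worth unit-testing them with `promtool test rules` before shipping. A minimal test file for the `KubernetesAPIServerDown` alert might look like this (file names and the instance value are illustrative):

```yaml
# kubernetes-alerts-test.yml -- run with: promtool test rules kubernetes-alerts-test.yml
rule_files:
  - kubernetes-alerts.yml

evaluation_interval: 1m

tests:
  - interval: 1m
    input_series:
      # The API server target reports down for the whole window
      - series: 'up{job="kubernetes-apiservers", instance="10.0.0.1:443"}'
        values: '0x10'
    alert_rule_test:
      - eval_time: 6m            # past the 5m "for" duration
        alertname: KubernetesAPIServerDown
        exp_alerts:
          - exp_labels:
              severity: critical
              job: kubernetes-apiservers
              instance: 10.0.0.1:443
            exp_annotations:
              summary: "Kubernetes API server is down"
              description: "Kubernetes API server has been down for more than 5 minutes."
```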

## Advanced Logging Architecture

### Vector Log Processing

#### Vector Configuration for Kubernetes

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: vector-config
  namespace: monitoring
data:
  vector.yaml: |
    # Vector configuration file
    sources:
      kubernetes_logs:
        type: kubernetes_logs
        auto_partial_merge: true
        annotation_fields:
          container_image: "kubernetes.container_image"
          container_name: "kubernetes.container_name"
          pod_name: "kubernetes.pod_name"
          pod_namespace: "kubernetes.pod_namespace"

      # Application logs from specific namespace
      app_logs:
        type: kubernetes_logs
        namespace: "production"
        auto_partial_merge: true

    transforms:
      parse_logs:
        type: remap
        inputs: ["kubernetes_logs", "app_logs"]
        source: |
          # Parse log level
          .level = "info"
          if contains(.message, "ERROR") {
            .level = "error"
          } else if contains(.message, "WARN") {
            .level = "warn"
          } else if contains(.message, "DEBUG") {
            .level = "debug"
          }

          # Extract timestamp from log if present
          if !exists(.timestamp) {
            .timestamp = now()
          }

          # Add structured fields
          .source_type = "kubernetes"
          .hostname = .kubernetes.pod_ip
          .cluster_name = "production-cluster"
          .region = "us-west-2"

          # Remove unnecessary fields
          del(.kubernetes.pod_annotations)
          del(.kubernetes.pod_labels)

      # Filter and route logs by level
      filter_error_logs:
        type: filter
        inputs: ["parse_logs"]
        condition: .level == "error"

      filter_info_logs:
        type: filter
        inputs: ["parse_logs"]
        condition: .level == "info" || .level == "warn"

      # Aggregate metrics from logs
      log_metrics:
        type: log_to_metric
        inputs: ["parse_logs"]
        metrics:
          - type: counter
            field: level
            name: log_count_total
            tags:
              level: "{{ level }}"
              namespace: "{{ kubernetes.pod_namespace }}"
              app: "{{ kubernetes.pod_labels.app }}"

    sinks:
      # Send all logs to Loki
      loki:
        type: loki
        inputs: ["parse_logs"]
        endpoint: "http://loki.monitoring.svc.cluster.local:3100"
        encoding:
          codec: json
        labels:
          namespace: "{{ kubernetes.pod_namespace }}"
          pod: "{{ kubernetes.pod_name }}"
          container: "{{ kubernetes.container_name }}"
          level: "{{ level }}"
          app: "{{ kubernetes.pod_labels.app }}"

      # Send error logs to dedicated index
      loki_errors:
        type: loki
        inputs: ["filter_error_logs"]
        endpoint: "http://loki.monitoring.svc.cluster.local:3100"
        encoding:
          codec: json
        labels:
          namespace: "{{ kubernetes.pod_namespace }}"
          pod: "{{ kubernetes.pod_name }}"
          container: "{{ kubernetes.container_name }}"
          level: "error"
          priority: "high"

      # Send metrics to Prometheus
      prometheus:
        type: prometheus_remote_write
        inputs: ["log_metrics"]
        endpoint: "http://prometheus.monitoring.svc.cluster.local:9090/api/v1/write"

      # Archive logs to S3
      s3_archive:
        type: aws_s3
        inputs: ["filter_info_logs"]
        bucket: "kubernetes-logs-archive"
        # Vector templates support strftime specifiers for event timestamps
        key_prefix: "year=%Y/month=%m/day=%d/"
        compression: "gzip"
        encoding:
          codec: json
        region: "us-west-2"
        auth:
          # ${...} interpolates environment variables at Vector startup
          access_key_id: "${AWS_ACCESS_KEY_ID}"
          secret_access_key: "${AWS_SECRET_ACCESS_KEY}"
```
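
Vector typically runs as a DaemonSet so each node ships its own container logs, with the ConfigMap above mounted into every agent. A minimal sketch of that wiring (the image tag, ServiceAccount, and mounts are assumptions to adapt):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: vector
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: vector
  template:
    metadata:
      labels:
        app: vector
    spec:
      serviceAccountName: vector              # needs RBAC to read pod metadata
      containers:
        - name: vector
          image: timberio/vector:0.34.0-debian   # assumed tag
          args: ["--config", "/etc/vector/vector.yaml"]
          volumeMounts:
            - name: config
              mountPath: /etc/vector
            - name: var-log
              mountPath: /var/log
              readOnly: true                  # kubernetes_logs reads /var/log/pods
      volumes:
        - name: config
          configMap:
            name: vector-config
        - name: var-log
          hostPath:
            path: /var/log
```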

#### Loki Advanced Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: loki-config
  namespace: monitoring
data:
  loki.yaml: |
    auth_enabled: false

    server:
      http_listen_port: 3100
      grpc_listen_port: 9096

    ingester:
      lifecycler:
        address: 127.0.0.1
        ring:
          kvstore:
            store: consul
            consul:
              host: consul.monitoring.svc.cluster.local:8500
          replication_factor: 1
        final_sleep: 0s
      chunk_idle_period: 1h
      max_chunk_age: 1h
      chunk_target_size: 1048576
      chunk_retain_period: 30s

    schema_config:
      configs:
        - from: 2020-10-24
          store: boltdb-shipper
          object_store: s3
          schema: v11
          index:
            prefix: index_
            period: 24h

    storage_config:
      boltdb_shipper:
        active_index_directory: /loki/boltdb-shipper-active
        cache_location: /loki/boltdb-shipper-cache
        shared_store: s3
        resync_interval: 5s
      s3:
        s3: http://minio.monitoring.svc.cluster.local:9000
        bucket_names: loki-data
        access_key_id: ${MINIO_ACCESS_KEY}
        secret_access_key: ${MINIO_SECRET_KEY}

    limits_config:
      enforce_metric_name: false
      reject_old_samples: true
      reject_old_samples_max_age: 168h
      ingestion_rate_mb: 16
      ingestion_burst_size_mb: 32
      max_entries_limit_per_query: 5000
      max_query_parallelism: 32

    chunk_store_config:
      max_look_back_period: 0s
      chunk_cache_config:
        enable_lookups: true

    table_manager:
      retention_deletes_enabled: true
      retention_period: 168h

    ruler:
      storage:
        type: local
        local:
          directory: /loki/rules
      rule_path: /loki/rules-temp
      alertmanager_url: http://alertmanager.monitoring.svc.cluster.local:9093
      ring:
        kvstore:
          store: consul
          consul:
            host: consul.monitoring.svc.cluster.local:8500
      enable_api: true
      enable_alertmanager_v2: true
```
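
Because the ruler is enabled and pointed at Alertmanager, Loki can also evaluate LogQL alerting rules over the ingested streams. A sketch of a rule file placed under the ruler's local storage directory (with `auth_enabled: false`, rules live under the `fake` tenant directory; the threshold is illustrative):

```yaml
# /loki/rules/fake/log-alerts.yml
groups:
  - name: log-alerts
    rules:
      - alert: HighErrorLogRate
        expr: sum by (namespace) (rate({level="error"}[5m])) > 10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High error-log rate in {{ $labels.namespace }}"
```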

## Distributed Tracing

### Jaeger Production Setup

#### Jaeger Operator Configuration

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: production
  namespace: monitoring
spec:
  strategy: production
  storage:
    type: elasticsearch
    elasticsearch:
      nodeCount: 3
      storage:
        size: 200Gi
        storageClassName: fast-ssd
      resources:
        requests:
          cpu: 1
          memory: 4Gi
        limits:
          cpu: 2
          memory: 8Gi
  ingress:
    enabled: true
    hosts:
      - jaeger.example.com
    tls:
      enabled: true
      secretName: jaeger-tls
  agent:
    strategy: DaemonSet
    image: jaegertracing/jaeger-agent:1.48
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
    tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
  collector:
    replicas: 3
    image: jaegertracing/jaeger-collector:1.48
    resources:
      requests:
        cpu: 500m
        memory: 1Gi
      limits:
        cpu: 2
        memory: 4Gi
  query:
    replicas: 2
    image: jaegertracing/jaeger-query:1.48
    resources:
      requests:
        cpu: 250m
        memory: 512Mi
      limits:
        cpu: 1
        memory: 2Gi
```
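
With the agent deployed as a DaemonSet, applications send spans to the node-local agent. Alternatively, the Jaeger Operator can inject an agent sidecar into any workload via an annotation; a minimal sketch (workload names and image are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  namespace: production
  annotations:
    sidecar.jaegertracing.io/inject: "true"   # operator adds a jaeger-agent sidecar
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: app
          image: my-service:1.0.0
```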

#### OpenTelemetry Collector Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
  namespace: monitoring
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
      prometheus:
        config:
          scrape_configs:
            - job_name: 'kubernetes-pods'
              kubernetes_sd_configs:
                - role: pod
              relabel_configs:
                - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
                  action: keep
                  regex: true
      jaeger:
        protocols:
          thrift_http:
            endpoint: 0.0.0.0:14268
          grpc:
            endpoint: 0.0.0.0:14250

    processors:
      batch:
        timeout: 1s
        send_batch_size: 1024
      memory_limiter:
        limit_mib: 512
        spike_limit_mib: 128
        check_interval: 5s
      resource:
        attributes:
          - key: environment
            value: production
            action: upsert
          - key: cluster
            value: production-cluster
            action: upsert
      span:
        name:
          from_attributes: ["http.target"]
      attributes:
        actions:
          - key: http.user_agent
            action: hash
          - key: http.remote_addr
            action: hash
      filter:
        spans:
          include:
            match_type: regexp
            span_names: [".*"]
        metrics:
          include:
            match_type: regexp
            metric_names: [".*"]
        logs:
          include:
            match_type: regexp
            log_names: [".*"]

    exporters:
      prometheus:
        endpoint: "0.0.0.0:8889"
        const_labels:
          environment: production
      jaeger:
        endpoint: jaeger-collector.monitoring.svc.cluster.local:14250
        tls:
          insecure: true
      elasticsearch:
        endpoints: ["http://elasticsearch.monitoring.svc.cluster.local:9200"]
        index: "otel-logs"
      logging:
        loglevel: info

    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: 0.0.0.0:1777
      zpages:
        endpoint: 0.0.0.0:55679

    service:
      extensions: [health_check, pprof, zpages]
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: [memory_limiter, batch, resource, attributes]
          exporters: [jaeger]
        metrics:
          receivers: [otlp, prometheus]
          processors: [memory_limiter, batch, resource]
          exporters: [prometheus]
        logs:
          receivers: [otlp]
          processors: [memory_limiter, batch, resource, filter]
          exporters: [elasticsearch]
```
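
Applications usually reach this collector through the standard `OTEL_*` environment variables rather than hard-coded endpoints; the OTLP receiver above listens on 4317 (gRPC) and 4318 (HTTP). A sketch of the container-level wiring, assuming the collector is exposed through a Service named `otel-collector` (service name and attributes are illustrative):

```yaml
env:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: "http://otel-collector.monitoring.svc.cluster.local:4317"   # gRPC OTLP
  - name: OTEL_SERVICE_NAME
    value: "my-service"
  - name: OTEL_RESOURCE_ATTRIBUTES
    value: "deployment.environment=production"
```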

## Advanced Grafana Dashboards

#### Application Performance Dashboard

```json
{
  "dashboard": {
    "title": "Application Performance Dashboard",
    "tags": ["kubernetes", "application", "performance"],
    "timezone": "browser",
    "panels": [
      {
        "id": 1,
        "title": "Request Rate",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(http_requests_total{app=\"$app\", namespace=\"$namespace\"}[5m])) by (method, status)",
            "legendFormat": "{{method}} {{status}}"
          }
        ],
        "yAxes": [
          {
            "label": "Requests/sec"
          }
        ]
      },
      {
        "id": 2,
        "title": "Response Time P99",
        "type": "graph",
        "targets": [
          {
            "expr": "histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket{app=\"$app\", namespace=\"$namespace\"}[5m])) by (le, method))",
            "legendFormat": "P99 {{method}}"
          }
        ],
        "yAxes": [
          {
            "label": "Seconds"
          }
        ]
      },
      {
        "id": 3,
        "title": "Error Rate",
        "type": "singlestat",
        "targets": [
          {
            "expr": "sum(rate(http_requests_total{app=\"$app\", namespace=\"$namespace\", status=~\"5..\"}[5m])) / sum(rate(http_requests_total{app=\"$app\", namespace=\"$namespace\"}[5m])) * 100",
            "legendFormat": "Error Rate %"
          }
        ],
        "valueMaps": [
          {
            "value": "null",
            "text": "N/A"
          }
        ],
        "thresholds": "1,5"
      },
      {
        "id": 4,
        "title": "Pod Status",
        "type": "piechart",
        "targets": [
          {
            "expr": "sum(kube_pod_status_phase{namespace=\"$namespace\", phase=\"Running\"}) by (phase)",
            "legendFormat": "Running"
          },
          {
            "expr": "sum(kube_pod_status_phase{namespace=\"$namespace\", phase=\"Pending\"}) by (phase)",
            "legendFormat": "Pending"
          },
          {
            "expr": "sum(kube_pod_status_phase{namespace=\"$namespace\", phase=\"Failed\"}) by (phase)",
            "legendFormat": "Failed"
          }
        ]
      },
      {
        "id": 5,
        "title": "Resource Usage",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(container_cpu_usage_seconds_total{namespace=\"$namespace\", pod=~\"$app-.*\"}[5m])) by (pod)",
            "legendFormat": "CPU {{pod}}"
          },
          {
            "expr": "sum(container_memory_working_set_bytes{namespace=\"$namespace\", pod=~\"$app-.*\"}) by (pod) / 1024 / 1024",
            "legendFormat": "Memory {{pod}} (MB)"
          }
        ],
        "yAxes": [
          {
            "label": "CPU Cores / MB"
          }
        ]
      }
    ],
    "templating": {
      "list": [
        {
          "name": "namespace",
          "type": "query",
          "query": "label_values(kube_pod_info, namespace)",
          "refresh": 1,
          "includeAll": false
        },
        {
          "name": "app",
          "type": "query",
          "query": "label_values(kube_pod_info{namespace=\"$namespace\"}, app)",
          "refresh": 1,
          "includeAll": false
        }
      ]
    }
  }
}
```

## Custom Metrics Development

### Application Metrics Integration

#### Go Application with Prometheus Metrics

```go
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // Request counter
    requestCounter = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status"},
    )

    // Request duration histogram
    requestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "HTTP request duration in seconds",
            Buckets: prometheus.DefBuckets,
        },
        []string{"method", "endpoint"},
    )

    // Active connections gauge
    activeConnections = prometheus.NewGauge(
        prometheus.GaugeOpts{
            Name: "http_active_connections",
            Help: "Number of active HTTP connections",
        },
    )

    // Database connection pool
    dbConnections = prometheus.NewGaugeVec(
        prometheus.GaugeOpts{
            Name: "db_connections_active",
            Help: "Number of active database connections",
        },
        []string{"database"},
    )
)

func init() {
    // Register metrics with Prometheus
    prometheus.MustRegister(requestCounter)
    prometheus.MustRegister(requestDuration)
    prometheus.MustRegister(activeConnections)
    prometheus.MustRegister(dbConnections)
}

func instrumentHandler(handler http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        activeConnections.Inc()
        defer activeConnections.Dec()

        // Wrap response writer to capture status code
        rw := &responseWriter{ResponseWriter: w, statusCode: 200}

        // Call the original handler
        handler.ServeHTTP(rw, r)

        // Record metrics
        duration := time.Since(start).Seconds()
        requestCounter.WithLabelValues(r.Method, r.URL.Path, fmt.Sprintf("%d", rw.statusCode)).Inc()
        requestDuration.WithLabelValues(r.Method, r.URL.Path).Observe(duration)
    })
}

type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
    rw.statusCode = code
    rw.ResponseWriter.WriteHeader(code)
}

// yourAPIHandler stands in for your real application handler.
var yourAPIHandler = http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("ok"))
})

func main() {
    // Expose the metrics endpoint and instrument your handlers
    http.Handle("/metrics", promhttp.Handler())
    http.Handle("/api", instrumentHandler(yourAPIHandler))

    // Update database connection metrics (normally driven by your pool)
    dbConnections.WithLabelValues("primary").Set(10)
    dbConnections.WithLabelValues("cache").Set(5)

    log.Fatal(http.ListenAndServe(":8080", nil))
}
```

#### Java Application with Micrometer

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Gauge;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MetricsController {

    private final Counter requestCounter;
    private final Timer requestTimer;
    private final MeterRegistry meterRegistry;

    public MetricsController(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;
        this.requestCounter = Counter.builder("api.requests.total")
            .description("Total API requests")
            .tag("method", "GET")
            .register(meterRegistry);

        this.requestTimer = Timer.builder("api.request.duration")
            .description("API request duration")
            .register(meterRegistry);

        // Database connection gauge
        Gauge.builder("db.connections.active")
            .description("Active database connections")
            .register(meterRegistry, this, MetricsController::getDbConnections);
    }

    @GetMapping("/api/data")
    public ResponseEntity<String> getData() {
        return Timer.Sample.start(meterRegistry)
            .stopSupplier(() -> {
                requestCounter.increment();
                // Your business logic here
                return ResponseEntity.ok("data");
            }, requestTimer);
    }

    private double getDbConnections() {
        // Return actual DB connection count
        return getActiveDbConnectionCount();
    }

    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags(
            "application", "my-app",
            "environment", "production"
        );
    }
}
```

## Alert Management

#### AlertManager Advanced Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: alertmanager-config
  namespace: monitoring
data:
  alertmanager.yml: |
    global:
      smtp_smarthost: 'smtp.example.com:587'
      smtp_from: 'alerts@example.com'
      smtp_auth_username: 'alerts@example.com'
      smtp_auth_password: 'password'
      slack_api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'

    route:
      group_by: ['alertname', 'cluster', 'service']
      group_wait: 10s
      group_interval: 10s
      repeat_interval: 12h
      receiver: 'default'
      routes:
        - match:
            severity: critical
          receiver: 'critical-alerts'
          group_wait: 5s
          repeat_interval: 5m
        - match:
            severity: warning
          receiver: 'warning-alerts'
          group_wait: 30s
          repeat_interval: 2h
        - match:
            service: database
          receiver: 'database-team'
        - match:
            service: api
          receiver: 'api-team'

    receivers:
      - name: 'default'
        slack_configs:
          - channel: '#alerts'
            title: 'Kubernetes Alert'
            text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'

      - name: 'critical-alerts'
        slack_configs:
          - channel: '#critical'
            title: '🚨 CRITICAL ALERT'
            color: 'danger'
            text: |
              *Alert:* {{ .GroupLabels.alertname }}
              *Severity:* {{ .GroupLabels.severity }}
              *Description:* {{ range .Alerts }}{{ .Annotations.description }}{{ end }}
              *Runbook:* {{ range .Alerts }}{{ .Annotations.runbook_url }}{{ end }}
        email_configs:
          - to: 'oncall@example.com'
            subject: '🚨 CRITICAL: {{ .GroupLabels.alertname }}'
            body: |
              {{ range .Alerts }}
              Alert: {{ .Annotations.summary }}
              Description: {{ .Annotations.description }}
              Runbook: {{ .Annotations.runbook_url }}
              {{ end }}

      - name: 'warning-alerts'
        slack_configs:
          - channel: '#warnings'
            title: '⚠️ Warning Alert'
            color: 'warning'
            text: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'

      - name: 'database-team'
        slack_configs:
          - channel: '#database-team'
            title: 'Database Alert'
            text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'

      - name: 'api-team'
        slack_configs:
          - channel: '#api-team'
            title: 'API Alert'
            text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'

    inhibit_rules:
      - source_match:
          severity: 'critical'
        target_match:
          severity: 'warning'
        equal: ['alertname', 'cluster', 'service']

    templates:
      - '/etc/alertmanager/templates/*.tmpl'
```

## Monitoring as Code

#### Terraform Configuration for Monitoring Stack

```hcl
# Prometheus Operator
resource "helm_release" "prometheus_operator" {
  name       = "kube-prometheus-stack"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "kube-prometheus-stack"
  namespace  = "monitoring"
  create_namespace = true

  set {
    name  = "prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.storageClassName"
    value = "fast-ssd"
  }

  set {
    name  = "prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage"
    value = "100Gi"
  }

  set {
    name  = "grafana.adminPassword"
    value = var.grafana_password
  }

  set {
    name  = "grafana.sidecar.dashboards.enabled"
    value = "true"
  }

  set {
    name  = "grafana.sidecar.datasources.enabled"
    value = "true"
  }
}

# Loki Stack
resource "helm_release" "loki" {
  name       = "loki"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "loki-stack"
  namespace  = "monitoring"

  set {
    name  = "loki.persistence.enabled"
    value = "true"
  }

  set {
    name  = "loki.persistence.size"
    value = "50Gi"
  }

  set {
    name  = "promtail.enabled"
    value = "true"
  }

  set {
    name  = "grafana.enabled"
    value = "false"
  }
}

# Jaeger Operator
resource "helm_release" "jaeger_operator" {
  name       = "jaeger-operator"
  repository = "https://jaegertracing.github.io/helm-charts"
  chart      = "jaeger-operator"
  namespace  = "monitoring"

  set {
    name  = "rbac.create"
    value = "true"
  }

  set {
    name  = "nodeSelector.\"beta\\.kubernetes\\.io/os\""
    value = "linux"
  }
}

# Custom Grafana Dashboard
resource "kubernetes_config_map" "app_dashboard" {
  metadata {
    name      = "app-dashboard"
    namespace = "monitoring"
    labels = {
      grafana_dashboard = "1"
    }
  }

  data = {
    "app-dashboard.json" = file("${path.module}/dashboards/app-dashboard.json")
  }
}

# AlertManager Config
resource "kubernetes_secret" "alertmanager_config" {
  metadata {
    name      = "alertmanager-main"
    namespace = "monitoring"
  }

  data = {
    "alertmanager.yaml" = templatefile("${path.module}/alertmanager.yml", {
      slack_webhook = var.slack_webhook_url
      smtp_password = var.smtp_password
    })
  }

  type = "Opaque"
}
```

***

## 🚀 **Production Monitoring Setup**

### Complete Production Monitoring Configuration

```yaml
# Monitoring namespace
apiVersion: v1
kind: Namespace
metadata:
  name: monitoring
  labels:
    name: monitoring
    monitoring: enabled

---
# Prometheus ServiceMonitor for applications
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: application-metrics
  namespace: monitoring
  labels:
    app: application-monitor
spec:
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      app: my-application
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
    honorLabels: true

---
# Grafana Dashboard ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: production-dashboards
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  kubernetes-overview.json: |
    {
      "dashboard": {
        "title": "Production Kubernetes Overview",
        "refresh": "30s",
        "time": {
          "from": "now-1h",
          "to": "now"
        },
        "panels": [
          {
            "title": "Cluster Overview",
            "type": "stat",
            "targets": [
              {
                "expr": "up{job=\"kubernetes-apiservers\"}",
                "legendFormat": "API Server"
              }
            ]
          }
        ]
      }
    }

---
# Alert for production monitoring
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: production-alerts
  namespace: monitoring
spec:
  groups:
  - name: production.rules
    rules:
    - alert: ProductionServiceDown
      expr: up{job=~".*production.*"} == 0
      for: 1m
      labels:
        severity: critical
        environment: production
      annotations:
        summary: "Production service {{ $labels.job }} is down"
        description: "Production service {{ $labels.job }} has been down for more than 1 minute"
        runbook_url: "https://runbooks.example.com/production-service-down"
```

***

## 📚 **Resources and References**

### Official Documentation

* [Prometheus Configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/)
* [Grafana Dashboards](https://grafana.com/docs/grafana/latest/dashboards/)
* [Jaeger Documentation](https://www.jaegertracing.io/docs/)
* [Loki Configuration](https://grafana.com/docs/loki/latest/configuration/)

### Advanced Reading

* [OpenTelemetry Specification](https://opentelemetry.io/docs/reference/specification/)
* [Vector Documentation](https://vector.dev/docs/)
* [AlertManager Configuration](https://prometheus.io/docs/alerting/latest/configuration/)
* [Custom Metrics API](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis)

### Cheatsheet Summary

```bash
# Monitoring Commands
kubectl port-forward svc/prometheus-operated 9090:9090 -n monitoring
kubectl port-forward svc/grafana 3000:3000 -n monitoring
kubectl port-forward svc/loki 3100:3100 -n monitoring

# Query Commands
curl -G 'http://localhost:9090/api/v1/query' --data-urlencode 'query=up'
curl -G 'http://localhost:3100/loki/api/v1/query' --data-urlencode 'query={app="my-app"}'

# Debug Commands
kubectl get prometheusrules -A
kubectl get servicemonitors -A
kubectl get pods -n monitoring
```

The advanced monitoring documentation is ready to use! 📊
