# Networking Advanced

> 🔥 **Deep Dive Networking**: Panduan lengkap networking Kubernetes yang kompleks, CNI, dan troubleshooting lanjutan.

***

## 📋 **Daftar Isi**

### **🌐 Container Network Interface (CNI)**

* [CNI Overview](#cni-overview)
* [Popular CNI Plugins](#popular-cni-plugins)
* [CNI Configuration](#cni-configuration)
* [Multi-CNI](#multi-cni)
* [CNI Troubleshooting](#cni-troubleshooting)

### **🔧 Network Policy Implementation**

* [Network Policy Deep Dive](#network-policy-deep-dive)
* [Advanced Policy Patterns](#advanced-policy-patterns)
* [Egress vs Ingress Control](#egress-vs-ingress-control)
* [Policy Performance](#policy-performance)
* [Policy Debugging](#policy-debugging)

### **🚀 Service Discovery**

* [CoreDNS Advanced](#coredns-advanced)
* [Custom DNS Configuration](#custom-dns-configuration)
* [Service Discovery Patterns](#service-discovery-patterns)
* [Headless Services](#headless-services)
* [ExternalName Services](#externalname-services)

### **🔗 Ingress Controllers**

* [NGINX Ingress Advanced](#nginx-ingress-advanced)
* [Traefik Configuration](#traefik-configuration)
* [HAProxy Ingress](#haproxy-ingress)
* [Ingress Performance](#ingress-performance)
* [Multi-Tenant Ingress](#multi-tenant-ingress)

### **📊 Load Balancing**

* [Load Balancer Types](#load-balancer-types)
* [Health Checks](#health-checkes)
* [Session Affinity](#session-affinity)
* [Traffic Distribution](#traffic-distribution)
* [Global Load Balancing](#global-load-balancing)

### **🔍 Network Troubleshooting**

* [Connectivity Issues](#connectivity-issues)
* [Performance Problems](#performance-problems)
* [DNS Issues](#dns-issues)
* [Packet Capture](#packet-capture)
* [Network Monitoring](#network-monitoring)

***

## 🌐 **Container Network Interface (CNI)**

### CNI Overview

**📖 Konsep Dasar CNI** Container Network Interface (CNI) adalah spesifikasi untuk menghubungkan containers ke network. CNI mendefinisikan bagaimana network interface harus dikonfigurasi ketika container dibuat dan dihapus.

**🎯 CNI Components**

* **CNI Plugin**: Plugin yang mengimplementasikan spesifikasi CNI
* **Network Configuration**: Data structure yang mendefinisikan network settings
* **Runtime Interface**: Interface antara container runtime dan CNI plugin
* **IPAM (IP Address Management)**: Manajemen alokasi IP address

**🔧 CNI Workflow**

```
Container Created → Runtime Calls CNI ADD → CNI Plugin Configures Network → IP Allocation → Interface Creation → Container Connected
```

### Popular CNI Plugins

**🚀 CNI Plugin Options**

| Plugin        | Type         | Features                       | Use Case                     |
| ------------- | ------------ | ------------------------------ | ---------------------------- |
| **Calico**    | BGP, Overlay | Network policies, BGP routing  | Production, security-focused |
| **Flannel**   | Overlay      | Simple, lightweight            | Development, testing         |
| **Weave Net** | Overlay      | Mesh networking, encryption    | Simple deployment            |
| **Cilium**    | eBPF         | Advanced features, performance | High-performance networking  |
| **Antrea**    | eBPF         | Open source, feature-rich      | Enterprise environments      |
| **Canal**     | Overlay      | Flannel + Calico               | Hybrid approach              |

**🔧 CNI Plugin Installation**

```bash
# Calico Installation
kubectl create clusterrolebinding calico-node-binding --clusterrole=calico-node --user=system:serviceaccount:kube-system:calico-node
curl https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/calico.yaml | kubectl apply -f -

# Flannel Installation
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Cilium Installation
helm repo add cilium https://helm.cilium.io/
helm repo update
helm install cilium cilium/cilium --namespace kube-system

# Weave Net Installation
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```

### CNI Configuration

**⚙️ Advanced CNI Configuration**

**Calico Advanced Configuration**

```yaml
# Calico ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  calico_backend: "bird"
  cni_network_config: |
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "mtu": 1500,
          "ipam": {
            "type": "calico-ipam"
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }
  typha_service_name: "none"
  veth_mtu: "1440"
  cni_default_network: "k8s-pod-network"
  enable_bandwidth_plugin: "true"
  cni_networking_backend: "calico"
```

**Cilium Advanced Configuration**

```yaml
# Cilium ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: cilium-config
  namespace: kube-system
data:
  enable-bandwidth-manager: "true"
  enable-bpf-masquerade: "true"
  enable-endpoint-routes: "true"
  enable-health-checking: "true"
  enable-hubble: "true"
  enable-ipv4: "true"
  enable-ipv6: "false"
  enable-l7-proxy: "true"
  enable-node-port: "true"
  kube-proxy-replacement: "strict"
  operator-prometheus-enable: "true"
  tunnel: "vxlan"
  # Advanced eBPF features
  sockops-enable: "true"
  enable-snat-frontend-handling: "true"
  bpf-lb-algorithm: "maglev"
  loadBalancer-acceleration: "true"
```

### Multi-CNI

**🔧 Multiple CNI Configuration**

**Chained CNI Configuration**

```yaml
# /etc/cni/net.d/10-calico.conf
{
  "name": "calico",
  "type": "calico",
  "cniVersion": "0.3.1",
  "ipam": {
    "type": "calico-ipam"
  },
  "policy": {
    "type": "k8s"
  },
  "kubernetes": {
    "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
  }
}

# /etc/cni/net.d/20-portmap.conf
{
  "name": "portmap",
  "type": "portmap",
  "cniVersion": "0.3.1",
  "capabilities": {
    "portMappings": true
  },
  "snat": true,
  "iptables": {
    "insert": true
  }
}

# /etc/cni/net.d/90-bandwidth.conf
{
  "name": "bandwidth",
  "type": "bandwidth",
  "cniVersion": "0.3.1",
  "capabilities": {
    "bandwidth": true
  },
  "ingressBps": 10000000,
  "egressBps": 10000000
}
```

### CNI Troubleshooting

**🔧 CNI Debug Commands**

```bash
# Check CNI plugin status
kubectl get pods -n kube-system -l k8s-app=calico-node
kubectl get pods -n kube-system -l app=flannel
kubectl get pods -n kube-system -l k8s-app=cilium

# Check CNI configuration
kubectl get configmap -n kube-system calico-config -o yaml
kubectl get configmap -n kube-system cilium-config -o yaml

# Check network policies
kubectl get networkpolicies --all-namespaces
kubectl describe networkpolicy <policy-name> -n <namespace>

# Test pod connectivity
kubectl run test-pod --image=busybox --rm -it --restart=Never -- wget -qO- http://google.com

# Check CNI logs
kubectl logs -n kube-system -l k8s-app=calico-node
kubectl logs -n kube-system -l app=flannel
kubectl logs -n kube-system -l k8s-app=cilium-agent

# Check CNI metrics
kubectl exec -n kube-system -l k8s-app=calico-node -- calicoctl get workloadendpoints
kubectl exec -n kube-system -l app=cilium-agent -- cilium status
```

***

## 🔧 **Network Policy Implementation**

### Network Policy Deep Dive

**📖 Network Policy Fundamentals** Network policies di Kubernetes mengontrol traffic flow pada level pod dan namespace. Mereka menggunakan label selector untuk mengidentifikasi target dan sumber traffic.

**🎯 Policy Engine Types**

* **iptables-based**: Traditional Linux iptables (kube-router, Calico)
* **eBPF-based**: High-performance eBPF (Cilium)
* **IPVS-based**: IP virtualization server (kube-proxy)
* **Userspace**: Application layer enforcement

**🔧 Advanced Network Policy Examples**

```yaml
# Layer 7 HTTP policy with Calico
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-http-traffic
spec:
  selector: all()
  order: 100
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: app == 'frontend'
      namespaceSelector: 'production'
    destination:
      selector: app == 'backend'
      ports:
      - 80
    http:
      methods:
      - "GET"
      - "POST"
      paths:
      - "/api/v1/*"
      - "/health"
      - "/metrics"
      host: "api.example.com"
      headers:
        "Authorization": "^Bearer .*"
        "X-API-Key": "^api-key-.*"

# Egress policy for external API access
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: external-api-access
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-client
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  - to:
    - ipBlock:
        cidr: 52.28.192.0/18
        except:
        - 52.28.194.0/24
        - 52.28.196.0/24
    ports:
    - protocol: TCP
      port: 80
      port: 443

# Namespace isolation with cross-namespace communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: production
  - from:
    - namespaceSelector:
        matchLabels:
          name: staging
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: production
  - to:
    - namespaceSelector:
        matchLabels:
          name: database
    ports:
    - protocol: TCP
      port: 5432
```

### Advanced Policy Patterns

**🔧 Complex Network Policy Patterns**

**Service Mesh Integration**

```yaml
# Istio + Network Policy Integration
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: istio-integration
  namespace: production
spec:
  podSelector:
    matchLabels:
      istio-injection: enabled
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          istio-injection: enabled
    - podSelector:
        matchLabels:
          istio-injection: enabled
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          istio-injection: enabled
    - podSelector:
        matchLabels:
          istio-injection: enabled
  # Allow egress to istiod
  - to:
    - namespaceSelector:
        matchLabels:
          k8s-app: istiod
    ports:
    - protocol: TCP
      port: 15012
```

**Microsegmentation**

```yaml
# Zero Trust microsegmentation
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: zero-trust-db
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: database
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api-gateway
          tier: backend
    - podSelector:
        matchLabels:
          app: payment-service
          tier: backend
    ports:
    - protocol: TCP
      port: 5432
    # Only allow specific operations
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          name: monitoring
    ports:
    - protocol: TCP
      port: 9090  # Prometheus
```

### Egress vs Ingress Control

**🌐 Egress and Ingress Control**

**Comprehensive Egress Control**

```yaml
# Egress policy with DNS and time restrictions
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restricted-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      environment: production
  policyTypes:
  - Egress
  egress:
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Allow time sync
  - to: []
    ports:
    - protocol: UDP
      port: 123
  # Allow specific external services
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 80
  # Block specific ranges
  - to:
    - ipBlock:
        cidr: 192.168.1.0/24
    ports:
    - protocol: TCP
      port: 22
      port: 3389
```

### Policy Performance

**📊 Network Policy Performance Optimization**

**Calico Performance Tuning**

```yaml
# Calico performance configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  calico_backend: "bird"
  cni_network_config: |
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "mtu": 1500,
          "ipam": {
            "type": "calico-ipam",
            "assign_ipv4": "true",
            "assign_ipv6": "false"
          },
          "policy": {
            "type": "k8s",
            "k8s_api_root": "https://10.96.0.1:443",
            "k8s_client_key": "/var/run/secrets/kubernetes.io/serviceaccount/calico-node/token"
          }
        }
      ]
    }
  # Performance optimizations
  ipam_type: "calico-ipam"
  calico_node: "true"
  enable_ipv4: "true"
  enable_ipv6: "false"
  mtu: "1440"
  # Optimizations for large clusters
  cluster_type: "k8s"
  # BGP configuration for performance
  bgp_log_severity: "Info"
  logseverityscreen: "info"
  bpfEnabled: "true"
  # Reduce log verbosity for performance
  log_file_path: "/var/log/calico/cni.log"
  log_level: "info"
```

### Policy Debugging

**🔧 Network Policy Debugging**

**Debug Commands and Tools**

```bash
# Check network policy enforcement
kubectl exec -it <pod-name> -- iptables -L -n vcalico-pod
kubectl exec -it <pod-name> -- ip rule show
kubectl exec -it <pod-name> -- ip route show

# Calico policy debugging
kubectl exec -it calico-node-xxxx -n kube-system -- calicoctl get workloadendpoints
kubectl exec -it calico-node-xxxx -n kube-system -- calicoctl get networkpolicy
kubectl exec -it calico-node-xxxx -n kube-system -- calicoctl get profiles

# Cilium policy debugging
kubectl exec -n kube-system cilium-xxxx -- cilium policy list
kubectl exec -n kube-system cilium-xxxx -- cilium endpoint list
kubectl exec -n kube-system cilium-xxxx -- cilium bpf policy dump

# Test policy connectivity
kubectl run test-pod --image=nicolaka/netshoot --rm -it --restart=Never -- \
  wget -qO- http://<service-name>.<namespace>.svc.cluster.local

# Generate network policy report
kubectl get networkpolicy -A -o json | jq '.items[] | {name: .metadata.name, namespace: .metadata.namespace, podSelector: .spec.podSelector}'
```

***

## 🚀 **Service Discovery**

### CoreDNS Advanced

**🔧 CoreDNS Configuration**

**Advanced CoreDNS Configuration**

```yaml
# CoreDNS ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
        # Custom zones
        example.com:53 {
          errors
          cache 5
          forward . 8.8.8.8 8.8.4.4
        }
        # Custom plugins
        hosts {
          192.168.1.100 api.internal.local
          192.168.1.101 db.internal.local
          fallthrough
        }
        # Logging
        log
    }
    # Stub domains
    stubDomains: |
      example.com:
        - 8.8.8.8
        - 8.8.4.4
      internal.local:
        - 192.168.1.1
        - 192.168.1.2
    # Custom DNS entries
    customEntries: |
      api.example.com: 192.168.1.100
      db.example.com: 192.168.1.101
```

**CoreDNS Performance Tuning**

```yaml
# Performance-optimized CoreDNS
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        # Performance optimizations
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 5  # Lower TTL for faster updates
          transfer /etc/coredns.zones
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          policy random
          max_concurrent 1000
          expire 5s
        }
        cache 30 {
          serve_stale 60
          prefetch 10
        }
        loop
        reload
        loadbalance round_robin
        # DNSSEC validation
        dnssec {
          validation: true
        }
        # Logging with rate limiting
        log {
          class error
          errors
          log_level: error
          max_entries 1000
          rate_limit 10
        }
    }
```

### Custom DNS Configuration

**🔧 Custom DNS Setup**

**External DNS Integration**

```yaml
# External DNS for cloud DNS management
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
      matchLabels:
        app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.13.5
        args:
        - --provider=aws
        - --policy=upsert-only
        - --aws-zone-type=public
        - --registry=txt
        - --txt-owner-id=external-dns
        - --domain-filter=example.com
        - --txt-prefix=external-dns
        - --log-level=info
        - --interval=1m
        env:
        - name: AWS_ACCESS_KEY_ID
          valueFrom:
            secretKeyRef:
              name: external-dns
              key: AWS_ACCESS_KEY_ID
        - name: AWS_SECRET_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: external-dns
              key: AWS_SECRET_ACCESS_KEY
        resources:
          requests:
            cpu: 50m
            memory: 50Mi
          limits:
            cpu: 100m
            memory: 100Mi
```

**Split-Horizon DNS**

```yaml
# DNS configuration for internal vs external resolution
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-split-horizon
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        # Internal cluster resolution
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
          ttl 30
        }
        # Internal domain resolution
        internal.local:53 {
          errors
          cache 30
          forward . 192.168.1.1
          log
        }
        # External resolution
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
        log
    }
```

### Service Discovery Patterns

**🔧 Advanced Service Discovery Patterns**

**Service Discovery with Consul**

```yaml
# Consul service mesh integration
apiVersion: v1
kind: Service
metadata:
  name: consul
  namespace: consul
  labels:
    app: consul
spec:
  ports:
  - name: http
    port: 8500
    targetPort: 8500
  - name: dns
    port: 8600
    targetPort: 8600
    protocol: TCP
  selector:
    app: consul

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
  namespace: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      containers:
      - name: consul
        image: consul:1.15
        env:
        - name: CONSUL_BIND_ADDR
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: CONSUL_GOSSIP_JOIN
          value: "dns-consul.consul"
        ports:
        - containerPort: 8500
        - containerPort: 8600
        volumeMounts:
        - name: consul-data
          mountPath: /consul/data
  volumeClaimTemplates:
  - metadata:
      name: consul-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```

### Headless Services

**🔧 Headless Service Configuration**

```yaml
# Headless service for direct pod communication
apiVersion: v1
kind: Service
metadata:
  name: backend-headless
  namespace: production
  labels:
    app: backend
spec:
  clusterIP: None  # This makes it a headless service
  selector:
    app: backend
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: grpc
    port: 9090
    targetPort: 9090

---
# Deployment for headless service
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: backend
  namespace: production
spec:
  serviceName: backend-headless
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: my-backend:latest
        ports:
        - containerPort: 8080
          name: http
        - containerPort: 9090
          name: grpc
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
```

### ExternalName Services

**🔧 ExternalName Service Configuration**

```yaml
# ExternalName service for external service aliasing
apiVersion: v1
kind: Service
metadata:
  name: external-api
  namespace: production
spec:
  type: ExternalName
  externalName: api.external-service.com

---
# Service with port mapping
apiVersion: v1
kind: Service
metadata:
  name: mapped-external-api
  namespace: production
spec:
  type: ExternalName
  externalName: api.external-service.com
  ports:
  - port: 80
    targetPort: 443
    protocol: TCP
    name: https

---
# Service with multiple external names
apiVersion: v1
kind: Service
metadata:
  name: multi-database
  namespace: production
spec:
  type: ExternalName
  externalName: postgresql.rds.amazonaws.com
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
    name: postgres
  - port: 3306
    targetPort: 3306
    protocol: TCP
    name: mysql
```

***

## 🔗 **Ingress Controllers**

### NGINX Ingress Advanced

**🚀 NGINX Ingress Advanced Configuration**

```yaml
# Advanced NGINX Ingress Controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - name: nginx-ingress-controller
        image: k8s.gcr.io/ingress-nginx/controller:v1.8.1
        args:
        - /nginx-ingress-controller
        - --publish-status-address=192.168.1.100
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --enable-metrics=true
        - --global-endpoints-slice
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
        - name: https
          containerPort: 443
          protocol: TCP
        - name: metrics
          containerPort: 10254
          protocol: TCP
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
          limits:
            cpu: 1000m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
          initialDelaySeconds: 5
          periodSeconds: 5
        volumeMounts:
        - name: nginx-ingress-controller
          mountPath: /etc/nginx
        - name: nginx-ingress-controller
          mountPath: /etc/nginx-template
      volumes:
      - name: nginx-ingress-controller
        configMap:
          name: nginx-configuration
      - name: nginx-ingress-controller
        configMap:
          name: nginx-template

---
# NGINX Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  nginx.conf: |
    worker_processes auto;
    worker_cpu_affinity auto;
    worker_rlimit_nofile 1048576;

    events {
        worker_connections 16384;
        use epoll;
        multi_accept on;
    }

    http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        client_max_body_size 100m;

        # Gzip compression
        gzip on;
        gzip_vary on;
        gzip_min_length 1024;
        gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        # Rate limiting
        limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
        limit_req_zone $server_name zone=perserver:10m rate=100r/s;

        # Logging
        log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
        access_log /var/log/nginx/access.log main;

        # Performance tuning
        open_file_cache max=1000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # Security headers
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header X-Content-Type-Options "nosniff" always;
        add_header X-XSS-Protection "1; mode=block" always;
        add_header Referrer-Policy "no-referrer-when-downgrade" always;
        add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;

        include /etc/nginx/conf.d/*.conf;
    }
```

### Traefik Configuration

**🔧 Traefik Advanced Setup**

```yaml
# Traefik Ingress Controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: traefik
  namespace: traefik
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: traefik
  template:
    metadata:
      labels:
        app.kubernetes.io/name: traefik
    spec:
      serviceAccountName: traefik
      containers:
      - name: traefik
        image: traefik:v2.9
        args:
        - --api.dashboard=true
        - --providers.kubernetes
        - --entrypoints.web.address=:80
        - --entrypoints.websecure.address=:443
        - --certificatesresolvers.myresolver.acme.tlschallenge=true
        - --metrics.prometheus=true
        - --accesslog=true
        - --log.level=INFO
        - --global.checknewversion=false
        - --global.sendanonymoususage=false
        ports:
        - name: web
          containerPort: 80
        - name: websecure
          containerPort: 443
        - name: metrics
          containerPort: 8080
        - name: dashboard
          containerPort: 8080
        volumeMounts:
        - name: config
          mountPath: /etc/traefik
        - name: acme
          mountPath: /acme
      volumes:
      - name: config
        configMap:
          name: traefik-config
      - name: acme
        persistentVolumeClaim:
          claimName: traefik-acme

---
# Traefik Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: traefik
data:
  traefik.yml: |
    global:
      checkNewVersion: false
      sendAnonymousUsage: false

    api:
      dashboard: true
      insecure: true

    metrics:
      prometheus:
        addEntryPointsLabels: true
        addServicesLabels: true
        manualRouting: true

    accessLog:
      filePath: "/var/log/traefik/access.log"
      format: json
      bufferingSize: 100
      fields:
        defaultMode: keep
        names:
          ClientUsername: drop
        headers:
          defaultMode: keep
          names:
            User-Agent: keep
            Authorization: keep
            Content-Type: keep

    entryPoints:
      web:
        address: ":80"
        http:
          redirections:
            entryPoint:
              to: websecure
              scheme: https
              permanent: true
      websecure:
        address: ":443"
        http:
          tls:
            certResolver: myresolver
            domains:
              - main: "example.com"
              - sans: "*.example.com"

    certificatesResolvers:
      myresolver:
        acme:
          email: admin@example.com
          storage: "/acme/acme.json"
          httpChallenge:
            entryPoint: web

    providers:
      kubernetesIngress:
        publishedService:
          enabled: true
          allowEmptyServices: true
        namespaces:
          - production
          - staging
      kubernetesCRD:
        enabled: true
        namespaces:
          - production
          - staging
```

### HAProxy Ingress

**⚖️ HAProxy Advanced Configuration**

```yaml
# HAProxy Ingress Controller
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-ingress
  namespace: haproxy-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      run: haproxy-ingress
  template:
    metadata:
      labels:
        run: haproxy-ingress
    spec:
      containers:
      - name: haproxy-ingress
        image: haproxytech/kubernetes-ingress:1.9.0
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: stat
          containerPort: 1936
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --ingress-class=haproxy
        - --configmap=$(POD_NAMESPACE)/haproxy-ingress
        - --stats-port=1936
        - --http-port=80
        - --https-port=443
        - --sync-period=10s
        - --wait-before-shutdown=0
        - --backend-server-slots-increment=2
        - --healthz-port=10253
        - --healthz-path=/healthz
        volumeMounts:
        - name: haproxy-ingress
          mountPath: /etc/haproxy-ingress
      volumes:
      - name: haproxy-ingress
        configMap:
          name: haproxy-ingress

---
# HAProxy Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-ingress
  namespace: haproxy-ingress
data:
  haproxy.cfg: |
    global
      maxconn 2000
      log stdout format raw local0
      daemon
      stats socket /var/run/haproxy.sock mode 660 level admin
      stats timeout 30s
      tune.ssl.default-dh-param 2048
      tune.ssl.ciphers HIGH:!aNULL:!MD5
      tune.ssl.options no-sslv3 no-tlsv1

    defaults
      mode http
      timeout connect 5000
      timeout client 50000
      timeout server 50000
      timeout http-request 10s
      timeout http-keep-alive 10s
      timeout check 10s
      maxconn 2000
      log-format "%ci:%cp:%si:%st:%t:%B:%T:%f:%b:%s"
      option httplog
      option dontlognull
      option httpchk GET /healthz
      option tcplog
      option tcpka
      option http-server-close
      option forwardfor

    frontend stats
      bind *:1936
      mode http
      stats enable
      stats uri /
      stats realm HAProxy\ Statistics
      stats auth admin:password

    listen http
      bind *:80
      mode http
      option httpchk GET /healthz
      balance roundrobin
      server-template server 1 _._._.http.health-check.check 80 check

    listen https
      bind *:443
      mode tcp
      option tcplog
      balance roundrobin
      server-template server 1 _._._.https.health-check.check 443 check
```

### Ingress Performance

**📊 Ingress Performance Optimization**

**Performance Metrics Collection**

```yaml
# Prometheus configuration for Ingress monitoring
apiVersion: v1
kind: ServiceMonitor
metadata:
  name: nginx-ingress-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics

---
# Grafana dashboard for Ingress performance
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-performance-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  ingress-performance.json: |
    {
      "dashboard": {
        "title": "Ingress Performance Dashboard",
        "panels": [
          {
            "title": "Request Rate",
            "type": "graph",
            "targets": [
              {
                "expr": "rate(nginx_ingress_controller_requests[5m])",
                "legendFormat": "{{method}} {{ingress}}"
              }
            ]
          },
          {
            "title": "Response Time",
            "type": "graph",
            "targets": [
              {
                "expr": "histogram_quantile(0.50, nginx_ingress_controller_response_duration_seconds_bucket)",
                "legendFormat": "P50"
              }
            ]
          }
        ]
      }
    }
```

### Multi-Tenant Ingress

**🏢 Multi-Tenant Ingress Architecture**

```yaml
# Multi-tenant ingress with isolation
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tenant-a-ingress
  namespace: tenant-a
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /tenant-a/$2
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "100"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Tenant-ID "tenant-a";
      add_header X-Tenant-ID "tenant-a" always;
spec:
  tls:
  - hosts:
    - tenant-a.example.com
    secretName: tenant-a-tls
  rules:
  - host: tenant-a.example.com
    http:
      paths:
      - path: /api/(.*)
        pathType: Prefix
        backend:
          service:
            name: tenant-a-api
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tenant-a-frontend
            port:
              number: 3000

---
# Tenant-specific rate limiting
apiVersion: v1
kind: ConfigMap
metadata:
  name: tenant-a-ratelimit
  namespace: ingress-nginx
data:
  ratelimit.conf: |
    # 100 requests per second for tenant-a
    rate-limiting: "100"
    # 10 concurrent connections
    limit-conn: "10"
```

***

## 📊 **Load Balancing**

### Load Balancer Types

**⚖️ Kubernetes Load Balancer Options**

| Type             | Description                  | Use Case             | Pros         | Cons               |
| ---------------- | ---------------------------- | -------------------- | ------------ | ------------------ |
| **NodePort**     | Exposes service on node port | Testing, development | Simple       | Limited port range |
| **ClusterIP**    | Internal cluster access      | Internal services    | Fast         | Cluster-only       |
| **LoadBalancer** | Cloud provider LB            | Production           | Auto-scaling | Costly             |
| **ExternalName** | DNS alias                    | External services    | Simple       | No health checks   |

**Advanced LoadBalancer Configuration**

```yaml
# AWS NLB with advanced features
apiVersion: v1
kind: Service
metadata:
  name: advanced-loadbalancer
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internet-facing"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:123456789012:certificate/12345678-1234-1234-1234-123456789012"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=production,Team=backend"
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  externalTrafficPolicy: Local
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
```

### Health Checks

**🔍 Advanced Health Check Configuration**

```yaml
# Comprehensive health checks
apiVersion: v1
kind: Service
metadata:
  name: health-check-service
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: "/health"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "30"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-timeout: "5"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: "HTTP"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-http-method: "GET"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-http-code: "200,202"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "3"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "HTTP"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-success-codes: "200,201,202"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-group-ip-address-type: "ipv4"
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443

---
# Application with multiple health endpoints
apiVersion: v1
kind: Pod
metadata:
  name: health-check-pod
  labels:
    app: web-app
spec:
  containers:
  - name: web-app
    image: my-app:latest
    ports:
    - containerPort: 8080
      name: http
    - containerPort: 8443
      name: https
    # Liveness probe
    livenessProbe:
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 30
      periodSeconds: 10
      timeoutSeconds: 5
      failureThreshold: 3
      successThreshold: 1
    # Readiness probe
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 3
      failureThreshold: 3
      successThreshold: 1
    # Startup probe
    startupProbe:
      httpGet:
        path: /health/startup
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    # Additional health check
    lifecycle:
      preStop:
        httpGet:
          path: /health/shutdown
          port: 8080
```

### Session Affinity

**🔧 Session Persistence Configuration**

```yaml
# Service with session affinity
apiVersion: v1
kind: Service
metadata:
  name: session-affinity-service
  namespace: production
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 3600
  externalTrafficPolicy: Local

---
# Alternative session affinity using headers
apiVersion: v1
kind: ConfigMap
metadata:
  name: session-config
  namespace: production
data:
  nginx.conf: |
    upstream backend {
      ip_hash;
      server app-1:8080 max_fails=3 fail_timeout=30s;
      server app-2:8080 max_fails=3 fail_timeout=30s;
      server app-3:8080 max_fails=3 fail_timeout=30s;
    }

    server {
      listen 80;
      server_name _;

      location / {
        proxy_pass http://backend;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header X-Session-ID $cookie_session_id;
      }
    }
```

### Traffic Distribution

**📊 Advanced Traffic Distribution**

```yaml
# Weighted traffic distribution
apiVersion: v1
kind: Service
metadata:
  name: weighted-service
  namespace: production
spec:
  selector:
    app: web-app
  ports:
  - port: 80
    targetPort: 8080
  # Using annotations for weighted distribution (with Ingress)
  # Weight: 70% to v1, 30% to v2

---
# Ingress with weighted routing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: weighted-routing
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/canary-weight: "30"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-by-header-value: "true"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service-v1
            port:
              number: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service-v2
            port:
              number: 80

---
# Advanced load balancing with custom headers
apiVersion: v1
kind: Service
metadata:
  name: custom-lb-service
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: "Environment=production,Team=backend"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol-version: "HTTP/1.1"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:123456789012:certificate/12345678-1234-1234-1234-123456789012"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "https"
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  externalTrafficPolicy: Local
```

### Global Load Balancing

**🌍 Global Load Balancing Architecture**

```yaml
# Global DNS-based load balancing
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: global-loadbalancer
  namespace: production
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # GeoIP based routing
      geo $country $geoip_country {
        default US;
        US "United States";
        CA "Canada";
        GB "United Kingdom";
        DE "Germany";
        FR "France";
        JP "Japan";
        AU "Australia";
      }

      # Regional routing
      if ($geoip_country = "US") {
        proxy_pass http://us-west-app;
      }
      if ($geoip_country = "CA") {
        proxy_pass http://ca-east-app;
      }
      if ($geoip_country = "GB") {
        proxy_pass http://eu-west-app;
      }
      if ($geoip_country = "DE") {
        proxy_pass http://eu-central-app;
      }
      if ($geoip_country = "FR") {
        proxy_pass http://eu-west-app;
      }
      if ($geoip_country = "JP") {
        proxy_pass http://ap-northeast-app;
      }
      if ($geoip_country = "AU") {
        proxy_pass http://ap-southeast-app;
      }

      # Add country header
      proxy_set_header X-Country $geoip_country;

      # Health check
      location /health {
        access_log off;
        return 200 "healthy";
      }
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: global-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: default-app
            port:
              number: 80

---
# Regional service configurations
apiVersion: v1
kind: Service
metadata:
  name: us-west-app
  namespace: production
  labels:
    region: us-west
spec:
  selector:
    app: web-app
    region: us-west
  ports:
  - port: 80
    targetPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: eu-west-app
  namespace: production
  labels:
    region: eu-west
spec:
  selector:
    app: web-app
    region: eu-west
  ports:
  - port: 80
    targetPort: 8080

---
# Multi-cluster service discovery
apiVersion: v1
kind: Service
metadata:
  name: global-app
  namespace: production
  annotations:
    external-dns.alpha.kubernetes.io/hostname: "app.example.com"
    external-dns.alpha.kubernetes.io/ttl: "60"
spec:
  type: ExternalName
  externalName: global-loadbalancer.production.svc.cluster.local
```

***

## 🔍 **Network Troubleshooting**

### Connectivity Issues

**🔧 Network Connectivity Debugging**

**Common Connectivity Problems**

```bash
# Basic connectivity tests
kubectl run test-pod --image=busybox --rm -it --restart=Never -- \
  /bin/sh -c "nslookup kubernetes.default.svc.cluster.local"

kubectl run test-pod --image=nicolaka/netshoot --rm -it --restart=Never -- \
  /bin/sh -c "ping -c 3 google.com"

kubectl run test-pod --image=nicolaka/netshoot --rm -it --restart=Never -- \
  /bin/sh -c "curl -I http://google.com"

# Service connectivity testing
kubectl exec -it test-pod -- curl http://<service-name>.<namespace>.svc.cluster.local

# Pod-to-pod connectivity
kubectl exec -it <pod-1> -- ping <pod-2>.<namespace>.pod.cluster.local

# DNS resolution debugging
kubectl exec -it <pod-name> -- nslookup kubernetes.default.svc.cluster.local
kubectl exec -it <pod-name> -- cat /etc/resolv.conf

# Check network policies
kubectl get networkpolicies --all-namespaces
kubectl describe networkpolicy <policy-name> -n <namespace>
```

**Advanced Troubleshooting Tools**

```yaml
# Network debugging pod
apiVersion: v1
kind: Pod
metadata:
  name: network-debugger
  namespace: production
spec:
  containers:
  - name: netdebug
    image: nicolaka/netshoot
    command: ["/bin/sh", "-c", "sleep infinity"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
  restartPolicy: Never
```

### Performance Problems

**📊 Network Performance Analysis**

**Performance Debugging Commands**

```bash
# Check network latency
kubectl exec -it <pod-name> -- ping -c 10 <target-ip>

# Check bandwidth
kubectl exec -it <pod-name> -- iperf3 -c <target-ip> -t 30

# Check DNS performance
kubectl exec -it <pod-name> -- dig @kubernetes.default.svc.cluster.local kubernetes.default.svc.cluster.local

# Monitor network metrics
kubectl top node
kubectl top pod
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"

# Check CNI performance
kubectl exec -n kube-system -l k8s-app=calico-node -- calicoctl node status
kubectl exec -n kube-system -l k8s-app=cilium -- cilium status
```

### DNS Issues

**🔍 DNS Troubleshooting**

**Common DNS Problems and Solutions**

```bash
# Check CoreDNS status
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system -l k8s-app=kube-dns

# Test DNS resolution
kubectl exec -it <pod-name> -- nslookup kubernetes.default.svc.cluster.local
kubectl exec -it <pod-name> -- dig kubernetes.default.svc.cluster.local

# Check DNS configuration
kubectl get configmap coredns -n kube-system -o yaml

# Debug DNS with verbose logging
kubectl logs -n kube-system -l k8s-app=kube-dns --tail=100

# Test external DNS
kubectl exec -it <pod-name> -- nslookup google.com
kubectl exec -it <pod-name> -- dig google.com

# Check DNS forwarding
kubectl exec -it <pod-name> -- cat /etc/resolv.conf
```

### Packet Capture

**🔍 Network Packet Capture**

**Packet Capture with tcpdump**

```bash
# Install tcpdump in debug pod
kubectl run tcpdump --image=nicolaka/netshoot --rm -it --restart=Never -- \
  tcpdump -i any -n -vv host <target-ip>

# Capture specific traffic
kubectl exec -it <pod-name> -- tcpdump -i eth0 -n -vv tcp port 80

# Capture all traffic
kubectl exec -it <pod-name> -- tcpdump -i any -w /tmp/capture.pcap

# Capture with specific filters
kubectl exec -it <pod-name> -- tcpdump -i eth0 -n -vv 'tcp[13] & 8 != 0 and tcp[14] & 8 != 0'

# Capture in debug pod
apiVersion: v1
kind: Pod
metadata:
  name: packet-capture
spec:
  containers:
  - name: capture
    image: nicolaka/netshoot
    command: ["/bin/sh", "-c", "sleep infinity"]
    securityContext:
      privileged: true
  restartPolicy: Never
```

### Network Monitoring

**📊 Network Monitoring Setup**

**Prometheus Network Metrics**

```yaml
# Network monitoring ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: network-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: network-exporter
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics

---
# Node network metrics
apiVersion: v1
kind: Service
metadata:
  name: node-exporter
  namespace: monitoring
  labels:
    app: node-exporter
spec:
  ports:
  - name: metrics
    port: 9100
    protocol: TCP
  selector:
    app: node-exporter

---
# Network export DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.6.0
        args:
        - --path.rootfs=/host/rootfs
        - --path.sysfs=/host/sys
        - --path.proc=/host/proc
        - --collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host)($|/)
        ports:
        - containerPort: 9100
          name: metrics
        volumeMounts:
        - name: rootfs
          mountPath: /host/rootfs
        - name: sys
          mountPath: /host/sys
        - name: proc
          mountPath: /host/proc
      volumes:
      - name: rootfs
        hostPath:
          path: /
      - name: sys
        hostPath:
          path: /sys
      - name: proc
        hostPath:
          path: /proc
```

***

## 🎯 **Best Practices**

### **🌐 Networking Best Practices**

1. **Network Design**
   * Use proper CIDR planning
   * Implement network policies early
   * Use dedicated CNI plugins
   * Plan for multi-cluster networking
2. **Security**
   * Implement network policies for all workloads
   * Use encryption for all traffic
   * Monitor network security continuously
   * Regular security audits
3. **Performance**
   * Monitor network metrics
   * Optimize CNI configuration
   * Use appropriate load balancing
   * Implement proper caching
4. **Troubleshooting**
   * Document network architecture
   * Use proper logging and monitoring
   * Have troubleshooting tools ready
   * Regular network health checks

### **🔧 Performance Optimization**

1. **CNI Selection**
   * Choose CNI based on requirements
   * Test performance with workloads
   * Monitor CNI resource usage
   * Plan for scalability
2. **Load Balancing**
   * Use health checks properly
   * Implement session persistence when needed
   * Monitor load balancer metrics
   * Plan for high availability
3. **DNS Configuration**
   * Use proper TTL values
   * Monitor DNS performance
   * Implement DNS redundancy
   * Test DNS resolution regularly

***

## 🔗 **Referensi**

### **📚 Dokumentasi Resmi**

* [Kubernetes Networking](https://kubernetes.io/docs/concepts/services-networking/)
* [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
* [Ingress Controllers](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/)
* [CoreDNS](https://coredns.io/)

### **🛠️ Networking Tools**

* [Calico Documentation](https://docs.projectcalico.org/)
* [Cilium Documentation](https://cilium.io/)
* [NGINX Ingress](https://kubernetes.github.io/ingress-nginx/)
* [Traefik Documentation](https://doc.traefik.io/traefik/)

### **📖 Learning Resources**

* [Kubernetes Networking Guide](https://kubernetes.io/docs/concepts/cluster-administration/networking/)
* [Network Policy Examples](https://github.com/ahmetb/kubernetes-network-policy-recipes)
* [CNI Specification](https://github.com/containernetworking/cni)

***

\*🌐 \**Advanced networking adalah fondasi untuk scalable dan secure Kubernetes clusters*
