# Advanced Performance

## Performance Architecture Overview

### Performance Optimization Layers

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│  Application    │    │   Kubernetes     │    │   Infrastructure│
│  Performance    │◀───│   Performance    │◀───│   Performance   │
│                 │    │                  │    │                 │
│ • Code          │    │ • Scheduling     │    │ • CPU/Memory    │
│ • Memory        │    │ • Networking     │    │ • Storage I/O   │
│ • Caching       │    │ • Resource Mgmt  │    │ • Network BW    │
│ • Database      │    │ • Autoscaling    │    │ • Disk Speed    │
└─────────────────┘    └──────────────────┘    └─────────────────┘
```

## Application Performance Optimization

### Container Optimization

#### Multi-Stage Docker Optimization

```dockerfile
# Optimized multi-stage Dockerfile
FROM golang:1.21-alpine AS builder

# Install build dependencies
RUN apk add --no-cache git ca-certificates tzdata

# Create appuser
RUN adduser -D -s /bin/sh appuser

# Set working directory
WORKDIR /src

# Copy go mod files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build the application
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags='-w -s -extldflags "-static"' \
    -a -installsuffix cgo \
    -o main .

# Final stage
FROM scratch

# Import CA certificates
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# Import timezone data
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo

# Import user
COPY --from=builder /etc/passwd /etc/passwd

# Copy binary
COPY --from=builder /src/main /main

# Use non-root user
USER appuser

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD ["/main", "health"]

# Expose port
EXPOSE 8080

# Set entrypoint
ENTRYPOINT ["/main"]
```

#### Container Resource Optimization

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: optimized-app
  annotations:
    sidecar.istio.io/inject: "false"
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
    prometheus.io/path: "/metrics"
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: app
    image: myapp:v1.0.0-optimized
    resources:
      requests:
        cpu: 100m      # Based on profiling
        memory: 128Mi   # Based on memory usage
        ephemeral-storage: 1Gi
      limits:
        cpu: 500m      # 5x requests for burst capacity
        memory: 256Mi   # 2x requests for safety margin
        ephemeral-storage: 2Gi
    env:
    - name: GOMAXPROCS
      value: "2"        # Cap OS threads; the Go runtime cannot see the 500m cgroup CPU limit
    - name: GOMEMLIMIT
      value: "200MiB"   # Soft limit kept below the 256Mi hard limit for headroom
    - name: GOGC
      value: "100"      # Default GC target; lower it to trade CPU for less memory
    - name: GOTRACEBACK
      value: "1"
    - name: TZ
      value: "UTC"
    ports:
    - containerPort: 8080
      name: http
      protocol: TCP
    - containerPort: 9090
      name: metrics
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /health/live
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 3
      successThreshold: 1
      failureThreshold: 3
    readinessProbe:
      httpGet:
        path: /health/ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 3
      timeoutSeconds: 2
      successThreshold: 1
      failureThreshold: 3
    startupProbe:
      httpGet:
        path: /health/startup
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 2
      timeoutSeconds: 2
      successThreshold: 1
      failureThreshold: 30
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop:
        - ALL
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: cache
      mountPath: /var/cache/app
    - name: config
      mountPath: /app/config
      readOnly: true
  volumes:
  - name: tmp
    emptyDir:
      sizeLimit: 100Mi
  - name: cache
    emptyDir:
      sizeLimit: 500Mi
  - name: config
    configMap:
      name: app-config
  terminationGracePeriodSeconds: 30
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - optimized-app
          topologyKey: kubernetes.io/hostname
  nodeSelector:
    node-type: compute
    cloud.google.com/gke-nodepool: high-performance
  tolerations:
  - key: "high-performance"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  priorityClassName: high-priority
```
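The `GOMAXPROCS`, `GOMEMLIMIT`, and `GOGC` values set above can be verified from inside the process. A minimal sketch (assumes Go 1.19+ for `debug.SetMemoryLimit`; the runtime initializes these knobs from the environment variables at startup):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// effectiveGOMAXPROCS queries the current value without changing it.
func effectiveGOMAXPROCS() int {
	return runtime.GOMAXPROCS(0)
}

// effectiveMemLimit reads the soft memory limit in bytes; it is
// math.MaxInt64 when GOMEMLIMIT is unset.
func effectiveMemLimit() int64 {
	return debug.SetMemoryLimit(-1)
}

// effectiveGOGC reads the GC target percentage by setting a value and
// immediately restoring the previous one.
func effectiveGOGC() int {
	prev := debug.SetGCPercent(100)
	debug.SetGCPercent(prev)
	return prev
}

func main() {
	fmt.Printf("GOMAXPROCS=%d GOMEMLIMIT=%d GOGC=%d\n",
		effectiveGOMAXPROCS(), effectiveMemLimit(), effectiveGOGC())
}
```

Logging these three values at startup is a cheap way to confirm the pod spec actually reached the runtime.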

### Application-Level Performance

#### Go Performance Optimization

```go
package main

import (
    "fmt"
    "net/http"
    "runtime"
    "sync"
    "time"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

// Performance monitoring metrics
var (
    requestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "HTTP request duration in seconds",
            Buckets: prometheus.ExponentialBuckets(0.001, 2, 15),
        },
        []string{"method", "endpoint", "status"},
    )

    requestCount = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status"},
    )

    activeConnections = prometheus.NewGauge(
        prometheus.GaugeOpts{
            Name: "http_active_connections",
            Help: "Number of active HTTP connections",
        },
    )

    goroutineCount = prometheus.NewGauge(
        prometheus.GaugeOpts{
            // "go_goroutines" is already exported by the default Go collector,
            // so registering it again would panic; use an app-scoped name.
            Name: "app_goroutines_current",
            Help: "Number of goroutines",
        },
    )
)

func init() {
    prometheus.MustRegister(requestDuration)
    prometheus.MustRegister(requestCount)
    prometheus.MustRegister(activeConnections)
    prometheus.MustRegister(goroutineCount)
}

// Performance-optimized HTTP handler
func instrumentedHandler(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()
        activeConnections.Inc()
        defer activeConnections.Dec()

        // Update goroutine count
        goroutineCount.Set(float64(runtime.NumGoroutine()))

        // Response writer wrapper
        rw := &responseWriter{
            ResponseWriter: w,
            statusCode:     200,
        }

        // Serve request
        next.ServeHTTP(rw, r)

        // Record metrics
        duration := time.Since(start).Seconds()
        requestDuration.WithLabelValues(r.Method, r.URL.Path, fmt.Sprintf("%d", rw.statusCode)).Observe(duration)
        requestCount.WithLabelValues(r.Method, r.URL.Path, fmt.Sprintf("%d", rw.statusCode)).Inc()
    })
}

type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func (rw *responseWriter) WriteHeader(code int) {
    rw.statusCode = code
    rw.ResponseWriter.WriteHeader(code)
}

// Connection pooling: the buffered channel itself provides safe
// concurrent access, so no mutex is needed.
type connectionPool struct {
    conns chan *http.Client
    size  int
}

func newConnectionPool(size int) *connectionPool {
    pool := &connectionPool{
        conns: make(chan *http.Client, size),
        size:  size,
    }

    // Pre-allocate connections
    for i := 0; i < size; i++ {
        client := &http.Client{
            Timeout: 30 * time.Second,
            Transport: &http.Transport{
                MaxIdleConns:        100,
                MaxIdleConnsPerHost: 10,
                IdleConnTimeout:     90 * time.Second,
                TLSHandshakeTimeout: 10 * time.Second,
            },
        }
        pool.conns <- client
    }

    return pool
}

func (p *connectionPool) Get() *http.Client {
    select {
    case conn := <-p.conns:
        return conn
    default:
        return &http.Client{
            Timeout: 30 * time.Second,
        }
    }
}

func (p *connectionPool) Put(conn *http.Client) {
    select {
    case p.conns <- conn:
    default:
        // Pool is full, discard connection
    }
}

// Caching layer
type cache struct {
    mu    sync.RWMutex
    items map[string]cacheItem
    ttl   time.Duration
}

type cacheItem struct {
    value      string
    expiration time.Time
}

func newCache(ttl time.Duration) *cache {
    cache := &cache{
        items: make(map[string]cacheItem),
        ttl:   ttl,
    }

    // Start cleanup goroutine
    go cache.cleanup()

    return cache
}

func (c *cache) Get(key string) (string, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()

    item, exists := c.items[key]
    if !exists || time.Now().After(item.expiration) {
        return "", false
    }

    return item.value, true
}

func (c *cache) Set(key, value string) {
    c.mu.Lock()
    defer c.mu.Unlock()

    c.items[key] = cacheItem{
        value:      value,
        expiration: time.Now().Add(c.ttl),
    }
}

func (c *cache) cleanup() {
    ticker := time.NewTicker(c.ttl / 2)
    defer ticker.Stop()

    for range ticker.C {
        c.mu.Lock()
        for key, item := range c.items {
            if time.Now().After(item.expiration) {
                delete(c.items, key)
            }
        }
        c.mu.Unlock()
    }
}

func main() {
    // GOMAXPROCS already defaults to NumCPU and honors the GOMAXPROCS env
    // var set in the pod spec; overriding it here would defeat that setting.

    // Initialize components
    connPool := newConnectionPool(10)
    cache := newCache(5 * time.Minute)

    // Setup router with middleware
    mux := http.NewServeMux()
    mux.HandleFunc("/api/data", func(w http.ResponseWriter, r *http.Request) {
        // Try cache first
        if data, hit := cache.Get(r.URL.String()); hit {
            w.Header().Set("X-Cache", "HIT")
            w.Write([]byte(data))
            return
        }

        // Use connection pool for external calls
        client := connPool.Get()
        defer connPool.Put(client)

        // Process request...
        w.Header().Set("X-Cache", "MISS")
        w.Write([]byte("processed data"))
    })

    // Wrap with metrics
    handler := instrumentedHandler(mux)

    // Add metrics endpoint
    mux.Handle("/metrics", promhttp.Handler())

    // Configure server for performance
    server := &http.Server{
        Addr:         ":8080",
        Handler:      handler,
        ReadTimeout:  10 * time.Second,
        WriteTimeout: 10 * time.Second,
        IdleTimeout:  120 * time.Second,
    }

    fmt.Println("Server starting on :8080")
    if err := server.ListenAndServe(); err != nil {
        panic(err)
    }
}
```

#### Java Performance Optimization

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.boot.actuate.autoconfigure.metrics.MeterRegistryCustomizer;

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

@SpringBootApplication
@RestController
public class PerformanceOptimizedApplication {

    private final MeterRegistry meterRegistry;
    private final ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<>();
    private final ScheduledExecutorService cacheCleanup = Executors.newScheduledThreadPool(1);

    public PerformanceOptimizedApplication(MeterRegistry meterRegistry) {
        this.meterRegistry = meterRegistry;

        // Schedule cache cleanup
        cacheCleanup.scheduleAtFixedRate(this::cleanupCache, 1, 1, TimeUnit.HOURS);
    }

    @Bean
    MeterRegistryCustomizer<MeterRegistry> metricsCommonTags() {
        return registry -> registry.config().commonTags(
            "application", "performance-optimized",
            "environment", "production"
        );
    }

    @GetMapping("/api/data")
    public String getData() {
        // Timer.record(Supplier) times the whole call, cache hit or miss
        return meterRegistry.timer("api.request.duration").record(() -> {
            // Try cache first
            String cached = cache.get("data-key");
            if (cached != null) {
                return cached;
            }

            // Process expensive operation
            String result = expensiveOperation();

            // Cache result
            cache.put("data-key", result);

            return result;
        });
    }

    private String expensiveOperation() {
        // Simulate expensive computation
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return "processed-data-" + System.currentTimeMillis();
    }

    private void cleanupCache() {
        // Clean up old entries
        if (cache.size() > 1000) {
            cache.clear();
        }
    }

    @Bean
    public ScheduledExecutorService scheduledExecutorService() {
        return Executors.newScheduledThreadPool(4);
    }

    public static void main(String[] args) {
        // JVM optimizations
        System.setProperty("java.awt.headless", "true");
        System.setProperty("file.encoding", "UTF-8");

        SpringApplication.run(PerformanceOptimizedApplication.class, args);
    }
}
```

## Kubernetes Performance Tuning

### Cluster Performance Optimization

#### kube-scheduler Configuration

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-config
  namespace: kube-system
data:
  config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1beta3
    kind: KubeSchedulerConfiguration
    clientConnection:
      kubeconfig: /etc/kubernetes/scheduler.conf
    leaderElection:
      leaderElect: true
    profiles:
    - schedulerName: default-scheduler
      plugins:
        score:
          enabled:
          - name: NodeResourcesFit
          - name: NodeResourcesBalancedAllocation
          - name: ImageLocality
          - name: InterPodAffinity
          - name: NodeAffinity
          - name: TaintToleration
          - name: PodTopologySpread
        multiPoint:
          enabled:
          - name: NodeResourcesFit
          - name: NodeResourcesBalancedAllocation
      pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            type: MostAllocated
            resources:
            - name: cpu
              weight: 1
            - name: memory
              weight: 1
      - name: NodeResourcesBalancedAllocation
        args:
          resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
      - name: PodTopologySpread
        args:
          defaultConstraints:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: DoNotSchedule
          defaultingType: List
    extenders:
    - urlPrefix: "http://custom-scheduler.default.svc.cluster.local:8080"
      filterVerb: "filter"
      bindVerb: "bind"
      weight: 100
      enableHTTPS: false
      managedResources:
      - name: "gpu"
        ignoredByScheduler: true
```
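The `MostAllocated` scoring strategy configured above favors bin-packing: a node's score rises with the fraction of its allocatable resources already requested. A simplified sketch of the arithmetic (the real plugin also handles extended resources and framework score normalization):

```go
package main

import "fmt"

// mostAllocatedScore computes a 0-100 node score as the weighted average
// of requested/allocatable per resource: fuller nodes score higher, so
// the scheduler packs pods onto fewer nodes.
func mostAllocatedScore(requested, allocatable, weights []int64) int64 {
	var weightedSum, totalWeight int64
	for i := range requested {
		if allocatable[i] <= 0 {
			continue
		}
		req := requested[i]
		if req > allocatable[i] {
			req = allocatable[i] // clamp over-requested resources
		}
		weightedSum += weights[i] * req * 100 / allocatable[i]
		totalWeight += weights[i]
	}
	if totalWeight == 0 {
		return 0
	}
	return weightedSum / totalWeight
}

func main() {
	// 3000m of 4000m CPU requested (score 75) and 4Gi of 8Gi memory
	// (score 50), both weight 1, average to a node score of 62.
	fmt.Println(mostAllocatedScore(
		[]int64{3000, 4 << 30},
		[]int64{4000, 8 << 30},
		[]int64{1, 1},
	))
}
```

With `LeastAllocated` (the default) the fraction would be inverted; `MostAllocated` is the better fit when you want to drain and scale down underused nodes.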

#### kubelet Performance Tuning

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubelet-config
  namespace: kube-system
data:
  config.yaml: |
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    maxPods: 250
    podPidsLimit: 2048
    containerLogMaxSize: 100Mi
    containerLogMaxFiles: 5
    systemReserved:
      cpu: 500m
      memory: 1Gi
      ephemeral-storage: 1Gi
    kubeReserved:
      cpu: 200m
      memory: 512Mi
      ephemeral-storage: 500Mi
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionSoft:
      imagefs.available: 20%
      memory.available: 200Mi
      nodefs.available: 15%
      nodefs.inodesFree: 10%
    evictionSoftGracePeriod:
      imagefs.available: 2m
      memory.available: 1m30s
      nodefs.available: 2m
      nodefs.inodesFree: 2m
    evictionMinimumReclaim:
      imagefs.available: 100Mi
      memory.available: 50Mi
      nodefs.available: 100Mi
      nodefs.inodesFree: 5%
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 10s
    topologyManagerPolicy: best-effort
    reservedSystemCPUs: 0,1
    maxOpenFiles: 1000000
    hairpinMode: promiscuous-bridge
    runtimeRequestTimeout: 10m
    serializeImagePulls: false
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: /systemd/system.slice
    cgroupsPerQOS: true
    cgroupRoot: /
    protectKernelDefaults: true
    enableControllerAttachDetach: true
    failSwapOn: false
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    oomScoreAdj: -999
    clusterDomain: cluster.local
    clusterDNS:
    - 169.254.20.10
    streamingConnectionIdleTimeout: 4h0m0s
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 5m0s
    rotateCertificates: true
    serverTLSBootstrap: true
    authentication:
      anonymous:
        enabled: false
      webhook:
        enabled: true
        cacheTTL: 2m0s
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    registryPullQPS: 5
    registryBurst: 10
    eventRecordQPS: 5
    eventBurst: 10
    enableDebugFlagsHandler: true
    enableSystemLogHandler: true
    enableSystemLogQuery: true
```
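The eviction thresholds above act on kubelet signals such as `memory.available`, derived from node capacity minus the node-level working set. A rough sketch of how the 100Mi hard and 200Mi soft thresholds from this config would fire (the real kubelet also tracks grace periods and minimum-reclaim amounts):

```go
package main

import "fmt"

const (
	mi      = int64(1) << 20
	hardMin = 100 * mi // evictionHard memory.available
	softMin = 200 * mi // evictionSoft memory.available
)

// memoryAvailable approximates the kubelet's memory.available signal:
// node capacity minus the node-level working set.
func memoryAvailable(capacityBytes, workingSetBytes int64) int64 {
	return capacityBytes - workingSetBytes
}

// evictionState reports which threshold, if any, the node has crossed.
func evictionState(available int64) string {
	switch {
	case available < hardMin:
		return "hard" // evict immediately
	case available < softMin:
		return "soft" // evict after the soft grace period (1m30s above)
	default:
		return "ok"
	}
}

func main() {
	avail := memoryAvailable(16<<30, 16<<30-150*mi) // 150Mi free
	fmt.Println(evictionState(avail))
}
```

Keeping the soft threshold comfortably above the hard one, as in the config, gives workloads a grace period to shed memory before the kubelet kills pods outright.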

### Network Performance Optimization

#### CNI Performance Tuning

```yaml
# Calico performance configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: calico-config
  namespace: kube-system
data:
  calico_backend: "bird"
  cni_network_config: |
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "datastore_type": "kubernetes",
          "mtu": 1440,
          "ipam": {
            "type": "calico-ipam",
            "assign_ipv4": "true",
            "ipv4_pools": ["192.168.0.0/16"]
          },
          "policy": {
            "type": "k8s"
          },
          "kubernetes": {
            "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  typha_service_name: "none"
  veth_mtu: "1440"
```
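The `mtu: 1440` above leaves room for encapsulation headers on a standard 1500-byte link. Per Calico's MTU sizing guidance, IP-in-IP adds 20 bytes, VXLAN 50, and WireGuard 60; treat these overhead values as assumptions to verify against the Calico version you run. A small sketch of the arithmetic:

```go
package main

import "fmt"

// Encapsulation overhead in bytes, per Calico's MTU sizing guidance
// (verify against your Calico release before relying on these numbers).
var encapOverhead = map[string]int{
	"none":      0,
	"ipip":      20,
	"vxlan":     50,
	"wireguard": 60,
}

// podMTU returns the MTU to configure for pod interfaces given the
// underlying link MTU and the encapsulation in use.
func podMTU(linkMTU int, encap string) int {
	return linkMTU - encapOverhead[encap]
}

func main() {
	fmt.Println(podMTU(1500, "wireguard")) // 1440, matching the config above
	fmt.Println(podMTU(1500, "vxlan"))     // 1450
}
```

Setting the pod MTU too high causes fragmentation inside the overlay, which shows up as throughput collapse rather than outright failure, so this arithmetic is worth checking whenever the encapsulation mode changes.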

#### Network Policy Performance

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: optimized-network-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: high-performance-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow from specific namespaces for better performance
  - from:
    - namespaceSelector:
        matchLabels:
          name: frontend
    - podSelector:
        matchLabels:
          app: api-gateway
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow only necessary egress traffic
  - to:
    - namespaceSelector:
        matchLabels:
          name: database
    ports:
    - protocol: TCP
      port: 5432
  - to: []
    ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 443
```

## Storage Performance Optimization

### High-Performance Storage Classes

#### SSD-Optimized Storage Class

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: high-performance-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: ebs.csi.aws.com  # gp3 with custom iops/throughput requires the EBS CSI driver
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  csi.storage.k8s.io/fstype: ext4
  encrypted: "true"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - discard
  - noatime
  - nodiratime
```

#### Database-Optimized Storage

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: database-optimized
provisioner: ebs.csi.aws.com  # io2 requires the EBS CSI driver
parameters:
  type: io2
  iops: "10000"
  csi.storage.k8s.io/fstype: xfs
  encrypted: "true"
  # Filesystem-level tuning (allocsize, logbufs, logbsize) is not a
  # StorageClass parameter; it is applied through mountOptions below
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
  - discard
  - noatime
  - nodiratime
  - allocsize=64k
  - logbufs=8
  - logbsize=32k
```

### StatefulSet Performance Optimization

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: high-performance-database
  namespace: production
spec:
  serviceName: "database"
  replicas: 3
  selector:
    matchLabels:
      app: database
  template:
    metadata:
      labels:
        app: database
    spec:
      securityContext:
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
      containers:
      - name: database
        image: postgres:15-alpine
        resources:
          requests:
            cpu: 2000m
            memory: 8Gi
          limits:
            cpu: 4000m
            memory: 16Gi
        env:
        - name: POSTGRES_DB
          value: "production_db"
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: username
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
        - name: POSTGRES_INITDB_ARGS
          value: "--encoding=UTF-8 --lc-collate=C --lc-ctype=C"
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        # Performance tuning values (note: the stock postgres image does not
        # read these POSTGRES_* tuning variables; apply them through the
        # mounted postgresql.conf or a custom entrypoint)
        - name: POSTGRES_SHARED_PRELOAD_LIBRARIES
          value: "pg_stat_statements,auto_explain"
        - name: POSTGRES_SHARED_BUFFERS
          value: "4GB"
        - name: POSTGRES_EFFECTIVE_CACHE_SIZE
          value: "12GB"
        - name: POSTGRES_WORK_MEM
          value: "256MB"
        - name: POSTGRES_MAINTENANCE_WORK_MEM
          value: "1GB"
        - name: POSTGRES_MAX_CONNECTIONS
          value: "200"
        - name: POSTGRES_CHECKPOINT_COMPLETION_TARGET
          value: "0.9"
        - name: POSTGRES_WAL_BUFFERS
          value: "64MB"
        - name: POSTGRES_DEFAULT_STATISTICS_TARGET
          value: "100"
        - name: POSTGRES_RANDOM_PAGE_COST
          value: "1.1"
        - name: POSTGRES_EFFECTIVE_IO_CONCURRENCY
          value: "200"
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
        - name: postgres-config
          mountPath: /etc/postgresql/postgresql.conf
          subPath: postgresql.conf
        livenessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - postgres
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          exec:
            command:
            - pg_isready
            - -U
            - postgres
          initialDelaySeconds: 5
          periodSeconds: 5
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sh", "-c", "pg_ctl stop -D $PGDATA -m fast"]
      volumes:
      - name: postgres-config
        configMap:
          name: postgres-config
      # Anti-affinity and scheduling constraints belong under template.spec,
      # not at the StatefulSet spec level
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: database
            topologyKey: kubernetes.io/hostname
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-type
                operator: In
                values:
                - database
              - key: performance-tier
                operator: In
                values:
                - high
      # Priority class for critical workloads
      priorityClassName: high-priority
  volumeClaimTemplates:
  - metadata:
      name: postgres-storage
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: database-optimized
      resources:
        requests:
          storage: 500Gi
  # Performance optimizations
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
```

## Resource Management Optimization

### Priority Classes

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
description: "High priority class for critical production workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: medium-priority
value: 500000
globalDefault: false
description: "Medium priority class for regular production workloads"
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: low-priority
value: 1000
globalDefault: true
description: "Low priority class for batch and development workloads"
```

### Resource Quotas

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "20"
    requests.memory: 40Gi
    limits.cpu: "40"
    limits.memory: 80Gi
    persistentvolumeclaims: "10"
    services: "10"
    services.loadbalancers: "2"
    services.nodeports: "0"
    count/pods: "50"
    count/secrets: "20"
    count/configmaps: "20"
    count/ingresses: "5"
    count/deployments: "10"
    count/statefulsets: "5"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: production-limits
  namespace: production
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
      ephemeral-storage: 2Gi
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
      ephemeral-storage: 1Gi
    type: Container
  - max:
      cpu: "4"
      memory: "8Gi"
      ephemeral-storage: 20Gi
    min:
      cpu: "50m"
      memory: "64Mi"
      ephemeral-storage: 100Mi
    type: Container
  - max:
      storage: 1Ti
    min:
      storage: 1Gi
    type: PersistentVolumeClaim
```
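The quota and limit values above mix Kubernetes quantity suffixes: `m` means millicores for CPU, while `Ki`/`Mi`/`Gi`/`Ti` are binary multiples for memory and storage. A minimal parser sketch for the subset used here (the canonical implementation is `resource.Quantity` in `k8s.io/apimachinery`; `parseCPUMillis` and `parseMemoryBytes` are illustrative helper names):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseCPUMillis converts a CPU quantity ("500m", "2") to millicores.
func parseCPUMillis(s string) (int64, error) {
	if strings.HasSuffix(s, "m") {
		return strconv.ParseInt(strings.TrimSuffix(s, "m"), 10, 64)
	}
	cores, err := strconv.ParseFloat(s, 64)
	return int64(cores * 1000), err
}

// parseMemoryBytes converts a binary-suffixed quantity ("512Mi", "40Gi")
// to bytes; a bare integer is already bytes.
func parseMemoryBytes(s string) (int64, error) {
	multipliers := map[string]int64{
		"Ki": 1 << 10, "Mi": 1 << 20, "Gi": 1 << 30, "Ti": 1 << 40,
	}
	for suffix, m := range multipliers {
		if strings.HasSuffix(s, suffix) {
			n, err := strconv.ParseInt(strings.TrimSuffix(s, suffix), 10, 64)
			return n * m, err
		}
	}
	return strconv.ParseInt(s, 10, 64)
}

func main() {
	cpu, _ := parseCPUMillis("500m")    // the default limit above
	mem, _ := parseMemoryBytes("512Mi") // the default memory limit above
	fmt.Println(cpu, mem)
}
```

Getting the suffixes right matters: `512M` (decimal, 512,000,000 bytes) and `512Mi` (binary, 536,870,912 bytes) differ by about 7%, which is enough to shift quota headroom.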

## Autoscaling Performance Optimization

### Advanced HPA Configuration

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: performance-optimized-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: high-performance-app
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second
      target:
        type: AverageValue
        averageValue: 100
  - type: External
    external:
      metric:
        name: queue_length
      target:
        type: AverageValue
        averageValue: 10
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
      - type: Pods
        value: 10
        periodSeconds: 15
      selectPolicy: Max
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 25
        periodSeconds: 60
      selectPolicy: Min
```
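Under the hood the HPA computes, per metric, `desired = ceil(currentReplicas × currentValue / targetValue)`, takes the largest result across metrics, then clamps it to `minReplicas`/`maxReplicas` and the `behavior` policies. A sketch of the core arithmetic (it ignores the controller's tolerance band and not-ready pod handling):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the core HPA formula for one metric.
func desiredReplicas(current int, currentValue, targetValue float64) int {
	return int(math.Ceil(float64(current) * currentValue / targetValue))
}

// clamp bounds the result to minReplicas/maxReplicas.
func clamp(n, min, max int) int {
	if n < min {
		return min
	}
	if n > max {
		return max
	}
	return n
}

func main() {
	// 10 replicas at 90% average CPU against the 60% target -> 15 replicas.
	fmt.Println(clamp(desiredReplicas(10, 90, 60), 3, 50))
	// Well under target -> scale down, but never below minReplicas (3).
	fmt.Println(clamp(desiredReplicas(4, 10, 60), 3, 50))
}
```

This is also why the 60% CPU target above leaves burst headroom: at the moment the formula triggers a scale-up, existing pods are already running 1.5x hotter than target until new replicas become ready.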

### VPA Configuration

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: performance-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: high-performance-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 8
        memory: 16Gi
      controlledResources: ["cpu", "memory"]
      controlledValues: "RequestsOnly"
```

## Monitoring Performance Metrics

### Performance Dashboard

```json
{
  "dashboard": {
    "title": "Kubernetes Performance Dashboard",
    "tags": ["kubernetes", "performance"],
    "timezone": "browser",
    "panels": [
      {
        "id": 1,
        "title": "Cluster Resource Utilization",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(container_cpu_usage_seconds_total{id=\"/\"}[5m])) by (instance) / sum(machine_cpu_cores) by (instance) * 100",
            "legendFormat": "CPU Usage % - {{instance}}"
          },
          {
            "expr": "sum(container_memory_working_set_bytes{id=\"/\"}) by (instance) / sum(machine_memory_bytes) by (instance) * 100",
            "legendFormat": "Memory Usage % - {{instance}}"
          }
        ]
      },
      {
        "id": 2,
        "title": "Pod Performance Metrics",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(http_requests_total{namespace=\"production\"}[5m])) by (pod)",
            "legendFormat": "Request Rate - {{pod}}"
          },
          {
            "expr": "histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket{namespace=\"production\"}[5m])) by (le, pod))",
            "legendFormat": "95th Percentile - {{pod}}"
          }
        ]
      },
      {
        "id": 3,
        "title": "Storage I/O Performance",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(container_fs_reads_bytes_total[5m])) by (device)",
            "legendFormat": "Read I/O - {{device}}"
          },
          {
            "expr": "sum(rate(container_fs_writes_bytes_total[5m])) by (device)",
            "legendFormat": "Write I/O - {{device}}"
          }
        ]
      },
      {
        "id": 4,
        "title": "Network Performance",
        "type": "graph",
        "targets": [
          {
            "expr": "sum(rate(container_network_receive_bytes_total[5m])) by (pod)",
            "legendFormat": "Network In - {{pod}}"
          },
          {
            "expr": "sum(rate(container_network_transmit_bytes_total[5m])) by (pod)",
            "legendFormat": "Network Out - {{pod}}"
          }
        ]
      }
    ],
    "templating": {
      "list": [
        {
          "name": "namespace",
          "type": "query",
          "query": "label_values(kube_pod_info, namespace)",
          "refresh": 1,
          "includeAll": false
        }
      ]
    }
  }
}
```
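The 95th-percentile panel above relies on `histogram_quantile`, which finds the cumulative bucket containing the requested rank and linearly interpolates inside it. A standalone sketch of that estimation (simplified; Prometheus additionally special-cases the `+Inf` bucket and NaN inputs):

```go
package main

import "fmt"

// histogramQuantile estimates quantile q from cumulative histogram buckets:
// uppers holds each bucket's upper bound (le), cumulative the running
// observation counts. It linearly interpolates within the target bucket.
func histogramQuantile(q float64, uppers, cumulative []float64) float64 {
	total := cumulative[len(cumulative)-1]
	rank := q * total
	for i, c := range cumulative {
		if c >= rank {
			lower, cumPrev := 0.0, 0.0
			if i > 0 {
				lower = uppers[i-1]
				cumPrev = cumulative[i-1]
			}
			if c == cumPrev {
				return uppers[i] // empty bucket: no interpolation possible
			}
			return lower + (uppers[i]-lower)*(rank-cumPrev)/(c-cumPrev)
		}
	}
	return uppers[len(uppers)-1]
}

func main() {
	// Buckets le={0.1s, 0.5s, 1s} with cumulative counts {50, 90, 100}:
	// the p95 falls in the 0.5-1s bucket, interpolating to roughly 0.75s.
	fmt.Println(histogramQuantile(0.95, []float64{0.1, 0.5, 1}, []float64{50, 90, 100}))
}
```

The interpolation assumes observations are uniformly spread within a bucket, so the accuracy of a p95 panel depends directly on how well the bucket boundaries bracket the real latency distribution.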

***

## 🚀 **Production Performance Setup**

### Complete Performance Optimization Configuration

```yaml
# Performance namespace
apiVersion: v1
kind: Namespace
metadata:
  name: performance
  labels:
    name: performance
    performance-tier: high

---
# Performance profile for nodes
apiVersion: v1
kind: ConfigMap
metadata:
  name: performance-profile
  namespace: performance
data:
  sysctl.conf: |
    # Network performance tuning
    net.core.rmem_max = 134217728
    net.core.wmem_max = 134217728
    net.ipv4.tcp_rmem = 4096 87380 134217728
    net.ipv4.tcp_wmem = 4096 65536 134217728
    net.core.netdev_max_backlog = 5000

    # File system performance
    vm.dirty_ratio = 15
    vm.dirty_background_ratio = 5
    vm.swappiness = 1

    # Process scheduling
    kernel.sched_migration_cost_ns = 5000000

---
# Performance monitoring daemonset
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: performance-monitor
  namespace: performance
spec:
  selector:
    matchLabels:
      app: performance-monitor
  template:
    metadata:
      labels:
        app: performance-monitor
    spec:
      hostPID: true
      hostNetwork: true
      tolerations:
      - operator: Exists
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.6.1
        args:
        - "--path.rootfs=/host"
        ports:
        - containerPort: 9100
          name: metrics
        volumeMounts:
        - name: root
          mountPath: /host
          readOnly: true
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 100m
            memory: 128Mi
      volumes:
      - name: root
        hostPath:
          path: /

---
# Performance optimized service
apiVersion: v1
kind: Service
metadata:
  name: high-performance-service
  namespace: performance
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9090"
spec:
  selector:
    app: high-performance-app
  ports:
  - port: 80
    targetPort: 8080
    name: http
  - port: 9090
    targetPort: 9090
    name: metrics
  type: ClusterIP
```

***

## 📚 **Resources and References**

### Official Documentation

* [Kubernetes Performance Tuning](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/)
* [kubelet Configuration](https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/)
* [Node Performance](https://kubernetes.io/docs/setup/production-environment/node/)

### Analysis Tools

* [kube-bench](https://github.com/aquasecurity/kube-bench) (CIS security benchmark checks)
* [kube-score](https://github.com/zegl/kube-score) (static analysis of workload configs, including resource requests/limits)
* [Popeye](https://github.com/derailed/popeye) (live cluster sanitizer)

### Cheatsheet Summary

```bash
# Performance Commands
kubectl top nodes
kubectl top pods --all-namespaces
kubectl describe nodes
kubectl get events --all-namespaces --sort-by='.lastTimestamp'

# Performance Debugging
kubectl exec -it <pod-name> -- top
kubectl exec -it <pod-name> -- iostat -x 1
kubectl logs <pod-name> | grep -i "performance\|slow\|timeout"

# Resource Management
kubectl get nodes -o wide
kubectl describe node <node-name>
kubectl get pods -o wide --sort-by=.spec.nodeName
```

The advanced performance documentation is ready to use! ⚡
