# Cloud Integration

## Multi-Cloud Architecture Overview

### Cloud Provider Integration Patterns

```
┌─────────────────┐    ┌──────────────────┐    ┌─────────────────┐
│      AWS        │    │       GCP        │    │      Azure      │
│                 │    │                  │    │                 │
│ • EKS Cluster   │    │ • GKE Cluster    │    │ • AKS Cluster   │
│ • RDS Storage   │    │ • Cloud SQL      │    │ • Azure SQL     │
│ • S3 Buckets    │    │ • GCS Buckets    │    │ • Blob Storage  │
│ • Load Balancer │    │ • Cloud LB       │    │ • Azure LB      │
└─────────────────┘    └──────────────────┘    └─────────────────┘
          │                       │                       │
          └───────────────────────┼───────────────────────┘
                                  │
                    ┌──────────────────────────┐
                    │     Management Plane     │
                    │  (Argo CD + Terraform)   │
                    │  (Monitoring & Logging)  │
                    └──────────────────────────┘
```

### Federation and Multi-Cluster Management

#### Cluster Federation Setup

```yaml
# Federation control plane deployment
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: aws-cluster
  namespace: kube-federation-system
spec:
  apiEndpoint: "https://EKS_API_ENDPOINT"
  caBundle: "CA_BUNDLE_BASE64"
  secretRef:
    name: aws-cluster-credentials
---
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: gcp-cluster
  namespace: kube-federation-system
spec:
  apiEndpoint: "https://GKE_API_ENDPOINT"
  caBundle: "CA_BUNDLE_BASE64"
  secretRef:
    name: gcp-cluster-credentials
---
apiVersion: core.kubefed.io/v1beta1
kind: KubeFedCluster
metadata:
  name: azure-cluster
  namespace: kube-federation-system
spec:
  apiEndpoint: "https://AKS_API_ENDPOINT"
  caBundle: "CA_BUNDLE_BASE64"
  secretRef:
    name: azure-cluster-credentials
---
# Federated deployment across clusters
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: multi-cloud-app
  namespace: production
spec:
  template:
    metadata:
      labels:
        app: multi-cloud-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: multi-cloud-app
      template:
        metadata:
          labels:
            app: multi-cloud-app
        spec:
          containers:
          - name: app
            image: nginx:1.21
            ports:
            - containerPort: 80
  placement:
    clusters:
    - name: aws-cluster
    - name: gcp-cluster
    - name: azure-cluster
  overrides:
  - clusterName: aws-cluster
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
    - path: "/spec/template/spec/nodeSelector"
      value:
        node-type: aws-compute
  - clusterName: gcp-cluster
    clusterOverrides:
    - path: "/spec/replicas"
      value: 4
    - path: "/spec/template/spec/nodeSelector"
      value:
        node-type: gcp-compute
  - clusterName: azure-cluster
    clusterOverrides:
    - path: "/spec/replicas"
      value: 3
    - path: "/spec/template/spec/nodeSelector"
      value:
        node-type: azure-compute
```
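KubeFed only propagates resources in namespaces that are themselves federated, so the `FederatedDeployment` above assumes the `production` namespace has been federated first. A minimal sketch of that prerequisite:

```yaml
# Federate the production namespace to all member clusters
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: production
  namespace: production
spec:
  placement:
    clusters:
    - name: aws-cluster
    - name: gcp-cluster
    - name: azure-cluster
```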

## AWS Integration

### EKS Cluster Configuration

#### EKS Cluster with Advanced Networking

```yaml
# EKS Cluster with advanced configuration
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: production-eks-cluster
  region: us-west-2
  version: "1.28"

iam:
  withOIDC: true
  serviceAccounts:
  - metadata:
      name: cluster-autoscaler
      namespace: kube-system
    wellKnownPolicies:
      autoScaler: true
  - metadata:
      name: aws-load-balancer-controller
      namespace: kube-system
    wellKnownPolicies:
      awsLoadBalancerController: true
  - metadata:
      name: ebs-csi-controller
      namespace: kube-system
    wellKnownPolicies:
      ebsCSIController: true

managedNodeGroups:
  - name: system-nodes
    instanceType: m6i.large
    desiredCapacity: 3
    minSize: 2
    maxSize: 5
    volumeSize: 100
    ssh:
      allow: true
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
        efs: true
        albIngress: true
    labels:
      node-type: system
      environment: production
    taints:
    - key: "system"
      value: "true"
      effect: "NoSchedule"
    preBootstrapCommands:
    - "echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf"
    - "sysctl -p"
    userData: |
      #!/bin/bash
      set -ex
      yum install -y amazon-ssm-agent
      systemctl enable amazon-ssm-agent
      systemctl start amazon-ssm-agent

  - name: compute-nodes
    instanceType: c6i.2xlarge
    desiredCapacity: 6
    minSize: 3
    maxSize: 20
    volumeSize: 200
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
    labels:
      node-type: compute
      environment: production
    availabilityZones:
    - us-west-2a
    - us-west-2b
    - us-west-2c
    preBootstrapCommands:
    - "echo 'vm.max_map_count=262144' >> /etc/sysctl.conf"
    - "echo 'fs.file-max=2097152' >> /etc/sysctl.conf"
    - "sysctl -p"
    tags:
      Environment: production
      Team: platform
      CostCenter: engineering

  - name: gpu-nodes
    instanceType: p4d.24xlarge
    desiredCapacity: 2
    minSize: 1
    maxSize: 5
    volumeSize: 1000
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
    labels:
      node-type: gpu
      environment: production
    taints:
    - key: "nvidia.com/gpu"
      value: "true"
      effect: "NoSchedule"
    availabilityZones:
    - us-west-2a
    - us-west-2b

addons:
  - name: vpc-cni
    version: latest
  - name: coredns
    version: latest
  - name: kube-proxy
    version: latest
  - name: aws-ebs-csi-driver
    version: latest
  - name: aws-efs-csi-driver
    version: latest
  - name: aws-load-balancer-controller
    version: latest

cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
```
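The system-nodes group above is tainted with `system=true:NoSchedule`, so only pods that explicitly tolerate the taint are scheduled onto it. A sketch of a system workload targeting those nodes (the deployment name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: system-agent        # placeholder name
  namespace: kube-system
spec:
  replicas: 2
  selector:
    matchLabels:
      app: system-agent
  template:
    metadata:
      labels:
        app: system-agent
    spec:
      nodeSelector:
        node-type: system   # matches the label set on the system-nodes group
      tolerations:
      - key: "system"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: agent
        image: nginx:1.21   # placeholder image
```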

#### AWS Load Balancer Integration

```yaml
# AWS Load Balancer Controller installation
apiVersion: v1
kind: Namespace
metadata:
  name: aws-load-balancer-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller
  namespace: aws-load-balancer-controller
  annotations:
    eks.amazonaws.io/role-arn: arn:aws:iam::ACCOUNT_ID:role/AWSLoadBalancerControllerRole
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller-role
rules:
- apiGroups:
  - ""
  - extensions
  resources:
  - configmaps
  - endpoints
  - events
  - ingresses
  - ingresses/status
  - services
  - services/status
  verbs:
  - create
  - get
  - list
  - update
  - watch
  - patch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  - pods
  - services
  - namespaces
  - resourcequotas
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/name: aws-load-balancer-controller
  name: aws-load-balancer-controller-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: aws-load-balancer-controller-role
subjects:
- kind: ServiceAccount
  name: aws-load-balancer-controller
  namespace: aws-load-balancer-controller

---
# Application Load Balancer
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aws-alb-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: "alb"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '30'
    alb.ingress.kubernetes.io/healthcheck-timeout-seconds: '5'
    alb.ingress.kubernetes.io/healthcheck-healthy-threshold-count: '2'
    alb.ingress.kubernetes.io/healthcheck-unhealthy-threshold-count: '3'
    alb.ingress.kubernetes.io/success-codes: '200,302'
    alb.ingress.kubernetes.io/load-balancer-name: production-app-alb
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:ACCOUNT_ID:certificate/CERT_ID
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:ACCOUNT_ID:regional/webacl/WEBACL_ID
    alb.ingress.kubernetes.io/conditions.api-service: '[{"field":"http-header","httpHeaderConfig":{"httpHeaderName":"x-environment","values":["production"]}}]'
spec:
  tls:
  - hosts:
    - app.example.com
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80

---
# Network Load Balancer
apiVersion: v1
kind: Service
metadata:
  name: database-nlb
  namespace: production
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: "stickiness.enabled=true,stickiness.type=source_ip"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
spec:
  type: LoadBalancer
  selector:
    app: database
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP
```
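The `kubernetes.io/ingress.class` annotation used above is deprecated; on recent versions of the AWS Load Balancer Controller, an `IngressClass` resource plus `spec.ingressClassName: alb` on the Ingress is the preferred form:

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: alb
spec:
  # Controller identifier used by the AWS Load Balancer Controller
  controller: ingress.k8s.aws/alb
```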

#### AWS Storage Integration

```yaml
# EBS Storage Classes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3-performance
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  csi.storage.k8s.io/fstype: ext4
  encrypted: "true"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-io2-critical
provisioner: ebs.csi.aws.com
parameters:
  type: io2
  iops: "10000"
  csi.storage.k8s.io/fstype: ext4
  encrypted: "true"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
# EFS File System
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-efs-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-12345678
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-efs-claim
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 50Gi
---
# Application using AWS storage
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aws-storage-app
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: aws-storage-app
  template:
    metadata:
      labels:
        app: aws-storage-app
    spec:
      containers:
      - name: app
        image: nginx:1.21
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ebs-storage
          mountPath: /data
          subPath: app-data
        - name: efs-storage
          mountPath: /shared
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
      volumes:
      - name: ebs-storage
        persistentVolumeClaim:
          claimName: ebs-performance-claim
      - name: efs-storage
        persistentVolumeClaim:
          claimName: shared-efs-claim
```
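The deployment above mounts `ebs-performance-claim`, which is not defined anywhere earlier. A matching claim against the `ebs-gp3-performance` class might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-performance-claim
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce        # EBS volumes attach to a single node
  storageClassName: ebs-gp3-performance
  resources:
    requests:
      storage: 20Gi
```

Note that because EBS is ReadWriteOnce, a 3-replica Deployment sharing one claim only works if all pods land on the same node; a StatefulSet with `volumeClaimTemplates` is the usual pattern for per-replica block storage.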

## GCP Integration

### GKE Cluster Configuration

#### GKE with Advanced Features

```hcl
# GKE cluster configuration using Terraform
resource "google_container_cluster" "production_cluster" {
  name     = "production-gke-cluster"
  location = "us-central1"

  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.private_subnet.name

  networking_mode = "VPC_NATIVE"

  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "/16"
    services_ipv4_cidr_block = "/22"
  }

  addons_config {
    http_load_balancing {
      disabled = false
    }
    horizontal_pod_autoscaling {
      disabled = false
    }
    gce_persistent_disk_csi_driver {
      disabled = false
    }
    config_connector {
      enabled = true
    }
    gke_backup_agent {
      enabled = true
    }
  }

  resource_usage_export_config {
    enable_network_egress_export = true
  }

  database_encryption {
    state = "ENCRYPTED"
    key_name = google_kms_crypto_key.gke_key.id
  }

  workload_identity_config {
    workload_pool = "${google_project.project.project_id}.svc.id.goog"
  }

  confidential_nodes {
    enabled = false
  }

  binary_authorization {
    evaluation_mode = "PROJECT_SINGLETON_POLICY_ENFORCE"
  }

  master_authorized_networks_config {
    cidr_blocks {
      cidr_block   = "10.0.0.0/8"
      display_name = "Office network"
    }
  }

  maintenance_policy {
    daily_maintenance_window {
      start_time = "03:00"
    }
  }

  vertical_pod_autoscaling {
    enabled = true
  }

  enable_shielded_nodes = true
  enable_tpu = false

  release_channel {
    channel = "STABLE"
  }

  timeouts {
    create = "30m"
    update = "30m"
    delete = "30m"
  }
}

# Node pools
resource "google_container_node_pool" "system_nodes" {
  name       = "system-pool"
  location   = google_container_cluster.production_cluster.location
  cluster    = google_container_cluster.production_cluster.name
  node_count = 3

  node_config {
    machine_type = "e2-standard-2"
    disk_size_gb = 100
    disk_type    = "pd-ssd"
    image_type   = "COS_CONTAINERD"

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/trace.append",
    ]

    metadata = {
      disable-legacy-endpoints = "true"
    }

    labels = {
      node-type = "system"
      env       = "production"
    }

    tags = [
      "gke-system-pool",
      "production"
    ]
  }

  management {
    auto_repair  = true
    auto_upgrade = false
  }

  upgrade_settings {
    max_surge       = 1
    max_unavailable = 0
  }

  autoscaling {
    min_node_count = 2
    max_node_count = 5
  }
}

resource "google_container_node_pool" "compute_nodes" {
  name       = "compute-pool"
  location   = google_container_cluster.production_cluster.location
  cluster    = google_container_cluster.production_cluster.name
  node_count = 6

  node_config {
    machine_type = "c2-standard-8"
    disk_size_gb = 200
    disk_type    = "pd-ssd"
    image_type   = "COS_CONTAINERD"

    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/devstorage.read_only",
    ]

    labels = {
      node-type = "compute"
      env       = "production"
    }

    taint {
      key    = "workload"
      value  = "compute"
      effect = "NO_SCHEDULE"
    }

    tags = [
      "gke-compute-pool",
      "production"
    ]
  }

  management {
    auto_repair  = true
    auto_upgrade = false
  }

  autoscaling {
    min_node_count = 3
    max_node_count = 20
  }
}
```
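With `workload_identity_config` enabled, pods authenticate to GCP APIs through a Kubernetes service account that is annotated with, and IAM-bound to, a Google service account. A sketch (both service-account names are placeholders):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-ksa                      # placeholder Kubernetes service account
  namespace: production
  annotations:
    # Placeholder Google service account to impersonate via Workload Identity
    iam.gke.io/gcp-service-account: app-gsa@PROJECT_ID.iam.gserviceaccount.com
```

This also assumes the Google service account has an `roles/iam.workloadIdentityUser` binding for the `PROJECT_ID.svc.id.goog[production/app-ksa]` member.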

#### GCP Storage Integration

```yaml
# GKE Storage Classes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-ssd-performance
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  replication-type: regional-pd
  zones: us-central1-a,us-central1-b  # regional-pd replicates across exactly two zones
allowVolumeExpansion: true
reclaimPolicy: Retain
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcp-standard-balanced
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-balanced
  replication-type: regional-pd
  zones: us-central1-a,us-central1-b  # regional-pd replicates across exactly two zones
allowVolumeExpansion: true
reclaimPolicy: Retain
---
# Filestore (NFS) integration
apiVersion: v1
kind: PersistentVolume
metadata:
  name: filestore-pv
spec:
  capacity:
    storage: 1Ti
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: filestore
  mountOptions:
    - hard
    - noresvport
    - vers=4.1
  nfs:
    server: 10.0.0.5
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: filestore-claim
  namespace: production
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: filestore
  resources:
    requests:
      storage: 500Gi
---
# Cloud SQL integration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gcp-cloud-sql-proxy
  namespace: production
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cloud-sql-proxy
  template:
    metadata:
      labels:
        app: cloud-sql-proxy
    spec:
      containers:
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.28.0
        command:
        - "/cloud_sql_proxy"
        - "-instances=PROJECT_ID:us-central1:my-db=tcp:5432"
        - "-credential_file=/secrets/cloudsql/credentials.json"
        ports:
        - containerPort: 5432
          name: postgres
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
```
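Applications typically reach the proxy through a stable Service name rather than pod IPs; a minimal Service for the deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: cloud-sql-proxy
  namespace: production
spec:
  selector:
    app: cloud-sql-proxy
  ports:
  - name: postgres
    port: 5432
    targetPort: 5432
```

Applications then connect to `cloud-sql-proxy.production.svc.cluster.local:5432` as if it were a local PostgreSQL instance.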

## Azure Integration

### AKS Cluster Configuration

#### AKS with Advanced Networking

```hcl
# AKS cluster configuration using Terraform
resource "azurerm_kubernetes_cluster" "production_cluster" {
  name                = "production-aks-cluster"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  dns_prefix          = "prodaks"

  default_node_pool {
    name       = "system"
    node_count = 3
    vm_size    = "Standard_D2s_v3"

    os_disk_size_gb    = 100
    os_disk_type       = "Premium_LRS"
    vnet_subnet_id     = azurerm_subnet.aks_subnet.id
    max_pods          = 30
    node_taints       = ["system=true:NoSchedule"]

    upgrade_settings {
      max_surge = "10%"
    }
  }

  network_profile {
    network_plugin     = "azure"
    network_policy     = "calico"
    service_cidr       = "10.0.0.0/16"
    dns_service_ip     = "10.0.0.10"
    load_balancer_sku  = "standard"
    outbound_type      = "userDefinedRouting"
  }

  oms_agent {
    log_analytics_workspace_id = azurerm_log_analytics_workspace.workspace.id
  }

  ingress_application_gateway {
    gateway_id = azurerm_application_gateway.appgw.id # existing Application Gateway
  }

  azure_policy_enabled             = true
  http_application_routing_enabled = false

  role_based_access_control_enabled = true
  azure_active_directory_role_based_access_control {
    managed            = true
    azure_rbac_enabled = true
    admin_group_object_ids = [
      data.azuread_group.admins.object_id
    ]
    tenant_id = data.azurerm_client_config.current.tenant_id
  }

  automatic_channel_upgrade = "stable"

  microsoft_defender {
    log_analytics_workspace_id = azurerm_log_analytics_workspace.workspace.id
  }

  api_server_authorized_ip_ranges = [
    "10.0.0.0/8" # restrict to trusted networks
  ]

  identity {
    type = "SystemAssigned"
  }

  tags = {
    Environment = "production"
    Team        = "platform"
    CostCenter  = "engineering"
  }
}

# Additional node pools
resource "azurerm_kubernetes_cluster_node_pool" "compute_pool" {
  name                  = "compute"
  kubernetes_cluster_id  = azurerm_kubernetes_cluster.production_cluster.id
  vm_size               = "Standard_F8s_v2"
  node_count            = 3
  os_disk_size_gb       = 200
  os_disk_type          = "Premium_LRS"
  vnet_subnet_id        = azurerm_subnet.aks_subnet.id
  max_pods              = 30
  node_taints           = ["workload=compute:NoSchedule"]

  enable_auto_scaling = true
  min_count           = 2
  max_count           = 10

  upgrade_settings {
    max_surge = "33%"
  }

  tags = {
    node-type = "compute"
  }
}

resource "azurerm_kubernetes_cluster_node_pool" "gpu_pool" {
  name                  = "gpu"
  kubernetes_cluster_id  = azurerm_kubernetes_cluster.production_cluster.id
  vm_size               = "Standard_NC6s_v3"
  node_count            = 1
  os_disk_size_gb       = 500
  os_disk_type          = "Premium_LRS"
  vnet_subnet_id        = azurerm_subnet.aks_subnet.id
  max_pods              = 30
  node_taints           = ["nvidia.com/gpu=true:NoSchedule"]

  enable_auto_scaling = true
  min_count           = 1
  max_count           = 3

  tags = {
    node-type = "gpu"
  }
}
```
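The `gpu` pool is tainted with `nvidia.com/gpu=true:NoSchedule`, so a workload lands there only if it tolerates the taint and requests GPU resources. A sketch (pod name and image are placeholders, and the NVIDIA device plugin must be running on the nodes for `nvidia.com/gpu` to be schedulable):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload            # placeholder name
  namespace: production
spec:
  tolerations:
  - key: "nvidia.com/gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: cuda
    image: nvidia/cuda:12.2.0-base-ubuntu22.04   # placeholder image
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1       # requests one GPU from the device plugin
```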

#### Azure Storage Integration

```yaml
# Azure Disk Storage Classes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-disk-premium
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_LRS
  kind: Managed
  cachingMode: None
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-ultra-ssd
provisioner: disk.csi.azure.com
parameters:
  skuName: UltraSSD_LRS
  kind: Managed
  cachingMode: None
  DiskIOPSReadWrite: "3000"
  DiskMBpsReadWrite: "125"
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
---
# Azure File Storage Classes
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file-premium
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS
allowVolumeExpansion: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
# Application using Azure storage
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: azure-storage-app
  namespace: production
spec:
  serviceName: "azure-storage-app"
  replicas: 3
  selector:
    matchLabels:
      app: azure-storage-app
  template:
    metadata:
      labels:
        app: azure-storage-app
    spec:
      containers:
      - name: app
        image: nginx:1.21
        ports:
        - containerPort: 80
        volumeMounts:
        - name: azure-disk
          mountPath: /data
        - name: azure-file
          mountPath: /shared
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
  volumeClaimTemplates:
  - metadata:
      name: azure-disk
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: azure-disk-premium
      resources:
        requests:
          storage: 50Gi
  - metadata:
      name: azure-file
    spec:
      accessModes: ["ReadWriteMany"]
      storageClassName: azure-file-premium
      resources:
        requests:
          storage: 100Gi
```
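The StatefulSet above references `serviceName: azure-storage-app`, which must exist as a headless Service to give each replica a stable DNS identity:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: azure-storage-app
  namespace: production
spec:
  clusterIP: None          # headless: each replica gets a stable per-pod DNS name
  selector:
    app: azure-storage-app
  ports:
  - name: http
    port: 80
```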

## Multi-Cloud Service Mesh

### Istio Multi-Cloud Gateway

```yaml
# Multi-cloud Istio gateway configuration
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: multi-cloud-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: multi-cloud-tls
    hosts:
    - "*.example.com"
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: multi-cloud-routing
  namespace: production
spec:
  hosts:
  - app.example.com
  gateways:
  - multi-cloud-gateway
  http:
  - match:
    - headers:
        x-cloud:
          exact: aws
    route:
    - destination:
        host: app-service.aws-production.svc.cluster.local
        port:
          number: 80
  - match:
    - headers:
        x-cloud:
          exact: gcp
    route:
    - destination:
        host: app-service.gcp-production.svc.cluster.local
        port:
          number: 80
  - match:
    - headers:
        x-cloud:
          exact: azure
    route:
    - destination:
        host: app-service.azure-production.svc.cluster.local
        port:
          number: 80
  - route:
    - destination:
        host: app-service.aws-production.svc.cluster.local
        port:
          number: 80
      weight: 40
    - destination:
        host: app-service.gcp-production.svc.cluster.local
        port:
          number: 80
      weight: 35
    - destination:
        host: app-service.azure-production.svc.cluster.local
        port:
          number: 80
      weight: 25
---
# Global traffic management
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: multi-cloud-dr
  namespace: production
spec:
  host: app-service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 3
      consecutiveGatewayErrors: 5
      interval: 30s
      baseEjectionTime: 30s
      maxEjectionPercent: 100
```

## Disaster Recovery Across Clouds

### Cross-Cloud Backup Strategy

#### Multi-Cloud Backup Configuration

```yaml
# Velero multi-cloud backup configuration
apiVersion: v1
kind: Secret
metadata:
  name: cloud-credentials
  namespace: velero
type: Opaque
data:
  aws: <base64-encoded-aws-credentials>
  gcp: <base64-encoded-gcp-credentials>
  azure: <base64-encoded-azure-credentials>
---
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: aws-backup-location
  namespace: velero
spec:
  provider: aws
  objectStorage:
    bucket: velero-backups-aws
    prefix: production
  config:
    region: us-west-2
    s3ForcePathStyle: "false"
    s3Url: "https://s3.us-west-2.amazonaws.com"
---
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: gcp-backup-location
  namespace: velero
spec:
  provider: gcp
  objectStorage:
    bucket: velero-backups-gcp
    prefix: production
  config:
    project: my-gcp-project
    location: us-central1
---
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: azure-backup-location
  namespace: velero
spec:
  provider: azure
  objectStorage:
    bucket: velero-backups-azure
    prefix: production
  config:
    resourceGroup: velero-rg
    storageAccount: velerosa
    subscriptionId: AZURE_SUBSCRIPTION_ID
---
# Cross-cloud backup schedule
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: cross-cloud-daily-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  useOwnerReferencesInBackup: false
  template:
    includedNamespaces:
    - production
    storageLocation: aws-backup-location  # a backup writes to one location; use parallel schedules for gcp/azure
    hooks:
      resources:
      - name: pre-backup-hook
        includedNamespaces:
        - production
        labelSelector:
          matchLabels:
            backup-hook: pre
        pre:
        - exec:
            container: app
            command:
            - /bin/bash
            - -c
            - echo "Taking application snapshot before backup"
            onError: Fail
      - name: post-backup-hook
        includedNamespaces:
        - production
        labelSelector:
          matchLabels:
            backup-hook: post
        post:
        - exec:
            container: app
            command:
            - /bin/bash
            - -c
            - echo "Backup completed, cleaning up"
            onError: Continue
---
# Cross-cloud restore test
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: cross-cloud-restore-test
  namespace: velero
spec:
  backupName: cross-cloud-daily-backup-20231201
  restorePVs: true
  includedNamespaces:
  - staging
  namespaceMapping:
    production: staging
  labelSelector:
    matchLabels:
      restore: "true"
```
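Each Velero backup is written to a single storage location, so covering a second cloud is usually done with a parallel schedule. A sketch targeting the GCP location:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: cross-cloud-daily-backup-gcp
  namespace: velero
spec:
  schedule: "0 3 * * *"    # offset an hour from the AWS schedule
  template:
    includedNamespaces:
    - production
    storageLocation: gcp-backup-location
```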

## Cost Optimization

### Multi-Cloud Cost Management

#### Cost Optimization Strategies

```yaml
# Cluster autoscaler configuration for cost optimization
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
      - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.28.0
        name: cluster-autoscaler
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 300Mi
        command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/production-cluster
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
        - --max-node-provision-time=15m
        - --scale-down-unneeded-time=10m
        - --scale-down-delay-after-add=3m
        - --scale-down-unready-time=5m
        - --max-graceful-termination-sec=600
        - --max-total-unready-percentage=45
---
# Resource quotas for cost control
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cost-control-quota
  namespace: production
spec:
  hard:
    requests.cpu: "100"
    requests.memory: 200Gi
    limits.cpu: "200"
    limits.memory: 400Gi
    persistentvolumeclaims: "50"
    count/pods: "200"
    count/services: "100"
    count/secrets: "100"
    count/configmaps: "100"
    services.loadbalancers: "10"
    services.nodeports: "0"
---
# Limit ranges for cost optimization
apiVersion: v1
kind: LimitRange
metadata:
  name: cost-optimization-limits
  namespace: production
spec:
  limits:
  - default:
      cpu: "500m"
      memory: "512Mi"
      ephemeral-storage: 1Gi
    defaultRequest:
      cpu: "100m"
      memory: "128Mi"
      ephemeral-storage: 500Mi
    type: Container
  - max:
      cpu: "8"
      memory: "16Gi"
      ephemeral-storage: 50Gi
    min:
      cpu: "50m"
      memory: "64Mi"
      ephemeral-storage: 100Mi
    type: Container
  - max:
      storage: 1Ti
    min:
      storage: 1Gi
    type: PersistentVolumeClaim
```
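A common complement to quotas is steering interruptible workloads onto cheaper spot/preemptible capacity. A sketch, assuming the spot nodes carry a `lifecycle: spot` label and a matching taint (the label, taint, and workload names are placeholders that vary by provider):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker           # placeholder name
  namespace: production
spec:
  replicas: 4
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        lifecycle: spot        # assumed label on spot/preemptible nodes
      tolerations:
      - key: "lifecycle"
        operator: "Equal"
        value: "spot"
        effect: "NoSchedule"
      containers:
      - name: worker
        image: nginx:1.21      # placeholder image
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
```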

***

## 🚀 **Production Multi-Cloud Setup**

### Complete Multi-Cloud Configuration

```yaml
# Multi-cloud namespace
apiVersion: v1
kind: Namespace
metadata:
  name: multi-cloud
  labels:
    name: multi-cloud
    cloud-agnostic: "true"

---
# Multi-cloud service mesh
apiVersion: v1
kind: ConfigMap
metadata:
  name: multi-cloud-config
  namespace: multi-cloud
data:
  config.yaml: |
    clusters:
      aws:
        endpoint: "https://EKS_API_ENDPOINT"
        region: "us-west-2"
        type: "eks"
      gcp:
        endpoint: "https://GKE_API_ENDPOINT"
        region: "us-central1"
        type: "gke"
      azure:
        endpoint: "https://AKS_API_ENDPOINT"
        region: "eastus"
        type: "aks"

    routing:
      default: aws
      failover:
        - gcp
        - azure
      health_check:
        interval: 30s
        timeout: 5s
        retries: 3

    monitoring:
      enabled: true
      endpoints:
        - prometheus.monitoring.svc.cluster.local:9090
        - grafana.monitoring.svc.cluster.local:3000

---
# Multi-cloud deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: multi-cloud-app
  namespace: multi-cloud
spec:
  replicas: 6
  selector:
    matchLabels:
      app: multi-cloud-app
  template:
    metadata:
      labels:
        app: multi-cloud-app
        cloud-agnostic: "true"
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - multi-cloud-app
              topologyKey: kubernetes.io/hostname
      containers:
      - name: app
        image: nginx:1.21
        ports:
        - containerPort: 80
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: CLUSTER_NAME
          value: "production-cluster"  # set per cluster; pod fieldRefs cannot read node or cluster labels
        resources:
          requests:
            cpu: 200m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /            # stock nginx serves no /health endpoint
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
```

***

## 📚 **Resources and References**

### Official Documentation

* [EKS Documentation](https://docs.aws.amazon.com/eks/)
* [GKE Documentation](https://cloud.google.com/kubernetes-engine/docs)
* [AKS Documentation](https://docs.microsoft.com/en-us/azure/aks/)
* [Kubernetes Federation](https://kubernetes.io/docs/concepts/cluster-administration/federation/)

### Multi-Cloud Tools

* [KubeFed](https://github.com/kubernetes-sigs/kubefed)
* [Cluster API](https://cluster-api.sigs.k8s.io/)
* [Velero](https://velero.io/docs/)
* [Crossplane](https://crossplane.io/docs/)

### Cheatsheet Summary

```bash
# Multi-Cloud Commands
kubectl get nodes -o wide --show-labels
kubectl get pods -A -o wide | grep multi-cloud
kubectl get services -A --show-labels

# Cloud-specific commands
aws eks describe-cluster --name cluster-name
gcloud container clusters describe cluster-name --zone us-central1-a
az aks show --resource-group rg-name --name cluster-name

# Federation commands
kubectl get kubefedclusters -A
kubectl get federateddeployments -A
```

The multi-cloud integration documentation is ready to use! ☁️
