# Cloud Native

> 🚀 **Cloud-Native Kubernetes**: A complete guide to the managed Kubernetes services of the major cloud providers.

***

## 📋 **Table of Contents**

### **☁️ Amazon EKS**

* [EKS Overview](#eks-overview)
* [EKS Cluster Setup](#eks-cluster-setup)
* [EKS Node Groups](#eks-node-groups)
* [EKS Add-ons](#eks-add-ons)
* [EKS Best Practices](#eks-best-practices)

### **🔷 Google GKE**

* [GKE Overview](#gke-overview)
* [GKE Cluster Creation](#gke-cluster-creation)
* [GKE Node Pools](#gke-node-pools)
* [GKE Features](#gke-features)
* [GKE Optimization](#gke-optimization)

### **🔷 Microsoft Azure AKS**

* [AKS Overview](#aks-overview)
* [AKS Cluster Setup](#aks-cluster-setup)
* [AKS Node Pools](#aks-node-pools)
* [AKS Integration](#aks-integration)
* [AKS Security](#aks-security)

### **🏗️ Cloud Native Patterns**

* [Multi-Cluster Management](#multi-cluster-management)
* [Cross-Cloud Deployment](#cross-cloud-deployment)
* [Hybrid Cloud](#hybrid-cloud)
* [Edge Computing](#edge-computing)

### **💰 Cost Optimization**

* [Right Sizing](#right-sizing)
* [Spot Instances](#spot-instances)
* [Autoscaling Strategies](#autoscaling-strategies)
* [Cost Monitoring](#cost-monitoring)

***

## ☁️ **Amazon EKS**

### EKS Overview

**📖 Core Concepts** Amazon EKS (Elastic Kubernetes Service) is a managed Kubernetes service that removes the complexity of setting up and operating the Kubernetes control plane.

**🎯 EKS Components**

* **Control Plane**: Fully managed by AWS
* **Worker Nodes**: EC2 instances that you manage
* **EKS Add-ons**: Managed add-ons such as CoreDNS, kube-proxy, and the VPC CNI
* **Fargate**: Serverless compute engine
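Fargate capacity is not enabled cluster-wide; it is attached through Fargate profiles that map namespaces (and optionally labels) to serverless compute. A minimal sketch of the corresponding eksctl `ClusterConfig` fragment, with illustrative profile and namespace names:

```yaml
# ClusterConfig fragment: pods matching a selector run on Fargate
# instead of EC2 worker nodes.
fargateProfiles:
  - name: serverless-workloads      # illustrative profile name
    selectors:
      - namespace: serverless       # pods in this namespace...
        labels:
          compute: fargate          # ...with this label run on Fargate
```

Pods that match no profile are scheduled onto the regular node groups.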

**🔧 EKS Architecture**

```
┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   EKS Control   │    │   Worker Nodes  │    │   EKS Fargate   │
│     Plane       │◄──►│   (EC2/EKS-Opt) │◄──►│   (Serverless)  │
│  (Managed by    │    │                 │    │                 │
│      AWS)       │    │                 │    │                 │
└─────────────────┘    └─────────────────┘    └─────────────────┘
         │                       │                       │
         └───────────────────────┼───────────────────────┘
                                 │
                    ┌─────────────────┐
                    │   AWS Services  │
                    │ (VPC, IAM, S3,  │
                    │   ECR, etc.)    │
                    └─────────────────┘
```

### EKS Cluster Setup

**🚀 Prerequisites**

```bash
# Install AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Install eksctl
curl --silent --location "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
```

**⚙️ Create EKS Cluster with eksctl**

```bash
# Basic cluster creation
eksctl create cluster \
  --name my-eks-cluster \
  --version 1.28 \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed

# Control-plane-only cluster (node groups created separately)
eksctl create cluster \
  --name production-eks \
  --version 1.28 \
  --region us-west-2 \
  --zones us-west-2a,us-west-2b,us-west-2c \
  --without-nodegroup

# Create node groups
eksctl create nodegroup \
  --cluster production-eks \
  --name system-nodegroup \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --node-labels system=true \
  --asg-access \
  --full-ecr-access \
  --managed

eksctl create nodegroup \
  --cluster production-eks \
  --name app-nodegroup \
  --node-type m5.large \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 10 \
  --node-labels app=true \
  --managed \
  --spot
```

**🔧 Cluster Configuration File**

```yaml
# cluster.yaml (create with: eksctl create cluster -f cluster.yaml)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: production-eks
  region: us-west-2
  version: "1.28"

iam:
  withOIDC: true

addons:
  - name: vpc-cni
  - name: coredns
  - name: kube-proxy
    version: latest

managedNodeGroups:
  - name: system-nodegroup
    instanceType: t3.medium
    minSize: 1
    maxSize: 3
    desiredCapacity: 2
    labels:
      system: "true"
    taints:
      - key: dedicated
        value: system
        effect: NoSchedule
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
        efs: true
        albIngress: true
    ssh:
      allow: true
    volumeSize: 50

  - name: app-nodegroup
    instanceTypes:
      - m5.large
      - m5.xlarge
    minSize: 2
    maxSize: 10
    desiredCapacity: 3
    labels:
      app: "true"
    spot: true
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
    volumeSize: 100

  - name: gpu-nodegroup
    instanceType: p3.2xlarge
    minSize: 0
    maxSize: 2
    desiredCapacity: 1
    labels:
      gpu: "true"
    taints:
      - key: nvidia.com/gpu
        value: "true"
        effect: NoSchedule
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
    volumeSize: 200

availabilityZones:
- us-west-2a
- us-west-2b
- us-west-2c

cloudWatch:
  clusterLogging:
    enableTypes: ["api", "audit", "authenticator", "controllerManager", "scheduler"]
```

### EKS Node Groups

**🎯 Managed Node Groups**

```yaml
# Managed Node Group with Spot Instances
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: spot-cluster
  region: us-west-2

managedNodeGroups:
  - name: spot-workers
    instanceTypes:
      - m5.large
      - m5.xlarge
      - c5.large
    minSize: 2
    maxSize: 20
    desiredCapacity: 5
    spot: true
    availabilityZones:
      - us-west-2a
      - us-west-2b
    labels:
      role: spot-worker
    taints:
      - key: spot-instance
        value: "true"
        effect: NoSchedule
    ssh:
      allow: true
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true
        ebs: true
```

**🔧 Custom AMI Node Groups**

```bash
# Create custom AMI
aws ec2 create-image \
  --instance-id i-1234567890abcdef0 \
  --name "eks-custom-ami" \
  --description "Custom AMI for EKS nodes"

# Create node group with custom AMI
eksctl create nodegroup \
  --cluster my-cluster \
  --name custom-ami-ng \
  --node-ami ami-1234567890abcdef0 \
  --node-type t3.medium \
  --nodes 2
```

### EKS Add-ons

**🚀 Install Core Add-ons**

```bash
# Update the VPC CNI (installed by default on EKS; apply only to upgrade)
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/aws-k8s-cni.yaml

# Install AWS Load Balancer Controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update

kubectl apply -k "github.com/aws/eks-charts/stable/aws-load-balancer-controller/crds?ref=master"

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=my-eks-cluster \
  --set serviceAccount.create=true \
  --set serviceAccount.name=aws-load-balancer-controller

# Install Cluster Autoscaler
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update

helm install cluster-autoscaler autoscaler/cluster-autoscaler \
  -n kube-system \
  --set autoDiscovery.clusterName=my-eks-cluster \
  --set awsRegion=us-west-2 \
  --set rbac.create=true \
  --set rbac.serviceAccount.create=true \
  --set rbac.serviceAccount.name=cluster-autoscaler
```

**🔧 Install Metrics Server**

```bash
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# If the metrics server cannot verify kubelet certificates, patch it
kubectl patch deployment metrics-server -n kube-system \
  --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/args", "value": ["--cert-dir=/tmp", "--secure-port=4443", "--kubelet-insecure-tls", "--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname"]}]'
```

### EKS Best Practices

**🔒 Security Best Practices**

```yaml
# IAM Role for Service Account (IRSA)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-service-account
  namespace: production
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/EKS-App-Role
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
  namespace: production
spec:
  serviceAccountName: app-service-account
  containers:
  - name: app
    image: my-app:latest
```
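When `withOIDC: true` is set (as in the cluster config earlier), eksctl can also create the IAM role and the annotated service account in a single step, instead of annotating it by hand as above. A sketch of the `ClusterConfig` fragment; the attached policy ARN is illustrative:

```yaml
# ClusterConfig fragment: eksctl creates the IAM role, attaches the
# policy, and writes the eks.amazonaws.com/role-arn annotation (IRSA).
iam:
  withOIDC: true
  serviceAccounts:
    - metadata:
        name: app-service-account
        namespace: production
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess  # illustrative policy
```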

**📊 Resource Management**

```yaml
# Pod Disruption Budget
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: app-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: my-app

---
# Resource Quotas
apiVersion: v1
kind: ResourceQuota
metadata:
  name: production-quota
  namespace: production
spec:
  hard:
    requests.cpu: "10"
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    persistentvolumeclaims: "10"
    pods: "20"
```

***

## 🔷 **Google GKE**

### GKE Overview

**📖 Core Concepts** Google Kubernetes Engine (GKE) is Google Cloud's managed Kubernetes service, offering advanced features such as Autopilot mode and tight integration with other cloud services.

**🎯 GKE Features**

* **Autopilot Mode**: Fully managed infrastructure
* **Standard Mode**: Flexible node management
* **Regional Clusters**: High availability across zones
* **Private Clusters**: Enhanced security
* **Workload Identity**: Secure IAM integration

### GKE Cluster Creation

**🚀 Using Google Cloud Console**

```bash
# Enable required APIs
gcloud services enable container.googleapis.com
gcloud services enable cloudbuild.googleapis.com

# Set default project and zone
gcloud config set project my-project-id
gcloud config set compute/zone us-central1-a
gcloud config set compute/region us-central1

# Create GKE cluster (Autopilot)
gcloud container clusters create-auto my-autopilot-cluster \
  --region us-central1 \
  --release-channel stable \
  --enable-private-nodes \
  --enable-master-global-access

# Create GKE cluster (Standard)
gcloud container clusters create my-standard-cluster \
  --zone us-central1-a \
  --num-nodes 1 \
  --machine-type e2-standard-2 \
  --enable-autorepair \
  --enable-autoupgrade \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10 \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-master-global-access \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM
```

**🔧 Advanced Cluster Configuration**

```bash
# Create regional cluster with multiple node pools
gcloud container clusters create production-cluster \
  --region us-central1 \
  --node-locations us-central1-a,us-central1-b,us-central1-c \
  --num-nodes 1 \
  --machine-type e2-standard-4 \
  --enable-autorepair \
  --enable-autoupgrade \
  --enable-autoscaling \
  --min-nodes 2 \
  --max-nodes 10 \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-master-global-access \
  --logging=SYSTEM,WORKLOAD \
  --monitoring=SYSTEM \
  --enable-shielded-nodes \
  --enable-image-streaming \
  --enable-intranode-visibility

# Create additional node pools
gcloud container node-pools create system-pool \
  --cluster production-cluster \
  --region us-central1 \
  --machine-type e2-standard-2 \
  --num-nodes 1 \
  --enable-autorepair \
  --enable-autoupgrade \
  --node-labels system=true \
  --node-taints dedicated=system:NoSchedule \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 3

gcloud container node-pools create compute-pool \
  --cluster production-cluster \
  --region us-central1 \
  --machine-type n2-standard-4 \
  --accelerator type=nvidia-tesla-t4,count=1 \
  --num-nodes 0 \
  --enable-autorepair \
  --enable-autoupgrade \
  --node-labels gpu=true \
  --node-taints nvidia.com/gpu=true:NoSchedule \
  --enable-autoscaling \
  --min-nodes 0 \
  --max-nodes 5

gcloud container node-pools create spot-pool \
  --cluster production-cluster \
  --region us-central1 \
  --machine-type e2-standard-2 \
  --num-nodes 2 \
  --enable-autorepair \
  --enable-autoupgrade \
  --spot \
  --node-labels spot=true \
  --enable-autoscaling \
  --min-nodes 0 \
  --max-nodes 10
```

### GKE Node Pools

**🎯 Node Pool Management**

```bash
# List node pools
gcloud container node-pools list --cluster production-cluster

# Resize node pool
gcloud container clusters resize production-cluster \
  --node-pool system-pool \
  --num-nodes 3 \
  --region us-central1

# Upgrade node pool
gcloud container clusters upgrade production-cluster \
  --node-pool system-pool \
  --region us-central1 \
  --cluster-version latest

# Delete node pool
gcloud container node-pools delete old-pool \
  --cluster production-cluster \
  --region us-central1
```

**🔧 Advanced Node Pool Configuration**

```bash
# Create node pool with custom settings
gcloud container node-pools create app-pool \
  --cluster production-cluster \
  --region us-central1 \
  --machine-type e2-standard-4 \
  --num-nodes 3 \
  --disk-size 100 \
  --disk-type pd-ssd \
  --image-type COS_CONTAINERD \
  --enable-autorepair \
  --enable-autoupgrade \
  --enable-autoscaling \
  --min-nodes 2 \
  --max-nodes 10 \
  --node-labels app=true,environment=production \
  --node-taints workloads=production:NoSchedule \
  --max-surge-upgrade 1 \
  --max-unavailable-upgrade 0
```

### GKE Features

**🚀 Workload Identity**

```bash
# Create Google Service Account
gcloud iam service-accounts create gke-app-sa \
  --display-name "GKE App Service Account"

# Grant IAM permissions
gcloud projects add-iam-policy-binding my-project-id \
  --member "serviceAccount:gke-app-sa@my-project-id.iam.gserviceaccount.com" \
  --role "roles/storage.objectViewer"

# Bind Kubernetes service account to Google service account
gcloud iam service-accounts add-iam-policy-binding gke-app-sa@my-project-id.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project-id.svc.id.goog[production/app-sa]"
```

```yaml
# Kubernetes service account annotated with the Google service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa
  namespace: production
  annotations:
    iam.gke.io/gcp-service-account: gke-app-sa@my-project-id.iam.gserviceaccount.com
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: app-sa
      containers:
      - name: app
        image: gcr.io/my-project-id/my-app:latest
```

**🔧 Cloud Operations Integration**

```yaml
# Enable operations for workloads
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: production
  annotations:
    monitoring.gke.io/scrape: "true"
    monitoring.gke.io/port: "8080"
    monitoring.gke.io/path: "/metrics"
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project-id/my-app:latest
        ports:
        - containerPort: 8080
          name: metrics
        env:
        - name: ENABLE_METRICS
          value: "true"
        - name: METRICS_PORT
          value: "8080"
```
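On recent GKE versions, Google Cloud Managed Service for Prometheus scrapes workload metrics through a `PodMonitoring` resource rather than pod annotations. A minimal sketch, assuming the workload pods carry the label `app: my-app` and expose a port named `metrics`:

```yaml
# Scrape the "metrics" port of pods labeled app: my-app every 30s
# via GKE Managed Service for Prometheus.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: app-monitoring       # illustrative name
  namespace: production
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s
```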

### GKE Optimization

**📊 Performance Optimization**

```yaml
# Resource optimization with vertical pod autoscaling
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 2
        memory: 4Gi
      controlledResources: ["cpu", "memory"]

---
# Horizontal pod autoscaling
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-deployment
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
```

***

## 🔷 **Microsoft Azure AKS**

### AKS Overview

**📖 Core Concepts** Azure Kubernetes Service (AKS) is Microsoft's managed Kubernetes service, with deep integration into the Azure ecosystem.

**🎯 AKS Features**

* **Azure AD Integration**: Authentication and authorization
* **Azure Monitor**: Comprehensive monitoring
* **Azure Policy**: Policy enforcement
* **Azure Network**: Advanced networking options

### AKS Cluster Setup

**🚀 Prerequisites**

```bash
# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Install kubectl
az aks install-cli

# Login to Azure
az login

# Set subscription
az account set --subscription "My Subscription"
```

**⚙️ Create AKS Cluster**

```bash
# Create resource group
az group create --name myResourceGroup --location eastus

# Create AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-addons monitoring \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --enable-azure-rbac \
  --enable-aad \
  --enable-managed-identity \
  --attach-acr <acr-name> \
  --generate-ssh-keys \
  --network-plugin azure \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>

# Get cluster credentials
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
```

**🔧 Advanced AKS Configuration**

```bash
# Create AKS cluster with advanced settings
az aks create \
  --resource-group myResourceGroup \
  --name production-aks \
  --kubernetes-version 1.28.3 \
  --node-count 3 \
  --node-vm-size Standard_D2s_v3 \
  --enable-addons monitoring,azure-policy,ingress-appgw,azure-keyvault-secrets-provider \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 10 \
  --max-pods 110 \
  --network-policy calico \
  --pod-cidr 10.244.0.0/16 \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --vnet-subnet-id /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet> \
  --load-balancer-sku standard \
  --load-balancer-managed-outbound-ip-count 2 \
  --enable-azure-rbac \
  --enable-aad \
  --aad-admin-group-object-ids <group-id> \
  --enable-managed-identity \
  --attach-acr <acr-name> \
  --node-osdisk-size 100 \
  --node-osdisk-type Ephemeral \
  --enable-workload-identity \
  --enable-oidc-issuer \
  --enable-image-cleaner \
  --image-cleaner-interval-hours 24 \
  --enable-secret-rotation \
  --rotation-poll-interval 2m \
  --generate-ssh-keys \
  --zones 1 2 3
```

### AKS Node Pools

**🎯 Node Pool Management**

```bash
# Create additional node pools
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name systempool \
  --node-count 1 \
  --node-vm-size Standard_D2s_v3 \
  --mode System \
  --node-taints CriticalAddonsOnly=true:NoSchedule \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3

az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name userpool \
  --node-count 3 \
  --node-vm-size Standard_D4s_v3 \
  --mode User \
  --enable-cluster-autoscaler \
  --min-count 2 \
  --max-count 10

# Create GPU node pool
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name gpupool \
  --node-count 1 \
  --node-vm-size Standard_NC6s_v3 \
  --mode User \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 2 \
  --node-taints gpu=true:NoSchedule
```

**🔧 Spot Node Pools**

```bash
# Create spot node pool
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name spotpool \
  --priority Spot \
  --eviction-policy Delete \
  --spot-max-price -1 \
  --node-count 2 \
  --node-vm-size Standard_D2s_v3 \
  --enable-cluster-autoscaler \
  --min-count 0 \
  --max-count 5 \
  --node-taints kubernetes.azure.com/scalesetpriority=spot:NoSchedule
```
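To actually land on the spot pool, pods must tolerate the taint applied above. A pod-spec fragment sketch:

```yaml
# Pod-spec fragment: tolerate the AKS spot taint and prefer
# spot nodes when they are available.
tolerations:
- key: kubernetes.azure.com/scalesetpriority
  operator: Equal
  value: spot
  effect: NoSchedule
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 1
      preference:
        matchExpressions:
        - key: kubernetes.azure.com/scalesetpriority
          operator: In
          values: ["spot"]
```

Only interruption-tolerant workloads should carry this toleration, since spot nodes can be evicted at any time.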

### AKS Integration

**🚀 Azure Container Registry Integration**

```bash
# Create ACR
az acr create --resource-group myResourceGroup --name myAKSRegistry --sku Basic

# Attach ACR to AKS
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --attach-acr myAKSRegistry

# Deploy with ACR image
kubectl create secret docker-registry acr-secret \
  --docker-server=myAKSRegistry.azurecr.io \
  --docker-username=<service-principal-id> \
  --docker-password=<service-principal-password> \
  --docker-email=myemail@example.com
```

**🔧 Azure Monitor Integration**

```yaml
# Container insights enabled deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: production
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
    prometheus.io/path: "/metrics"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/metrics"
    spec:
      containers:
      - name: app
        image: myregistry.azurecr.io/my-app:latest
        ports:
        - containerPort: 8080
        env:
        - name: APPINSIGHTS_INSTRUMENTATIONKEY
          valueFrom:
            secretKeyRef:
              name: app-secrets
              key: appinsights-key
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
```
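What the Container insights agent collects can be tuned through the agent's ConfigMap in `kube-system`. A hedged sketch; the ConfigMap name is the one the agent watches, while the settings shown are illustrative:

```yaml
# Container insights agent configuration: collect stdout logs from
# all namespaces except kube-system (illustrative settings).
apiVersion: v1
kind: ConfigMap
metadata:
  name: container-azm-ms-agentconfig
  namespace: kube-system
data:
  schema-version: v1
  config-version: ver1
  log-data-collection-settings: |-
    [log_collection_settings]
      [log_collection_settings.stdout]
        enabled = true
        exclude_namespaces = ["kube-system"]
```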

### AKS Security

**🔒 Azure AD Integration**

```bash
# Enable Azure AD integration on an existing cluster
az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-aad \
  --aad-admin-group-object-ids <group-id>

# Create Azure AD user for cluster access
az ad user create --display-name "AKS User" --user-principal-name aksuser@example.com --password MySecurePassword

# Assign role to user
az role assignment create --assignee <user-id> --role "Azure Kubernetes Service Cluster User Role" --resource-group myResourceGroup
```
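Once users authenticate through Azure AD, in-cluster authorization can still be expressed with plain Kubernetes RBAC, binding roles to an AAD group's object ID. A minimal sketch; the group ID is a placeholder:

```yaml
# Grant read-only access in the production namespace to the members
# of an Azure AD group (referenced by its object ID).
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aad-group-view
  namespace: production
subjects:
- kind: Group
  name: "<group-object-id>"       # Azure AD group object ID
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                      # built-in read-only role
  apiGroup: rbac.authorization.k8s.io
```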

**🔧 Network Security**

```yaml
# Network policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: my-app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
```

***

## 🏗️ **Cloud Native Patterns**

### Multi-Cluster Management

**🚀 Cluster API Provider**

```yaml
# Cluster API configuration
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: multi-cloud-cluster
  namespace: cluster-system
spec:
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster
    name: aws-cluster
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: control-plane
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AWSCluster
metadata:
  name: aws-cluster
  namespace: cluster-system
spec:
  region: us-west-2
  sshKeyName: my-key-pair
  networkSpec:
    vpc:
      id: vpc-12345678
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureCluster
metadata:
  name: azure-cluster
  namespace: cluster-system
spec:
  location: eastus
  resourceGroup: myResourceGroup
  networkSpec:
    vnet:
      name: my-vnet
```

### Cross-Cloud Deployment

**🌐 Multi-Cloud Strategy**

```yaml
# Cross-cloud deployment configuration
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cross-cloud-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mycompany/k8s-manifests
    targetRevision: HEAD
    path: cross-cloud
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

# kustomization.yaml (environment-specific overlay, separate file)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base

patches:
- path: aws-patch.yaml
- path: azure-patch.yaml
- path: gcp-patch.yaml

replicas:
- name: app
  count: 5

images:
- name: app
  newTag: v1.2.3
```

### Hybrid Cloud

**🏢 Hybrid Architecture**

```yaml
# On-premises and cloud deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hybrid-app
  namespace: production
spec:
  replicas: 5
  selector:
    matchLabels:
      app: hybrid-app
  template:
    metadata:
      labels:
        app: hybrid-app
        location: hybrid
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: location
                operator: In
                values: ["on-prem", "cloud"]
      containers:
      - name: app
        image: myregistry/hybrid-app:latest
        env:
        - name: DATABASE_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: database.url
        - name: REDIS_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: redis.url
```
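The node affinity above lets replicas run in either location, but does not prevent all of them from ending up in one place. A topology spread constraint can balance them, assuming nodes carry the `location` label used above:

```yaml
# Pod-spec fragment: keep replicas evenly spread across values of
# the "location" node label (on-prem vs cloud).
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: location
  whenUnsatisfiable: ScheduleAnyway   # prefer balance, don't block scheduling
  labelSelector:
    matchLabels:
      app: hybrid-app
```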

### Edge Computing

**📱 Edge Deployment Pattern**

```yaml
# Edge node configuration
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1
  labels:
    location: edge
    region: asia-southeast1
    zone: edge-zone-1
spec:
  podCIDR: 10.244.1.0/24
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-processor
  namespace: edge
spec:
  selector:
    matchLabels:
      app: edge-processor
  template:
    metadata:
      labels:
        app: edge-processor
    spec:
      nodeSelector:
        location: edge
      containers:
      - name: processor
        image: myregistry/edge-processor:latest
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
        env:
        - name: EDGE_LOCATION
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: PROCESSING_MODE
          value: "local"
```

***

## 💰 **Cost Optimization**

### Right Sizing

**📊 Resource Optimization**

```yaml
# Resource requests and limits
apiVersion: apps/v1
kind: Deployment
metadata:
  name: optimized-app
  namespace: production
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: app
        image: myregistry/app:latest
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 500m
            memory: 512Mi
        env:
        - name: JAVA_OPTS
          value: "-Xms256m -Xmx512m -XX:+UseG1GC"
```
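Per-pod requests only help if every container sets them; a `LimitRange` supplies namespace-wide defaults for containers that do not. A minimal sketch with illustrative values:

```yaml
# Default requests/limits applied to containers that omit them.
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # illustrative name
  namespace: production
spec:
  limits:
  - type: Container
    defaultRequest:           # used when a container sets no requests
      cpu: 100m
      memory: 128Mi
    default:                  # used when a container sets no limits
      cpu: 500m
      memory: 512Mi
```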

**🔧 Vertical Pod Autoscaler**

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: app-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: optimized-app
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
    - containerName: app
      minAllowed:
        cpu: 100m
        memory: 128Mi
      maxAllowed:
        cpu: 2
        memory: 4Gi
      controlledResources: ["cpu", "memory"]
```

### Spot Instances

**💰 Spot Instance Strategy**

```yaml
# EKS Spot node group configuration
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: spot-optimized-cluster
  region: us-west-2

managedNodeGroups:
  - name: spot-workers
    instanceTypes:
      - m5.large
      - m5.xlarge
      - c5.large
      - c5.xlarge
    minSize: 2
    maxSize: 20
    desiredCapacity: 5
    spot: true
    labels:
      role: spot-worker
    taints:
      - key: spot-instance
        value: "true"
        effect: NoSchedule
    iam:
      withAddonPolicies:
        autoScaler: true
        cloudWatch: true

# Pod tolerations for spot instances (Kubernetes manifest, separate file)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spot-app
  namespace: production
spec:
  template:
    spec:
      tolerations:
      - key: "spot-instance"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      nodeSelector:
        role: spot-worker
      containers:
      - name: app
        image: myregistry/spot-app:latest
```

### Autoscaling Strategies

**📈 Cluster Autoscaler Configuration**

```yaml
# Cluster Autoscaler for EKS
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      containers:
      - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.27.0
        name: cluster-autoscaler
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 100m
            memory: 300Mi
        command:
        - ./cluster-autoscaler
        - --v=4
        - --stderrthreshold=info
        - --cloud-provider=aws
        - --skip-nodes-with-local-storage=false
        - --expander=least-waste
        - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster
        - --balance-similar-node-groups
        - --skip-nodes-with-system-pods=false
```
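Individual pods can opt out of scale-down with an annotation that the Cluster Autoscaler honors, which is useful for stateful or long-running jobs:

```yaml
# Pod-template fragment: the Cluster Autoscaler will not evict this
# pod when considering its node for scale-down.
metadata:
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
```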

### Cost Monitoring

**📊 Cost Monitoring Tools**

```yaml
# Prometheus cost metrics
apiVersion: v1
kind: ConfigMap
metadata:
  name: cost-monitoring
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s

    scrape_configs:
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name

---
# Cost monitoring dashboard
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-cost-dashboard
  labels:
    grafana_dashboard: "1"
data:
  cost-dashboard.json: |
    {
      "dashboard": {
        "title": "Kubernetes Cost Monitoring",
        "panels": [
          {
            "title": "Cost per Namespace",
            "type": "graph",
            "targets": [
              {
                "expr": "sum by (namespace) (kube_pod_container_resource_requests{resource=\"cpu\"}) * 0.01",
                "legendFormat": "{{namespace}}"
              }
            ]
          }
        ]
      }
    }
```

***

## 🎯 **Best Practices**

### **☁️ Cloud Provider Selection**

1. **EKS Best for:**
   * Existing AWS infrastructure
   * Advanced security features
   * Integration with AWS services
2. **GKE Best for:**
   * Advanced Kubernetes features
   * Autopilot mode
   * Machine learning workloads
3. **AKS Best for:**
   * Microsoft ecosystem
   * Hybrid cloud scenarios
   * Enterprise integration

### **🔧 Migration Strategy**

1. **Assessment Phase**
   * Analyze existing workloads
   * Identify dependencies
   * Plan migration timeline
2. **Implementation Phase**
   * Set up target environment
   * Migrate non-critical workloads
   * Test and validate
3. **Optimization Phase**
   * Monitor performance
   * Optimize resource usage
   * Implement cost controls

### **📊 Performance Optimization**

1. **Right Sizing**
   * Monitor resource usage
   * Use vertical pod autoscaling
   * Implement resource quotas
2. **Networking**
   * Use appropriate load balancers
   * Optimize network policies
   * Monitor network performance

***

## 🔗 **References**

### **📚 Official Documentation**

* [Amazon EKS Documentation](https://docs.aws.amazon.com/eks/)
* [Google GKE Documentation](https://cloud.google.com/kubernetes-engine)
* [Microsoft AKS Documentation](https://docs.microsoft.com/en-us/azure/aks/)

### **🛠️ Cloud Tools**

* [eksctl CLI](https://eksctl.io/)
* [gcloud CLI](https://cloud.google.com/sdk)
* [Azure CLI](https://docs.microsoft.com/en-us/cli/azure/)

### **📖 Learning Resources**

* [Cloud Native Computing Foundation](https://www.cncf.io/)
* [Kubernetes Certification](https://www.cncf.io/certification/)
* [Cloud Best Practices](https://kubernetes.io/docs/setup/production-environment/)

***

**☁️ Choose the right provider for your workload and optimize it for your business needs.**
