# Setup & Installation

> A complete guide to installing and setting up Kubernetes clusters, from local development to production deployment.

## 📋 Table of Contents

* [🏠 Local Development](#local-development)
* [☁️ Cloud Cluster Setup](#cloud-cluster-setup)
* [⚙️ Configuration](#configuration)
* [🔧 Validation](#validation)
* [🚨 Troubleshooting](#troubleshooting)

***

## 🏠 Local Development

### **Minikube**

**Minikube** is the easiest way to run a single-node Kubernetes cluster on a local machine for development and testing.

#### **Installation**

```bash
# Linux (x86_64)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# macOS (Intel)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube

# macOS (Apple Silicon)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube

# Windows
# Download from https://minikube.sigs.k8s.io/docs/start/
```

#### **Basic Usage**

```bash
# Start a cluster with the default configuration
minikube start

# Start with custom settings
minikube start \
  --cpus=2 \
  --memory=4096 \
  --disk-size=20g \
  --driver=docker \
  --kubernetes-version=v1.28.0

# Check cluster status
minikube status

# Access dashboard
minikube dashboard

# Enable addons
minikube addons enable ingress
minikube addons enable metrics-server
minikube addons enable dashboard

# Stop cluster
minikube stop

# Delete cluster
minikube delete
```

#### **Advanced Configuration**

```bash
# Multi-node cluster
minikube start --nodes=3

# With a specific driver
minikube start --driver=virtualbox
minikube start --driver=hyperkit
minikube start --driver=docker
minikube start --driver=podman

# With networking options (--cni accepts auto, bridge, calico, cilium, flannel, kindnet)
minikube start --cni=calico
minikube start --subnet=192.168.49.0/24

# With storage
minikube start --disk-size=50g
minikube start --extra-disks=4

# With a registry mirror
minikube start --registry-mirror=https://registry.mirrors.ustc.edu.cn
```

### **kind (Kubernetes in Docker)**

**kind** runs Kubernetes clusters using Docker containers as "nodes".

#### **Installation**

```bash
# Linux (x86_64)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# macOS (Intel)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-darwin-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# macOS (Apple Silicon)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-darwin-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Windows
# Download from https://kind.sigs.k8s.io/docs/user/quick-start/#installation
```

#### **Cluster Configuration**

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker
```

#### **Basic Usage**

```bash
# Create a cluster with a config file
kind create cluster --config=kind-config.yaml

# Create a cluster with default settings
kind create cluster

# List clusters
kind get clusters

# Delete cluster
kind delete cluster

# Export kubeconfig (the default cluster name is "kind")
kind export kubeconfig --name=kind

# Use a specific cluster context
kubectl cluster-info --context=kind-kind
```
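kind can also pin a specific Kubernetes version by selecting a node image per node in the config file. A minimal sketch (the image tag below is only an example; match it to your installed kind release):

```yaml
# kind-config.yaml — pin the node image to control the Kubernetes version
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.28.0
- role: worker
  image: kindest/node:v1.28.0
```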

### **Docker Desktop**

Docker Desktop ships with built-in Kubernetes support.

#### **Setup**

```bash
# 1. Install Docker Desktop
# Download from https://www.docker.com/products/docker-desktop/

# 2. Enable Kubernetes
# - Open Docker Desktop settings
# - Go to Kubernetes tab
# - Enable Kubernetes
# - Apply & Restart

# 3. Verify installation
docker version
kubectl version --client
kubectl cluster-info
```

#### **Configuration**

Recommended settings for Kubernetes in Docker Desktop:

* **CPU**: 2 cores minimum
* **Memory**: 4 GB minimum
* **Disk**: 20 GB minimum
* **Kubernetes version**: latest stable
* **API server**: `127.0.0.1:6443`
* **CNI**: kubenet
* **CRI**: containerd

### **k3d / k3s**

**k3s** is a lightweight Kubernetes distribution from Rancher; **k3d** is a tool for running it inside Docker.

#### **Installation**

```bash
# Install k3d
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

# Or download manually
wget https://github.com/k3d-io/k3d/releases/download/v5.6.3/k3d-linux-amd64
chmod +x k3d-linux-amd64
sudo mv k3d-linux-amd64 /usr/local/bin/k3d
```

#### **Basic Usage**

```bash
# Create cluster
k3d cluster create mycluster

# Create a 3-node cluster (1 server + 2 agents)
k3d cluster create mycluster --agents 2

# With port mapping
k3d cluster create mycluster --port 8080:80@loadbalancer

# With a specific Kubernetes version
k3d cluster create mycluster --image rancher/k3s:v1.28.3-k3s1

# List clusters
k3d cluster list

# Delete cluster
k3d cluster delete mycluster
```
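Like kind, k3d also accepts a declarative config file instead of flags. A sketch under the `k3d.io/v1alpha5` schema, mirroring the commands above:

```yaml
# k3d-config.yaml — create with: k3d cluster create --config k3d-config.yaml
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: mycluster
servers: 1
agents: 2
ports:
  - port: 8080:80
    nodeFilters:
      - loadbalancer
```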

***

## ☁️ Cloud Cluster Setup

### **Amazon EKS (Elastic Kubernetes Service)**

#### **Prerequisites**

```bash
# Install AWS CLI v2
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Install eksctl
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin/eksctl

# Verify installation
aws --version
eksctl version
```

#### **Create EKS Cluster**

```bash
# Basic EKS cluster
eksctl create cluster \
  --name my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed

# Advanced EKS cluster with SSH access to worker nodes
eksctl create cluster \
  --name my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 1 \
  --nodes-max 4 \
  --managed \
  --ssh-access \
  --ssh-public-key ~/.ssh/id_rsa.pub

# With an IAM role
aws iam create-role \
  --role-name myEksRole \
  --assume-role-policy-document file://trust-policy.json

aws iam attach-role-policy \
  --role-name myEksRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```
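The same cluster can be described declaratively in an eksctl config file, which is easier to version-control than long flag lists. A sketch mirroring the basic command above:

```yaml
# cluster.yaml — create with: eksctl create cluster -f cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-eks-cluster
  region: us-west-2
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 3
    minSize: 1
    maxSize: 4
```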

#### **Configure kubectl**

```bash
# Update kubeconfig
aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster

# Verify connection
kubectl get nodes
kubectl get pods --all-namespaces
```

#### **EKS Node Groups**

```bash
# Add node group
eksctl create nodegroup \
  --cluster my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name high-memory-workers \
  --node-type r5.large \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed

# Scale node group
eksctl scale nodegroup \
  --cluster my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name standard-workers \
  --nodes 5

# Delete node group
eksctl delete nodegroup \
  --cluster my-eks-cluster \
  --region us-west-2 \
  --nodegroup-name high-memory-workers
```

### **Google GKE (Google Kubernetes Engine)**

#### **Prerequisites**

```bash
# Install Google Cloud SDK
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init

# Install kubectl
gcloud components install kubectl

# Verify installation
gcloud version
kubectl version --client
```

#### **Create GKE Cluster**

```bash
# Basic GKE cluster
gcloud container clusters create my-gke-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --machine-type e2-standard-2 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 5 \
  --enable-autorepair \
  --enable-autoupgrade

# Regional cluster
gcloud container clusters create my-gke-cluster \
  --region us-central1 \
  --node-locations us-central1-a,us-central1-b,us-central1-c \
  --num-nodes 1 \
  --machine-type e2-standard-2 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 3

# Private cluster (requires VPC-native networking and a control-plane CIDR)
gcloud container clusters create my-gke-cluster \
  --zone us-central1-a \
  --enable-ip-alias \
  --enable-private-nodes \
  --enable-private-endpoint \
  --master-ipv4-cidr 172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks 10.0.0.0/8
```

#### **Configure kubectl**

```bash
# Get credentials
gcloud container clusters get-credentials my-gke-cluster \
  --zone us-central1-a

# Verify connection
kubectl get nodes
kubectl get pods --all-namespaces
```

#### **GKE Node Pools**

```bash
# Create node pool
gcloud container node-pools create my-node-pool \
  --cluster=my-gke-cluster \
  --zone=us-central1-a \
  --machine-type=e2-standard-4 \
  --num-nodes=2 \
  --enable-autoscaling \
  --min-nodes=1 \
  --max-nodes=3

# Resize node pool
gcloud container clusters resize my-gke-cluster \
  --node-pool=my-node-pool \
  --num-nodes=3 \
  --zone=us-central1-a

# Delete node pool
gcloud container node-pools delete my-node-pool \
  --cluster=my-gke-cluster \
  --zone=us-central1-a
```

### **Microsoft Azure AKS (Azure Kubernetes Service)**

#### **Prerequisites**

```bash
# Install Azure CLI
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Install kubectl
az aks install-cli

# Verify installation
az --version
kubectl version --client
```

#### **Create AKS Cluster**

```bash
# Create resource group
az group create --name myResourceGroup --location eastus

# Basic AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --generate-ssh-keys \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

# Advanced AKS cluster
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --node-vm-size Standard_B2s \
  --enable-addons monitoring \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5 \
  --enable-oidc-issuer \
  --enable-aad \
  --aad-admin-group-object-ids <AAD-GROUP-ID>
```

#### **Configure kubectl**

```bash
# Get credentials
az aks get-credentials \
  --resource-group myResourceGroup \
  --name myAKSCluster

# Verify connection
kubectl get nodes
kubectl get pods --all-namespaces
```

#### **AKS Node Pools**

```bash
# Create node pool
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --node-count 3 \
  --node-vm-size Standard_D2s_v3 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

# Scale node pool
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool \
  --node-count 5

# Delete node pool
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name mynodepool
```

***

## ⚙️ Configuration

### **kubectl Configuration**

#### **kubeconfig Setup**

```bash
# Default location: ~/.kube/config
export KUBECONFIG=$HOME/.kube/config

# View current context
kubectl config current-context

# View all contexts
kubectl config get-contexts

# Switch context
kubectl config use-context my-cluster

# Set namespace preference
kubectl config set-context --current --namespace=mynamespace

# View cluster info
kubectl cluster-info
kubectl config view
```

#### **Multi-Cluster Management**

```bash
# Add multiple contexts to kubeconfig
kubectl config set-cluster gke-dev \
  --server=https://35.227.123.45 \
  --certificate-authority=/path/to/ca.crt

kubectl config set-credentials gke-dev-admin \
  --token=your-token

kubectl config set-context gke-dev \
  --cluster=gke-dev \
  --user=gke-dev-admin \
  --namespace=dev

# Merge kubeconfig files into a single file
KUBECONFIG=~/.kube/config:~/dev-kubeconfig kubectl config view --flatten > ~/.kube/merged-config
```
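The three `kubectl config set-*` commands above produce entries like the following in the kubeconfig file (values copied from the commands; the token is a placeholder):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: gke-dev
  cluster:
    server: https://35.227.123.45
    certificate-authority: /path/to/ca.crt
users:
- name: gke-dev-admin
  user:
    token: your-token
contexts:
- name: gke-dev
  context:
    cluster: gke-dev
    user: gke-dev-admin
    namespace: dev
current-context: gke-dev
```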

### **Network Configuration**

#### **CNI (Container Network Interface)**

```bash
# Install Calico (popular CNI)
kubectl create -f https://docs.projectcalico.org/manifests/calico.yaml

# Install Flannel (simple CNI)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# Install Weave Net (feature-rich CNI; Weaveworks shut down in 2024, so this hosted URL may no longer work)
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

# Check CNI pods
kubectl get pods -n kube-system -l k8s-app=calico-node
```

#### **Ingress Controller**

```bash
# NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml

# Traefik Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v2.9/docs/content/user-guides/kubernetes-ingress-provider/traefik-deployment.yaml

# cert-manager for TLS certificates
kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.13.0/cert-manager.yaml
```
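Once an ingress controller is running, routes are declared with Ingress resources. A minimal sketch for the NGINX controller (the host and backend Service name are hypothetical):

```yaml
# example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: example.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service   # hypothetical backend Service
                port:
                  number: 80
```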

### **Storage Configuration**

#### **Storage Classes**

```yaml
# storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
  fsType: ext4
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```

```bash
# Apply the storage class
kubectl apply -f storage-class.yaml

# List storage classes
kubectl get storageclass

# Mark it as the default storage class
kubectl patch storageclass fast-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
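A PersistentVolumeClaim that requests the `fast-ssd` class; with `WaitForFirstConsumer` binding, the volume is provisioned only when a pod using the claim is scheduled:

```yaml
# fast-ssd-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: fast-ssd
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
```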

#### **Persistent Volumes**

```yaml
# local-pv.yaml — local storage pinned to node-1 via nodeAffinity
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
```

### **Monitoring Setup**

#### **Metrics Server**

```bash
# Install metrics-server (required for kubectl top and HPA)
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Verify installation
kubectl get pods -n kube-system -l k8s-app=metrics-server

# Check node metrics
kubectl top nodes
kubectl top pods
```
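With metrics-server reporting CPU usage, a HorizontalPodAutoscaler can scale a workload automatically. A minimal sketch (the target deployment `my-app` is hypothetical):

```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app   # hypothetical deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```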

#### **Prometheus + Grafana**

```bash
# Install Prometheus Operator
kubectl create namespace monitoring
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --set prometheus.prometheusSpec.storageSpec.volumeClaimTemplate.spec.resources.requests.storage=50Gi
```
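The kube-prometheus-stack discovers scrape targets through ServiceMonitor resources. A sketch for a Service exposing a `metrics` port (the app name and labels are hypothetical; the `release` label must match your Helm release for discovery):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: monitoring
  labels:
    release: prometheus   # assumed Helm release name from the install above
spec:
  selector:
    matchLabels:
      app: my-app
  namespaceSelector:
    any: true
  endpoints:
    - port: metrics
      interval: 30s
```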

***

## 🔧 Validation

### **Cluster Health Checks**

#### **Basic Validation**

```bash
# Check control-plane component status (componentstatuses is deprecated since v1.19)
kubectl get componentstatuses

# Check nodes status
kubectl get nodes -o wide

# Check system pods
kubectl get pods -n kube-system

# Check API server connectivity
kubectl get --raw=/healthz

# Check etcd health
kubectl get --raw=/healthz/etcd
```

#### **Network Validation**

```bash
# Test DNS resolution
kubectl run busybox --image=busybox:1.35 --rm -it --restart=Never -- nslookup kubernetes.default

# Test external connectivity
kubectl run busybox --image=busybox:1.35 --rm -it --restart=Never -- wget -O- google.com

# Test pod-to-pod connectivity (bare pods have no DNS names; target the pod IP)
kubectl run test-pod1 --image=nginx:1.21 --restart=Never
kubectl run test-pod2 --image=busybox:1.35 --rm -it --restart=Never -- \
  wget -qO- http://$(kubectl get pod test-pod1 -o jsonpath='{.status.podIP}')
```

#### **Storage Validation**

```yaml
# test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

```bash
# Apply and verify PVC binding
kubectl apply -f test-pvc.yaml
kubectl get pvc
kubectl describe pvc test-pvc
```

### **Performance Validation**

#### **Resource Usage**

```bash
# Check resource requests vs capacity
kubectl describe nodes | grep -A 10 "Allocated resources:"

# Check pod resource usage
kubectl top pods --all-namespaces

# Check node resource usage
kubectl top nodes
```

#### **Stress Testing**

```bash
# Deploy stress test
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stress-test
spec:
  replicas: 10
  selector:
    matchLabels:
      app: stress-test
  template:
    metadata:
      labels:
        app: stress-test
    spec:
      containers:
      - name: stress
        image: polinux/stress
        resources:
          limits:
            cpu: "500m"
            memory: "512Mi"
          requests:
            cpu: "100m"
            memory: "128Mi"
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "256M", "--vm-hang", "1"]
EOF

# Monitor during stress test
watch kubectl top pods -l app=stress-test
```

***

## 🚨 Troubleshooting

### **Common Issues**

#### **Pod Issues**

```bash
# Pod stuck in Pending
kubectl describe pod <pod-name>
kubectl get nodes
kubectl get events --sort-by=.metadata.creationTimestamp

# Image pull errors
kubectl describe pod <pod-name>
docker pull <image-name>
kubectl get secret <secret-name> -o yaml

# CrashLoopBackOff
kubectl logs <pod-name>
kubectl logs <pod-name> --previous
kubectl describe pod <pod-name>
```

#### **Node Issues**

```bash
# Node not ready
kubectl describe node <node-name>
kubectl get pods -o wide --field-selector spec.nodeName=<node-name>
journalctl -u kubelet

# Resource exhaustion
kubectl top nodes
kubectl describe nodes | grep -A 5 "Conditions:"
```

#### **Network Issues**

```bash
# DNS resolution
kubectl exec -it <pod-name> -- nslookup kubernetes.default
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Service connectivity
kubectl get endpoints <service-name>
kubectl describe service <service-name>
kubectl exec -it <pod-name> -- curl -I http://<service-name>.<namespace>.svc.cluster.local
```

#### **Storage Issues**

```bash
# Volume mounting
kubectl describe pod <pod-name> | grep -A 10 "Volumes:"
kubectl get pv,pvc
kubectl describe pvc <pvc-name>

# Storage class issues
kubectl get storageclass
kubectl describe storageclass <storage-class-name>
```

### **Cluster Reset**

#### **Minikube Reset**

```bash
# Stop and start Minikube
minikube stop
minikube start

# Complete reset
minikube delete
minikube start

# Reset specific component
minikube addons disable ingress
minikube addons enable ingress
```

#### **kind Reset**

```bash
# Delete and recreate cluster
kind delete cluster
kind create cluster

# Export and import config
kind export kubeconfig
```

#### **EKS Reset**

```bash
# Drain all nodes
kubectl get nodes -o name | xargs -I {} kubectl drain {} --ignore-daemonsets --delete-emptydir-data

# Force delete pods
kubectl delete pods --all --grace-period=0

# Restart nodes (providerID has the form aws:///<zone>/<instance-id>)
aws ec2 reboot-instances --instance-ids $(kubectl get nodes -o jsonpath='{.items[*].spec.providerID}' | tr ' ' '\n' | sed 's|.*/||')
```
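The providerID parsing above can be sanity-checked locally with sample values (the instance IDs below are made up); each `spec.providerID` has the form `aws:///<zone>/<instance-id>`:

```shell
#!/bin/sh
# Space-separated providerIDs, as the jsonpath query would print them
ids="aws:///us-west-2a/i-0abc123 aws:///us-west-2b/i-0def456"

# One ID per line, then strip everything up to the last slash
echo "$ids" | tr ' ' '\n' | sed 's|.*/||'
# → i-0abc123
# → i-0def456
```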

### **Performance Optimization**

#### **Resource Optimization**

```bash
# Check resource utilization
kubectl top nodes
kubectl top pods --all-namespaces

# Set resource limits
kubectl patch deployment <deployment-name> -p '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","resources":{"limits":{"memory":"1Gi"}}}]}}}}'

# Enable the cluster autoscaler (AWS auto-discovery example manifest)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
```

#### **Network Optimization**

```bash
# Check network performance
kubectl exec -it <pod-name> -- ping 8.8.8.8
kubectl exec -it <pod-name> -- iperf3 -c <target-pod-ip>

# Review the CNI configuration (ConfigMap name varies by plugin, e.g. calico-config, kube-flannel-cfg)
kubectl get configmaps -n kube-system
```

***

## 📋 **Quick Reference Commands**

### **Setup Commands**

```bash
# kubectl installation
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Minikube installation
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# kind installation
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

# Basic cluster start
minikube start
kind create cluster
```

### **Validation Commands**

```bash
# Cluster health
kubectl cluster-info
kubectl get nodes
kubectl get pods -A

# Network testing
kubectl run busybox --image=busybox:1.35 --rm -it --restart=Never -- nslookup kubernetes.default

# Resource monitoring
kubectl top nodes
kubectl top pods -A
```

### **Troubleshooting Commands**

```bash
# Pod issues
kubectl describe pod <pod-name>
kubectl logs <pod-name>

# Node issues
kubectl describe node <node-name>
journalctl -u kubelet

# Service issues
kubectl get svc <service-name>
kubectl describe svc <service-name>
kubectl get endpoints <service-name>
```

***

## 🔗 **Additional Resources**

### **Official Documentation**

* [Kubernetes Documentation](https://kubernetes.io/docs/)
* [Setup Tools](https://kubernetes.io/docs/tasks/tools/)
* [Installation Guides](https://kubernetes.io/docs/setup/)

### **Community Resources**

* [Kubernetes Community](https://kubernetes.io/community/)
* [CNCF Landscape](https://landscape.cncf.io/)
* [Kubernetes by Example](https://kubernetesbyexample.com/)

### **Practice Environments**

* [Play with Kubernetes](https://labs.play-with-k8s.com/)
* [Killercoda](https://killercoda.com/)

***

*📅 **Last Updated**: November 2024* *🔗 **Related**:* [*Fundamentals*](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/catatan-seekor-devops/kubernetes/fundamentals) *|* [*Application Deployment*](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/catatan-seekor-devops/kubernetes/application-deployment) *|* [*Cheatsheets*](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/catatan-seekor-devops/kubernetes/cheatsheets)
