# Kubernetes Tools Ecosystem

> 🌐 **Complete Ecosystem**: A comprehensive guide to Kubernetes tools and their ecosystem

***

## 📊 **Overview Ecosystem**

```mermaid
graph TB
    subgraph "Core Kubernetes"
        A[kubectl] --> B[kubelet]
        A --> C[API Server]
        B --> D[Container Runtime]
    end

    subgraph "Package Management"
        E[Helm] --> F[Charts]
        G[Kustomize] --> H[Overlays]
        I[Carvel] --> J[YTT/Kbld]
    end

    subgraph "Service Mesh"
        K[Istio] --> L[Envoy]
        M[Linkerd] --> N[Proxy]
        O[Consul Connect] --> P[Service Discovery]
    end

    subgraph "Monitoring & Observability"
        Q[Prometheus] --> R[Grafana]
        S[Jaeger] --> T[OpenTelemetry]
        U[Fluentd] --> V[Loki/ELK]
    end

    subgraph "CI/CD & GitOps"
        W[ArgoCD] --> X[Flux CD]
        Y[Jenkins X] --> Z[Tekton]
        AA[Spinnaker] --> BB[Keel]
    end

    subgraph "Storage & Databases"
        CC[Longhorn] --> DD[Rook]
        EE[Operator Framework] --> FF[Database Operators]
    end

    subgraph "Security"
        GG[Falco] --> HH[OPA/Gatekeeper]
        II[Aqua] --> JJ[Trivy]
    end
```

***

## 🚀 **Package Management Tools**

### **📦 Helm - Package Manager**

#### **Installation & Setup**

```bash
# Install Helm
curl https://get.helm.sh/helm-v3.12.0-linux-amd64.tar.gz | tar xz
sudo mv linux-amd64/helm /usr/local/bin/

# Add repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Search charts
helm search repo nginx
helm search hub wordpress
```

#### **Common Helm Commands**

```bash
# Install chart
helm install my-nginx bitnami/nginx
helm install my-app ./my-chart
helm install my-db ./db-chart --set auth.postgresPassword=mypassword

# List releases
helm list
helm list --namespace kube-system

# Upgrade release
helm upgrade my-nginx bitnami/nginx --set replicaCount=3
helm upgrade my-nginx bitnami/nginx -f values.yaml

# Rollback
helm rollback my-nginx 1
helm history my-nginx

# Uninstall
helm uninstall my-nginx

# Pull and inspect charts
helm pull bitnami/nginx
helm show chart bitnami/nginx
helm show values bitnami/nginx
```

#### **Creating Helm Charts**

```bash
# Create new chart
helm create my-application
cd my-application

# Chart structure
├── Chart.yaml          # Chart metadata
├── values.yaml         # Default values
├── values-prod.yaml    # Production values
├── templates/          # Template files
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── ingress.yaml
│   └── _helpers.tpl
└── charts/             # Dependencies
```

**Example Chart.yaml:**

```yaml
apiVersion: v2
name: my-application
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.0.0"
keywords:
  - web
  - application
home: https://github.com/myorg/my-app
sources:
  - https://github.com/myorg/my-app
maintainers:
  - name: My Name
    email: my.email@example.com
dependencies:
  - name: postgresql
    version: 12.x.x
    repository: https://charts.bitnami.com/bitnami
```
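Values defined in `values.yaml` flow into the files under `templates/`. A minimal `templates/deployment.yaml` sketch (the value keys `replicaCount`, `image.repository`, `image.tag`, and `service.port` are illustrative defaults, not defined by the `Chart.yaml` above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Chart.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Chart.Name }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        # image.tag falls back to appVersion from Chart.yaml
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
        ports:
        - containerPort: {{ .Values.service.port }}
```

Render it locally with `helm template . --debug` to verify the substitutions before installing.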

#### **Advanced Helm Features**

```bash
# Template debugging
helm template my-app ./my-chart --values values-prod.yaml
helm install my-app ./my-chart --dry-run --debug

# Dependency management
helm dependency update
helm dependency list

# Plugin management
helm plugin install https://github.com/helm/helm-2to3
helm plugin list

# Secret management with the helm-secrets plugin
# (helm plugin install https://github.com/jkroepke/helm-secrets)
helm secrets encrypt secrets.yaml
helm secrets decrypt secrets.yaml
helm install my-app ./my-chart -f secrets.yaml
```

### **⚙️ Kustomize - Template-Free Configuration**

#### **Basic Kustomize Structure**

```bash
# Directory structure
my-app/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
└── overlays/
    ├── production/
    │   ├── kustomization.yaml
    │   └── patch.yaml
    └── staging/
        ├── kustomization.yaml
        └── patch.yaml
```

**Base kustomization.yaml:**

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml

commonLabels:
  app: my-app
  version: v1

commonAnnotations:
  maintained-by: "team-platform"

configMapGenerator:
- name: app-config
  literals:
  - LOG_LEVEL=info
  - ENVIRONMENT=production

secretGenerator:
- name: app-secrets
  literals:
  - DATABASE_URL=postgresql://localhost:5432/mydb
```

**Overlay kustomization.yaml (production):**

```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

replicas:
- name: my-app-deployment
  count: 5

images:
- name: my-app
  newTag: v1.2.3

patchesStrategicMerge:
- increase-resources.yaml

patchesJson6902:
- target:
    version: v1
    kind: Deployment
    name: my-app-deployment
  patch: |-
    - op: add
      path: /spec/template/spec/nodeSelector
      value:
        environment: production
```
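The `increase-resources.yaml` patch referenced above is a plain strategic-merge fragment. A possible version (the container name and resource figures are assumptions; the name must match the container in `base/deployment.yaml`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-deployment
spec:
  template:
    spec:
      containers:
      - name: my-app        # must match the container name in the base manifest
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: "1"
            memory: 1Gi
```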

#### **Kustomize Commands**

```bash
# Build and preview
kustomize build base
kustomize build overlays/production

# Apply directly
kubectl apply -k base/
kubectl apply -k overlays/production/

# Create config
kustomize create --resources deployment.yaml,service.yaml

# Edit resources
kustomize edit set image my-app=myregistry.com/my-app:v1.2.3
kustomize edit add configmap app-config --from-literal=LOG_LEVEL=debug
```

### **🔧 Carvel - Kubernetes Application Toolset**

#### **YTT (YAML Templating Tool)**

```yaml
#@ load("@ytt:data", "data")
#@ load("@ytt:template", "template")

#@ def labels():
app: #@ data.values.app_name
version: #@ data.values.app_version
#@ end

apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.app_name
  labels: #@ labels()
spec:
  replicas: #@ data.values.replicas
  selector:
    matchLabels: #@ labels()
  template:
    metadata:
      labels: #@ labels()
    spec:
      containers:
      - name: app
        image: #@ data.values.image
        ports:
        - containerPort: #@ data.values.port
```

**Values file (values.yml):**

```yaml
#@data/values
---
app_name: my-application
app_version: v1.0.0
replicas: 3
image: myregistry.com/app:1.0.0
port: 8080
```

#### **Kbld (Image Building and Resolution)**

```yaml
apiVersion: kbld.k14s.io/v1alpha1
kind: Config
sources:
- image: my-app
  path: .
  docker:
    build:
      rawOptions: ["--build-arg", "VERSION=1.0.0"]
destinations:
- image: my-app
  newImage: myregistry.com/my-app@sha256:abc123
```

***

## 🌐 **Service Mesh Tools**

### **🚀 Istio**

#### **Installation**

```bash
# Download Istio
curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.17.0 TARGET_ARCH=x86_64 sh -
cd istio-1.17.0

# Install
export PATH=$PWD/bin:$PATH
istioctl install --set profile=demo -y

# Enable automatic sidecar injection
kubectl label namespace default istio-injection=enabled
```

#### **Core Istio Resources**

**Gateway:**

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: app-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - app.example.com
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: app-tls
    hosts:
    - app.example.com
```

**VirtualService:**

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-vs
spec:
  hosts:
  - app.example.com
  gateways:
  - app-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: app-service
        port:
          number: 80
    fault:
      delay:
        percentage:
          value: 0.1
        fixedDelay: 5s
    retries:
      attempts: 3
      perTryTimeout: 2s
```

**DestinationRule:**

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-destination
spec:
  host: app-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutive5xxErrors: 3
      interval: 30s
      baseEjectionTime: 30s
    tls:
      mode: ISTIO_MUTUAL
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```
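With the `v1` and `v2` subsets defined, a VirtualService can split traffic between them, for example a 90/10 canary rollout (the weights and resource names are illustrative):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: app-canary
spec:
  hosts:
  - app-service
  http:
  - route:
    - destination:
        host: app-service
        subset: v1      # stable version receives most traffic
      weight: 90
    - destination:
        host: app-service
        subset: v2      # canary version
      weight: 10
```

Shifting the weights over time (90/10 → 50/50 → 0/100) completes the canary promotion.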

#### **Istio Commands**

```bash
# Check installation
istioctl verify-install

# Proxy configuration
kubectl exec -it <pod-name> -c istio-proxy -- pilot-agent request GET config_dump

# Traffic management
istioctl proxy-config routes <pod-name>
istioctl proxy-config listeners <pod-name>
istioctl proxy-config clusters <pod-name>

# Analyze configuration
istioctl analyze --all-namespaces
istioctl analyze deployment.yaml

# Debug with istioctl
istioctl proxy-config log <pod-name> --level debug
istioctl pc endpoint <pod-name>
```

### **⚡ Linkerd**

#### **Installation**

```bash
# Install Linkerd CLI
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/install | sh

# Validate cluster
linkerd check --pre

# Install control plane
linkerd install | kubectl apply -f -

# Validate installation
linkerd check

# Install viz extension
linkerd viz install | kubectl apply -f -
```

#### **Injecting Applications**

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  annotations:
    linkerd.io/inject: enabled
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-app:1.0
        ports:
        - containerPort: 8080
```

#### **Linkerd Commands**

```bash
# Dashboard
linkerd viz dashboard &

# Top commands
linkerd viz top deploy
linkerd viz edges deploy

# Tap for live debugging
linkerd viz tap deploy/my-app
linkerd viz tap deploy/my-app --to svc/backend

# Check configuration
linkerd check
linkerd diagnostics endpoints my-app.default.svc.cluster.local:8080

# Generate a ServiceProfile from live traffic
linkerd viz profile --tap deploy/my-app --tap-duration 10s my-app
```
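The profile command emits a ServiceProfile resource that enables per-route metrics. A hand-written sketch (the service name, namespace, and route are examples):

```yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  # the name must be the FQDN of the target service
  name: my-app.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - name: GET /api/items
    condition:
      method: GET
      pathRegex: /api/items
    # classify 5xx as failures so success rate is tracked per route
    responseClasses:
    - condition:
        status:
          min: 500
          max: 599
      isFailure: true
```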

***

## 📊 **Monitoring & Observability**

### **📈 Prometheus**

#### **Installation with Helm**

```bash
# Add Prometheus repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus
helm install prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --create-namespace \
  --set grafana.adminPassword=admin123
```

#### **Custom Prometheus Configuration**

**ServiceMonitor:**

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app-metrics
  namespace: monitoring
  labels:
    release: prometheus
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
    honorLabels: true
```
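A ServiceMonitor selects Services (not Pods directly), so the application must expose a Service with a matching label and a named `metrics` port in a namespace the ServiceMonitor watches. A sketch (port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app        # matched by the ServiceMonitor's selector
spec:
  selector:
    app: my-app
  ports:
  - name: metrics      # matched by the ServiceMonitor's endpoint port
    port: 9090
    targetPort: 9090
```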

**PrometheusRule:**

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-alerts
  namespace: monitoring
spec:
  groups:
  - name: my-app.rules
    rules:
    - alert: HighErrorRate
      expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.1
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "High error rate detected"
        description: "Error rate is {{ $value }} errors per second"

    - alert: HighMemoryUsage
      expr: container_memory_usage_bytes / container_spec_memory_limit_bytes > 0.9
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "High memory usage"
        description: "Memory usage is above 90%"
```

### **📊 Grafana**

#### **Custom Dashboard Configuration**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-app-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  my-app-dashboard.json: |
    {
      "dashboard": {
        "title": "My Application Dashboard",
        "panels": [
          {
            "title": "Request Rate",
            "type": "graph",
            "targets": [
              {
                "expr": "rate(http_requests_total[5m])",
                "legendFormat": "{{method}} {{status}}"
              }
            ]
          },
          {
            "title": "Response Time",
            "type": "graph",
            "targets": [
              {
                "expr": "histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))",
                "legendFormat": "95th percentile"
              }
            ]
          }
        ]
      }
    }
```

### **🔍 Jaeger Tracing**

#### **Installation**

```bash
# Install Jaeger
# The operator requires cert-manager
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

# Install the Jaeger Operator (pin a released version)
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.49.0/jaeger-operator.yaml -n observability
```

**Jaeger Instance:**

```yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
spec:
  strategy: allInOne
  allInOne:
    image: jaegertracing/all-in-one:latest
    options:
      log-level: info
      query:
        base-path: /jaeger
  storage:
    type: memory
```

### **📝 Logging with Loki**

#### **Installation with Helm**

```bash
# Add Grafana repo
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install Loki
helm install loki grafana/loki-stack \
  --namespace monitoring \
  --set loki.persistence.enabled=true \
  --set loki.persistence.size=20Gi \
  --set promtail.enabled=true
```

**Promtail Configuration:**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: promtail-config
data:
  promtail.yml: |
    server:
      http_listen_port: 3101
      grpc_listen_port: 0

    positions:
      filename: /tmp/positions.yaml

    clients:
      - url: http://loki:3100/loki/api/v1/push

    scrape_configs:
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_namespace
        target_label: namespace
      - source_labels:
        - __meta_kubernetes_pod_name
        target_label: pod
      - source_labels:
        - __meta_kubernetes_pod_container_name
        target_label: container
      - source_labels:
        - __meta_kubernetes_pod_uid
        - __meta_kubernetes_pod_container_name
        separator: /
        target_label: __path__
        replacement: /var/log/pods/*$1/*.log
```

***

## 🔄 **CI/CD & GitOps Tools**

### **🚀 ArgoCD**

#### **Installation**

```bash
# Create namespace
kubectl create namespace argocd

# Install ArgoCD
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Get initial password
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
```

#### **Application Configuration**

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/my-app-k8s
    targetRevision: HEAD
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```

**AppProject:**

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production
  namespace: argocd
spec:
  description: Production applications
  sourceRepos:
  - https://github.com/myorg/*
  destinations:
  - namespace: production
    server: https://kubernetes.default.svc
  clusterResourceWhitelist:
  - group: ''
    kind: Namespace
  - group: 'networking.k8s.io'
    kind: Ingress
  roles:
  - name: dev-team
    description: Developers
    policies:
    - p, proj:production:dev-team, applications, get, production/*, allow
    groups:
    - myorg:developers
```

### **🌀 Flux CD**

#### **Installation**

```bash
# Install Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash

# Bootstrap GitOps
flux bootstrap github \
  --owner=myorg \
  --repository=my-infra \
  --branch=main \
  --path=clusters/my-cluster \
  --personal
```

#### **Flux Resources**

**GitRepository:**

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-app-source
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/myorg/my-app-manifests
  ref:
    branch: main
  secretRef:
    name: github-token
```

**Kustomization:**

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-app
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-app-source
  path: ./production
  prune: true
  healthChecks:
  - apiVersion: apps/v1
    kind: Deployment
    name: my-app
    namespace: production
```

### **⚙️ Tekton Pipelines**

#### **Installation**

```bash
# Install Tekton
kubectl apply --filename https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
kubectl apply --filename https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
```

#### **Pipeline Example**

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  params:
  - name: git-revision
    type: string
    description: The git revision
    default: main
  - name: git-url
    type: string
    description: Repository URL
  workspaces:
  - name: shared-data
    description: Shared workspace
  tasks:
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.git-url)
    - name: revision
      value: $(params.git-revision)

  - name: build-image
    taskRef:
      name: buildah
    runAfter:
    - fetch-source
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: IMAGE
      value: "myregistry.com/my-app:$(params.git-revision)"
    - name: DOCKERFILE
      value: ./Dockerfile

  - name: deploy
    taskRef:
      name: kubectl-deploy
    runAfter:
    - build-image
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: SCRIPT
      value: |
        kubectl apply -f k8s/
        kubectl rollout status deployment/my-app
```
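A Pipeline does nothing until a PipelineRun triggers it and binds the workspace to real storage. A minimal run (the repository URL is a placeholder):

```yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: build-and-deploy-run
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  - name: git-url
    value: https://github.com/myorg/my-app
  - name: git-revision
    value: main
  workspaces:
  - name: shared-data
    # provision a fresh PVC for each run
    volumeClaimTemplate:
      spec:
        accessModes: [ReadWriteOnce]
        resources:
          requests:
            storage: 1Gi
```

Monitor progress with `tkn pipelinerun logs build-and-deploy-run -f` (requires the Tekton CLI).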

***

## 🗄️ **Storage Tools**

### **🐉 Longhorn**

#### **Installation**

```bash
# Install Longhorn
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

# Check installation
kubectl get pods -n longhorn-system
```

#### **Volume Configuration**

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi
```
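Mounting the claim works like any other PVC; Longhorn provisions the replicated volume on first use. A minimal Pod consuming it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: app
    image: nginx:1.21
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: longhorn-pvc   # the PVC defined above
```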

### **🦅 Rook - Storage Orchestration**

#### **Ceph Cluster Setup**

```bash
# Deploy Rook Operator
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/crds.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/common.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/master/deploy/examples/operator.yaml

# Deploy Ceph Cluster
kubectl apply -f cluster.yaml
```

**cluster.yaml:**

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v16.2.7
  dataDirHostPath: /var/lib/rook
  skipUpgradeChecks: false
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  waitTimeoutForHealthyOSDInMinutes: 10
  storage:
    useAllNodes: true
    useAllDevices: true
    deviceFilter:
    config:
      databaseSizeMB: "1024"
      journalSizeMB: "1024"
      osdsPerDevice: "1"
  network:
    provider: host
  monitoring:
    enabled: true
    rulesNamespace: rook-ceph
  resources:
    mgr:
      limits:
        cpu: "500m"
        memory: "1Gi"
      requests:
        cpu: "500m"
        memory: "1Gi"
```

***

## 🔐 **Security Tools**

### **🛡️ Falco - Runtime Security**

#### **Installation**

```bash
# Install Falco
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update

helm install falco falcosecurity/falco \
  --namespace falco \
  --create-namespace
```

#### **Custom Rules**

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-custom-rules
  namespace: falco
data:
  my_rules.yaml: |
    - rule: Unexpected K8s Service Connection
      desc: Detect connection to unexpected Kubernetes service
      condition: >
        spawned_process and
        proc.name in (nc, netcat, telnet, curl, wget) and
        fd.type=ipv4 and
        fd.sip.name in ("kubernetes.default.svc.cluster.local", "kube-dns.kube-system.svc.cluster.local")
      output: >
        Unexpected k8s service connection (user=%user.name command=%proc.cmdline
        connection=%fd.name)
      priority: WARNING
      tags: [network, k8s]
```

### **🏛️ OPA/Gatekeeper - Policy as Code**

#### **Installation**

```bash
# Install Gatekeeper
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/master/deploy/gatekeeper.yaml
```

#### **Policy Example**

**ConstraintTemplate:**

```yaml
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
      validation:
        openAPIV3Schema:
          type: object
          properties:
            labels:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels

        violation[{"msg": msg, "details": {"missing_labels": missing}}] {
          provided := {label | input.review.object.metadata.labels[label]}
          required := {label | label := input.parameters.labels[_]}
          missing := required - provided
          count(missing) > 0
          msg := sprintf("you must provide labels: %v", [missing])
        }
```

**Constraint:**

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: all-pods-must-have-owner
spec:
  enforcementAction: deny
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    labels: ["owner", "environment"]
```
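With the constraint in place, the admission webhook rejects any Pod missing either label. A compliant Pod looks like this (label values are examples; only the keys are enforced):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: compliant-pod
  labels:
    owner: team-platform     # required by the constraint
    environment: production  # required by the constraint
spec:
  containers:
  - name: app
    image: nginx:1.21
```

Submitting the same Pod without the labels returns a denial message built from the `msg` in the Rego template.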

### **🔍 Trivy - Container Scanner**

#### **Installation and Usage**

```bash
# Install Trivy
sudo apt-get install wget apt-transport-https gnupg lsb-release
wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | gpg --dearmor | sudo tee /usr/share/keyrings/trivy.gpg > /dev/null
echo "deb [signed-by=/usr/share/keyrings/trivy.gpg] https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/trivy.list
sudo apt-get update
sudo apt-get install trivy

# Scan image
trivy image nginx:1.21
trivy image --severity HIGH,CRITICAL myregistry.com/my-app:latest

# Scan filesystem
trivy fs /path/to/project

# Generate reports
trivy image --format json --output report.json nginx:1.21
trivy image --format sarif -o report.sarif nginx:1.21
```

***

## 🛠️ **Development & Debugging Tools**

### **🔧 Skaffold**

#### **Installation and Basic Usage**

```bash
# Install Skaffold
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
sudo install skaffold /usr/local/bin/

# Initialize project
skaffold init

# Run in dev mode
skaffold dev

# Run in CI mode
skaffold run
skaffold build
```

**skaffold.yaml:**

```yaml
apiVersion: skaffold/v2beta26
kind: Config
metadata:
  name: my-app
build:
  artifacts:
  - image: my-app
    docker:
      dockerfile: Dockerfile
deploy:
  kubectl:
    manifests:
    - k8s/*.yaml
portForward:
- resourceType: service
  resourceName: my-app-service
  port: 8080
  localPort: 9000
```

### **⚡ Tilt - Local Kubernetes Development**

#### **Installation and Tiltfile**

```bash
# Install Tilt
curl -fsSL https://raw.githubusercontent.com/tilt-dev/tilt/master/scripts/install.sh | bash
```

**Tiltfile:**

```python
# Docker build
docker_build('my-app', '.')
docker_build('my-sidecar', './sidecar')

# Kubernetes resources
k8s_yaml(['k8s/deployment.yaml', 'k8s/service.yaml'])

# Local resource for database
local_resource('db', 'docker-compose up -d postgres', serve_cmd='docker-compose logs -f postgres')

# Port forwarding and resource grouping (local 9000 -> container 8080)
k8s_resource('my-app', port_forwards='9000:8080', auto_init=True)
```

### **🐙 Octant - Kubernetes Dashboard**

#### **Installation**

```bash
# Install Octant
curl -L https://github.com/vmware-tanzu/octant/releases/download/v0.25.1/Octant-0.25.1-Linux-64bit.tar.gz | tar xz
sudo mv Octant-0.25.1-Linux-64bit/octant /usr/local/bin/

# Run Octant
octant --kubeconfig ~/.kube/config
```

***

## 🎯 **Tool Selection Matrix**

| **Category**           | **Tool**             | **Best For**                            | **Complexity** | **Maturity** |
| ---------------------- | -------------------- | --------------------------------------- | -------------- | ------------ |
| **Package Management** | Helm                 | Complex applications, community charts  | Medium         | High         |
|                        | Kustomize            | GitOps, template-free configs           | Low            | High         |
|                        | Carvel               | Complex templating, image building      | High           | Medium       |
| **Service Mesh**       | Istio                | Enterprise features, traffic management | High           | High         |
|                        | Linkerd              | Performance, simplicity                 | Medium         | High         |
|                        | Consul Connect       | Multi-cloud, service discovery          | Medium         | High         |
| **Monitoring**         | Prometheus + Grafana | Metrics, alerting                       | Medium         | High         |
|                        | Jaeger               | Distributed tracing                     | Medium         | High         |
|                        | Loki                 | Log aggregation                         | Medium         | High         |
| **GitOps**             | ArgoCD               | Visual UI, multi-cluster                | Medium         | High         |
|                        | Flux CD              | CLI-first, Git-native                   | Medium         | High         |
|                        | Tekton               | Cloud-native pipelines                  | High           | High         |
| **Security**           | Falco                | Runtime security                        | Low            | High         |
|                        | OPA/Gatekeeper       | Policy as code                          | High           | High         |
|                        | Trivy                | Container scanning                      | Low            | High         |
| **Development**        | Skaffold             | CI/CD integration                       | Medium         | High         |
|                        | Tilt                 | Local development                       | Low            | High         |
|                        | Octant               | Visual debugging                        | Low            | Medium       |

***

## 📚 **Learning Resources**

### **📖 Official Documentation**

* [Kubernetes Documentation](https://kubernetes.io/docs/)
* [Helm Documentation](https://helm.sh/docs/)
* [Istio Documentation](https://istio.io/docs/)
* [ArgoCD Documentation](https://argoproj.github.io/argo-cd/)
* [Flux Documentation](https://fluxcd.io/docs/)

### **🎓 Certifications**

* [Certified Kubernetes Administrator (CKA)](https://www.cncf.io/certification/cka/)
* [Certified Kubernetes Application Developer (CKAD)](https://www.cncf.io/certification/ckad/)
* [Certified Kubernetes Security Specialist (CKS)](https://www.cncf.io/certification/cks/)

### **🔧 Tool-Specific Learning**

* **Prometheus**: [Prometheus Documentation](https://prometheus.io/docs/)
* **Grafana**: [Grafana Documentation](https://grafana.com/docs/)
* **Jaeger**: [Jaeger Documentation](https://www.jaegertracing.io/docs/)
* **Linkerd**: [Linkerd Getting Started Guide](https://linkerd.io/2/getting-started/)

***

## 🎯 **This Tools & Ecosystem Guide Covers**

* ✅ **Package Management** - Helm, Kustomize, Carvel
* ✅ **Service Mesh** - Istio, Linkerd, Consul
* ✅ **Monitoring** - Prometheus, Grafana, Jaeger, Loki
* ✅ **CI/CD & GitOps** - ArgoCD, Flux, Tekton
* ✅ **Storage Tools** - Longhorn, Rook/Ceph
* ✅ **Security Tools** - Falco, OPA/Gatekeeper, Trivy
* ✅ **Development Tools** - Skaffold, Tilt, Octant
* ✅ **Tool Selection Matrix** - Decision guidance
* ✅ **Installation Commands** - Quick setup
* ✅ **Configuration Examples** - Ready-to-use templates

***

*📌 Use this guide to choose and implement the right Kubernetes tools.*
