# Tools Ecosystem

> 🚀 **Comprehensive Toolbox**: A collection of Kubernetes tools and ecosystem components to boost your productivity.

***

## 📋 **Table of Contents**

### **🔧 Command Line Tools**

* [kubectl Advanced Usage](#kubectl-advanced-usage)
* [kubectx & kubens](#kubectx--kubens)
* [stern (Multi-pod Logs)](#stern-multi-pod-logs)
* [k9s (Terminal UI)](#k9s-terminal-ui)
* [lens (Desktop GUI)](#lens-desktop-gui)

### **📦 Package Management**

* [Helm Package Manager](#helm-package-manager)
* [Kustomize Configuration Management](#kustomize-configuration-management)
* [Carvel Tools](#carvel-tools)
* [Operator Framework](#operator-framework)

### **📊 Monitoring & Observability**

* [Prometheus & Grafana](#prometheus--grafana)
* [Jaeger Distributed Tracing](#jaeger-distributed-tracing)
* [ELK Stack](#elk-stack)
* [OpenTelemetry](#opentelemetry)

### **🔍 Debugging & Troubleshooting**

* [Telepresence (Local Development)](#telepresence-local-development)
* [kubectl-debug](#kubectl-debug)
* [Inspektor Gadget](#inspektor-gadget)
* [kubectl-tree](#kubectl-tree)

### **🚀 CI/CD & GitOps**

* [ArgoCD](#argocd)
* [Flux](#flux)
* [Jenkins X](#jenkins-x)
* [Tekton](#tekton)

### **🛡️ Security Tools**

* [Trivy Security Scanner](#trivy-security-scanner)
* [Falco Runtime Security](#falco-runtime-security)
* [OPA Gatekeeper](#opa-gatekeeper)
* [Polaris](#polaris)

***

## 🔧 **Command Line Tools**

### kubectl Advanced Usage

**🎯 Essential kubectl Plugins**

```bash
# Install krew (kubectl plugin manager)
curl -fsSL https://raw.githubusercontent.com/kubernetes-sigs/krew/master/install.sh | bash

# Install useful plugins
kubectl krew install view-serviceaccount-kubeconfig
kubectl krew install get-all
kubectl krew install tree
kubectl krew install ns
kubectl krew install access-matrix
kubectl krew install who-can
```

**⚡ Productivity Aliases**

```bash
# Add to ~/.bashrc or ~/.zshrc
alias k='kubectl'
alias kg='kubectl get'
alias kd='kubectl describe'
alias kdel='kubectl delete'
alias kex='kubectl exec -it'
alias klogs='kubectl logs -f'
alias kyaml='kubectl get -o yaml'

# Namespace aware aliases
alias kgn='kubectl get namespaces'
alias kcn='kubectl config set-context --current --namespace'

# Quick pod access
alias kp='kubectl get pods'
alias kpa='kubectl get pods --all-namespaces'
alias kpf='kubectl port-forward'
```

**🔧 Advanced kubectl Commands**

```bash
# Get all resources in namespace
kubectl get all,configmaps,secrets,pvc -n production

# Watch resources in real-time
kubectl get pods -w --all-namespaces

# Sort by creation time
kubectl get pods --sort-by=.metadata.creationTimestamp

# Get pods with specific labels
kubectl get pods -l app=webserver,env=production

# Explain resources
kubectl explain deployment.spec.template.spec.containers.resources

# Copy files to/from pod
kubectl cp ./local-file pod-name:/remote-path
kubectl cp pod-name:/remote-file ./local-path

# Get YAML output
kubectl get deployment web-deployment -o yaml
```

### kubectx & kubens

**🚀 Installation**

```bash
# macOS
brew install kubectx

# Linux
git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
sudo ln -s ~/.kubectx/kubectx /usr/local/bin/kubectx
sudo ln -s ~/.kubectx/kubens /usr/local/bin/kubens

# Install bash completion scripts
sudo cp ~/.kubectx/completion/kubectx.bash /etc/bash_completion.d/
sudo cp ~/.kubectx/completion/kubens.bash /etc/bash_completion.d/
```

**🔧 Usage Examples**

```bash
# Switch between clusters
kubectx
kubectx production-cluster
kubectx -

# Switch between namespaces
kubens
kubens production
kubens default

# Interactive selection
kubectx -i
kubens -i

# Show current context
kubectx -c
kubens -c
```
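
For shell prompts it is handy to display the current context without invoking kubectl at all. A minimal sketch — the `current_ctx` helper name and the throwaway kubeconfig are illustrative, not part of kubectx:

```bash
# Hypothetical helper: read the current context straight from the kubeconfig
# file, avoiding a kubectl round-trip on every prompt redraw.
current_ctx() {
  awk '/^current-context:/ {print $2}' "${KUBECONFIG:-$HOME/.kube/config}"
}

# Demo against a throwaway kubeconfig
export KUBECONFIG="$(mktemp)"
cat > "$KUBECONFIG" <<'EOF'
apiVersion: v1
kind: Config
current-context: production-cluster
EOF

current_ctx   # prints: production-cluster
```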

### stern (Multi-pod Logs)

**🚀 Installation**

```bash
# macOS
brew install stern

# Linux
sudo curl -L https://github.com/stern/stern/releases/download/v1.25.0/stern_1.25.0_linux_amd64.tar.gz -o stern.tar.gz
sudo tar -xvf stern.tar.gz -C /usr/local/bin stern

# Go install
go install github.com/stern/stern@latest
```

**🔧 Usage Examples**

```bash
# Follow logs for all pods in namespace
stern -n production app

# Follow logs for multiple labels
stern -l app=webserver -l env=production

# Include timestamps
stern --timestamps app

# Tail last 100 lines
stern --tail 100 app

# Follow logs with specific container
stern app -c webserver

# Color-coded logs
stern --color auto app

# Follow logs in multiple namespaces
stern --all-namespaces app
```

### k9s (Terminal UI)

**🚀 Installation**

```bash
# macOS
brew install k9s

# Linux
curl -sS https://webinstall.dev/k9s | bash

# Direct download
wget https://github.com/derailed/k9s/releases/download/v0.27.4/k9s_Linux_x86_64.tar.gz
tar -xvf k9s_Linux_x86_64.tar.gz
sudo mv k9s /usr/local/bin/
```

**🔧 Key Features & Shortcuts**

```bash
# Launch k9s
k9s
k9s -n production  # Start in specific namespace

# Navigation
:        # Command mode
/        # Filter
?        # Help
Ctrl+C   # Quit
q        # Quit
esc      # Return

# Views
pods     # View pods
deploy   # View deployments
svc      # View services
cm       # View configmaps
secret   # View secrets

# Actions
d        # Describe
l        # Logs
e        # Exec into pod
s        # Shell into pod
f        # Port forward
y        # YAML view
```

### lens (Desktop GUI)

**🚀 Installation**

```bash
# Download from https://k8slens.dev/
# Available for Windows, macOS, and Linux

# Ubuntu/Debian: download the .deb package from https://k8slens.dev/ then install it
sudo apt install ./Lens-*.deb

# macOS
brew install --cask lens
```

***

## 📦 **Package Management**

### Helm Package Manager

**🚀 Installation**

```bash
# Using script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

# Using package manager
brew install helm  # macOS
sudo apt install helm  # Ubuntu (requires the Helm apt repository to be configured)
sudo yum install helm  # RHEL/CentOS (requires the Helm yum repository to be configured)

# Verify installation
helm version
```

**🔧 Helm Commands**

```bash
# Add repositories
helm repo add stable https://charts.helm.sh/stable
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Search charts
helm search repo nginx
helm search repo --versions mysql

# Install chart
helm install my-app bitnami/nginx

# Install with custom values
helm install my-app bitnami/nginx -f values.yaml

# List releases
helm list
helm list -n production

# Upgrade release
helm upgrade my-app bitnami/nginx -f new-values.yaml

# Rollback release
helm rollback my-app 1

# Uninstall release
helm uninstall my-app

# Show values
helm show values bitnami/nginx
```
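
The `-f values.yaml` flag above expects an override file. A minimal example for the bitnami/nginx chart — the keys follow that chart's conventions, but exact defaults vary by chart version:

```yaml
# values.yaml — overrides merged on top of the chart's defaults
replicaCount: 2
service:
  type: LoadBalancer
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```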

**📝 Custom Helm Chart**

```bash
# Create new chart
helm create my-app
cd my-app
```

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
description: A Helm chart for my application
type: application
version: 0.1.0
appVersion: "1.0.0"
```

```yaml
# values.yaml
replicaCount: 1
image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "latest"
service:
  type: ClusterIP
  port: 80
  targetPort: 80
ingress:
  enabled: true
  className: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  hosts:
    - host: my-app.local
      paths:
        - path: /
          pathType: Prefix
```

**🔧 Helm Templates**

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "my-app.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "my-app.selectorLabels" . | nindent 8 }}
    spec:
      containers:
      - name: {{ .Chart.Name }}
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        ports:
        - name: http
          containerPort: {{ .Values.service.targetPort }}
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
        readinessProbe:
          httpGet:
            path: /
            port: http
```
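
A matching Service template wired to the same helpers and the `service` values shown earlier — a sketch assuming the standard helper templates generated by `helm create`:

```yaml
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-app.fullname" . }}
  labels:
    {{- include "my-app.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
  - port: {{ .Values.service.port }}
    targetPort: http
    protocol: TCP
    name: http
  selector:
    {{- include "my-app.selectorLabels" . | nindent 4 }}
```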

### Kustomize Configuration Management

**🚀 Installation (Built-in with kubectl 1.14+)**

```bash
# Standalone installation
curl -s "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh" | bash

# Usage
kustomize build .
kubectl apply -k .
```

**📝 Kustomize Structure**

```yaml
# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml

commonLabels:
  app: my-app
  version: v1

images:
- name: nginx
  newTag: 1.21-alpine

replicas:
- name: my-app
  count: 3
```

**🔧 Overlay Configuration**

```yaml
# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# "bases" is deprecated in newer Kustomize; reference the base via resources
resources:
- ../../base

patchesStrategicMerge:
- replica-count.yaml
- memory-limit.yaml

namespace: production

images:
- name: nginx
  newTag: 1.21-alpine
```
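
The overlay references two patch files. Each patch only needs the fields it changes plus enough metadata to match the target resource; the contents below are illustrative:

```yaml
# overlays/production/replica-count.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 5
---
# overlays/production/memory-limit.yaml (a separate file; shown together for brevity)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: app
        resources:
          limits:
            memory: 512Mi
```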

### Carvel Tools

**🚀 Installation**

```bash
# Install Carvel tools
curl -L https://carvel.dev/install.sh | bash

# Install individual CLI binaries (these are standalone CLIs, not cluster manifests)
wget -O ytt https://github.com/carvel-dev/ytt/releases/latest/download/ytt-linux-amd64
wget -O kapp https://github.com/carvel-dev/kapp/releases/latest/download/kapp-linux-amd64
wget -O kbld https://github.com/carvel-dev/kbld/releases/latest/download/kbld-linux-amd64
chmod +x ytt kapp kbld && sudo mv ytt kapp kbld /usr/local/bin/
```

**🔧 ytt (YAML Templating)**

```yaml
# config.yml
#@ load("@ytt:data", "data")

#@ def labels():
app: my-app
version: #@ data.values.version
#@ end

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels: #@ labels()
spec:
  replicas: #@ data.values.replicas
  selector:
    matchLabels: #@ labels()
  template:
    metadata:
      labels: #@ labels()
    spec:
      containers:
      - name: app
        image: nginx:#@ data.values.image_tag
```
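
The `data.values` references above are supplied by a separate data-values file and rendered with the ytt CLI; the values below are illustrative:

```yaml
# values.yml
#@data/values
---
version: "1.0.0"
replicas: 2
image_tag: 1.25-alpine
```

Render with `ytt -f config.yml -f values.yml`, or pipe the output straight into `kapp deploy` for a tracked deployment.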

### Operator Framework

**🚀 Installation**

```bash
# Install Operator SDK
curl -L https://github.com/operator-framework/operator-sdk/releases/download/v1.25.0/operator-sdk_linux_amd64 -o operator-sdk
chmod +x operator-sdk
sudo mv operator-sdk /usr/local/bin/

# Install OLM (Operator Lifecycle Manager)
kubectl create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/crds.yaml
kubectl create -f https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.25.0/olm.yaml
```

**🔧 Create Custom Operator**

```bash
# Create new operator project
operator-sdk init --domain mycompany.com --repo mycompany.com/my-operator
operator-sdk create api --group apps --version v1alpha1 --kind MyResource --resource --controller

# Build and push operator image
docker build -t myregistry/my-operator:latest .
docker push myregistry/my-operator:latest

# Deploy operator
make deploy
```

***

## 📊 **Monitoring & Observability**

### Prometheus & Grafana

**🚀 Installation**

```bash
# Using Helm
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack

# Using manifests (the bundle is large; use create rather than apply)
kubectl create namespace monitoring
kubectl create -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.66.0/bundle.yaml
```

**🔧 Prometheus Configuration**

```yaml
# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s

    rule_files:
      - "alert_rules.yml"

    alerting:
      alertmanagers:
        - static_configs:
            - targets:
              - alertmanager:9093

    scrape_configs:
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
        - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
        - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
          action: keep
          regex: default;kubernetes;https

      - job_name: 'kubernetes-nodes'
        kubernetes_sd_configs:
        - role: node
        relabel_configs:
        - action: labelmap
          regex: __meta_kubernetes_node_label_(.+)
        - target_label: __address__
          replacement: kubernetes.default.svc:443
        - source_labels: [__meta_kubernetes_node_name]
          regex: (.+)
          target_label: __metrics_path__
          replacement: /api/v1/nodes/${1}/proxy/metrics
```
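
With the kube-prometheus-stack Helm chart, application scrape targets are usually declared as ServiceMonitor resources instead of hand-edited scrape configs. The label selectors below are illustrative, and the `release` label must match your Helm release name:

```yaml
# servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  labels:
    release: prometheus   # must match the kube-prometheus-stack release name
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: http
    interval: 30s
    path: /metrics
```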

**📊 Grafana Dashboard**

```yaml
# grafana-dashboard-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard
  labels:
    grafana_dashboard: "1"
data:
  kubernetes-overview.json: |
    {
      "dashboard": {
        "id": null,
        "title": "Kubernetes Cluster Overview",
        "tags": ["kubernetes"],
        "timezone": "browser",
        "panels": [
          {
            "title": "Pod Status",
            "type": "stat",
            "targets": [
              {
                "expr": "sum(kube_pod_status_phase{phase=\"Running\"})",
                "legendFormat": "Running"
              }
            ],
            "fieldConfig": {
              "defaults": {
                "color": {"mode": "thresholds"},
                "thresholds": {
                  "steps": [
                    {"color": "green", "value": null}
                  ]
                }
              }
            }
          }
        ],
        "refresh": "5s"
      }
    }
```

### Jaeger Distributed Tracing

**🚀 Installation**

```bash
# Using Helm
helm repo add jaegertracing https://jaegertracing.github.io/helm-charts
helm repo update
helm install jaeger jaegertracing/jaeger

# Using Operator (cert-manager must be installed first)
kubectl create namespace observability
kubectl create -f https://github.com/jaegertracing/jaeger-operator/releases/download/v1.47.0/jaeger-operator.yaml -n observability
```

**🔧 Jaeger Configuration**

```yaml
# jaeger-instance.yaml
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest
spec:
  strategy: AllInOne
  allInOne:
    image: jaegertracing/all-in-one:latest
    options:
      log-level: info
      query:
        base-path: /
  storage:
    type: memory
  ingress:
    enabled: true
    hosts:
    - jaeger.example.com
```

### ELK Stack

**🚀 Installation**

```bash
# Elasticsearch
helm repo add elastic https://helm.elastic.co
helm repo update
helm install elasticsearch elastic/elasticsearch -n elastic-system

# Kibana
helm install kibana elastic/kibana -n elastic-system

# Logstash
helm install logstash elastic/logstash -n elastic-system

# Filebeat
helm install filebeat elastic/filebeat -n elastic-system
```

**🔧 Filebeat Configuration**

```yaml
# filebeat-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
data:
  filebeat.yml: |
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    output.elasticsearch:
      hosts: ["elasticsearch:9200"]
      index: "filebeat-%{+yyyy.MM.dd}"

    setup.kibana:
      host: "kibana:5601"
```

### OpenTelemetry

**🚀 Installation**

```bash
# OpenTelemetry Operator (cert-manager must be installed first)
kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml

# OpenTelemetry Collector
kubectl apply -f - <<EOF
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel
spec:
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch:
    exporters:
      otlp:
        endpoint: jaeger-collector:4317
      prometheus:
        endpoint: "0.0.0.0:8889"
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [prometheus]
EOF
```

***

## 🔍 **Debugging & Troubleshooting**

### Telepresence (Local Development)

**🚀 Installation**

```bash
# macOS
brew install datawire/blackbird/telepresence

# Linux
sudo curl -fL https://app.getambassador.io/download/tel2/linux/amd64/latest -o telepresence
sudo chmod +x telepresence
sudo mv telepresence /usr/local/bin/
```

**🔧 Usage Examples**

```bash
# Connect to cluster
telepresence connect

# Intercept traffic for local development
telepresence intercept web-deployment --port 8080:3000

# Leave intercept
telepresence leave web-deployment

# List intercepts
telepresence list

# Swap deployment with local container
telepresence intercept web-deployment --port 8080 --docker-run -- nginx:alpine

# Use local environment variables
telepresence intercept web-deployment --port 8080 --env-file .env
```

### kubectl-debug

**🚀 Installation**

```bash
# `kubectl debug` is built into kubectl since v1.18; no separate install is required
kubectl version --client

# The older standalone kubectl-debug krew plugin (now archived) can still be installed with
kubectl krew install debug
```

**🔧 Usage Examples**

```bash
# Debug running pod
kubectl debug my-pod -it --image=nicolaka/netshoot

# Debug with new container
kubectl debug my-pod --image=busybox --copy-to=my-pod-debug

# Debug with ephemeral container
kubectl debug my-pod -it --image=alpine --share-processes

# Debug node
kubectl debug node/node-name -it --image=nicolaka/netshoot
```

### Inspektor Gadget

**🚀 Installation**

```bash
# Install the kubectl-gadget CLI via krew
kubectl krew install gadget

# Or download the CLI directly
curl -L https://github.com/inspektor-gadget/inspektor-gadget/releases/latest/download/kubectl-gadget-linux-amd64.tar.gz -o kubectl-gadget.tar.gz
tar -xvf kubectl-gadget.tar.gz
sudo mv kubectl-gadget /usr/local/bin/

# Deploy the gadget DaemonSet to the cluster
kubectl gadget deploy
```

**🔧 Usage Examples**

```bash
# Trace network connections
kubectl gadget trace tcpconnect

# Monitor process execution
kubectl gadget trace exec

# Trace DNS queries
kubectl gadget trace dns

# Monitor file access
kubectl gadget trace open

# Top processes in containers
kubectl gadget top pid
```

### kubectl-tree

**🚀 Installation**

```bash
# Using krew
kubectl krew install tree

# Using Go
go install github.com/ahmetb/kubectl-tree/cmd/kubectl-tree@latest
```

**🔧 Usage Examples**

```bash
# Show ownership tree for a resource
kubectl tree deployment web-deployment

# Show all resources in namespace as tree
kubectl tree namespace production

# Show tree for different resource types
kubectl tree service web-service
kubectl tree configmap app-config
```

***

## 🚀 **CI/CD & GitOps**

### ArgoCD

**🚀 Installation**

```bash
# Using manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Using Helm
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd -n argocd
```

**🔧 ArgoCD Application**

```yaml
# app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mycompany/k8s-manifests
    targetRevision: HEAD
    path: production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
```

### Flux

**🚀 Installation**

```bash
# Install Flux CLI
curl -s https://fluxcd.io/install.sh | bash

# Bootstrap cluster
flux bootstrap github \
  --owner=mycompany \
  --repository=k8s-manifests \
  --path=clusters/production \
  --personal
```

**🔧 Flux Configuration**

```yaml
# clusters/production/flux-system/gotk-sync.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m0s
  sourceRef:
    kind: GitRepository
    name: flux-system
  path: ./production
  prune: true
  wait: true
  timeout: 5m
```

### Jenkins X

**🚀 Installation**

```bash
# Using CLI
curl -L https://github.com/jenkins-x/jx/releases/download/v3.2.197/jx-linux-amd64.tar.gz | tar xzv
sudo mv jx /usr/local/bin

# Create cluster
jx create cluster eks

# Import existing project
jx import --dir myapp
```

### Tekton

**🚀 Installation**

```bash
# Using manifests
kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml

# Using Tekton CLI
curl -L https://github.com/tektoncd/cli/releases/download/v0.28.0/tkn_0.28.0_Linux_x86_64.tar.gz -o tkn.tar.gz
tar -xvf tkn.tar.gz
sudo mv tkn /usr/local/bin/
```

**🔧 Tekton Pipeline**

```yaml
# pipeline.yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-and-deploy
spec:
  workspaces:
  - name: shared-data
  params:
  - name: git-url
    type: string
    description: URL of the git repository to clone
  - name: git-revision
    type: string
    description: Git revision (branch, tag, or commit) to check out
  tasks:
  - name: fetch-repo
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: $(params.git-url)
    - name: revision
      value: $(params.git-revision)
  - name: build-image
    taskRef:
      name: buildah
    runAfter: ["fetch-repo"]
    workspaces:
    - name: source
      workspace: shared-data
    params:
    - name: IMAGE
      value: "image-registry.default.svc:5000/myapp:$(params.git-revision)"
```
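
The pipeline is executed by creating a PipelineRun that binds the params and backs the workspace with a volume; the repository URL below is a placeholder:

```yaml
# pipelinerun.yaml
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: build-and-deploy-run-
spec:
  pipelineRef:
    name: build-and-deploy
  params:
  - name: git-url
    value: https://github.com/mycompany/myapp
  - name: git-revision
    value: main
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

Because `generateName` is used, submit it with `kubectl create -f pipelinerun.yaml` rather than `kubectl apply`.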

***

## 🛡️ **Security Tools**

### Trivy Security Scanner

**🚀 Installation**

```bash
# Using script
curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin

# Using package manager
brew install trivy  # macOS
sudo apt install trivy  # Ubuntu
```

**🔧 Usage Examples**

```bash
# Scan image for vulnerabilities
trivy image nginx:latest

# Scan filesystem
trivy fs /path/to/directory

# Scan Git repository
trivy repo https://github.com/mycompany/myapp

# Scan with specific severity
trivy image --severity HIGH,CRITICAL nginx:latest

# Generate JSON report
trivy image --format json --output report.json nginx:latest

# Scan running Kubernetes cluster
trivy k8s --report summary cluster
```
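
The JSON report can be post-processed in CI to fail a build past a severity threshold. A minimal dependency-free sketch — the `count_sev` helper and the inline report are illustrative stand-ins; real Trivy reports are far larger:

```bash
# Count findings of a given severity in a Trivy JSON report
count_sev() {
  grep -o "\"Severity\": *\"$1\"" "$2" | wc -l | tr -d ' '
}

# Tiny stand-in for a real report
cat > report.json <<'EOF'
{"Results":[{"Vulnerabilities":[
  {"VulnerabilityID":"CVE-0000-0001","Severity":"HIGH"},
  {"VulnerabilityID":"CVE-0000-0002","Severity":"CRITICAL"},
  {"VulnerabilityID":"CVE-0000-0003","Severity":"HIGH"}
]}]}
EOF

count_sev HIGH report.json      # prints: 2
[ "$(count_sev CRITICAL report.json)" -eq 0 ] || echo "block the build"
```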

### Falco Runtime Security

**🚀 Installation**

```bash
# Using Helm
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
helm install falco falcosecurity/falco

# Using manifests (Helm is the recommended method; plain manifests are
# maintained in the falcosecurity/deploy-kubernetes repository)
kubectl create namespace falco
```

**🔧 Falco Rules**

```yaml
# custom-rules.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-custom-rules
data:
  rules.yaml: |
    - rule: Shell in container
      desc: A shell was spawned in a container
      condition: >
        spawned_process and
        container and
        shell_procs and
        not proc.name in (bash, sh)
      output: >
        Shell spawned in container (user=%user.name container=%container.name
        shell=%proc.name parent=%proc.pname cmdline=%proc.cmdline)
      priority: WARNING
      tags: [container, shell]

    - rule: Unexpected file access
      desc: Unexpected file access in container
      condition: >
        open_read and
        container and
        fd.name contains /etc/shadow
      output: >
        Access to sensitive file (user=%user.name container=%container.name
        file=%fd.name)
      priority: CRITICAL
      tags: [container, filesystem]
```
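
When Falco is installed with Helm, the same rules can be supplied through the chart's `customRules` value instead of a hand-made ConfigMap; the rule body is abbreviated here:

```yaml
# falco-values.yaml
customRules:
  custom-rules.yaml: |-
    - rule: Shell in container
      desc: A shell was spawned in a container
      condition: spawned_process and container and shell_procs
      output: Shell spawned (user=%user.name container=%container.name proc=%proc.name)
      priority: WARNING
```

Apply it with `helm upgrade --install falco falcosecurity/falco -f falco-values.yaml`.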

### OPA Gatekeeper

**🚀 Installation**

```bash
# Using manifests
kubectl apply -f https://raw.githubusercontent.com/open-policy-agent/gatekeeper/v3.12.0/deploy/gatekeeper.yaml

# Using Helm
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
helm install gatekeeper gatekeeper/gatekeeper -n gatekeeper-system
```

**🔧 Policy Templates**

```yaml
# template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8srequiredlabels
spec:
  crd:
    spec:
      names:
        kind: K8sRequiredLabels
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8srequiredlabels
        violation[{"msg": msg}] {
          required := input.parameters.labels
          provided := input.review.object.metadata.labels
          missing := {label | label := required[_]; not provided[label]}
          count(missing) > 0
          msg := sprintf("Missing required labels: %v", [missing])
        }

---
# constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: must-have-gk-owner
spec:
  enforcementAction: deny
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    labels: ["owner", "environment"]
```
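
With the constraint enforced, any Pod must carry both labels or admission is denied. A minimal compliant Pod for testing the policy:

```yaml
# compliant-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  labels:
    owner: platform-team
    environment: production
spec:
  containers:
  - name: app
    image: nginx:1.25-alpine
```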

### Polaris

**🚀 Installation**

```bash
# Using Docker (audit manifests in the current directory)
docker run --rm -v $(pwd):/workspace quay.io/fairwinds/polaris polaris audit --audit-path /workspace --format yaml

# Using Helm
helm repo add fairwinds-stable https://charts.fairwinds.com/stable
helm repo update
helm install polaris fairwinds-stable/polaris
```

**🔧 Polaris Configuration**

```yaml
# polaris.yaml
checks:
  # Security checks
  hostIPCSet: warning
  hostPIDSet: warning
  notReadOnlyRootFilesystem: warning
  runAsRootAllowed: warning
  runAsPrivileged: warning

  # Resource checks
  cpuRequestsMissing: warning
  cpuLimitsMissing: warning
  memoryRequestsMissing: warning
  memoryLimitsMissing: warning

  # Image checks
  tagNotSpecified: warning
  pullPolicyNotAlways: warning

exemptions:
  # Exempt specific namespaces or resources
  - namespace: kube-system
    rules:
      - cpuRequestsMissing
      - memoryRequestsMissing
```

***

## 🎯 **Best Practices**

### **🔧 Tool Selection Guidelines**

1. **Use Standard Tools First**
   * kubectl for daily operations
   * Helm for package management
   * Kustomize for configuration management
2. **Productivity Tools**
   * k9s for interactive monitoring
   * stern for multi-pod logs
   * kubectx/kubens for context switching
3. **Advanced Operations**
   * Telepresence for local development
   * kubectl debug for troubleshooting
   * Custom plugins for automation

### **📊 Monitoring Strategy**

1. **Metrics Collection**
   * Prometheus for metrics
   * Grafana for visualization
   * Alertmanager for notifications
2. **Distributed Tracing**
   * Jaeger for tracing
   * OpenTelemetry for instrumentation
3. **Logging**
   * ELK stack for log aggregation
   * Filebeat for log collection
   * Structured logging formats

### **🛡️ Security Tools Integration**

1. **Image Security**
   * Trivy for vulnerability scanning
   * Image signing verification
   * Regular security updates
2. **Runtime Security**
   * Falco for runtime monitoring
   * OPA Gatekeeper for policy enforcement
   * Network policy segmentation

***

## 🔗 **References**

### **📚 Official Documentation**

* [kubectl Documentation](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands)
* [Helm Documentation](https://helm.sh/docs/)
* [Kustomize Documentation](https://kubectl.docs.kubernetes.io/references/kustomize/)
* [Prometheus Documentation](https://prometheus.io/docs/)

### **🛠️ Tool Repositories**

* [k9s GitHub](https://github.com/derailed/k9s)
* [stern GitHub](https://github.com/stern/stern)
* [kubectx GitHub](https://github.com/ahmetb/kubectx)
* [krew GitHub](https://github.com/kubernetes-sigs/krew)

### **📖 Learning Resources**

* [Kubernetes Tooling Guide](https://kubernetes.io/docs/tasks/tools/)
* [Helm Best Practices](https://helm.sh/docs/topics/best_practices/)
* [GitOps with ArgoCD](https://argoproj.github.io/argo-cd/)

***

🛠️ **The right tools will boost your productivity and efficiency.**
