# Unsupervised Learning

## 📚 Overview

Unsupervised learning is a machine learning paradigm in which a model learns from unlabeled data. The model searches for hidden patterns, structure, or relationships in the data without guidance from a target output.
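
As a minimal illustration of the paradigm, note that `fit()` receives only `X` and no target vector. This sketch uses synthetic data from `make_blobs`; the later snippets on this page assume a feature matrix `X` of shape `(n_samples, n_features)` prepared along similar lines.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic unlabeled data; make_blobs also returns labels, which we discard
X, _ = make_blobs(n_samples=300, centers=3, n_features=2, random_state=42)

# fit() takes only X -- no target vector y is involved
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
kmeans.fit(X)
print(kmeans.labels_[:10])  # cluster assignments discovered from structure alone
```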

## 🎯 Types of Unsupervised Learning

### 1. **Clustering**

Groups data points based on similarity or proximity.

**Examples:**

* Customer segmentation
* Image segmentation
* Document clustering
* Market research

### 2. **Dimensionality Reduction**

Reduces the number of features while preserving the important information.

**Examples:**

* Data visualization
* Feature compression
* Noise reduction
* Data preprocessing

### 3. **Association Rule Learning**

Discovers relationships and frequently co-occurring patterns in data.

**Examples:**

* Market basket analysis
* Recommendation systems
* Web usage mining
* Medical diagnosis

### 4. **Anomaly Detection**

Identifies data points that deviate from the norm (outliers); a short sketch follows the examples below.

**Examples:**

* Fraud detection
* Network security
* Quality control
* Medical diagnosis
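
A minimal sketch using scikit-learn's `IsolationForest` on synthetic data (one common approach among several; DBSCAN's noise labels, shown later, are another). The `contamination` value is an assumed anomaly fraction, not a universal setting.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X_normal = rng.normal(loc=0, scale=1, size=(200, 2))    # bulk of the data
X_outliers = rng.uniform(low=-6, high=6, size=(10, 2))  # scattered anomalies
X_all = np.vstack([X_normal, X_outliers])

# contamination: the assumed fraction of anomalies in the data
iso = IsolationForest(contamination=0.05, random_state=42)
labels = iso.fit_predict(X_all)  # +1 = inlier, -1 = anomaly
print(f"Flagged {np.sum(labels == -1)} points as anomalies")
```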

## 🚀 Popular Algorithms

### Clustering Algorithms

#### 1. **K-Means Clustering**

A partitioning method that divides the data into K clusters, assigning each point to its nearest centroid.

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
import numpy as np

# Prepare data (X: feature matrix, e.g. built as in the overview snippet)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Find optimal K using Elbow Method
inertias = []
K_range = range(1, 11)

for k in K_range:
    kmeans = KMeans(n_clusters=k, random_state=42, n_init=10)
    kmeans.fit(X_scaled)
    inertias.append(kmeans.inertia_)

# Plot Elbow Method
plt.figure(figsize=(10, 6))
plt.plot(K_range, inertias, 'bo-')
plt.xlabel('Number of Clusters (K)')
plt.ylabel('Inertia')
plt.title('Elbow Method for Optimal K')
plt.grid(True)
plt.show()

# Optimal K (example: K=3)
optimal_k = 3
kmeans = KMeans(n_clusters=optimal_k, random_state=42, n_init=10)
cluster_labels = kmeans.fit_predict(X_scaled)

# Visualize clusters
plt.figure(figsize=(10, 6))
scatter = plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=cluster_labels, cmap='viridis')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], 
            s=300, c='red', marker='x', linewidths=3, label='Centroids')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title(f'K-Means Clustering (K={optimal_k})')
plt.legend()
plt.colorbar(scatter)
plt.show()

# Cluster analysis
for i in range(optimal_k):
    cluster_data = X_scaled[cluster_labels == i]
    print(f"Cluster {i}: {len(cluster_data)} samples")
    print(f"Centroid: {kmeans.cluster_centers_[i]}")
    print(f"Average distance to centroid: {np.mean(np.linalg.norm(cluster_data - kmeans.cluster_centers_[i], axis=1)):.3f}")
    print("---")
```

**Pros:**

* Simple and fast
* Scales well to large datasets
* Guaranteed to converge (though possibly to a local optimum)
* Easy to interpret

**Cons:**

* Requires specifying number of clusters
* Sensitive to initial centroid placement
* Assumes spherical clusters
* Sensitive to outliers

**Use Cases:**

* Customer segmentation
* Image compression
* Document clustering
* Market research

#### 2. **Hierarchical Clustering**

Builds a tree-like structure of clusters (dendrogram).

```python
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram, linkage

# Create linkage matrix
linkage_matrix = linkage(X_scaled, method='ward')

# Plot dendrogram
plt.figure(figsize=(12, 8))
dendrogram(linkage_matrix, truncate_mode='level', p=5)
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('Sample Index')
plt.ylabel('Distance')
plt.show()

# Perform clustering
n_clusters = 3
hierarchical = AgglomerativeClustering(n_clusters=n_clusters)
cluster_labels = hierarchical.fit_predict(X_scaled)

# Visualize clusters
plt.figure(figsize=(10, 6))
scatter = plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=cluster_labels, cmap='viridis')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title(f'Hierarchical Clustering (K={n_clusters})')
plt.colorbar(scatter)
plt.show()
```

**Pros:**

* No need to specify the number of clusters up front (see the sketch after the use cases below)
* Creates an interpretable hierarchy (dendrogram)
* Can handle non-spherical clusters
* Deterministic: no random initialization

**Cons:**

* Computationally expensive
* Sensitive to noise
* Cannot undo previous steps
* Memory intensive for large datasets

**Use Cases:**

* Taxonomy creation
* Phylogenetic analysis
* Social network analysis
* Document organization
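
To substantiate the "no preset cluster count" advantage above: scikit-learn can cut the dendrogram at a distance threshold instead of a fixed `n_clusters`. A minimal sketch, where the threshold value `10.0` is an illustrative choice to tune against the dendrogram's merge distances:

```python
from sklearn.cluster import AgglomerativeClustering

# n_clusters=None + distance_threshold cuts the tree at a chosen height;
# 10.0 is an assumed value -- read plausible cut heights off the dendrogram
hierarchical_auto = AgglomerativeClustering(n_clusters=None, distance_threshold=10.0)
labels = hierarchical_auto.fit_predict(X_scaled)
print(f"Clusters found at this threshold: {len(set(labels))}")
```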

#### 3. **DBSCAN (Density-Based Spatial Clustering)**

Clusters data based on density and spatial proximity; points in low-density regions are labeled as noise.

```python
from sklearn.cluster import DBSCAN
from sklearn.metrics import silhouette_score  # used in the summary below
from sklearn.neighbors import NearestNeighbors

# Find optimal epsilon using k-distance graph
neighbors = NearestNeighbors(n_neighbors=5)
neighbors_fit = neighbors.fit(X_scaled)
distances, indices = neighbors_fit.kneighbors(X_scaled)
distances = np.sort(distances[:, 4])

# Plot k-distance graph
plt.figure(figsize=(10, 6))
plt.plot(range(len(distances)), distances)
plt.xlabel('Points')
plt.ylabel('Distance to 5th Nearest Neighbor')
plt.title('K-Distance Graph for DBSCAN')
plt.grid(True)
plt.show()

# Epsilon read off the k-distance elbow (0.5 is an example value to tune)
epsilon = 0.5
min_samples = 5

# Perform DBSCAN clustering
dbscan = DBSCAN(eps=epsilon, min_samples=min_samples)
cluster_labels = dbscan.fit_predict(X_scaled)

# Visualize clusters
plt.figure(figsize=(10, 6))
scatter = plt.scatter(X_scaled[:, 0], X_scaled[:, 1], c=cluster_labels, cmap='viridis')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.title(f'DBSCAN Clustering (eps={epsilon}, min_samples={min_samples})')
plt.colorbar(scatter)
plt.show()

# Cluster analysis
n_clusters = len(set(cluster_labels)) - (1 if -1 in cluster_labels else 0)
n_noise = list(cluster_labels).count(-1)

print(f"Estimated number of clusters: {n_clusters}")
print(f"Estimated number of noise points: {n_noise}")
print(f"Silhouette Score: {silhouette_score(X_scaled, cluster_labels):.3f}")
```

**Pros:**

* No need to specify number of clusters
* Can find clusters of arbitrary shapes
* Robust to outliers
* Handles noise well

**Cons:**

* Sensitive to parameters (eps, min\_samples)
* Struggles with clusters of varying densities
* Computationally expensive for large datasets
* Parameter tuning can be challenging

**Use Cases:**

* Anomaly detection
* Geographic clustering
* Image segmentation
* Network analysis

### Dimensionality Reduction Algorithms

#### 1. **Principal Component Analysis (PCA)**

Linear dimensionality reduction technique.

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import pandas as pd

# Prepare data
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Apply PCA
pca = PCA()
X_pca = pca.fit_transform(X_scaled)

# Explained variance ratio
explained_variance_ratio = pca.explained_variance_ratio_
cumulative_variance_ratio = np.cumsum(explained_variance_ratio)

# Plot explained variance
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
plt.plot(range(1, len(explained_variance_ratio) + 1), explained_variance_ratio, 'bo-')
plt.xlabel('Principal Component')
plt.ylabel('Explained Variance Ratio')
plt.title('Explained Variance Ratio by Component')
plt.grid(True)

plt.subplot(1, 2, 2)
plt.plot(range(1, len(cumulative_variance_ratio) + 1), cumulative_variance_ratio, 'ro-')
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Explained Variance Ratio')
plt.title('Cumulative Explained Variance Ratio')
plt.grid(True)

plt.tight_layout()
plt.show()

# Choose number of components (e.g., 95% variance)
n_components_95 = np.argmax(cumulative_variance_ratio >= 0.95) + 1
print(f"Components needed for 95% variance: {n_components_95}")

# Apply PCA with selected components
pca_selected = PCA(n_components=n_components_95)
X_pca_selected = pca_selected.fit_transform(X_scaled)

# Feature importance (feature_names: your list of column names for X)
feature_importance = pd.DataFrame({
    'feature': feature_names,
    'pc1_loading': pca.components_[0],
    'pc2_loading': pca.components_[1]
}).sort_values('pc1_loading', key=abs, ascending=False)

print("Top features by PC1 loading:")
print(feature_importance.head(10))
```

**Pros:**

* Reduces dimensionality effectively
* Preserves maximum variance
* Linear transformation
* Well-understood mathematically

**Cons:**

* Linear transformation only
* Equates importance with variance; high-variance directions are not always the most meaningful
* Sensitive to outliers
* May lose important information

**Use Cases:**

* Data visualization
* Feature compression
* Noise reduction
* Data preprocessing

#### 2. **t-SNE (t-Distributed Stochastic Neighbor Embedding)**

Non-linear dimensionality reduction for visualization.

```python
from sklearn.manifold import TSNE

# Apply t-SNE
tsne = TSNE(n_components=2, random_state=42, perplexity=30)
X_tsne = tsne.fit_transform(X_scaled)

# Visualize t-SNE results (y: optional ground-truth labels used only for
# coloring; substitute cluster_labels if no labels are available)
plt.figure(figsize=(10, 6))
scatter = plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y, cmap='viridis')
plt.xlabel('t-SNE Component 1')
plt.ylabel('t-SNE Component 2')
plt.title('t-SNE Visualization')
plt.colorbar(scatter)
plt.show()

# Compare with PCA
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))

# PCA
scatter1 = ax1.scatter(X_pca[:, 0], X_pca[:, 1], c=y, cmap='viridis')
ax1.set_xlabel('PC1')
ax1.set_ylabel('PC2')
ax1.set_title('PCA Visualization')

# t-SNE
scatter2 = ax2.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y, cmap='viridis')
ax2.set_xlabel('t-SNE Component 1')
ax2.set_ylabel('t-SNE Component 2')
ax2.set_title('t-SNE Visualization')

plt.tight_layout()
plt.show()
```

**Pros:**

* Excellent for visualization
* Preserves local structure
* Can reveal non-linear patterns
* Good for high-dimensional data

**Cons:**

* Computationally expensive (see the PCA pre-reduction sketch after the use cases below)
* Non-deterministic
* Cannot be applied to new data
* Sensitive to parameters

**Use Cases:**

* Data exploration
* Cluster visualization
* High-dimensional data analysis
* Research and development
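
To mitigate the computational cost noted in the cons, a common practice (also recommended in the scikit-learn documentation) is to pre-reduce high-dimensional data with PCA before running t-SNE. A sketch, assuming `X_scaled` has well over 50 features:

```python
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Compress to ~50 dimensions first, then embed to 2D with t-SNE
X_compressed = PCA(n_components=50).fit_transform(X_scaled)
X_tsne_fast = TSNE(n_components=2, random_state=42, perplexity=30).fit_transform(X_compressed)
```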

### Association Rule Learning

#### **Apriori Algorithm**

Finds frequent itemsets and the association rules they imply. Support is the fraction of transactions containing an itemset, confidence is P(consequent | antecedent), and lift compares that confidence against the consequent's baseline frequency (lift > 1 indicates a positive association).

```python
from mlxtend.frequent_patterns import apriori, association_rules
from mlxtend.preprocessing import TransactionEncoder
import pandas as pd

# Example transaction data
transactions = [
    ['bread', 'milk', 'eggs'],
    ['bread', 'diapers', 'beer', 'eggs'],
    ['milk', 'diapers', 'beer', 'cola'],
    ['bread', 'milk', 'diapers', 'beer'],
    ['bread', 'milk', 'diapers', 'cola']
]

# Encode transactions
te = TransactionEncoder()
te_ary = te.fit(transactions).transform(transactions)
df = pd.DataFrame(te_ary, columns=te.columns_)

# Find frequent itemsets
frequent_itemsets = apriori(df, min_support=0.4, use_colnames=True)
print("Frequent Itemsets:")
print(frequent_itemsets)

# Generate association rules
rules = association_rules(frequent_itemsets, metric="confidence", min_threshold=0.6)
print("\nAssociation Rules:")
print(rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']])
```

## 🔧 Model Evaluation

### Clustering Evaluation

```python
from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score
import pandas as pd

# Silhouette Score (-1 to 1, higher is better)
silhouette_avg = silhouette_score(X_scaled, cluster_labels)
print(f"Silhouette Score: {silhouette_avg:.3f}")

# Calinski-Harabasz Index (higher is better)
ch_score = calinski_harabasz_score(X_scaled, cluster_labels)
print(f"Calinski-Harabasz Score: {ch_score:.3f}")

# Davies-Bouldin Index (lower is better)
db_score = davies_bouldin_score(X_scaled, cluster_labels)
print(f"Davies-Bouldin Score: {db_score:.3f}")

# Compare different clustering methods
clustering_methods = {
    'K-Means': KMeans(n_clusters=3, random_state=42, n_init=10),
    'Hierarchical': AgglomerativeClustering(n_clusters=3),
    'DBSCAN': DBSCAN(eps=0.5, min_samples=5)
}

results = {}
for name, method in clustering_methods.items():
    labels = method.fit_predict(X_scaled)
    if len(set(labels)) > 1:  # Check if clustering was successful
        silhouette = silhouette_score(X_scaled, labels)
        ch_score = calinski_harabasz_score(X_scaled, labels)
        results[name] = {'silhouette': silhouette, 'ch_score': ch_score}

# Compare results
comparison_df = pd.DataFrame(results).T
print("\nClustering Method Comparison:")
print(comparison_df)
```

### Dimensionality Reduction Evaluation

```python
# Reconstruction error for PCA -- use the reduced model: the full-rank `pca`
# keeps every component, so its reconstruction error is ~0 by construction
pca_reconstruction = pca_selected.inverse_transform(X_pca_selected)
reconstruction_error = np.mean(np.square(X_scaled - pca_reconstruction))
print(f"PCA Reconstruction Error: {reconstruction_error:.6f}")

# Information preservation
print(f"Variance explained by first 2 components: {np.sum(explained_variance_ratio[:2]):.3f}")
print(f"Variance explained by first 5 components: {np.sum(explained_variance_ratio[:5]):.3f}")
```
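
t-SNE has no `inverse_transform`, so reconstruction error does not apply to it. One option is scikit-learn's `trustworthiness` score, which measures how well local neighborhoods are preserved; a sketch reusing `X_scaled` and `X_tsne` from earlier:

```python
from sklearn.manifold import trustworthiness

# Trustworthiness lies in [0, 1]; 1.0 means neighborhoods in the embedding
# faithfully reflect neighborhoods in the original space
score = trustworthiness(X_scaled, X_tsne, n_neighbors=5)
print(f"t-SNE trustworthiness (k=5): {score:.3f}")
```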

## 🚀 Best Practices

### 1. **Data Preprocessing**

* Scale numerical features (especially for distance-based algorithms) — see the pipeline sketch after this list
* Handle missing values appropriately
* Remove or handle outliers
* Consider feature engineering
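
A minimal preprocessing sketch using scikit-learn's `Pipeline`; the median imputation and standard scaling here are illustrative choices, not the only reasonable ones:

```python
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Impute missing values, scale features, then cluster -- all in one object
preprocess_and_cluster = Pipeline([
    ('impute', SimpleImputer(strategy='median')),  # median is robust to outliers
    ('scale', StandardScaler()),
    ('cluster', KMeans(n_clusters=3, random_state=42, n_init=10)),
])
labels = preprocess_and_cluster.fit_predict(X)
```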

### 2. **Algorithm Selection**

* **K-Means**: Good for spherical clusters, large datasets
* **Hierarchical**: Good for small datasets, interpretable hierarchy
* **DBSCAN**: Good for irregular shapes, noise handling
* **PCA**: Good for linear relationships, noise reduction
* **t-SNE**: Good for visualization, non-linear patterns

### 3. **Parameter Tuning**

* **K-Means**: Number of clusters (K) — see the silhouette sketch after this list
* **DBSCAN**: Epsilon and min\_samples
* **PCA**: Number of components
* **t-SNE**: Perplexity and learning rate
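
As referenced in the K-Means item, one way to choose K beyond the elbow method is to maximize the silhouette score. A sketch, where the candidate range 2–10 is an arbitrary choice to adapt per dataset:

```python
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

best_k, best_score = None, -1.0
for k in range(2, 11):  # silhouette needs at least 2 clusters
    labels = KMeans(n_clusters=k, random_state=42, n_init=10).fit_predict(X_scaled)
    score = silhouette_score(X_scaled, labels)
    if score > best_score:
        best_k, best_score = k, score
print(f"Best K by silhouette: {best_k} (score={best_score:.3f})")
```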

### 4. **Evaluation Strategy**

* Use multiple evaluation metrics
* Consider business context
* Validate results with domain experts
* Test on different datasets

### 5. **Interpretation**

* Analyze cluster characteristics (see the profiling sketch after this list)
* Understand feature importance
* Validate with business logic
* Document findings clearly
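
One common way to analyze cluster characteristics is to profile each cluster's feature averages. A sketch, assuming a hypothetical pandas DataFrame `df_features` holding the original (unscaled) features and `cluster_labels` from one of the models above:

```python
import pandas as pd

# df_features: hypothetical DataFrame of the original features, one row per sample
profile = df_features.assign(cluster=cluster_labels).groupby('cluster').mean()
print(profile.round(2))  # per-cluster feature means show what sets each cluster apart
```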

## 📚 References & Resources

### 📖 Books

* [**"Pattern Recognition and Machine Learning"**](https://www.microsoft.com/en-us/research/people/cmbishop/) by Christopher Bishop
* [**"The Elements of Statistical Learning"**](https://web.stanford.edu/~hastie/ElemStatLearn/) by Trevor Hastie, Robert Tibshirani, Jerome Friedman
* [**"Data Mining: Concepts and Techniques"**](https://www.elsevier.com/books/data-mining-concepts-and-techniques/han/978-0-12-381479-1) by Jiawei Han, Micheline Kamber, Jian Pei

### 🎓 Courses

* [**Coursera Machine Learning**](https://www.coursera.org/learn/machine-learning) by Andrew Ng
* [**Stanford CS229**](https://cs229.stanford.edu/) - Machine Learning Course
* [**MIT 6.036**](https://introml.mit.edu/) - Introduction to Machine Learning

### 📰 Research Papers

* [**"A Density-Based Algorithm for Discovering Clusters"**](https://www.aaai.org/Papers/KDD/1996/KDD96-037.pdf) by Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu
* [**"Principal Component Analysis"**](https://www.jstor.org/stable/2346914) by Karl Pearson
* [**"Visualizing Data using t-SNE"**](https://www.jmlr.org/papers/volume9/vandermaaten08a/vandermaaten08a.pdf) by Laurens van der Maaten and Geoffrey Hinton

### 🐙 GitHub Repositories

* [**Scikit-learn Clustering Examples**](https://github.com/scikit-learn/scikit-learn/tree/main/examples/cluster)
* [**ML-From-Scratch**](https://github.com/eriklindernoren/ML-From-Scratch) - Implementation of clustering algorithms
* [**Awesome Clustering**](https://github.com/edouardfouche/awesome-clustering) - Curated clustering resources

### 📊 Datasets

* [**UCI Machine Learning Repository**](https://archive.ics.uci.edu/ml/)
* [**Kaggle Datasets**](https://www.kaggle.com/datasets)
* [**Scikit-learn Built-in Datasets**](https://scikit-learn.org/stable/datasets.html)

## 🔗 Related Topics

* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)
* [🔢 Supervised Learning](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals/supervised-learning)
* [🔄 Reinforcement Learning](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals/reinforcement-learning)
* [🐍 Python ML Tools](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml)

***

*Last updated: December 2024*
