# Scikit-learn

## 📚 Overview

Scikit-learn is a powerful, user-friendly machine learning library for Python. It provides a wide range of algorithms for supervised and unsupervised learning, together with tools for data preprocessing, model selection, and evaluation.
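
Every scikit-learn estimator shares the same interface (`fit`, `predict`, `score`), which is what makes the library easy to pick up. A minimal end-to-end sketch using the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Every scikit-learn estimator follows the same pattern:
# construct -> fit(X, y) -> predict/score on new data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)             # learn parameters from training data
accuracy = model.score(X_test, y_test)  # mean accuracy on held-out data
print(f"Test accuracy: {accuracy:.2f}")
```

Swapping `LogisticRegression` for any other classifier leaves the rest of the code unchanged.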

## 🚀 Getting Started

### 1. **Installation & Basic Setup**

```python
import sklearn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import datasets, metrics, model_selection, preprocessing
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC, SVR
from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor
from sklearn.cluster import KMeans, DBSCAN
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Set random seed for reproducibility
np.random.seed(42)

print("Scikit-learn version:", sklearn.__version__)
```

### 2. **Loading Built-in Datasets**

```python
# Load various built-in datasets
iris = datasets.load_iris()
breast_cancer = datasets.load_breast_cancer()
diabetes = datasets.load_diabetes()
digits = datasets.load_digits()

print("Iris dataset shape:", iris.data.shape)
print("Breast cancer dataset shape:", breast_cancer.data.shape)
print("Diabetes dataset shape:", diabetes.data.shape)
print("Digits dataset shape:", digits.data.shape)

# Create a simple dataset for demonstration
X, y = datasets.make_classification(
    n_samples=1000, 
    n_features=20, 
    n_informative=15, 
    n_redundant=5, 
    random_state=42
)

print(f"Generated dataset: {X.shape}, target: {y.shape}")
print(f"Class distribution: {np.bincount(y)}")
```

## 🔢 Data Preprocessing

### 1. **Feature Scaling & Normalization**

```python
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler

# Create sample data
X_sample = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])

print("Original data:")
print(X_sample)
print(f"Mean: {np.mean(X_sample, axis=0)}")
print(f"Std: {np.std(X_sample, axis=0)}")

# StandardScaler (Z-score normalization)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X_sample)

print("\nStandardScaler (Z-score):")
print(X_scaled)
print(f"Mean: {np.mean(X_scaled, axis=0)}")
print(f"Std: {np.std(X_scaled, axis=0)}")

# MinMaxScaler (0-1 scaling)
minmax_scaler = MinMaxScaler()
X_minmax = minmax_scaler.fit_transform(X_sample)

print("\nMinMaxScaler (0-1):")
print(X_minmax)
print(f"Min: {np.min(X_minmax, axis=0)}")
print(f"Max: {np.max(X_minmax, axis=0)}")

# RobustScaler (robust to outliers)
robust_scaler = RobustScaler()
X_robust = robust_scaler.fit_transform(X_sample)

print("\nRobustScaler:")
print(X_robust)
```

### 2. **Encoding Categorical Variables**

```python
from sklearn.preprocessing import LabelEncoder, OneHotEncoder, OrdinalEncoder

# Sample categorical data
categorical_data = np.array([
    ['red', 'small', 'circle'],
    ['blue', 'medium', 'square'],
    ['red', 'large', 'triangle'],
    ['green', 'small', 'circle']
])

print("Original categorical data:")
print(categorical_data)

# Label Encoding (intended for target labels; applied to a feature
# column here purely for demonstration)
label_encoder = LabelEncoder()
y_encoded = label_encoder.fit_transform(categorical_data[:, 0])  # Encode colors
print(f"\nLabel encoded colors: {y_encoded}")
print(f"Classes: {label_encoder.classes_}")

# One-Hot Encoding (for features)
onehot_encoder = OneHotEncoder(sparse_output=False)
X_onehot = onehot_encoder.fit_transform(categorical_data)
print(f"\nOne-hot encoded shape: {X_onehot.shape}")
print("One-hot encoded data:")
print(X_onehot)

# Ordinal Encoding (for ordinal categories)
ordinal_encoder = OrdinalEncoder()
X_ordinal = ordinal_encoder.fit_transform(categorical_data)
print(f"\nOrdinal encoded data:")
print(X_ordinal)
```

### 3. **Handling Missing Values**

```python
from sklearn.impute import SimpleImputer

# Create data with missing values
X_with_missing = np.array([
    [1, 2, np.nan],
    [4, np.nan, 6],
    [7, 8, 9],
    [np.nan, 11, 12]
])

print("Data with missing values:")
print(X_with_missing)

# Mean imputation
mean_imputer = SimpleImputer(strategy='mean')
X_mean_imputed = mean_imputer.fit_transform(X_with_missing)

print("\nMean imputation:")
print(X_mean_imputed)

# Median imputation
median_imputer = SimpleImputer(strategy='median')
X_median_imputed = median_imputer.fit_transform(X_with_missing)

print("\nMedian imputation:")
print(X_median_imputed)

# Most frequent imputation
freq_imputer = SimpleImputer(strategy='most_frequent')
X_freq_imputed = freq_imputer.fit_transform(X_with_missing)

print("\nMost frequent imputation:")
print(X_freq_imputed)
```

## 🎯 Supervised Learning

### 1. **Classification**

```python
# Prepare data
X_class = iris.data
y_class = iris.target

# Split data
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X_class, y_class, test_size=0.3, random_state=42, stratify=y_class
)

print(f"Training set: {X_train.shape}")
print(f"Test set: {X_test.shape}")

# Scale features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train multiple classifiers
classifiers = {
    'Logistic Regression': LogisticRegression(random_state=42, max_iter=1000),
    'Decision Tree': DecisionTreeClassifier(random_state=42),
    'Random Forest': RandomForestClassifier(n_estimators=100, random_state=42),
    'SVM': SVC(random_state=42),
    'KNN': KNeighborsClassifier(n_neighbors=5)
}

# Train and evaluate
results = {}
for name, clf in classifiers.items():
    # Train
    clf.fit(X_train_scaled, y_train)
    
    # Predict
    y_pred = clf.predict(X_test_scaled)
    
    # Evaluate
    accuracy = metrics.accuracy_score(y_test, y_pred)
    f1 = metrics.f1_score(y_test, y_pred, average='weighted')
    
    results[name] = {'accuracy': accuracy, 'f1': f1}
    
    print(f"{name}:")
    print(f"  Accuracy: {accuracy:.4f}")
    print(f"  F1-Score: {f1:.4f}")

# Plot results
plt.figure(figsize=(12, 5))

plt.subplot(1, 2, 1)
names = list(results.keys())
accuracies = [results[name]['accuracy'] for name in names]
plt.bar(names, accuracies, color='skyblue', alpha=0.7)
plt.title('Classification Accuracy Comparison')
plt.ylabel('Accuracy')
plt.xticks(rotation=45)
plt.ylim(0, 1)

plt.subplot(1, 2, 2)
f1_scores = [results[name]['f1'] for name in names]
plt.bar(names, f1_scores, color='lightcoral', alpha=0.7)
plt.title('Classification F1-Score Comparison')
plt.ylabel('F1-Score')
plt.xticks(rotation=45)
plt.ylim(0, 1)

plt.tight_layout()
plt.show()
```

### 2. **Regression**

```python
# Prepare regression data
X_reg = diabetes.data
y_reg = diabetes.target

# Split data
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X_reg, y_reg, test_size=0.3, random_state=42
)

# Scale features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Train multiple regressors
regressors = {
    'Linear Regression': LinearRegression(),
    'Decision Tree': DecisionTreeRegressor(random_state=42),
    'Random Forest': RandomForestRegressor(n_estimators=100, random_state=42),
    'SVR': SVR(),
    'KNN': KNeighborsRegressor(n_neighbors=5)
}

# Train and evaluate
reg_results = {}
for name, reg in regressors.items():
    # Train
    reg.fit(X_train_scaled, y_train)
    
    # Predict
    y_pred = reg.predict(X_test_scaled)
    
    # Evaluate
    mse = metrics.mean_squared_error(y_test, y_pred)
    r2 = metrics.r2_score(y_test, y_pred)
    mae = metrics.mean_absolute_error(y_test, y_pred)
    
    reg_results[name] = {'mse': mse, 'r2': r2, 'mae': mae}
    
    print(f"{name}:")
    print(f"  MSE: {mse:.4f}")
    print(f"  R²: {r2:.4f}")
    print(f"  MAE: {mae:.4f}")

# Plot actual vs predicted for best model
best_model_name = max(reg_results.keys(), key=lambda x: reg_results[x]['r2'])
best_model = regressors[best_model_name]
y_pred_best = best_model.predict(X_test_scaled)

plt.figure(figsize=(10, 6))
plt.scatter(y_test, y_pred_best, alpha=0.6)
plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', lw=2)
plt.xlabel('Actual Values')
plt.ylabel('Predicted Values')
plt.title(f'Actual vs Predicted - {best_model_name}')
plt.grid(True, alpha=0.3)
plt.show()
```

### 3. **Model Selection & Cross-Validation**

```python
from sklearn.model_selection import cross_val_score, GridSearchCV, RandomizedSearchCV

# Re-split the classification (iris) data — the regression section above
# reused the X_train/y_train names for the diabetes data
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42, stratify=iris.target
)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Cross-validation
cv_scores = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=42),
    X_train_scaled, y_train, cv=5, scoring='accuracy'
)

print("Cross-validation scores:", cv_scores)
print(f"Mean CV accuracy: {cv_scores.mean():.4f} (+/- {cv_scores.std() * 2:.4f})")

# Grid search for hyperparameter tuning
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [10, 20, None],
    'min_samples_split': [2, 5, 10],
    'min_samples_leaf': [1, 2, 4]
}

grid_search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid, cv=3, scoring='accuracy', n_jobs=-1
)

grid_search.fit(X_train_scaled, y_train)

print(f"\nBest parameters: {grid_search.best_params_}")
print(f"Best cross-validation score: {grid_search.best_score_:.4f}")

# Random search (faster for large parameter spaces)
random_search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid, n_iter=20, cv=3, scoring='accuracy', random_state=42, n_jobs=-1
)

random_search.fit(X_train_scaled, y_train)

print(f"\nRandom search best score: {random_search.best_score_:.4f}")
```

## 🎨 Unsupervised Learning

### 1. **Clustering**

```python
# Prepare data for clustering
X_cluster = iris.data[:, [0, 1]]  # Use first two features for visualization

# K-Means clustering
kmeans = KMeans(n_clusters=3, random_state=42)
kmeans_labels = kmeans.fit_predict(X_cluster)

# DBSCAN clustering
dbscan = DBSCAN(eps=0.5, min_samples=5)
dbscan_labels = dbscan.fit_predict(X_cluster)

# Evaluate clustering (note: DBSCAN labels noise points as -1, which the
# silhouette score treats as a cluster of its own)
kmeans_silhouette = metrics.silhouette_score(X_cluster, kmeans_labels)
dbscan_silhouette = metrics.silhouette_score(X_cluster, dbscan_labels)

print(f"K-Means silhouette score: {kmeans_silhouette:.4f}")
print(f"DBSCAN silhouette score: {dbscan_silhouette:.4f}")

# Visualize clustering results
plt.figure(figsize=(15, 5))

plt.subplot(1, 3, 1)
plt.scatter(X_cluster[:, 0], X_cluster[:, 1], c=iris.target, cmap='viridis')
plt.title('Original Data (True Labels)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')

plt.subplot(1, 3, 2)
plt.scatter(X_cluster[:, 0], X_cluster[:, 1], c=kmeans_labels, cmap='viridis')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], 
           c='red', marker='x', s=200, linewidths=3, label='Centroids')
plt.title('K-Means Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend()

plt.subplot(1, 3, 3)
plt.scatter(X_cluster[:, 0], X_cluster[:, 1], c=dbscan_labels, cmap='viridis')
plt.title('DBSCAN Clustering')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')

plt.tight_layout()
plt.show()
```

### 2. **Dimensionality Reduction**

```python
# PCA for dimensionality reduction
pca = PCA(n_components=2)
X_pca = pca.fit_transform(iris.data)

print(f"Original shape: {iris.data.shape}")
print(f"PCA shape: {X_pca.shape}")
print(f"Explained variance ratio: {pca.explained_variance_ratio_}")
print(f"Total explained variance: {sum(pca.explained_variance_ratio_):.4f}")

# t-SNE for visualization
tsne = TSNE(n_components=2, random_state=42, perplexity=30)
X_tsne = tsne.fit_transform(iris.data)

# Visualize dimensionality reduction
plt.figure(figsize=(15, 5))

plt.subplot(1, 3, 1)
plt.scatter(iris.data[:, 0], iris.data[:, 1], c=iris.target, cmap='viridis')
plt.title('Original Data (First 2 Features)')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')

plt.subplot(1, 3, 2)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=iris.target, cmap='viridis')
plt.title('PCA (2 Components)')
plt.xlabel('Principal Component 1')
plt.ylabel('Principal Component 2')

plt.subplot(1, 3, 3)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=iris.target, cmap='viridis')
plt.title('t-SNE (2 Components)')
plt.xlabel('t-SNE 1')
plt.ylabel('t-SNE 2')

plt.tight_layout()
plt.show()

# PCA with different numbers of components
n_components_range = range(1, iris.data.shape[1] + 1)
explained_variance_ratios = []

for n in n_components_range:
    pca_temp = PCA(n_components=n)
    pca_temp.fit(iris.data)
    explained_variance_ratios.append(sum(pca_temp.explained_variance_ratio_))

plt.figure(figsize=(10, 6))
plt.plot(n_components_range, explained_variance_ratios, 'bo-')
plt.xlabel('Number of Components')
plt.ylabel('Cumulative Explained Variance Ratio')
plt.title('PCA Explained Variance vs Number of Components')
plt.grid(True, alpha=0.3)
plt.show()
```

## 📊 Model Evaluation

### 1. **Classification Metrics**

```python
# Detailed classification evaluation (re-split the iris data so this
# block is self-contained)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    iris.data, iris.target, test_size=0.3, random_state=42, stratify=iris.target
)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

best_clf = RandomForestClassifier(n_estimators=100, random_state=42)
best_clf.fit(X_train_scaled, y_train)
y_pred = best_clf.predict(X_test_scaled)
y_pred_proba = best_clf.predict_proba(X_test_scaled)

# Confusion matrix
cm = metrics.confusion_matrix(y_test, y_pred)
plt.figure(figsize=(8, 6))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', 
            xticklabels=iris.target_names, yticklabels=iris.target_names)
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()

# Classification report
print("Classification Report:")
print(metrics.classification_report(y_test, y_pred, target_names=iris.target_names))

# ROC curve (multi-class problem: binarize the labels and plot class 0 vs rest)
from sklearn.preprocessing import label_binarize
y_bin = label_binarize(y_test, classes=[0, 1, 2])
y_bin_1 = y_bin[:, 0]  # Class 0 vs rest
y_pred_proba_1 = y_pred_proba[:, 0]

fpr, tpr, _ = metrics.roc_curve(y_bin_1, y_pred_proba_1)
roc_auc = metrics.auc(fpr, tpr)

plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (AUC = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve (Class 0 vs Rest)')
plt.legend(loc="lower right")
plt.grid(True, alpha=0.3)
plt.show()
```

### 2. **Regression Metrics**

```python
# Detailed regression evaluation (re-split the diabetes data so this
# block is self-contained)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    diabetes.data, diabetes.target, test_size=0.3, random_state=42
)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

best_reg = RandomForestRegressor(n_estimators=100, random_state=42)
best_reg.fit(X_train_scaled, y_train)
y_pred_reg = best_reg.predict(X_test_scaled)

# Residual plot
residuals = y_test - y_pred_reg

plt.figure(figsize=(15, 5))

plt.subplot(1, 3, 1)
plt.scatter(y_pred_reg, residuals, alpha=0.6)
plt.axhline(y=0, color='r', linestyle='--')
plt.xlabel('Predicted Values')
plt.ylabel('Residuals')
plt.title('Residual Plot')
plt.grid(True, alpha=0.3)

plt.subplot(1, 3, 2)
plt.hist(residuals, bins=30, alpha=0.7, color='skyblue', edgecolor='black')
plt.xlabel('Residuals')
plt.ylabel('Frequency')
plt.title('Residual Distribution')
plt.grid(True, alpha=0.3)

plt.subplot(1, 3, 3)
from scipy import stats
stats.probplot(residuals, dist="norm", plot=plt)
plt.title('Q-Q Plot (Normality Test)')
plt.grid(True, alpha=0.3)

plt.tight_layout()
plt.show()

# Calculate all regression metrics
regression_metrics = {
    'MSE': metrics.mean_squared_error(y_test, y_pred_reg),
    'RMSE': np.sqrt(metrics.mean_squared_error(y_test, y_pred_reg)),
    'MAE': metrics.mean_absolute_error(y_test, y_pred_reg),
    'R²': metrics.r2_score(y_test, y_pred_reg),
    'Explained Variance': metrics.explained_variance_score(y_test, y_pred_reg)
}

print("Regression Metrics:")
for metric, value in regression_metrics.items():
    print(f"  {metric}: {value:.4f}")
```

## 🔧 Advanced Features

### 1. **Pipelines & Feature Selection**

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif

# Use the 20-feature generated dataset (X, y) from earlier, so that
# SelectKBest(k=10) has enough features to choose from
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# Create a pipeline: scaling, feature selection, then classification
pipeline = Pipeline([
    ('scaler', StandardScaler()),
    ('feature_selection', SelectKBest(score_func=f_classif, k=10)),
    ('classifier', RandomForestClassifier(n_estimators=100, random_state=42))
])

# Fit and evaluate pipeline
pipeline.fit(X_train, y_train)
pipeline_score = pipeline.score(X_test, y_test)
print(f"Pipeline accuracy: {pipeline_score:.4f}")

# Feature selection results
feature_scores = pipeline.named_steps['feature_selection'].scores_
selected_features = pipeline.named_steps['feature_selection'].get_support()

print(f"Feature scores: {feature_scores}")
print(f"Selected features: {selected_features}")
```

### 2. **Custom Transformers**

```python
from sklearn.base import BaseEstimator, TransformerMixin

class CustomScaler(BaseEstimator, TransformerMixin):
    """Wraps one of three scalers, selected by the `method` parameter."""

    def __init__(self, method='standard'):
        self.method = method

    def fit(self, X, y=None):
        if self.method == 'standard':
            self.scaler_ = StandardScaler()
        elif self.method == 'minmax':
            self.scaler_ = MinMaxScaler()
        elif self.method == 'robust':
            self.scaler_ = RobustScaler()
        else:
            raise ValueError(f"Unknown method: {self.method!r}")

        self.scaler_.fit(X)
        return self

    def transform(self, X):
        return self.scaler_.transform(X)

# Use the custom transformer on the generated dataset (X, y)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

custom_pipeline = Pipeline([
    ('custom_scaler', CustomScaler(method='robust')),
    ('classifier', RandomForestClassifier(n_estimators=100, random_state=42))
])

custom_pipeline.fit(X_train, y_train)
custom_score = custom_pipeline.score(X_test, y_test)
print(f"Custom pipeline accuracy: {custom_score:.4f}")
```

## 🚀 Best Practices

### 1. **Data Preparation**

* **Always scale features** for algorithms sensitive to scale
* **Handle missing values** before training
* **Encode categorical variables** appropriately
* **Split data early** to avoid data leakage
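
To illustrate the leakage point above: fit scalers (and imputers) on the training split only, then reuse the fitted transformer on the test split. A minimal sketch with synthetic data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
y = rng.integers(0, 2, size=100)

# Split FIRST, then fit preprocessing on the training portion only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # statistics learned from train only
X_test_scaled = scaler.transform(X_test)        # same statistics reused, no refit
```

The training portion ends up exactly zero-mean/unit-variance, while the test portion is only approximately so — which is the correct behaviour, since the test set must not influence the learned statistics.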

### 2. **Model Training**

* **Use cross-validation** for reliable performance estimates
* **Tune hyperparameters** systematically
* **Start simple** and increase complexity gradually
* **Monitor for overfitting** using validation sets
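
One simple way to monitor overfitting is to compare training and validation scores across cross-validation folds; `cross_validate` with `return_train_score=True` gives both in one call. A sketch:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)

# return_train_score=True exposes the fit quality on seen vs unseen folds
cv = cross_validate(
    RandomForestClassifier(n_estimators=100, random_state=42),
    X, y, cv=5, return_train_score=True
)

train_mean = cv["train_score"].mean()
test_mean = cv["test_score"].mean()
print(f"Train accuracy: {train_mean:.3f}, CV accuracy: {test_mean:.3f}")
# A train score near 1.0 combined with a much lower CV score
# indicates the model is memorising rather than generalising.
```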

### 3. **Evaluation**

* **Use appropriate metrics** for your problem type
* **Visualize results** to understand model behavior
* **Compare multiple models** before final selection
* **Consider business context** when choosing metrics
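
On the "appropriate metrics" point: plain accuracy can be misleading on imbalanced data, since a model that always predicts the majority class still scores high. `balanced_accuracy_score` averages per-class recall instead, which exposes the problem. A sketch with a 9:1 class imbalance:

```python
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# 90 negatives, 10 positives; a "model" that always predicts the majority class
y_true = np.array([0] * 90 + [1] * 10)
y_pred = np.zeros(100, dtype=int)

acc = accuracy_score(y_true, y_pred)               # 0.90 — looks great
bal_acc = balanced_accuracy_score(y_true, y_pred)  # 0.50 — no better than chance
print(f"Accuracy: {acc:.2f}, Balanced accuracy: {bal_acc:.2f}")
```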

## 📚 References & Resources

### 📖 Documentation

* [**Scikit-learn Official Documentation**](https://scikit-learn.org/stable/)
* [**User Guide**](https://scikit-learn.org/stable/user_guide.html)
* [**API Reference**](https://scikit-learn.org/stable/modules/classes.html)
* [**Examples**](https://scikit-learn.org/stable/auto_examples/index.html)

### 🎓 Tutorials & Courses

* [**Scikit-learn Tutorial**](https://scikit-learn.org/stable/tutorial/)
* [**Machine Learning with Scikit-learn**](https://scikit-learn.org/stable/tutorial/basic/tutorial.html)
* [**DataCamp Scikit-learn Course**](https://www.datacamp.com/courses/supervised-learning-with-scikit-learn)
* [**Real Python ML Tutorial**](https://realpython.com/tutorials/machine-learning/)

### 📰 Articles & Blogs

* [**Scikit-learn Best Practices**](https://scikit-learn.org/stable/modules/preprocessing.html)
* [**Model Selection Guide**](https://scikit-learn.org/stable/model_selection.html)
* [**Feature Engineering Tips**](https://scikit-learn.org/stable/modules/feature_extraction.html)

### 🐙 GitHub Repositories

* [**Scikit-learn Source Code**](https://github.com/scikit-learn/scikit-learn)
* [**Scikit-learn Examples**](https://github.com/scikit-learn/scikit-learn/tree/main/examples)
* [**Scikit-learn Contrib**](https://github.com/scikit-learn-contrib)

### 📊 Datasets & Benchmarks

* [**Scikit-learn Built-in Datasets**](https://scikit-learn.org/stable/datasets/index.html)
* [**UCI Machine Learning Repository**](https://archive.ics.uci.edu/ml/index.php)
* [**OpenML**](https://www.openml.org/)

## 🔗 Related Topics

* [🐍 Python ML Tools](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml)
* [📊 NumPy & Pandas](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml/numpy-pandas)
* [📈 Visualization](https://github.com/mahbubzulkarnain/catatan-seekor-the-series/blob/master/machine_learning/python-ml/visualization.md)
* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)

***

*Last updated: December 2024* · *Contributors: \[Your Name]*
