# Catatan Seekor: Fine Tuning

## 📚 Overview

Fine-tuning is a technique for adapting a pre-trained model (such as GPT, BERT, or vision models) to a specific task or domain. It lets us leverage the knowledge the model has already learned from large-scale training data while tailoring it to our specific needs.

## 🎯 Why Fine-tuning?

### 1. **Transfer Learning Benefits**

* **Efficiency**: No need to train from scratch
* **Performance**: Usually better than training from scratch
* **Data Requirements**: Needs far less training data
* **Time Savings**: Much shorter training time

### 2. **Use Cases**

* Domain adaptation (e.g., medical, legal, technical text)
* Task-specific optimization (e.g., sentiment analysis, classification)
* Language adaptation (e.g., Indonesian, Javanese)
* Style transfer (e.g., formal vs informal writing)

## 🏗️ Fine-tuning Approaches

### 1. **Full Fine-tuning**

Updates all of the model's parameters.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, TrainingArguments, Trainer
from datasets import Dataset

# Load pre-trained model
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Prepare data
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True, max_length=512)

# Load and tokenize dataset
dataset = Dataset.from_dict({
    "text": ["I love this movie", "I hate this movie", "Great film!", "Terrible film!"],
    "label": [1, 0, 1, 0]
})

tokenized_dataset = dataset.map(tokenize_function, batched=True)

# Training arguments
training_args = TrainingArguments(
    output_dir="./results",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
)

# Initialize trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    eval_dataset=tokenized_dataset,
)

# Fine-tune
trainer.train()
```

**Pros:**

* Maximum flexibility
* Best performance potential
* Full model adaptation

**Cons:**

* High computational cost
* Risk of catastrophic forgetting
* Requires more data
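The catastrophic-forgetting risk is often mitigated by freezing most of the network and updating only the top layers, a middle ground between full fine-tuning and the PEFT methods below. A minimal sketch of the freezing pattern with a toy `torch` model (the two-layer network is purely illustrative, not a real checkpoint):

```python
import torch.nn as nn

# Toy two-block network standing in for a pre-trained backbone + head
model = nn.Sequential(
    nn.Linear(16, 16),  # "lower" layer: keep frozen
    nn.ReLU(),
    nn.Linear(16, 2),   # "head": fine-tune
)

# Freeze everything, then re-enable gradients for the head only
for param in model.parameters():
    param.requires_grad = False
for param in model[2].parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total}")  # trainable: 34 / 306
```

The same pattern applies to a real `transformers` model: iterate over `model.named_parameters()` and set `requires_grad = False` for the layers you want to keep intact.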

### 2. **Parameter-Efficient Fine-tuning (PEFT)**

#### LoRA (Low-Rank Adaptation)

```python
from peft import LoraConfig, get_peft_model, TaskType

# LoRA configuration
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    inference_mode=False,
    r=16,  # Rank
    lora_alpha=32,  # Alpha parameter
    lora_dropout=0.1,  # Dropout probability
    target_modules=["query", "value"]  # Target attention modules
)

# Apply LoRA to model
model = get_peft_model(model, lora_config)

# Print trainable parameters
model.print_trainable_parameters()
# Example output (exact figures vary with model and config):
# trainable params: 1,769,472 || all params: 109,482,240 || trainable%: 1.61%
```
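Under the hood, LoRA keeps the pre-trained weight W frozen and learns a low-rank update scaled by `lora_alpha / r`. A rough `numpy` sketch of the idea (shapes chosen to match a BERT-base attention matrix; this illustrates the math, not PEFT's actual implementation):

```python
import numpy as np

d, r, alpha = 768, 16, 32
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))          # frozen pre-trained weight
A = rng.normal(size=(r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                 # trainable up-projection, zero-init

# Effective weight in the forward pass: W + (alpha / r) * B @ A
# With B zero-initialized, training starts exactly at the pre-trained W.
W_eff = W + (alpha / r) * B @ A

full_params = W.size            # 589,824 for one 768x768 matrix
lora_params = A.size + B.size   # 24,576, about 4% of the full matrix
print(full_params, lora_params)
```

Only A and B receive gradients, which is why the trainable fraction reported by `print_trainable_parameters()` is so small.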

#### Prefix Tuning

```python
from peft import PrefixTuningConfig, get_peft_model, TaskType

# Prefix tuning configuration
prefix_config = PrefixTuningConfig(
    task_type=TaskType.SEQ_CLS,
    num_virtual_tokens=20,  # Number of prefix tokens
    encoder_hidden_size=768,  # Hidden size of encoder
    prefix_projection=False,  # Whether to project prefix embeddings
)

# Apply prefix tuning
model = get_peft_model(model, prefix_config)
```
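A back-of-the-envelope estimate of what prefix tuning actually trains: each virtual token contributes one key and one value vector per layer. For a BERT-base-sized encoder (12 layers, hidden size 768, assuming `prefix_projection=False`):

```python
num_virtual_tokens = 20
num_layers = 12
hidden_size = 768

# One key vector and one value vector per virtual token, per layer
prefix_params = num_virtual_tokens * num_layers * 2 * hidden_size
print(prefix_params)  # 368640, well under 1% of BERT-base's ~110M parameters
```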

#### AdaLoRA

```python
from peft import AdaLoraConfig, get_peft_model, TaskType

# AdaLoRA configuration
adalora_config = AdaLoraConfig(
    task_type=TaskType.SEQ_CLS,
    inference_mode=False,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query", "value"],
    init_r=12,  # Initial rank
    target_r=8,  # Target rank
    beta1=0.85,  # Beta1 for importance update
    beta2=0.85,  # Beta2 for importance update
    tinit=200,  # Initial warmup steps
    tfinal=1000,  # Final steps
    deltaT=10,  # Update frequency
    orth_reg_weight=0.5,  # Orthogonal regularization weight
)

model = get_peft_model(model, adalora_config)
```

### 3. **Prompt Tuning**

Updates only the prompt embeddings.

```python
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model, TaskType

# Prompt tuning configuration
# TEXT init seeds the virtual tokens from the embeddings of a real prompt,
# so PEFT needs a tokenizer to encode the init text.
prompt_config = PromptTuningConfig(
    task_type=TaskType.SEQ_CLS,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=20,
    prompt_tuning_init_text="Classify the sentiment of this text:",
    tokenizer_name_or_path="bert-base-uncased",
)

model = get_peft_model(model, prompt_config)
```
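Prompt tuning is lighter still: the only trainable tensor is the virtual-token embedding matrix, so for the configuration above (20 virtual tokens, hidden size 768) the count is simply:

```python
num_virtual_tokens = 20
token_dim = 768  # hidden size of the base model

# Only the virtual-token embeddings are trained
prompt_params = num_virtual_tokens * token_dim
print(prompt_params)  # 15360
```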

## 🔧 Implementation Examples

### Text Classification Fine-tuning

```python
import torch
from transformers import (
    AutoModelForSequenceClassification, 
    AutoTokenizer, 
    TrainingArguments, 
    Trainer,
    DataCollatorWithPadding
)
from datasets import Dataset
import evaluate
import numpy as np

# Load model and tokenizer
model_name = "microsoft/DialoGPT-medium"
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Add padding token if not present (GPT-2-based tokenizers have none)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.pad_token_id

# Prepare dataset
texts = [
    "I love this product, it's amazing!",
    "This is terrible, I hate it.",
    "It's okay, nothing special.",
    "Fantastic service, highly recommended!",
    "Poor quality, very disappointed."
]

labels = [2, 0, 1, 2, 0]  # 0: negative, 1: neutral, 2: positive

# Create dataset
dataset = Dataset.from_dict({
    "text": texts,
    "label": labels
})

# Tokenization function (no return_tensors: map() expects plain lists,
# and the data collator below handles padding and tensor conversion)
def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=128,
    )

# Tokenize dataset (keep the "label" column; the Trainer needs it)
tokenized_dataset = dataset.map(tokenize_function, batched=True, remove_columns=["text"])

# Data collator
data_collator = DataCollatorWithPadding(tokenizer=tokenizer)

# Training arguments
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=5,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",
    greater_is_better=True,
    push_to_hub=False,
)

# Evaluation function
def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    predictions = np.argmax(predictions, axis=1)
    return {"accuracy": (predictions == labels).astype(np.float32).mean().item()}

# Initialize trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    eval_dataset=tokenized_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics,
)

# Fine-tune
trainer.train()

# Save model
trainer.save_model("./final_model")
tokenizer.save_pretrained("./final_model")
```

### LoRA Fine-tuning for Chat Models

```python
from transformers import (
    AutoModelForCausalLM, 
    AutoTokenizer, 
    TrainingArguments, 
    Trainer,
    DataCollatorForLanguageModeling
)
from peft import LoraConfig, get_peft_model, TaskType
from datasets import Dataset
import torch

# Load model and tokenizer
model_name = "microsoft/DialoGPT-medium"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Add padding token
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# LoRA configuration
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    inference_mode=False,
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["c_attn", "c_proj"]  # attention modules in GPT-2-style models
)

# Apply LoRA
model = get_peft_model(model, lora_config)

# Prepare training data
training_data = [
    "User: How are you?\nAssistant: I'm doing well, thank you for asking!",
    "User: What's the weather like?\nAssistant: I don't have access to real-time weather information.",
    "User: Tell me a joke\nAssistant: Why don't scientists trust atoms? Because they make up everything!"
]

# Tokenize data (plain lists; the collator converts to tensors and builds labels)
def tokenize_function(examples):
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=256,
    )

dataset = Dataset.from_dict({"text": training_data})
tokenized_dataset = dataset.map(tokenize_function, batched=True, remove_columns=["text"])

# Data collator
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer,
    mlm=False,
)

# Training arguments
training_args = TrainingArguments(
    output_dir="./lora_fine_tuned",
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    num_train_epochs=3,
    weight_decay=0.01,
    evaluation_strategy="no",
    save_strategy="epoch",
    push_to_hub=False,
)

# Initialize trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_dataset,
    data_collator=data_collator,
)

# Fine-tune
trainer.train()

# Save LoRA weights
trainer.save_model("./lora_weights")
```

## 📊 Evaluation & Monitoring

### Training Metrics

```python
import matplotlib.pyplot as plt

# Plot training loss
def plot_training_metrics(trainer):
    # Get training history
    history = trainer.state.log_history
    
    # Extract metrics
    train_loss = [x['loss'] for x in history if 'loss' in x]
    eval_loss = [x['eval_loss'] for x in history if 'eval_loss' in x]
    eval_accuracy = [x['eval_accuracy'] for x in history if 'eval_accuracy' in x]
    
    # Create plots
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
    
    # Training loss
    ax1.plot(train_loss, label='Training Loss')
    ax1.set_title('Training Loss')
    ax1.set_xlabel('Step')
    ax1.set_ylabel('Loss')
    ax1.legend()
    
    # Evaluation metrics
    if eval_loss:
        ax2.plot(eval_loss, label='Evaluation Loss', color='red')
    if eval_accuracy:
        ax2_twin = ax2.twinx()
        ax2_twin.plot(eval_accuracy, label='Accuracy', color='green')
        ax2_twin.set_ylabel('Accuracy')
        ax2_twin.legend(loc='upper right')
    
    ax2.set_title('Evaluation Metrics')
    ax2.set_xlabel('Step')
    ax2.set_ylabel('Loss')
    ax2.legend(loc='upper left')
    
    plt.tight_layout()
    plt.show()

# Use the function
plot_training_metrics(trainer)
```

### Model Performance Testing

```python
def test_fine_tuned_model(model, tokenizer, test_texts):
    model.eval()
    results = []
    
    for text in test_texts:
        # Tokenize input
        inputs = tokenizer(
            text, 
            return_tensors="pt", 
            padding=True, 
            truncation=True, 
            max_length=128
        )
        
        # Get prediction
        with torch.no_grad():
            outputs = model(**inputs)
            predictions = torch.softmax(outputs.logits, dim=-1)
            predicted_class = torch.argmax(predictions, dim=-1).item()
            confidence = predictions.max().item()
        
        results.append({
            'text': text,
            'predicted_class': predicted_class,
            'confidence': confidence,
            'predictions': predictions.tolist()[0]
        })
    
    return results

# Test the model
test_texts = [
    "This product exceeded my expectations!",
    "I'm not satisfied with the quality.",
    "It's an average product, nothing special."
]

results = test_fine_tuned_model(model, tokenizer, test_texts)

for result in results:
    print(f"Text: {result['text']}")
    print(f"Predicted Class: {result['predicted_class']}")
    print(f"Confidence: {result['confidence']:.3f}")
    print(f"All Predictions: {result['predictions']}")
    print("-" * 50)
```

## 🚀 Best Practices

### 1. **Data Quality**

* **Clean Data**: Make sure the training data is high quality
* **Balanced Classes**: Keep the classes balanced
* **Data Augmentation**: Use augmentation techniques that fit the task
* **Validation Split**: Hold out a proper validation set
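The validation split comes down to holding out data the model never trains on. In practice `datasets.Dataset.train_test_split` handles this for you; the plain-Python helper below is a hypothetical sketch of the same idea:

```python
import random

def train_val_split(items, val_fraction=0.2, seed=42):
    """Shuffle a copy of the data and hold out the last fraction for validation."""
    shuffled = list(items)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_fraction))
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))
train, val = train_val_split(data)
print(len(train), len(val))  # 80 20
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing hyperparameter settings.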

### 2. **Hyperparameter Tuning**

* **Learning Rate**: Start with a small learning rate (1e-5 to 5e-5)
* **Batch Size**: Adjust to the available memory
* **Epochs**: Monitor for overfitting; use early stopping
* **Regularization**: Use weight decay and dropout
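The early-stopping advice amounts to a patience counter on the validation metric (with the `Trainer`, `transformers.EarlyStoppingCallback` provides this). A standalone sketch of the logic:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the index of the epoch at which training would stop,
    i.e. once the loss has failed to improve for `patience` epochs."""
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch
    return len(val_losses) - 1  # never triggered: train to the end

# Loss improves, then plateaus: stops after two non-improving epochs
print(early_stop_epoch([0.9, 0.7, 0.6, 0.65, 0.66, 0.64]))  # 4
```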

### 3. **Model Selection**

* **Base Model**: Choose a base model that fits the task
* **Model Size**: Weigh the trade-off between performance and efficiency
* **PEFT Methods**: Use LoRA or other PEFT methods for efficiency

### 4. **Monitoring & Debugging**

* **Training Curves**: Monitor loss and metrics
* **Gradient Norms**: Check gradient explosion/vanishing
* **Memory Usage**: Monitor GPU memory consumption
* **Overfitting**: Use validation set to detect overfitting
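Checking for exploding or vanishing gradients usually means watching the global gradient norm, the same quantity that `max_grad_norm` in `TrainingArguments` clips. As plain arithmetic over flattened per-parameter gradients:

```python
import math

def global_grad_norm(grads):
    """L2 norm over all gradient values, flattened across parameters."""
    return math.sqrt(sum(g * g for grad in grads for g in grad))

# Two fake per-parameter gradient vectors
grads = [[3.0, 0.0], [0.0, 4.0]]
print(global_grad_norm(grads))  # 5.0
```

A norm that grows by orders of magnitude between steps signals explosion; a norm collapsing toward zero signals vanishing gradients.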

## 📚 References & Resources

### 📖 Research Papers

* [**"LoRA: Low-Rank Adaptation of Large Language Models"**](https://arxiv.org/abs/2106.09685) by Hu et al.
* [**"The Power of Scale for Parameter-Efficient Prompt Tuning"**](https://arxiv.org/abs/2104.08691) by Lester et al.
* [**"Prefix-Tuning: Optimizing Continuous Prompts for Generation"**](https://arxiv.org/abs/2101.00190) by Li and Liang
* [**"AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-tuning"**](https://arxiv.org/abs/2303.10512) by Zhang et al.

### 🛠️ Tools & Libraries

* [**PEFT (Parameter-Efficient Fine-tuning)**](https://github.com/huggingface/peft) - Hugging Face library for efficient fine-tuning
* [**Transformers**](https://github.com/huggingface/transformers) - Hugging Face transformers library
* [**Accelerate**](https://github.com/huggingface/accelerate) - Distributed training and mixed precision
* [**Datasets**](https://github.com/huggingface/datasets) - Dataset loading and processing

### 🎓 Tutorials & Courses

* [**Hugging Face Fine-tuning Course**](https://huggingface.co/course/chapter3)
* [**PEFT Documentation**](https://huggingface.co/docs/peft/index)
* [**LoRA Fine-tuning Guide**](https://huggingface.co/docs/peft/task_guides/sequence_classification_lora)

### 📰 Articles & Blogs

* [**"Fine-tuning Large Language Models"**](https://huggingface.co/blog/fine-tune-llms) by Hugging Face
* [**"Parameter-Efficient Fine-tuning"**](https://huggingface.co/blog/peft) by Hugging Face
* [**"LoRA: Low-Rank Adaptation"**](https://huggingface.co/blog/lora) by Hugging Face

## 🔗 Related Topics

* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)
* [🤖 OpenAI Integration](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-open-ai)
* [🔍 RAG Systems](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-rag)
* [💡 Prompt Engineering](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-prompt-ai)

***

*Last updated: December 2024*

*Contributors: \[Your Name]*
