# Catatan Seekor: RAG (Retrieval Augmented Generation)

## 📚 Overview

Retrieval Augmented Generation (RAG) is a technique that combines a retrieval system with generative AI to produce responses that are more accurate, factual, and up-to-date. RAG addresses the limitations of a standalone LLM by giving it access to an external knowledge base that can be queried dynamically.

## 🎯 How RAG Works

### 1. **Retrieval Phase**

* The user query is processed and embedded
* The system searches the knowledge base for relevant documents
* Uses semantic search and similarity matching

### 2. **Augmentation Phase**

* The retrieved documents are combined with the user query
* The enriched context is sent to the LLM

### 3. **Generation Phase**

* The LLM generates a response grounded in the enriched context
* The response is more accurate and factual

## 🏗️ RAG Architecture

```
User Query → Query Processing → Retrieval → Augmentation → Generation → Response
                ↓                    ↓           ↓           ↓
            Embedding          Vector DB    Context     LLM
            Generation        Search       Merging     Inference
```

## 🛠️ Components

### 1. **Retriever**

The system that searches for and returns relevant documents.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

class SimpleRetriever:
    def __init__(self, documents):
        self.documents = documents
        self.model = SentenceTransformer('all-MiniLM-L6-v2')
        self.embeddings = self.model.encode(documents)
    
    def retrieve(self, query, top_k=5):
        query_embedding = self.model.encode([query])
        similarities = cosine_similarity(query_embedding, self.embeddings)[0]
        top_indices = np.argsort(similarities)[-top_k:][::-1]
        return [self.documents[i] for i in top_indices]
```

### 2. **Vector Database**

The database that stores and indexes document embeddings.

```python
import chromadb

# Initialize a persistent ChromaDB client
# (the old Settings(chroma_db_impl="duckdb+parquet") configuration
# was removed in ChromaDB 0.4+)
client = chromadb.PersistentClient(path="./chroma_db")

# Create collection
collection = client.create_collection(
    name="documents",
    metadata={"hnsw:space": "cosine"}
)

# Add documents
collection.add(
    documents=["Document 1 content", "Document 2 content"],
    metadatas=[{"source": "source1"}, {"source": "source2"}],
    ids=["doc1", "doc2"]
)

# Query documents
results = collection.query(
    query_texts=["user query"],
    n_results=5
)
```

### 3. **Generator**

The LLM that generates a response based on the enriched context.

```python
from openai import OpenAI

class RAGGenerator:
    def __init__(self, api_key):
        self.client = OpenAI(api_key=api_key)
    
    def generate(self, query, context):
        prompt = f"""
        Context: {context}
        
        Question: {query}
        
        Answer the question based on the context provided. If the context doesn't contain enough information to answer the question, say so.
        """
        
        response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant that answers questions based on provided context."},
                {"role": "user", "content": prompt}
            ],
            max_tokens=500,
            temperature=0.7
        )
        
        return response.choices[0].message.content
```

## 🔧 Implementation Examples

### Basic RAG Pipeline

```python
class RAGPipeline:
    def __init__(self, retriever, generator):
        self.retriever = retriever
        self.generator = generator
    
    def process_query(self, query, top_k=5):
        # Retrieve relevant documents
        retrieved_docs = self.retriever.retrieve(query, top_k)
        
        # Combine documents into context
        context = "\n\n".join(retrieved_docs)
        
        # Generate response
        response = self.generator.generate(query, context)
        
        return {
            "query": query,
            "retrieved_documents": retrieved_docs,
            "context": context,
            "response": response
        }

# Usage
retriever = SimpleRetriever(documents)
generator = RAGGenerator(api_key)
rag_pipeline = RAGPipeline(retriever, generator)

result = rag_pipeline.process_query("What is machine learning?")
print(result["response"])
```

### Advanced RAG with Re-ranking

```python
from sentence_transformers import CrossEncoder

class AdvancedRAGPipeline:
    def __init__(self, retriever, generator):
        self.retriever = retriever
        self.generator = generator
        self.reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')
    
    def rerank_documents(self, query, documents, top_k=5):
        # Create query-document pairs
        pairs = [[query, doc] for doc in documents]
        
        # Get relevance scores
        scores = self.reranker.predict(pairs)
        
        # Sort documents by score
        doc_score_pairs = list(zip(documents, scores))
        doc_score_pairs.sort(key=lambda x: x[1], reverse=True)
        
        # Return top-k documents
        return [doc for doc, score in doc_score_pairs[:top_k]]
    
    def process_query(self, query, top_k=5):
        # Initial retrieval
        retrieved_docs = self.retriever.retrieve(query, top_k * 2)
        
        # Re-rank documents
        reranked_docs = self.rerank_documents(query, retrieved_docs, top_k)
        
        # Generate response
        context = "\n\n".join(reranked_docs)
        response = self.generator.generate(query, context)
        
        return {
            "query": query,
            "retrieved_documents": retrieved_docs,
            "reranked_documents": reranked_docs,
            "context": context,
            "response": response
        }
```

## 📊 Evaluation Metrics

### 1. **Retrieval Metrics**

* **Precision@k**: Proportion of relevant documents among the top-k results
* **Recall@k**: Proportion of all relevant documents that were successfully retrieved
* **NDCG@k**: Normalized Discounted Cumulative Gain
* **MRR**: Mean Reciprocal Rank
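
The ranking metrics above can be sketched in a few lines of plain Python. This is a minimal illustration; `retrieved` is an ordered list of document IDs returned by the retriever, and `relevant` is an assumed set of ground-truth relevant IDs:

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved items that are relevant."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant items that appear in the top-k results."""
    top_k = retrieved[:k]
    return sum(1 for doc in top_k if doc in relevant) / len(relevant)

def mrr(retrieved, relevant):
    """Reciprocal rank of the first relevant item (0 if none is found)."""
    for rank, doc in enumerate(retrieved, start=1):
        if doc in relevant:
            return 1.0 / rank
    return 0.0

# Example: ground truth is {"d1", "d3"}; the system returned d2, d1, d3, d4
retrieved = ["d2", "d1", "d3", "d4"]
relevant = {"d1", "d3"}
print(precision_at_k(retrieved, relevant, 3))  # 2 of the top-3 are relevant
print(recall_at_k(retrieved, relevant, 3))     # both relevant docs were found
print(mrr(retrieved, relevant))                # first hit at rank 2 -> 0.5
```

In practice MRR is averaged over a query set; the single-query version above is the reciprocal rank being averaged.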

### 2. **Generation Metrics**

* **ROUGE**: n-gram overlap between the generated and reference text
* **BLEU**: Bilingual Evaluation Understudy
* **BERTScore**: Semantic similarity using BERT embeddings
* **Human Evaluation**: Manual assessment by human evaluators

```python
from rouge_score import rouge_scorer
import numpy as np

def evaluate_rag(generated_responses, reference_responses):
    # ROUGE scores
    scorer = rouge_scorer.RougeScorer(['rouge1', 'rouge2', 'rougeL'])
    rouge_scores = []
    
    for gen, ref in zip(generated_responses, reference_responses):
        scores = scorer.score(ref, gen)
        rouge_scores.append(scores)
    
    # Calculate average ROUGE scores
    avg_rouge = {}
    for metric in ['rouge1', 'rouge2', 'rougeL']:
        avg_rouge[metric] = {
            'precision': np.mean([s[metric].precision for s in rouge_scores]),
            'recall': np.mean([s[metric].recall for s in rouge_scores]),
            'fmeasure': np.mean([s[metric].fmeasure for s in rouge_scores])
        }
    
    return avg_rouge
```

## 🚀 Advanced Techniques

### 1. **Hybrid Search**

A combination of semantic search and keyword-based search.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class HybridRetriever:
    def __init__(self, documents):
        self.documents = documents
        self.semantic_model = SentenceTransformer('all-MiniLM-L6-v2')
        self.tfidf = TfidfVectorizer()
        
        # Get embeddings
        self.semantic_embeddings = self.semantic_model.encode(documents)
        self.tfidf_matrix = self.tfidf.fit_transform(documents)
    
    def retrieve(self, query, top_k=5, alpha=0.7):
        # Semantic search
        query_embedding = self.semantic_model.encode([query])
        semantic_scores = cosine_similarity(query_embedding, self.semantic_embeddings)[0]
        
        # Keyword search
        query_tfidf = self.tfidf.transform([query])
        # cosine_similarity returns a dense ndarray here, so no .toarray() is needed
        keyword_scores = cosine_similarity(query_tfidf, self.tfidf_matrix)[0]
        
        # Combine scores
        combined_scores = alpha * semantic_scores + (1 - alpha) * keyword_scores
        
        # Get top-k documents
        top_indices = np.argsort(combined_scores)[-top_k:][::-1]
        return [self.documents[i] for i in top_indices]
```

### 2. **Query Expansion**

Expanding the query to improve retrieval performance.

```python
from nltk.corpus import wordnet
import nltk

class QueryExpander:
    def __init__(self):
        nltk.download('wordnet')
    
    def expand_query(self, query, max_synonyms=3):
        words = query.split()
        expanded_words = []
        
        for word in words:
            expanded_words.append(word)
            
            # Collect WordNet synonyms; multi-word lemmas use underscores,
            # so replace them with spaces and skip duplicates
            synonyms = []
            for syn in wordnet.synsets(word):
                for lemma in syn.lemmas():
                    name = lemma.name().replace("_", " ")
                    if name != word and name not in synonyms and len(synonyms) < max_synonyms:
                        synonyms.append(name)
            
            expanded_words.extend(synonyms[:max_synonyms])
        
        return " ".join(expanded_words)

# Usage
expander = QueryExpander()
expanded_query = expander.expand_query("machine learning")
print("Original: machine learning")
print(f"Expanded: {expanded_query}")
```

### 3. **Context Window Optimization**

Optimizing how the LLM's context window is used.

```python
class ContextOptimizer:
    def __init__(self, max_tokens=4000):
        self.max_tokens = max_tokens
    
    def optimize_context(self, documents, query, tokenizer):
        # Estimate tokens for query
        query_tokens = len(tokenizer.encode(query))
        
        # Available tokens for context
        available_tokens = self.max_tokens - query_tokens - 100  # Buffer
        
        # Select documents that fit
        selected_docs = []
        current_tokens = 0
        
        for doc in documents:
            doc_tokens = len(tokenizer.encode(doc))
            if current_tokens + doc_tokens <= available_tokens:
                selected_docs.append(doc)
                current_tokens += doc_tokens
            else:
                break
        
        return selected_docs
```

## 🔒 Best Practices

### 1. **Data Quality**

* **Source Verification**: Ensure data sources are reliable and up-to-date
* **Content Filtering**: Filter out irrelevant or harmful content
* **Regular Updates**: Update the knowledge base periodically

### 2. **Security & Privacy**

* **Access Control**: Implement proper access controls for the knowledge base
* **Data Encryption**: Encrypt sensitive data
* **Audit Logging**: Log all access and modifications

### 3. **Performance Optimization**

* **Caching**: Cache frequently accessed documents
* **Indexing**: Optimize vector database indexing
* **Batch Processing**: Process queries in batches when possible
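
As a sketch of the caching point, repeated queries can reuse earlier retrieval results. The wrapper below is a hypothetical example (the `DummyRetriever` exists only to demonstrate the cache); a production cache would also need an invalidation policy for knowledge-base updates:

```python
from functools import lru_cache

class CachedRetriever:
    """Wraps any retriever and memoizes results for repeated (query, top_k) pairs."""

    def __init__(self, retriever, max_size=1024):
        # lru_cache needs hashable return values, so results are stored as tuples
        self._retrieve = lru_cache(maxsize=max_size)(
            lambda query, top_k: tuple(retriever.retrieve(query, top_k))
        )

    def retrieve(self, query, top_k=5):
        # Identical (query, top_k) pairs are served from the in-memory cache
        return list(self._retrieve(query, top_k))

# Stand-in retriever that counts how often the expensive path runs
class DummyRetriever:
    def __init__(self):
        self.calls = 0

    def retrieve(self, query, top_k=5):
        self.calls += 1
        return [f"doc for {query}"] * top_k

base = DummyRetriever()
cached = CachedRetriever(base)
cached.retrieve("what is RAG?")
cached.retrieve("what is RAG?")  # served from cache; base is only called once
```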

### 4. **Monitoring & Evaluation**

* **Performance Metrics**: Monitor retrieval and generation quality
* **User Feedback**: Collect and analyze user feedback
* **A/B Testing**: Test different RAG configurations
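
For the A/B testing point, a common sketch (not tied to any particular experimentation framework) is deterministic bucket assignment, so each user consistently sees the same RAG configuration and metrics stay comparable across sessions:

```python
import hashlib

def ab_bucket(user_id, experiment="rag-config-v1", n_buckets=2):
    """Deterministically assign a user to an experiment bucket via hashing."""
    key = f"{experiment}:{user_id}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return digest % n_buckets

# The same user always lands in the same bucket for a given experiment,
# while different experiments reshuffle the assignment
config = ["baseline", "with-reranker"][ab_bucket("user-42")]
```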

## 📚 References & Resources

### 📖 Research Papers

* [**"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks"**](https://arxiv.org/abs/2005.11401) by Lewis et al.
* [**"REPLUG: Retrieval-Augmented Black-Box Language Models"**](https://arxiv.org/abs/2301.12652) by Shi et al.
* [**"Atlas: Few-shot Learning with Retrieval Augmented Language Models"**](https://arxiv.org/abs/2208.03299) by Izacard et al.

### 🛠️ Tools & Libraries

* [**LangChain**](https://github.com/langchain-ai/langchain) - Framework for building LLM applications
* [**LlamaIndex**](https://github.com/run-llama/llama_index) - Data framework for LLM applications
* [**ChromaDB**](https://github.com/chroma-core/chroma) - Vector database for embeddings
* [**Pinecone**](https://www.pinecone.io/) - Managed vector database service
* [**Weaviate**](https://github.com/weaviate/weaviate) - Vector search engine

### 🎓 Courses & Tutorials

* [**LangChain RAG Tutorial**](https://python.langchain.com/docs/use_cases/question_answering/)
* [**LlamaIndex RAG Guide**](https://docs.llamaindex.ai/en/stable/examples/retrievers/retrieval_augmented_generation.html)
* [**Vector Database Tutorials**](https://www.pinecone.io/learn/)

### 📰 Articles & Blogs

* [**"What is RAG?"**](https://www.pinecone.io/learn/retrieval-augmented-generation/) by Pinecone
* [**"RAG vs Fine-tuning"**](https://www.databricks.com/blog/2023/10/12/rag-vs-fine-tuning.html) by Databricks
* [**"Building RAG Applications"**](https://blog.langchain.dev/retrieval-augmented-generation-rag/)

### 🐙 GitHub Repositories

* [**RAG Examples**](https://github.com/langchain-ai/langchain/tree/master/examples)
* [**RAGatouille**](https://github.com/bclavie/RAGatouille) - Library for using ColBERT-style late-interaction retrieval in RAG pipelines
* [**RAG Evaluation**](https://github.com/explodinggradients/ragas) - RAG evaluation metrics

## 🔗 Related Topics

* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)
* [🤖 OpenAI Integration](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-open-ai)
* [💡 Prompt Engineering](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-prompt-ai)
* [🎯 Fine-tuning](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/catatan-seekor-fine-tunning)

***

*Last updated: December 2024* *Contributors: [Your Name]*
