# TensorFlow

## 📚 Overview

TensorFlow is an open-source library for machine learning and deep learning developed by Google. It provides a comprehensive ecosystem for building, training, and deploying machine learning models, with a focus on neural networks and deep learning.

## 🚀 Getting Started

### 1. **Installation & Basic Setup**

```python
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers

# Check TensorFlow version
print(f"TensorFlow version: {tf.__version__}")
print(f"Keras version: {keras.__version__}")

# Check available devices
print(f"GPU available: {tf.config.list_physical_devices('GPU')}")
print(f"CPU available: {tf.config.list_physical_devices('CPU')}")

# Set random seed for reproducibility
tf.random.set_seed(42)
np.random.seed(42)

# Optional: mixed precision can speed up training on modern GPUs.
# If you enable it, keep the model's output layer in float32
# (e.g. layers.Dense(10, activation='softmax', dtype='float32'))
# for numeric stability; it is left disabled here so the examples
# below run unchanged.
# tf.keras.mixed_precision.set_global_policy('mixed_float16')
```

### 2. **Basic Tensor Operations**

```python
# Create tensors
scalar = tf.constant(42)
vector = tf.constant([1, 2, 3, 4, 5])
matrix = tf.constant([[1, 2, 3], [4, 5, 6]])
tensor_3d = tf.constant([[[1, 2], [3, 4]], [[5, 6], [7, 8]]])

print("Scalar:", scalar)
print("Vector:", vector)
print("Matrix:", matrix)
print("3D Tensor:", tensor_3d)

# Tensor properties
print(f"\nScalar shape: {scalar.shape}, dtype: {scalar.dtype}")
print(f"Vector shape: {vector.shape}, dtype: {vector.dtype}")
print(f"Matrix shape: {matrix.shape}, dtype: {matrix.dtype}")

# Basic operations
a = tf.constant([1, 2, 3])
b = tf.constant([4, 5, 6])

print(f"\nAddition: {a + b}")
print(f"Subtraction: {a - b}")
print(f"Multiplication: {a * b}")
print(f"Division: {a / b}")
print(f"Power: {a ** 2}")

# Broadcasting
matrix_2d = tf.constant([[1, 2, 3], [4, 5, 6]])
vector_1d = tf.constant([10, 20, 30])

print(f"\nMatrix + Vector (broadcasting):")
print(matrix_2d + vector_1d)

# Reshaping tensors
reshaped = tf.reshape(matrix_2d, [3, 2])
print(f"\nReshaped matrix:")
print(reshaped)
```

## 🏗️ Building Neural Networks

### 1. **Sequential Model**

```python
# Create a simple sequential model
model = keras.Sequential([
    layers.Dense(128, activation='relu', input_shape=(784,)),
    layers.Dropout(0.2),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.2),
    layers.Dense(10, activation='softmax')
])

# Model summary
model.summary()

# Compile the model
model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

# Alternative compilation with custom learning rate
optimizer = keras.optimizers.Adam(learning_rate=0.001)
model.compile(
    optimizer=optimizer,
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy', 'sparse_categorical_crossentropy']
)
```

### 2. **Functional API**

```python
# Create model using Functional API
inputs = keras.Input(shape=(784,))
x = layers.Dense(128, activation='relu')(inputs)
x = layers.Dropout(0.2)(x)
x = layers.Dense(64, activation='relu')(x)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(10, activation='softmax')(x)

model = keras.Model(inputs=inputs, outputs=outputs)

# Model summary
model.summary()

# Plot model architecture (requires pydot and graphviz to be installed)
keras.utils.plot_model(model, show_shapes=True, show_layer_names=True)
```

### 3. **Custom Layers**

```python
class CustomDenseLayer(layers.Layer):
    def __init__(self, units, activation=None, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.activation = keras.activations.get(activation)
    
    def build(self, input_shape):
        self.kernel = self.add_weight(
            name='kernel',
            shape=(input_shape[-1], self.units),
            initializer='glorot_uniform',
            trainable=True
        )
        self.bias = self.add_weight(
            name='bias',
            shape=(self.units,),
            initializer='zeros',
            trainable=True
        )
    
    def call(self, inputs):
        output = tf.matmul(inputs, self.kernel) + self.bias
        if self.activation is not None:
            output = self.activation(output)
        return output

# Use custom layer
custom_model = keras.Sequential([
    CustomDenseLayer(128, activation='relu', input_shape=(784,)),
    CustomDenseLayer(64, activation='relu'),
    CustomDenseLayer(10, activation='softmax')
])

custom_model.summary()
```

## 🔧 Model Training

### 1. **Data Preparation**

```python
# Load and prepare MNIST dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

# Normalize pixel values
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0

# Reshape for dense layers
x_train = x_train.reshape(-1, 784)
x_test = x_test.reshape(-1, 784)

print(f"Training data shape: {x_train.shape}")
print(f"Test data shape: {x_test.shape}")
print(f"Training labels shape: {y_train.shape}")

# Split training data into train and validation
from sklearn.model_selection import train_test_split
x_train, x_val, y_train, y_val = train_test_split(
    x_train, y_train, test_size=0.2, random_state=42
)

print(f"Training set: {x_train.shape}")
print(f"Validation set: {x_val.shape}")
```

### 2. **Training with Callbacks**

```python
# Define callbacks
callbacks = [
    # Early stopping
    keras.callbacks.EarlyStopping(
        monitor='val_loss',
        patience=5,
        restore_best_weights=True
    ),
    
    # Learning rate reduction
    keras.callbacks.ReduceLROnPlateau(
        monitor='val_loss',
        factor=0.5,
        patience=3,
        min_lr=1e-7
    ),
    
    # Model checkpointing
    keras.callbacks.ModelCheckpoint(
        'best_model.h5',
        monitor='val_accuracy',
        save_best_only=True,
        save_weights_only=False
    ),
    
    # TensorBoard logging
    keras.callbacks.TensorBoard(
        log_dir='./logs',
        histogram_freq=1
    )
]

# Train the model
history = model.fit(
    x_train, y_train,
    batch_size=32,
    epochs=50,
    validation_data=(x_val, y_val),
    callbacks=callbacks,
    verbose=1
)
```

### 3. **Training Visualization**

```python
# Plot training history
def plot_training_history(history):
    fig, axes = plt.subplots(1, 2, figsize=(15, 5))
    
    # Plot accuracy
    axes[0].plot(history.history['accuracy'], label='Training Accuracy')
    axes[0].plot(history.history['val_accuracy'], label='Validation Accuracy')
    axes[0].set_title('Model Accuracy')
    axes[0].set_xlabel('Epoch')
    axes[0].set_ylabel('Accuracy')
    axes[0].legend()
    axes[0].grid(True, alpha=0.3)
    
    # Plot loss
    axes[1].plot(history.history['loss'], label='Training Loss')
    axes[1].plot(history.history['val_loss'], label='Validation Loss')
    axes[1].set_title('Model Loss')
    axes[1].set_xlabel('Epoch')
    axes[1].set_ylabel('Loss')
    axes[1].legend()
    axes[1].grid(True, alpha=0.3)
    
    plt.tight_layout()
    plt.show()

# Plot training history
plot_training_history(history)
```

## 🎨 Advanced Architectures

### 1. **Convolutional Neural Networks (CNNs)**

```python
# Create CNN model for image classification
def create_cnn_model(input_shape, num_classes):
    model = keras.Sequential([
        # First convolutional block
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        
        # Second convolutional block
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        
        # Third convolutional block
        layers.Conv2D(128, (3, 3), activation='relu'),
        layers.BatchNormalization(),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.25),
        
        # Flatten and dense layers
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.BatchNormalization(),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation='softmax')
    ])
    
    return model

# Load CIFAR-10 dataset for CNN
(x_train_cnn, y_train_cnn), (x_test_cnn, y_test_cnn) = keras.datasets.cifar10.load_data()

# Normalize and prepare data
x_train_cnn = x_train_cnn.astype('float32') / 255.0
x_test_cnn = x_test_cnn.astype('float32') / 255.0

# Split data
x_train_cnn, x_val_cnn, y_train_cnn, y_val_cnn = train_test_split(
    x_train_cnn, y_train_cnn, test_size=0.2, random_state=42
)

# Create and compile CNN model
cnn_model = create_cnn_model((32, 32, 3), 10)
cnn_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

cnn_model.summary()

# Train CNN model
cnn_history = cnn_model.fit(
    x_train_cnn, y_train_cnn,
    batch_size=64,
    epochs=20,
    validation_data=(x_val_cnn, y_val_cnn),
    callbacks=callbacks,
    verbose=1
)
```

### 2. **Recurrent Neural Networks (RNNs)**

```python
# Create RNN model for sequence classification
def create_rnn_model(input_shape, num_classes):
    model = keras.Sequential([
        layers.LSTM(128, return_sequences=True, input_shape=input_shape),
        layers.Dropout(0.2),
        layers.LSTM(64, return_sequences=False),
        layers.Dropout(0.2),
        layers.Dense(32, activation='relu'),
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation='softmax')
    ])
    
    return model

# Create synthetic sequence data
def generate_sequence_data(n_samples, seq_length, n_features, n_classes):
    X = np.random.random((n_samples, seq_length, n_features))
    y = np.random.randint(0, n_classes, n_samples)
    return X, y

# Generate sequence data
X_seq, y_seq = generate_sequence_data(1000, 50, 10, 5)

# Split data
X_train_seq, X_val_seq, y_train_seq, y_val_seq = train_test_split(
    X_seq, y_seq, test_size=0.2, random_state=42
)

# Create and compile RNN model
rnn_model = create_rnn_model((50, 10), 5)
rnn_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

rnn_model.summary()

# Train RNN model
rnn_history = rnn_model.fit(
    X_train_seq, y_train_seq,
    batch_size=32,
    epochs=20,
    validation_data=(X_val_seq, y_val_seq),
    callbacks=callbacks,
    verbose=1
)
```

### 3. **Transfer Learning**

```python
# Load pre-trained model
base_model = keras.applications.ResNet50(
    weights='imagenet',
    include_top=False,
    input_shape=(224, 224, 3)
)

# Freeze base model
base_model.trainable = False

# Create transfer learning model
transfer_model = keras.Sequential([
    base_model,
    layers.GlobalAveragePooling2D(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(256, activation='relu'),
    layers.Dropout(0.3),
    layers.Dense(10, activation='softmax')
])

# Compile model
transfer_model.compile(
    optimizer='adam',
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)

transfer_model.summary()

# Fine-tuning: unfreeze some layers
base_model.trainable = True
for layer in base_model.layers[:-30]:  # Freeze all but last 30 layers
    layer.trainable = False

# Recompile with lower learning rate
transfer_model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    loss='sparse_categorical_crossentropy',
    metrics=['accuracy']
)
```

## 🔍 Model Evaluation & Analysis

### 1. **Performance Metrics**

```python
# Evaluate model performance
test_loss, test_accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Test accuracy: {test_accuracy:.4f}")
print(f"Test loss: {test_loss:.4f}")

# Make predictions
y_pred = model.predict(x_test)
y_pred_classes = np.argmax(y_pred, axis=1)

# Confusion matrix
from sklearn.metrics import confusion_matrix, classification_report
import seaborn as sns

cm = confusion_matrix(y_test, y_pred_classes)
plt.figure(figsize=(10, 8))
sns.heatmap(cm, annot=True, fmt='d', cmap='Blues')
plt.title('Confusion Matrix')
plt.xlabel('Predicted')
plt.ylabel('Actual')
plt.show()

# Classification report
print("Classification Report:")
print(classification_report(y_test, y_pred_classes))

# ROC curve for binary classification
from sklearn.metrics import roc_curve, auc

# Convert to binary (class 0 vs rest)
y_binary = (y_test == 0).astype(int)
y_pred_binary = y_pred[:, 0]

fpr, tpr, _ = roc_curve(y_binary, y_pred_binary)
roc_auc = auc(fpr, tpr)

plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (AUC = {roc_auc:.2f})')
plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.grid(True, alpha=0.3)
plt.show()
```

### 2. **Model Interpretability**

```python
# Feature importance from the first weighted layer
# (for a Functional model, layers[0] is an InputLayer with no weights)
first_dense = next(layer for layer in model.layers if layer.get_weights())
first_layer_weights = first_dense.get_weights()[0]
feature_importance = np.mean(np.abs(first_layer_weights), axis=1)

plt.figure(figsize=(10, 6))
plt.bar(range(len(feature_importance)), feature_importance)
plt.title('Feature Importance (First Layer Weights)')
plt.xlabel('Feature Index')
plt.ylabel('Average Absolute Weight')
plt.grid(True, alpha=0.3)
plt.show()

# Grad-CAM for CNN visualization
def grad_cam(model, image, class_index, layer_name):
    grad_model = keras.Model(
        [model.inputs], [model.get_layer(layer_name).output, model.output]
    )
    
    with tf.GradientTape() as tape:
        conv_outputs, predictions = grad_model(image)
        loss = predictions[:, class_index]
    
    grads = tape.gradient(loss, conv_outputs)
    pooled_grads = tf.reduce_mean(grads, axis=(0, 1, 2))
    
    conv_outputs = conv_outputs[0]
    heatmap = conv_outputs @ pooled_grads[..., tf.newaxis]
    heatmap = tf.squeeze(heatmap)
    
    heatmap = tf.maximum(heatmap, 0) / tf.math.reduce_max(heatmap)
    return heatmap.numpy()

# Example Grad-CAM visualization
sample_image = x_test_cnn[0:1]  # Take first test image
prediction = cnn_model.predict(sample_image)
predicted_class = np.argmax(prediction[0])

# Generate heatmap. The layer name depends on Keras's auto-naming;
# check cnn_model.summary() and use the name of the last Conv2D layer
heatmap = grad_cam(cnn_model, sample_image, predicted_class, 'conv2d_2')

# Visualize
plt.figure(figsize=(12, 4))

plt.subplot(1, 3, 1)
plt.imshow(sample_image[0])
plt.title(f'Original Image\nPredicted: {predicted_class}')
plt.axis('off')

plt.subplot(1, 3, 2)
plt.imshow(heatmap, cmap='jet')
plt.title('Grad-CAM Heatmap')
plt.axis('off')

plt.subplot(1, 3, 3)
plt.imshow(sample_image[0])
plt.imshow(heatmap, cmap='jet', alpha=0.6)
plt.title('Overlay')
plt.axis('off')

plt.tight_layout()
plt.show()
```

## 🚀 Model Deployment

### 1. **Model Saving & Loading**

```python
# Save model in different formats
model.save('mnist_model.h5')  # HDF5 format (legacy single file)
model.save('mnist_model_saved_model')  # SavedModel format (a directory, TF 2.x)

# Save only weights
model.save_weights('mnist_weights.h5')

# Load model
loaded_model = keras.models.load_model('mnist_model.h5')

# Load weights
model.load_weights('mnist_weights.h5')

# Convert to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save TFLite model
with open('mnist_model.tflite', 'wb') as f:
    f.write(tflite_model)
```

### 2. **Model Serving with TensorFlow Serving**

```python
# Export model for TensorFlow Serving
model_version = "1"
export_path = f"./saved_models/{model_version}"

model.save(export_path, save_format='tf')

# Model signature
print("Model saved to:", export_path)

# Load and test exported model
loaded_model = tf.saved_model.load(export_path)
print("Model loaded successfully")

# Test inference
test_input = tf.constant(x_test[:1], dtype=tf.float32)
prediction = loaded_model(test_input)
print("Prediction shape:", prediction.shape)
```

### 3. **Performance Optimization**

```python
# Model optimization
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Enable optimizations
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]

# Convert with optimizations
tflite_model_optimized = converter.convert()

# Save optimized model
with open('mnist_model_optimized.tflite', 'wb') as f:
    f.write(tflite_model_optimized)

# Benchmark model performance
import time

# Benchmark original model
start_time = time.time()
for _ in range(100):
    _ = model.predict(x_test[:100])
original_time = time.time() - start_time

print(f"Original model inference time: {original_time:.4f} seconds")

# Benchmark optimized model
interpreter = tf.lite.Interpreter(model_content=tflite_model_optimized)

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The default input shape is [1, 784]; resize it for a batch of 100.
# With float16 optimization the weights are stored as float16, but the
# input/output tensors remain float32.
interpreter.resize_tensor_input(input_details[0]['index'], [100, 784])
interpreter.allocate_tensors()

start_time = time.time()
for _ in range(100):
    interpreter.set_tensor(input_details[0]['index'], x_test[:100].astype(np.float32))
    interpreter.invoke()
    _ = interpreter.get_tensor(output_details[0]['index'])
optimized_time = time.time() - start_time

print(f"Optimized model inference time: {optimized_time:.4f} seconds")
print(f"Speedup: {original_time/optimized_time:.2f}x")
```

## 🔧 Advanced Features

### 1. **Custom Training Loops**

```python
# Custom training loop
@tf.function
def train_step(model, optimizer, x, y):
    with tf.GradientTape() as tape:
        predictions = model(x, training=True)
        # Reduce the per-example losses to a scalar before differentiating
        loss = tf.reduce_mean(
            keras.losses.sparse_categorical_crossentropy(y, predictions)
        )
    
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    
    return loss

# Training loop
optimizer = keras.optimizers.Adam(learning_rate=0.001)
epochs = 10
batch_size = 32

for epoch in range(epochs):
    print(f"Epoch {epoch + 1}/{epochs}")
    
    # Shuffle data
    indices = tf.random.shuffle(tf.range(len(x_train)))
    x_train_shuffled = tf.gather(x_train, indices)
    y_train_shuffled = tf.gather(y_train, indices)
    
    # Training
    total_loss = 0
    num_batches = len(x_train) // batch_size
    
    for i in range(num_batches):
        start_idx = i * batch_size
        end_idx = start_idx + batch_size
        
        x_batch = x_train_shuffled[start_idx:end_idx]
        y_batch = y_train_shuffled[start_idx:end_idx]
        
        loss = train_step(model, optimizer, x_batch, y_batch)
        total_loss += loss
        
        if i % 100 == 0:
            print(f"  Batch {i}/{num_batches}, Loss: {loss:.4f}")
    
    avg_loss = total_loss / num_batches
    print(f"  Average Loss: {avg_loss:.4f}")
```

### 2. **Data Pipelines with tf.data**

```python
# Create tf.data pipeline
def create_dataset(x, y, batch_size, shuffle=True):
    dataset = tf.data.Dataset.from_tensor_slices((x, y))
    
    if shuffle:
        dataset = dataset.shuffle(buffer_size=10000)
    
    dataset = dataset.batch(batch_size)
    dataset = dataset.prefetch(tf.data.AUTOTUNE)
    
    return dataset

# Create datasets
train_dataset = create_dataset(x_train, y_train, batch_size=32)
val_dataset = create_dataset(x_val, y_val, batch_size=32, shuffle=False)

# Train with tf.data
history = model.fit(
    train_dataset,
    epochs=10,
    validation_data=val_dataset,
    callbacks=callbacks
)
```

## 🚀 Best Practices

### 1. **Model Architecture**

* **Start simple**: Begin with basic architectures
* **Use appropriate layers**: Choose layers based on data type
* **Regularization**: Apply dropout and batch normalization
* **Residual connections**: Use skip connections for deep networks
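
As a minimal sketch of the last point, a dense residual block with the Functional API used earlier (the layer widths here are illustrative, not from the original examples):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, units):
    """Dense residual block: activation(f(x) + shortcut)."""
    shortcut = x
    y = layers.Dense(units, activation='relu')(x)
    y = layers.Dense(units)(y)
    # Project the shortcut when the widths differ, so the Add is valid
    if shortcut.shape[-1] != units:
        shortcut = layers.Dense(units)(shortcut)
    return layers.Activation('relu')(layers.Add()([y, shortcut]))

inputs = keras.Input(shape=(784,))
x = residual_block(inputs, 128)
x = residual_block(x, 128)
outputs = layers.Dense(10, activation='softmax')(x)
res_model = keras.Model(inputs, outputs)
```

The skip connection gives gradients a direct path around each block, which is what makes much deeper stacks trainable.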

### 2. **Training Optimization**

* **Learning rate scheduling**: Use callbacks for LR reduction
* **Early stopping**: Prevent overfitting
* **Data augmentation**: Increase dataset diversity
* **Mixed precision**: Use float16 for faster training
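
One way to apply data augmentation is with Keras preprocessing layers placed inside the model, so the random transforms run only while training (the layer choices below are a sketch, not tuned values):

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Random transforms; active only when the model is called with training=True
data_augmentation = keras.Sequential([
    layers.RandomFlip('horizontal'),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

aug_model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    data_augmentation,
    layers.Conv2D(32, 3, activation='relu'),
    layers.GlobalAveragePooling2D(),
    layers.Dense(10, activation='softmax'),
])
```

Because the augmentation lives in the model, inference with `training=False` sees the unmodified images, and no separate preprocessing step is needed at serving time.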

### 3. **Performance & Deployment**

* **Model optimization**: Use TensorFlow Lite optimizations
* **Batch processing**: Process data in batches
* **GPU utilization**: Maximize GPU memory usage
* **Model serving**: Use TensorFlow Serving for production
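
For the GPU utilization point, a common setup is to enable memory growth so TensorFlow allocates GPU memory on demand instead of reserving all of it up front (this must run before any GPU is initialized):

```python
import tensorflow as tf

# Request on-demand allocation for each visible GPU; on a CPU-only
# machine the list is simply empty and nothing changes
gpus = tf.config.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
```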

## 📚 References & Resources

### 📖 Documentation

* [**TensorFlow Official Documentation**](https://www.tensorflow.org/)
* [**Keras Documentation**](https://keras.io/)
* [**TensorFlow Tutorials**](https://www.tensorflow.org/tutorials)
* [**TensorFlow Guide**](https://www.tensorflow.org/guide)

### 🎓 Tutorials & Courses

* [**TensorFlow Tutorials**](https://www.tensorflow.org/tutorials)
* [**Deep Learning with TensorFlow**](https://www.coursera.org/learn/deep-learning-tensorflow)
* [**TensorFlow Developer Certificate**](https://www.tensorflow.org/certificate)
* [**TensorFlow YouTube Channel**](https://www.youtube.com/c/TensorFlow)

### 📰 Articles & Blogs

* [**TensorFlow Blog**](https://blog.tensorflow.org/)
* [**TensorFlow Best Practices**](https://www.tensorflow.org/guide/keras)
* [**Model Optimization Guide**](https://www.tensorflow.org/lite/performance)

### 🐙 GitHub Repositories

* [**TensorFlow Source Code**](https://github.com/tensorflow/tensorflow)
* [**TensorFlow Examples**](https://github.com/tensorflow/examples)
* [**TensorFlow Models**](https://github.com/tensorflow/models)
* [**TensorFlow Hub**](https://github.com/tensorflow/hub)

### 📊 Datasets & Models

* [**TensorFlow Datasets**](https://www.tensorflow.org/datasets)
* [**TensorFlow Hub Models**](https://tfhub.dev/)
* [**Model Garden**](https://github.com/tensorflow/models)

## 🔗 Related Topics

* [🐍 Python ML Tools](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml)
* [🔥 PyTorch](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml/pytorch)
* [📊 NumPy & Pandas](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml/numpy-pandas)
* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)

***

*Last updated: December 2024*
*Contributors: \[Your Name]*
