# NumPy & Pandas

## 📚 Overview

NumPy and Pandas are two fundamental Python libraries for data science and machine learning. NumPy provides efficient array operations, while Pandas provides powerful data structures and tools for data manipulation.

## 🔢 NumPy Fundamentals

### 1. **Array Creation & Operations**

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Basic array creation
arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([[1, 2, 3], [4, 5, 6]])
arr3 = np.zeros((3, 4))
arr4 = np.ones((2, 3))
arr5 = np.arange(0, 10, 2)
arr6 = np.linspace(0, 1, 5)

print("Array 1D:", arr1)
print("Array 2D:", arr2)
print("Zeros array:", arr3)
print("Ones array:", arr4)
print("Range array:", arr5)
print("Linear space:", arr6)

# Array properties
print(f"Shape: {arr2.shape}")
print(f"Size: {arr2.size}")
print(f"Data type: {arr2.dtype}")
print(f"Dimensions: {arr2.ndim}")
```

### 2. **Array Indexing & Slicing**

```python
# Basic indexing
arr = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])

print("Original array:")
print(arr)

# Single element
print(f"Element at [1, 2]: {arr[1, 2]}")

# Row slicing
print(f"First row: {arr[0, :]}")
print(f"Second row: {arr[1, :]}")

# Column slicing
print(f"First column: {arr[:, 0]}")
print(f"Second column: {arr[:, 1]}")

# Advanced slicing
print(f"Subarray [0:2, 1:3]:")
print(arr[0:2, 1:3])

# Boolean indexing
mask = arr > 5
print(f"Elements > 5:")
print(arr[mask])

# Fancy indexing
indices = [0, 2]
print(f"Rows 0 and 2:")
print(arr[indices, :])
```

### 3. **Array Operations & Broadcasting**

```python
# Basic arithmetic operations
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])

print("Array a:", a)
print("Array b:", b)
print("Addition:", a + b)
print("Subtraction:", a - b)
print("Multiplication:", a * b)
print("Division:", a / b)
print("Power:", a ** 2)

# Broadcasting
arr_2d = np.array([[1, 2, 3], [4, 5, 6]])
scalar = 2

print("2D array:")
print(arr_2d)
print("Scalar:", scalar)
print("Broadcasted multiplication:")
print(arr_2d * scalar)

# Matrix operations
matrix_a = np.array([[1, 2], [3, 4]])
matrix_b = np.array([[5, 6], [7, 8]])

print("Matrix A:")
print(matrix_a)
print("Matrix B:")
print(matrix_b)
print("Matrix multiplication:")
print(np.dot(matrix_a, matrix_b))
print("Element-wise multiplication:")
print(matrix_a * matrix_b)
```

### 4. **Mathematical Functions**

```python
# Statistical functions
data = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])

print("Data:", data)
print(f"Mean: {np.mean(data):.2f}")
print(f"Median: {np.median(data):.2f}")
print(f"Standard deviation: {np.std(data):.2f}")
print(f"Variance: {np.var(data):.2f}")
print(f"Min: {np.min(data)}")
print(f"Max: {np.max(data)}")
print(f"Sum: {np.sum(data)}")

# Trigonometric functions
angles = np.linspace(0, 2*np.pi, 100)
sin_values = np.sin(angles)
cos_values = np.cos(angles)

# Plot trigonometric functions
plt.figure(figsize=(10, 6))
plt.plot(angles, sin_values, label='sin(x)', linewidth=2)
plt.plot(angles, cos_values, label='cos(x)', linewidth=2)
plt.xlabel('Angle (radians)')
plt.ylabel('Value')
plt.title('Trigonometric Functions')
plt.legend()
plt.grid(True, alpha=0.3)
plt.show()

# Random number generation
np.random.seed(42)  # For reproducibility

# Generate random numbers
random_uniform = np.random.uniform(0, 1, 1000)
random_normal = np.random.normal(0, 1, 1000)
random_integers = np.random.randint(1, 101, 100)

print(f"Random uniform (0-1): mean={np.mean(random_uniform):.3f}, std={np.std(random_uniform):.3f}")
print(f"Random normal (0,1): mean={np.mean(random_normal):.3f}, std={np.std(random_normal):.3f}")
print(f"Random integers (1-100): mean={np.mean(random_integers):.1f}")
```

## 🐼 Pandas Fundamentals

### 1. **Series & DataFrames**

```python
# Creating Series
series1 = pd.Series([1, 3, 5, 7, 9], index=['a', 'b', 'c', 'd', 'e'])
series2 = pd.Series({'x': 10, 'y': 20, 'z': 30})

print("Series 1:")
print(series1)
print("\nSeries 2:")
print(series2)

# Creating DataFrames
data = {
    'Name': ['Alice', 'Bob', 'Charlie', 'Diana'],
    'Age': [25, 30, 35, 28],
    'City': ['New York', 'London', 'Paris', 'Tokyo'],
    'Salary': [50000, 60000, 70000, 55000]
}

df = pd.DataFrame(data)
print("\nDataFrame:")
print(df)

# DataFrame properties
print(f"\nShape: {df.shape}")
print(f"Columns: {list(df.columns)}")
print(f"Index: {list(df.index)}")
print(f"Data types:\n{df.dtypes}")
print("Info:")
df.info()  # info() prints its summary directly and returns None, so don't wrap it in an f-string
```

### 2. **Data Loading & Inspection**

```python
# Create sample data for demonstration
sample_data = {
    'Date': pd.date_range('2024-01-01', periods=100, freq='D'),
    'Temperature': np.random.normal(20, 5, 100),
    'Humidity': np.random.uniform(30, 80, 100),
    'Rainfall': np.random.exponential(2, 100),
    'Wind_Speed': np.random.gamma(2, 3, 100)
}

weather_df = pd.DataFrame(sample_data)

# Basic inspection
print("First 5 rows:")
print(weather_df.head())

print("\nLast 5 rows:")
print(weather_df.tail())

print("\nDataFrame info:")
print(weather_df.info())

print("\nStatistical summary:")
print(weather_df.describe())

print("\nMissing values:")
print(weather_df.isnull().sum())

# Save and load data (to_excel/read_excel need the openpyxl package for .xlsx files)
weather_df.to_csv('weather_data.csv', index=False)
weather_df.to_excel('weather_data.xlsx', index=False)

# Load data back
loaded_csv = pd.read_csv('weather_data.csv')
loaded_excel = pd.read_excel('weather_data.xlsx')

print(f"\nLoaded CSV shape: {loaded_csv.shape}")
print(f"Loaded Excel shape: {loaded_excel.shape}")
```

### 3. **Data Selection & Filtering**

```python
# Column selection
print("Select specific columns:")
print(df[['Name', 'Age']])

print("\nSelect single column (returns Series):")
print(df['Name'])

# Row selection
print("\nFirst 2 rows:")
print(df.iloc[0:2])

print("\nRows with specific condition:")
print(df[df['Age'] > 30])

print("\nMultiple conditions:")
print(df[(df['Age'] > 25) & (df['Salary'] > 55000)])

# Using query method
print("\nUsing query method:")
print(df.query('Age > 25 and Salary > 55000'))

# Loc and iloc
print("\nUsing loc (label-based):")
print(df.loc[0:2, ['Name', 'City']])

print("\nUsing iloc (integer-based):")
print(df.iloc[0:2, [0, 2]])
```

### 4. **Data Manipulation**

```python
# Adding new columns
df['Experience'] = df['Age'] - 22
df['Salary_K'] = df['Salary'] / 1000
df['Seniority'] = np.where(df['Experience'] > 5, 'Senior', 'Junior')

print("DataFrame with new columns:")
print(df)

# Modifying existing columns
df['Salary'] = df['Salary'] * 1.1  # 10% raise
df['Age_Group'] = pd.cut(df['Age'], bins=[0, 25, 35, 50], labels=['Young', 'Middle', 'Senior'])

print("\nDataFrame after modifications:")
print(df)

# Removing columns
df_cleaned = df.drop(['Experience', 'Salary_K'], axis=1)
print("\nDataFrame after dropping columns:")
print(df_cleaned)

# Handling missing values
df_with_nulls = df.copy()
df_with_nulls.loc[1, 'Age'] = np.nan
df_with_nulls.loc[2, 'City'] = np.nan

print("\nDataFrame with missing values:")
print(df_with_nulls)

# Fill missing values
df_filled = df_with_nulls.fillna({
    'Age': df_with_nulls['Age'].mean(),
    'City': 'Unknown'
})

print("\nDataFrame after filling missing values:")
print(df_filled)
```

### 5. **Data Aggregation & Grouping**

```python
# Basic aggregation
print("Salary statistics by age group:")
age_group_stats = df.groupby('Age_Group')['Salary'].agg(['mean', 'std', 'count'])
print(age_group_stats)

print("\nMultiple aggregations:")
multi_agg = df.groupby('Age_Group').agg({
    'Salary': ['mean', 'std', 'min', 'max'],
    'Age': ['mean', 'count']
})
print(multi_agg)

# Pivot tables
pivot_table = df.pivot_table(
    values='Salary',
    index='Age_Group',
    columns='Seniority',
    aggfunc='mean',
    fill_value=0
)
print("\nPivot table:")
print(pivot_table)

# Cross tabulation
cross_tab = pd.crosstab(df['Age_Group'], df['Seniority'], values=df['Salary'], aggfunc='mean')
print("\nCross tabulation:")
print(cross_tab)
```

### 6. **Data Cleaning & Preprocessing**

```python
# Create sample messy data
messy_data = {
    'Name': ['Alice', 'bob', 'Charlie', 'diana', 'Eve'],
    'Age': [25, '30', 35, 28, 'unknown'],
    'Salary': ['50000', '60000', '70000', '55000', '45000'],
    'City': ['New York', 'london', 'Paris', 'TOKYO', 'new york'],
    'Date': ['2024-01-01', '2024-01-02', '2024-01-03', '2024-01-04', '2024-01-05']
}

messy_df = pd.DataFrame(messy_data)
print("Original messy data:")
print(messy_df)

# Clean the data
cleaned_df = messy_df.copy()

# Standardize names
cleaned_df['Name'] = cleaned_df['Name'].str.title()

# Convert age to numeric, handle errors
cleaned_df['Age'] = pd.to_numeric(cleaned_df['Age'], errors='coerce')

# Convert salary to numeric
cleaned_df['Salary'] = pd.to_numeric(cleaned_df['Salary'])

# Standardize city names
cleaned_df['City'] = cleaned_df['City'].str.title()

# Convert date to datetime
cleaned_df['Date'] = pd.to_datetime(cleaned_df['Date'])

# Fill missing age with median
cleaned_df['Age'] = cleaned_df['Age'].fillna(cleaned_df['Age'].median())

print("\nCleaned data:")
print(cleaned_df)

print("\nData types after cleaning:")
print(cleaned_df.dtypes)
```

## 🔧 Advanced Operations

### 1. **Time Series Operations**

```python
# Create time series data
dates = pd.date_range('2024-01-01', periods=365, freq='D')
time_series_data = {
    'Date': dates,
    'Sales': np.random.normal(1000, 200, 365) + np.sin(np.arange(365) * 2 * np.pi / 365) * 100,
    'Temperature': np.random.normal(20, 10, 365) + np.sin(np.arange(365) * 2 * np.pi / 365) * 15
}

ts_df = pd.DataFrame(time_series_data)
ts_df.set_index('Date', inplace=True)

print("Time series data (first 10 rows):")
print(ts_df.head(10))

# Resampling ('ME'/'QE' are the month-end/quarter-end aliases in pandas >= 2.2;
# use 'M'/'Q' on older versions)
monthly_sales = ts_df['Sales'].resample('ME').mean()
quarterly_temp = ts_df['Temperature'].resample('QE').mean()

print("\nMonthly average sales:")
print(monthly_sales.head())

print("\nQuarterly average temperature:")
print(quarterly_temp)

# Rolling statistics
rolling_mean = ts_df['Sales'].rolling(window=7).mean()
rolling_std = ts_df['Sales'].rolling(window=7).std()

# Plot time series
plt.figure(figsize=(15, 10))

plt.subplot(3, 1, 1)
ts_df['Sales'].plot(alpha=0.7)
rolling_mean.plot(alpha=0.9)
plt.title('Daily Sales with 7-day Rolling Mean')
plt.legend(['Daily Sales', '7-day Rolling Mean'])
plt.grid(True, alpha=0.3)

plt.subplot(3, 1, 2)
ts_df['Temperature'].plot(title='Daily Temperature', color='red', alpha=0.7)
plt.grid(True, alpha=0.3)

plt.subplot(3, 1, 3)
rolling_std.plot(title='7-day Rolling Standard Deviation of Sales', color='green')
plt.grid(True, alpha=0.3)

plt.tight_layout()
plt.show()
```

### 2. **Data Visualization with Pandas**

```python
# Create sample data for visualization
np.random.seed(42)
viz_data = {
    'Category': np.random.choice(['A', 'B', 'C', 'D'], 1000),
    'Value': np.random.normal(100, 30, 1000),
    'Group': np.random.choice(['Group1', 'Group2', 'Group3'], 1000),
    'Score': np.random.uniform(0, 100, 1000)
}

viz_df = pd.DataFrame(viz_data)

# Create subplots
fig, axes = plt.subplots(2, 2, figsize=(15, 12))

# Histogram
viz_df['Value'].hist(ax=axes[0, 0], bins=30, alpha=0.7, color='skyblue', edgecolor='black')
axes[0, 0].set_title('Distribution of Values')
axes[0, 0].set_xlabel('Value')
axes[0, 0].set_ylabel('Frequency')
axes[0, 0].grid(True, alpha=0.3)

# Box plot
viz_df.boxplot(column='Value', by='Category', ax=axes[0, 1])
axes[0, 1].set_title('Value Distribution by Category')
axes[0, 1].set_xlabel('Category')
axes[0, 1].set_ylabel('Value')
axes[0, 1].grid(True, alpha=0.3)

# Scatter plot
scatter = axes[1, 0].scatter(viz_df['Value'], viz_df['Score'], 
                             c=pd.Categorical(viz_df['Group']).codes, 
                             alpha=0.6, cmap='viridis')
axes[1, 0].set_title('Value vs Score by Group')
axes[1, 0].set_xlabel('Value')
axes[1, 0].set_ylabel('Score')
axes[1, 0].grid(True, alpha=0.3)
plt.colorbar(scatter, ax=axes[1, 0])

# Bar plot
category_counts = viz_df['Category'].value_counts()
category_counts.plot(kind='bar', ax=axes[1, 1], color='lightcoral', alpha=0.7)
axes[1, 1].set_title('Count by Category')
axes[1, 1].set_xlabel('Category')
axes[1, 1].set_ylabel('Count')
axes[1, 1].grid(True, alpha=0.3)

plt.tight_layout()
plt.show()
```

### 3. **Performance Optimization**

```python
# Performance comparison
import time

# Large dataset
large_data = np.random.randn(100000, 100)
large_df = pd.DataFrame(large_data)

# NumPy operations
start_time = time.time()
numpy_result = np.sum(large_data, axis=0)
numpy_time = time.time() - start_time

# Pandas operations
start_time = time.time()
pandas_result = large_df.sum(axis=0)
pandas_time = time.time() - start_time

print(f"NumPy sum time: {numpy_time:.4f} seconds")
print(f"Pandas sum time: {pandas_time:.4f} seconds")
print(f"Pandas/NumPy time ratio: {pandas_time/numpy_time:.2f}x")

# Memory usage optimization
print(f"\nOriginal DataFrame memory usage: {large_df.memory_usage(deep=True).sum() / 1024**2:.2f} MB")

# Optimize data types
optimized_df = large_df.copy()
for col in optimized_df.columns:
    col_type = optimized_df[col].dtype
    if col_type == 'float64':
        optimized_df[col] = pd.to_numeric(optimized_df[col], downcast='float')
    elif col_type == 'int64':
        optimized_df[col] = pd.to_numeric(optimized_df[col], downcast='integer')

print(f"Optimized DataFrame memory usage: {optimized_df.memory_usage(deep=True).sum() / 1024**2:.2f} MB")
print(f"Memory reduction: {(1 - optimized_df.memory_usage(deep=True).sum() / large_df.memory_usage(deep=True).sum()) * 100:.1f}%")
```

## 🚀 Best Practices

### 1. **Data Handling**

* **Use appropriate data types**: Convert to smallest possible numeric types
* **Handle missing values early**: Decide on strategy before analysis
* **Validate data**: Check for outliers and inconsistencies
* **Use vectorized operations**: Avoid loops when possible
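The first and last bullets can be sketched in a few lines; the series values and dtypes below are illustrative, not from a real dataset:

```python
import numpy as np
import pandas as pd

# Vectorized arithmetic replaces an explicit Python loop,
# and to_numeric(downcast=...) shrinks dtypes to the smallest fit.
prices = pd.Series([19.99, 5.49, 3.25, 12.00, 7.75])

taxed_loop = pd.Series([p * 1.1 for p in prices])  # loop: avoid
taxed_vec = prices * 1.1                           # vectorized: prefer

assert np.allclose(taxed_loop, taxed_vec)

counts = pd.Series([1, 2, 3, 4], dtype='int64')
counts_small = pd.to_numeric(counts, downcast='integer')
print(counts_small.dtype)  # int8 — values 1-4 fit the smallest integer type
```

The vectorized version dispatches the multiplication to compiled NumPy code instead of iterating in the Python interpreter, which is where the speed difference comes from.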

### 2. **Performance Optimization**

* **Pre-allocate arrays**: Avoid growing arrays in loops
* **Use NumPy for numerical operations**: Faster than Pandas for pure math
* **Optimize memory usage**: Use appropriate data types
* **Leverage broadcasting**: Use NumPy's broadcasting capabilities
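As a small illustration of the broadcasting and pre-allocation bullets (the shapes here are chosen arbitrarily):

```python
import numpy as np

# A (3,) row and a (4, 1) column broadcast to a common (4, 3) grid
# with no loop and no intermediate copies of the inputs.
row = np.array([0, 10, 20])           # shape (3,)
col = np.array([[1], [2], [3], [4]])  # shape (4, 1)

grid = row + col                      # shape (4, 3)
print(grid[0])  # [ 1 11 21]

# Pre-allocate the output buffer once instead of growing it in a loop:
out = np.empty((4, 3))
np.add(row, col, out=out)             # writes the result into `out` in place
```

The `out=` argument is the ufunc-level form of pre-allocation: the result buffer is created once and reused, instead of a new array being allocated on every call.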

### 3. **Code Organization**

* **Separate data loading**: Keep data I/O separate from processing
* **Use functions**: Encapsulate common operations
* **Document assumptions**: Comment on data transformations
* **Test with small data**: Verify logic before scaling up
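A minimal sketch of this layout; the function names, the CSV path, and the Celsius assumption are all hypothetical:

```python
import numpy as np
import pandas as pd

def load_weather(path: str) -> pd.DataFrame:
    """Data I/O only: read the raw CSV (path is a placeholder)."""
    return pd.read_csv(path, parse_dates=['Date'])

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Pure processing: assumes a 'Temperature' column in Celsius."""
    return df.assign(Temp_F=df['Temperature'] * 9 / 5 + 32)

def validate(df: pd.DataFrame) -> pd.DataFrame:
    """Documented assumption: plausible surface temperatures only."""
    assert df['Temperature'].between(-60, 60).all(), 'temperature outlier'
    return df

# Test the logic on a tiny in-memory frame before scaling up:
small = pd.DataFrame({'Temperature': [0.0, 20.0, 37.0]})
result = add_features(validate(small))
print(result['Temp_F'].round(1).tolist())  # [32.0, 68.0, 98.6]
```

Because I/O, processing, and validation are separate functions, each can be unit-tested on a tiny in-memory frame before being pointed at the full dataset.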

## 📚 References & Resources

### 📖 Documentation

* [**NumPy Official Documentation**](https://numpy.org/doc/)
* [**Pandas Official Documentation**](https://pandas.pydata.org/docs/)
* [**NumPy User Guide**](https://numpy.org/doc/stable/user/index.html)
* [**Pandas User Guide**](https://pandas.pydata.org/docs/user_guide/index.html)

### 🎓 Tutorials & Courses

* [**NumPy Tutorial**](https://numpy.org/doc/stable/user/quickstart.html)
* [**Pandas Tutorial**](https://pandas.pydata.org/docs/getting_started/intro_tutorials/index.html)
* [**DataCamp NumPy Course**](https://www.datacamp.com/courses/intro-to-python-for-data-science)
* [**Real Python Pandas Tutorial**](https://realpython.com/pandas-python-tutorial/)

### 📰 Articles & Blogs

* [**NumPy Best Practices**](https://numpy.org/doc/stable/user/basics.broadcasting.html)
* [**Pandas Performance Tips**](https://pandas.pydata.org/pandas-docs/stable/user_guide/enhancingperf.html)
* [**NumPy vs Pandas Performance**](https://towardsdatascience.com/speed-test-pandas-vs-numpy-which-one-is-faster-4d0c2c3b0c4)

### 🐙 GitHub Repositories

* [**NumPy Source Code**](https://github.com/numpy/numpy)
* [**Pandas Source Code**](https://github.com/pandas-dev/pandas)
* [**NumPy Examples**](https://github.com/numpy/numpy/tree/main/examples)
* [**Pandas Examples**](https://github.com/pandas-dev/pandas/tree/main/doc/examples)

### 📊 Datasets for Practice

* [**Kaggle Datasets**](https://www.kaggle.com/datasets)
* [**UCI Machine Learning Repository**](https://archive.ics.uci.edu/ml/index.php)
* [**Google Dataset Search**](https://datasetsearch.research.google.com/)
* [**Pandas `read_csv` API Reference**](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html)

## 🔗 Related Topics

* [🐍 Python ML Tools](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml)
* [🤖 Scikit-learn](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/python-ml/scikit-learn)
* [📈 Visualization](https://github.com/mahbubzulkarnain/catatan-seekor-the-series/blob/master/machine_learning/python-ml/visualization.md)
* [🧠 ML Fundamentals](https://mahbubzulkarnain.gitbook.io/catatan-seekor-the-series/machine-learning/fundamentals)

***

*Last updated: December 2024*
