# amalhanaja

### Summary statistics

Summary statistics are single numbers, such as the mean or median, that describe and summarize a dataset.

#### Mean and median

``````
# Print the head of the sales DataFrame
print(sales.head())

# Show info about the sales DataFrame (.info() prints its report directly, so no print() wrapper is needed)
sales.info()

# Print the mean of weekly_sales
print(sales['weekly_sales'].mean())

# Print the median of weekly_sales
print(sales['weekly_sales'].median())
``````

#### Summarizing dates

``````
# Print the maximum of the date column
print(sales['date'].max())

# Print the minimum of the date column
print(sales['date'].min())
``````
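If the `date` column was loaded as plain strings, `max()` and `min()` would compare the values alphabetically rather than chronologically. A minimal sketch of the conversion, assuming `sales` holds string dates (`pd.to_datetime` is the standard pandas parser):

``````
import pandas as pd

# Parse the date strings into proper datetime values
sales["date"] = pd.to_datetime(sales["date"])

# min/max now reflect the earliest and latest dates chronologically
print(sales["date"].min())
print(sales["date"].max())
``````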

#### Efficient summaries

``````
# A custom IQR function
def iqr(column):
    return column.quantile(0.75) - column.quantile(0.25)

# Print IQR of the temperature_c column
print(sales['temperature_c'].agg(iqr))
``````
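`.agg()` also accepts a list of functions and can run them over several columns at once. A sketch extending the block above (it reuses the `iqr` function, and assumes `fuel_price_usd_per_l`, a column that appears later in these notes, is present in `sales`):

``````
# Apply iqr and the built-in median to two columns in one call
print(sales[["temperature_c", "fuel_price_usd_per_l"]].agg([iqr, "median"]))
``````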

#### Cumulative statistics

``````
# Sort sales_1_1 by date
sales_1_1 = sales_1_1.sort_values('date', ascending=True)

# Get the cumulative sum of weekly_sales, add as cum_weekly_sales col
sales_1_1['cum_weekly_sales'] = sales_1_1['weekly_sales'].cumsum()

# Get the cumulative max of weekly_sales, add as cum_max_sales col
sales_1_1['cum_max_sales'] = sales_1_1['weekly_sales'].cummax()

# See the columns you calculated
print(sales_1_1[["date", "weekly_sales", "cum_weekly_sales", "cum_max_sales"]])
``````
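Cumulative statistics can also be computed within groups: calling `cumsum()` on a grouped column restarts the running total for each group (`.groupby()` is covered in more detail below). A sketch, assuming `sales` has a `store` column as in the later examples:

``````
# Running total of weekly_sales within each store, in date order
sales = sales.sort_values("date")
sales["cum_sales_by_store"] = sales.groupby("store")["weekly_sales"].cumsum()
``````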

### Counting

We can summarize categorical data using counting techniques. To avoid double counting, we can first remove duplicate rows from a DataFrame with the `drop_duplicates()` method.

#### Dropping duplicates

``````
# Drop duplicate store/type combinations
store_types = sales.drop_duplicates(['store', 'type'])

# Drop duplicate store/department combinations
store_depts = sales.drop_duplicates(['store', 'department'])

# Subset the rows where is_holiday is True and drop duplicate dates
holiday_dates = sales[sales['is_holiday']].drop_duplicates('date')

# Print date col of holiday_dates
print(holiday_dates['date'])
``````

#### Counting categorical variables

``````
# Count the number of stores of each type
store_counts = store_types['type'].value_counts()
print(store_counts)

# Get the proportion of stores of each type
store_props = store_types['type'].value_counts(normalize=True)
print(store_props)

# Count the number of each department number and sort
dept_counts_sorted = store_depts['department'].value_counts(sort=True)
print(dept_counts_sorted)

# Get the proportion of departments of each number and sort
dept_props_sorted = store_depts['department'].value_counts(sort=True, normalize=True)
print(dept_props_sorted)
``````
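Counting on the full table shows why the duplicates were dropped first: each store appears in `sales` many times (roughly once per department and week, an assumption about this dataset), so counting `type` directly would tally weekly records rather than distinct stores. A quick comparison sketch:

``````
# Inflated: counts every row of the raw table
print(sales["type"].value_counts())

# Counts each store/type combination only once
print(store_types["type"].value_counts())
``````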

### Grouped summary statistics

Summary statistics are useful for comparing groups. While statistics computed over entire columns can be informative, summaries of individual groups often reveal more.

#### What percent of sales occurred at each store type?

``````
# Calc total weekly sales
sales_all = sales["weekly_sales"].sum()

# Subset for type A stores, calc total weekly sales
sales_A = sales[sales["type"] == "A"]["weekly_sales"].sum()

# Subset for type B stores, calc total weekly sales
sales_B = sales[sales["type"] == "B"]["weekly_sales"].sum()

# Subset for type C stores, calc total weekly sales
sales_C = sales[sales["type"] == "C"]["weekly_sales"].sum()

# Get proportion for each type (the sums are NumPy scalars, so dividing the list broadcasts elementwise)
sales_propn_by_type = [sales_A, sales_B, sales_C] / sales_all
print(sales_propn_by_type)
``````

#### Calculations with .groupby()

``````
# Group by type; calc total weekly sales
sales_by_type = sales.groupby("type")["weekly_sales"].sum()

# Get proportion for each type
sales_propn_by_type = sales_by_type / sum(sales_by_type)
print(sales_propn_by_type)

# Group by type and is_holiday; calc total weekly sales
sales_by_type_is_holiday = sales.groupby(["type", "is_holiday"])["weekly_sales"].sum()
print(sales_by_type_is_holiday)
``````
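Grouping on `["type", "is_holiday"]` returns a Series with a two-level MultiIndex. If a table is easier to read, `.unstack()` pivots the inner level into columns; a sketch:

``````
# Reshape the MultiIndex Series so each is_holiday value becomes a column
print(sales_by_type_is_holiday.unstack())
``````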

#### Multiple grouped summaries

``````
# For each store type, aggregate weekly_sales: get min, max, mean, and median
# (string names are the idiomatic spelling; passing np.min etc. to .agg is deprecated)
sales_stats = sales.groupby("type")["weekly_sales"].agg(["min", "max", "mean", "median"])

# Print sales_stats
print(sales_stats)

# For each store type, aggregate unemployment and fuel_price_usd_per_l: get min, max, mean, and median
unemp_fuel_stats = sales.groupby("type")[["unemployment", "fuel_price_usd_per_l"]].agg(["min", "max", "mean", "median"])

# Print unemp_fuel_stats
print(unemp_fuel_stats)
``````
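Named aggregation is an alternative that also controls the output column names; `.agg()` accepts keyword arguments of the form `new_name="func"`. A sketch with assumed result names:

``````
# Same statistics, with explicit names for the result columns
sales_stats_named = sales.groupby("type")["weekly_sales"].agg(
    min_sales="min",
    max_sales="max",
    mean_sales="mean",
    median_sales="median",
)
print(sales_stats_named)
``````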

### Pivot table

Pivot tables are another way of calculating grouped summary statistics. The `aggfunc` argument accepts a function to change the summary statistic (the default is the mean), and a list of functions computes several statistics at once. See the official pandas documentation on `DataFrame.pivot_table` for the full set of options.

#### Pivoting one variable

``````
# Pivot for mean weekly_sales for each store type
mean_sales_by_type = sales.pivot_table(values="weekly_sales", index="type")

# Print mean_sales_by_type
print(mean_sales_by_type)
``````
``````
# Pivot for mean and median weekly_sales for each store type
# (string function names avoid the deprecated np.mean/np.median spelling)
mean_med_sales_by_type = sales.pivot_table(
    values="weekly_sales",
    index="type",
    aggfunc=["mean", "median"],
)

# Print mean_med_sales_by_type
print(mean_med_sales_by_type)
``````
``````
# Pivot for mean weekly_sales by store type and holiday
mean_sales_by_type_holiday = sales.pivot_table(
    values="weekly_sales",
    index="type",
    columns="is_holiday",
)

# Print mean_sales_by_type_holiday
print(mean_sales_by_type_holiday)
``````

#### Fill in missing values and sum values with pivot tables

``````
# Print mean weekly_sales by department and type; fill missing values with 0
print(
    sales.pivot_table(
        values="weekly_sales",
        index="department",
        columns="type",
        fill_value=0,
    )
)

# Print the mean weekly_sales by department and type; fill missing values with 0s; sum all rows and cols
print(
    sales.pivot_table(
        values="weekly_sales",
        index="department",
        columns="type",
        fill_value=0,
        margins=True,
    )
)
``````
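With `margins=True`, the extra `All` row and column hold the statistic computed over all departments and all types, and individual cells can be pulled out with `.loc`. A sketch (the department label `1` is an assumption about this dataset):

``````
dept_type_sales = sales.pivot_table(
    values="weekly_sales",
    index="department",
    columns="type",
    fill_value=0,
    margins=True,
)

# Mean weekly sales for an assumed department 1 at type "A" stores
print(dept_type_sales.loc[1, "A"])

# Overall mean across every department and store type
print(dept_type_sales.loc["All", "All"])
``````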