# Supervised Learning with scikit-learn

Supervised learning is an essential component of machine learning. We'll build predictive models, tune their parameters, and determine how well they will perform on unseen data, all while using real-world datasets. We'll learn how to use scikit-learn, one of the most popular and user-friendly machine learning libraries for Python.

- Overview
- Libraries
- Classification
- Regression
- Fine-tuning models
- Preprocessing and pipelines

## Overview

Machine learning is the field that teaches machines and computers to learn from existing data to make predictions on new data: Will a tumor be benign or malignant? Which of your customers will take their business elsewhere? Is a particular email spam? We will use Python to perform supervised learning, an essential component of machine learning. We will build predictive models, tune their parameters, and determine how well they will perform on unseen data, all while using real-world datasets. We will be using scikit-learn, one of the most popular and user-friendly machine learning libraries for Python.

```
from sklearn import datasets
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import (train_test_split,
                                     cross_val_score,
                                     GridSearchCV,
                                     RandomizedSearchCV)
from sklearn.linear_model import (LinearRegression,
                                  Ridge,
                                  Lasso,
                                  LogisticRegression,
                                  ElasticNet)
from sklearn.metrics import (mean_squared_error,
                             classification_report,
                             confusion_matrix,
                             roc_curve,
                             roc_auc_score,
                             precision_recall_curve,
                             plot_precision_recall_curve)
from sklearn.tree import DecisionTreeClassifier
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC  # Support Vector Classification
from sklearn.preprocessing import (scale, StandardScaler)
import pandas as pd
import numpy as np
from scipy.stats import randint
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
plt.style.use("ggplot")
```

## Supervised learning

## What is machine learning?

- The art and science of giving computers the ability to learn to make decisions from data without being explicitly programmed!
- Examples:
  - Learning to predict whether an email is spam or not
  - Clustering Wikipedia entries into different categories
- Supervised learning: Uses labeled data
- Unsupervised learning: Uses unlabeled data

### Unsupervised learning

- Uncovering hidden patterns from unlabeled data
- Example: Grouping customers into distinct categories (clustering)

### Reinforcement learning

- Software agents interact with an environment
  - Learn how to optimize their behavior
  - Given a system of rewards and punishments
  - Draws inspiration from behavioral psychology
- Applications:
  - Economics
  - Genetics
  - Game playing
  - AlphaGo: First computer to defeat the world champion in Go

### Supervised learning

- Predictor variables/features and a target variable
- Aim: Predict the target variable, given the predictor variables
  - Classification: Target variable consists of categories
  - Regression: Target variable is continuous

### Naming conventions

- Features = predictor variables = independent variables
- Target variable = dependent variable = response variable
## Supervised learning

- Automate time-consuming or expensive manual tasks
  - Example: Doctor's diagnosis
- Make predictions about the future
  - Example: Will a customer click on an ad or not?
- Need labeled data
  - Historical data with labels
  - Experiments to get labeled data
  - Crowd-sourcing labeled data
## Supervised learning in Python

- We will use scikit-learn/sklearn
  - Integrates well with the SciPy stack
- Other libraries:
  - TensorFlow
  - Keras

```
iris = datasets.load_iris()
type(iris)
```

```
iris.keys()
```

```
type(iris.data)
```

```
type(iris.target)
```

```
iris.data.shape
```

```
iris.target_names
```

```
X = iris.data
y = iris.target
df = pd.DataFrame(X, columns=iris.feature_names)
df.head()
```

```
df2 = df.copy()
df2['target_names'] = iris.target
df2.head()
```

```
iris.target_names
```

```
df2.target_names.value_counts()
```

```
df2['target_names'] = df2.target_names.map({0:'setosa', 1:'versicolor', 2:'virginica'})
df2.head()
```

```
_ = pd.plotting.scatter_matrix(df, c=y, figsize=[8,8], s=150, marker="D")
```

### Numerical EDA

We'll be working with a dataset obtained from the UCI Machine Learning Repository, consisting of votes made by US House of Representatives Congressmen. Our goal will be to predict their party affiliation ('Democrat' or 'Republican') based on how they voted on certain key issues.

**Note:** We have preprocessed this dataset to deal with missing values, so that our focus can be directed toward understanding how to train and evaluate supervised learning models.

Before thinking about what supervised learning models we can apply, however, we need to perform exploratory data analysis (EDA) in order to understand the structure of the data.

```
votes = pd.read_csv("datasets/votes.csv")
votes.head()
```

```
votes.info()
```

```
votes.describe()
```

### Observations

- The DataFrame has a total of 435 rows and 17 columns.
- Except for `'party'`, all of the columns are of type `int64`.
- The first two rows of the DataFrame consist of votes made by Republicans and the next three rows consist of votes made by Democrats.
- The target variable in this DataFrame is `'party'`.

### Votes Visual EDA

The numerical EDA we did gave us some very important information, such as the names and data types of the columns, and the dimensions of the DataFrame. Following this with some visual EDA will give us an even better understanding of the data. All the features in this dataset are binary; that is, they are either 0 or 1. So a different type of plot would be more useful here, such as **Seaborn's** `countplot`.

```
def plot_countplot(column):
    """Plot a countplot of a vote column, split by party."""
    plt.figure()
    sns.countplot(x=column, hue='party', data=votes, palette='RdBu')
    plt.xticks([0, 1], ['No', 'Yes'])
    plt.show()

plot_countplot("education")
```

It seems like Democrats voted resoundingly against this bill, compared to Republicans. This is the kind of information that our machine learning model will seek to learn when we try to predict party affiliation solely based on voting behavior. An expert in U.S. politics may be able to predict this without machine learning, but probably not instantaneously - and certainly not if we are dealing with hundreds of samples!

```
plot_countplot('infants')
```

```
plot_countplot('water')
```

```
plot_countplot("budget")
```

```
plot_countplot('physician')
```

```
plot_countplot('salvador')
```

```
plot_countplot('religious')
```

```
plot_countplot('satellite')
```

```
plot_countplot('aid')
```

```
plot_countplot('missile')
```

```
plot_countplot('immigration')
```

```
plot_countplot('synfuels')
```

```
plot_countplot('superfund')
```

```
plot_countplot('crime')
```

```
plot_countplot('duty_free_exports')
```

```
plot_countplot('eaa_rsa')
```

## The classification challenge

## k-Nearest Neighbors

- Basic idea: Predict the label of a data point by
  - Looking at the 'k' closest labeled data points
  - Taking a majority vote

### Scikit-learn fit and predict

- All machine learning models are implemented as Python classes
  - They implement the algorithms for learning and predicting
  - Store the information learned from the data
- Training a model on the data = 'fitting' a model to the data
  - `.fit()` method
- To predict the labels of new data:
  - `.predict()` method

```
_ = sns.scatterplot(data=df2, x="petal width (cm)", y="petal length (cm)", hue='target_names')
plt.show()
```

```
knn = KNeighborsClassifier(n_neighbors=6)
knn.fit(iris['data'], iris['target'])
```

```
iris['data'].shape
```

```
iris['target'].shape
```

```
X_new = np.array([[5.6, 2.8, 3.9, 1.1],
                  [5.7, 2.6, 3.8, 1.3],
                  [4.7, 3.2, 1.3, 0.2]])
prediction = knn.predict(X_new)
prediction
```

The features need to be in an array where each column is a feature and each row a different observation or data point - in this case, a Congressman's voting record. The target needs to be a single column with the same number of observations as the feature data. We will name the feature array `X` and the response variable `y`, in accordance with common scikit-learn practice.

```
# Create arrays for the features and the response variable
y_votes = votes['party'].values
X_votes = votes.drop('party', axis=1).values
# Create a k-NN classifier with 6 neighbors
knn_votes = KNeighborsClassifier(n_neighbors=6)
# Fit the classifier to the data
knn_votes.fit(X_votes, y_votes)
```

Now that the k-NN classifier with 6 neighbors has been fit to the data, it can be used to predict the labels of new data points.

```
X_new_votes = pd.read_csv("datasets/X_new_votes.csv")
X_new_votes.head()
```

Having fit a k-NN classifier, we can now use it to predict the label of a new data point.

```
# Predict and print the label for the new data point X_new
new_prediction = knn_votes.predict(X_new_votes)
print("Prediction: {}".format(new_prediction))
```

## Measuring model performance

- In classification, accuracy is a commonly used metric
- Accuracy = Fraction of correct predictions
- Which data should be used to compute accuracy?
  - How well will the model perform on new data?
- Could compute accuracy on data used to fit classifier
  - NOT indicative of ability to generalize
- Split data into training and test set
  - Fit/train the classifier on the training set
  - Make predictions on test set
  - Compare predictions with the known labels
## Model complexity

- Larger k = smoother decision boundary = less complex model
- Smaller k = more complex model = can lead to overfitting

```
X_train_iris, X_test_iris, y_train_iris, y_test_iris = train_test_split(X, y, test_size=.3, random_state=21, stratify=y)
knn_iris = KNeighborsClassifier(n_neighbors=8)
knn_iris.fit(X_train_iris, y_train_iris)
y_pred_iris = knn_iris.predict(X_test_iris)
print(f"Test set predictions \n{y_pred_iris}")
```

```
knn_iris.score(X_test_iris, y_test_iris)
```

### The digits recognition dataset

We'll be working with the **MNIST** digits recognition dataset, which has 10 classes, the digits 0 through 9! A reduced version of the MNIST dataset is one of scikit-learn's included datasets.

Each sample in this scikit-learn dataset is an 8x8 image representing a handwritten digit. Each pixel is represented by an integer in the range 0 to 16, indicating varying levels of black. Helpfully for the MNIST dataset, scikit-learn provides an `'images'` key in addition to the `'data'` and `'target'` keys that we have seen with the Iris data. Because it is a 2D array of the images corresponding to each sample, this `'images'` key is useful for visualizing the images. On the other hand, the `'data'` key contains the feature array - that is, the images as a flattened array of 64 pixels.

```
# Load the digits dataset: digits
digits = datasets.load_digits()
# Print the keys and DESCR of the dataset
print(digits.keys())
print(digits.DESCR)
```

```
# Print the shape of the images and data keys
print(digits.images.shape)
digits.data.shape
```

```
# Display digit 1010
plt.imshow(digits.images[1010], cmap=plt.cm.gray_r, interpolation='nearest')
plt.show()
```

It looks like the image in question corresponds to the digit '5'. Now, can we build a classifier that can make this prediction not only for this image, but for all the other ones in the dataset?

### Train/Test Split + Fit/Predict/Accuracy

Now that we have learned about the importance of splitting your data into training and test sets, it's time to practice doing this on the digits dataset! After creating arrays for the features and target variable, we will split them into training and test sets, fit a k-NN classifier to the training data, and then compute its accuracy using the `.score()`

method.

```
# Create feature and target arrays
X_digits = digits.data
y_digits = digits.target
# Split into training and test set
X_train_digits, X_test_digits, y_train_digits, y_test_digits = train_test_split(X_digits, y_digits,
                                                                                test_size=0.2, random_state=42,
                                                                                stratify=y_digits)
# Create a k-NN classifier with 7 neighbors: knn_digits
knn_digits = KNeighborsClassifier(n_neighbors=7)
# Fit the classifier to the training data
knn_digits.fit(X_train_digits, y_train_digits)
# Print the accuracy
knn_digits.score(X_test_digits, y_test_digits)
```

Incredibly, this out-of-the-box k-NN classifier with 7 neighbors has learned from the training data and predicted the labels of the images in the test set with 98% accuracy, and it did so in less than a second! This is one illustration of how incredibly useful machine learning techniques can be.

### Overfitting and underfitting

We will now construct such a model complexity curve for the digits dataset! We will compute and plot the training and testing accuracy scores for a variety of different neighbor values.

By observing how the accuracy scores differ for the training and testing sets with different values of k, we will develop our intuition for overfitting and underfitting.

```
# Setup arrays to store train and test accuracies
neighbors_digits = np.arange(1, 9)
train_accuracy_digits = np.empty(len(neighbors_digits))
test_accuracy_digits = np.empty(len(neighbors_digits))
# Loop over different values of k
for i, k in enumerate(neighbors_digits):
    # Setup a k-NN Classifier with k neighbors: knn
    knn_digits = KNeighborsClassifier(n_neighbors=k)
    # Fit the classifier to the training data
    knn_digits.fit(X_train_digits, y_train_digits)
    # Compute accuracy on the training set
    train_accuracy_digits[i] = knn_digits.score(X_train_digits, y_train_digits)
    # Compute accuracy on the testing set
    test_accuracy_digits[i] = knn_digits.score(X_test_digits, y_test_digits)
# Generate plot
plt.title('k-NN: Varying Number of Neighbors')
plt.plot(neighbors_digits, test_accuracy_digits, label='Testing Accuracy')
plt.plot(neighbors_digits, train_accuracy_digits, label='Training Accuracy')
plt.legend()
plt.xlabel('Number of Neighbors')
plt.ylabel('Accuracy')
plt.show()
```

It looks like the test accuracy is highest for small values of k (roughly 1 to 5 neighbors). Using 8 neighbors or more seems to result in a simple model that underfits the data.

# Regression

We used image and political datasets to predict binary and multiclass outcomes. But what if our problem requires a continuous outcome? Regression is best suited to solving such problems. We will explore the fundamental concepts in regression and apply them to predict the life expectancy in a given country using Gapminder data.

## Introduction to regression

An example of a regression problem: a bike share company using time and weather data to predict the number of bikes being rented at any given hour. The target variable here - the number of bike rentals at any given hour - is quantitative, so this is best framed as a regression problem.

```
boston = datasets.load_boston()
boston.data.shape
```

```
boston.target.shape
```

```
boston.feature_names
```

```
boston_df = pd.DataFrame(boston.data, columns=boston.feature_names)
boston_df['MEDV'] = boston.target
boston_df.head()
```

```
X_boston = boston.data
y_boston = boston.target
```

```
X_boston_rooms = X_boston[:,5]
type(X_boston_rooms), type(y_boston)
```

```
y_boston = y_boston.reshape(-1,1)
X_boston_rooms = X_boston_rooms.reshape(-1,1)
```

```
plt.scatter(X_boston_rooms, y_boston)
plt.ylabel('Value of house /1000 ($)')
plt.xlabel('Number of rooms')
plt.show();
```

```
reg_boston = LinearRegression()
reg_boston.fit(X_boston_rooms, y_boston)
boston_prediction_space = np.linspace(min(X_boston_rooms), max(X_boston_rooms)).reshape(-1,1)
```

```
plt.scatter(X_boston_rooms, y_boston, color="blue")
plt.plot(boston_prediction_space, reg_boston.predict(boston_prediction_space), color='black', linewidth=3)
plt.show()
```

### Importing Gapminder data for supervised learning

We will work with Gapminder data that we have consolidated into one CSV file.

Specifically, our goal will be to use this data to predict the life expectancy in a given country based on features such as the country's GDP, fertility rate, and population.

Since the target variable here is quantitative, this is a regression problem. To begin, we will fit a linear regression with just one feature: `'fertility'`

, which is the average number of children a woman in a given country gives birth to.

Before that, however, we need to import the data and get it into the form needed by scikit-learn. This involves creating feature and target variable arrays. Furthermore, since we are going to use only one feature to begin with, we need to do some reshaping using NumPy's `.reshape()`

method.

```
# Read the CSV file into a DataFrame: gapminder_df
gapminder = pd.read_csv("datasets/gapminder.csv")
# Create arrays for features and target variable
y_gapminder = gapminder.life.values
X_gapminder = gapminder.fertility.values
# Print the dimensions of X and y before reshaping
print("Dimensions of y before reshaping: {}".format(y_gapminder.shape))
print("Dimensions of X before reshaping: {}".format(X_gapminder.shape))
# Reshape X and y
y_gapminder = y_gapminder.reshape(-1,1)
X_gapminder = X_gapminder.reshape(-1,1)
# Print the dimensions of X and y after reshaping
print("Dimensions of y after reshaping: {}".format(y_gapminder.shape))
print("Dimensions of X after reshaping: {}".format(X_gapminder.shape))
```

```
sns.heatmap(gapminder.corr(), square=True, cmap="RdYlGn")
plt.show()
```

Cells in green show positive correlation, while cells in red show negative correlation. `life` and `fertility` are negatively correlated, while `GDP` and `life` are positively correlated.

```
gapminder.head()
```

```
gapminder.info()
```

The DataFrame has 139 samples (or rows) and 9 columns.

```
gapminder.describe()
```

The mean of `life` is 69.602878.

## The basics of linear regression

## Regression mechanics

- $y = ax + b$
  - $y$ = target
  - $x$ = single feature
  - $a$, $b$ = parameters of model
- How do we choose $a$ and $b$?
  - Define an error function for any given line
  - Choose the line that minimizes the error function
- Ordinary least squares (OLS): Minimize sum of squares of residuals

### Linear regression in higher dimensions

- $y = a_1x_1 + a_2x_2 + b$
- To fit a linear regression model here:
  - Need to specify 3 variables
- In higher dimensions:
  - Must specify a coefficient for each feature and the variable $b$
  - $y = a_1x_1 + a_2x_2 + a_3x_3 + ... + a_nx_n + b$
- Scikit-learn API works exactly the same way:
  - Pass two arrays: features and target
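
As a quick check of the mechanics above, we can compare scikit-learn's `LinearRegression` with the closed-form least-squares solution on synthetic data (the data and variable names here are illustrative, not part of the course datasets):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic data generated from y = 2*x1 - 3*x2 + 5 plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 5 + 0.01 * rng.normal(size=200)

# Fit with scikit-learn: pass two arrays, features and target
reg = LinearRegression()
reg.fit(X, y)

# OLS solved directly: append a column of ones for the intercept b
X1 = np.column_stack([X, np.ones(len(X))])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

print(reg.coef_, reg.intercept_)  # close to [2, -3] and 5
print(beta)                       # same parameters: [a1, a2, b]
```

Both routes minimize the same sum of squared residuals, so the recovered parameters agree.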

```
X_train_boston, X_test_boston, y_train_boston, y_test_boston = train_test_split(X_boston, y_boston,
                                                                                test_size=.3, random_state=42)
reg_all_boston = LinearRegression()
reg_all_boston.fit(X_train_boston, y_train_boston)
y_pred_boston = reg_all_boston.predict(X_test_boston)
reg_all_boston.score(X_test_boston, y_test_boston)
```

```
sns.scatterplot(data=gapminder, x="fertility", y="life")
plt.show()
```

As you can see, there is a strongly negative correlation, so a linear regression should be able to capture this trend. Our job is to fit a linear regression and then predict the life expectancy, overlaying these predicted values on the plot to generate a regression line. We will also compute and print the $R^2$ score using scikit-learn's `.score()`

method.

```
# Create the regressor: reg
reg_gapminder = LinearRegression()
# Create the prediction space
prediction_space = np.linspace(min(X_gapminder), max(X_gapminder)).reshape(-1,1)
# Fit the model to the data
reg_gapminder.fit(X_gapminder,y_gapminder)
# Compute predictions over the prediction space: y_pred
y_pred_gapminder = reg_gapminder.predict(prediction_space)
# Print R^2
print(reg_gapminder.score(X_gapminder, y_gapminder))
```

```
# Plot regression line
sns.scatterplot(data=gapminder, x="fertility", y="life")
plt.plot(prediction_space, y_pred_gapminder, color='black', linewidth=3)
plt.show()
```

Notice how the line captures the underlying trend in the data. And the performance is quite decent for this basic regression model with only one feature!

### Train/test split for regression

Train and test sets are vital to ensure that the supervised learning model is able to generalize well to new data. This was true for classification models, and is equally true for linear regression models.

We will split the Gapminder dataset into training and testing sets, and then fit and predict a linear regression over **all** features. In addition to computing the $R^2$ score, we will also compute the Root Mean Squared Error (RMSE), which is another commonly used metric to evaluate regression models.

```
X_gapminder = gapminder.drop("life", axis=1).values
```

```
# Create training and test sets
X_train_gapminder, X_test_gapminder, y_train_gapminder, y_test_gapminder = train_test_split(X_gapminder, y_gapminder, test_size = .3, random_state=42)
# Create the regressor: reg_all
reg_all_gapminder = LinearRegression()
# Fit the regressor to the training data
reg_all_gapminder.fit(X_train_gapminder, y_train_gapminder)
# Predict on the test data: y_pred
y_pred_gapminder = reg_all_gapminder.predict(X_test_gapminder)
# Compute and print R^2 and RMSE
print("R^2: {}".format(reg_all_gapminder.score(X_test_gapminder, y_test_gapminder)))
rmse_gapminder = np.sqrt(mean_squared_error(y_test_gapminder, y_pred_gapminder))
print("Root Mean Squared Error: {}".format(rmse_gapminder))
```

Using all features has improved the model score. This makes sense, as the model has more information to learn from. However, there is one potential pitfall to this process. Can you spot it?

## Cross-validation

## Cross-validation motivation

- Model performance is dependent on the way the data is split
- Not representative of the model's ability to generalize
- Solution: Cross-validation!
## Cross-validation and model performance

- 5 folds = 5-fold CV
- 10 folds = 10-fold CV
- k folds = k-fold CV
- More folds = More computationally expensive
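
To make the fold mechanics concrete, here is a minimal sketch of what 5-fold splitting looks like, using `KFold` on made-up data (the array here is illustrative; `cross_val_score` performs an equivalent split internally):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features

kf = KFold(n_splits=5)
folds = list(kf.split(X))
for fold, (train_idx, test_idx) in enumerate(folds):
    # Each fold holds out 2 of the 10 samples as the test set
    print(f"Fold {fold}: train={train_idx}, test={test_idx}")
```

Every sample lands in the test set exactly once, so each fold's score is computed on data the model did not see during that fit.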

```
cv_results_boston = cross_val_score(reg_all_boston, X_boston, y_boston, cv=5)
cv_results_boston
```

```
np.mean(cv_results_boston)
```

```
np.median(cv_results_boston)
```

### 5-fold cross-validation

Cross-validation is a vital step in evaluating a model. It maximizes the amount of data that is used to train the model, as during the course of training, the model is not only trained, but also tested on all of the available data.

We will practice 5-fold cross-validation on the Gapminder data. By default, scikit-learn's `cross_val_score()` function uses $R^2$ as the metric of choice for regression. Since we are performing 5-fold cross-validation, the function will return 5 scores. We will compute these 5 scores and then take their average.

```
# Compute 5-fold cross-validation scores: cv_scores
cv_scores_gapminder = cross_val_score(reg_gapminder, X_gapminder, y_gapminder, cv=5)
# Print the 5-fold cross-validation scores
print(cv_scores_gapminder)
print("Average 5-Fold CV Score: {}".format(np.mean(cv_scores_gapminder)))
```

Now that we have cross-validated our model, we can more confidently evaluate its predictions.

```
%timeit cross_val_score(reg_gapminder, X_gapminder, y_gapminder, cv=3)
```

```
%timeit cross_val_score(reg_gapminder, X_gapminder, y_gapminder, cv=10)
```

```
# Perform 3-fold CV
cvscores_3_gapminder = cross_val_score(reg_gapminder, X_gapminder, y_gapminder, cv=3)
print(np.mean(cvscores_3_gapminder))
# Perform 10-fold CV
cvscores_10_gapminder = cross_val_score(reg_gapminder, X_gapminder, y_gapminder, cv=10)
print(np.mean(cvscores_10_gapminder))
```

## Regularized regression

## Why regularize?

- Recall: Linear regression minimizes a loss function
  - It chooses a coefficient for each feature variable
- Large coefficients can lead to overfitting
- Penalizing large coefficients: Regularization

### Ridge regression

- Loss function = OLS loss function + $\alpha * \sum_{i=1}^{n} a_i^2$
- Alpha: Parameter we need to choose
  - Picking alpha here is similar to picking k in k-NN
  - Hyperparameter tuning
- Alpha controls model complexity
  - Alpha = 0: We get back OLS (can lead to overfitting)
  - Very high alpha: Can lead to underfitting
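
The effect of alpha can be sketched on synthetic data (illustrative values only): at a tiny alpha the ridge coefficients match plain OLS, and as alpha grows they shrink toward zero.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 2.0, -1.5, 0.5, 3.0]) + rng.normal(size=100)

ols_coef = LinearRegression().fit(X, y).coef_

# L2 norm of the ridge coefficients for increasing alpha
norms = []
for alpha in [1e-8, 1, 10, 100]:
    ridge = Ridge(alpha=alpha).fit(X, y)
    norms.append(np.linalg.norm(ridge.coef_))

print(norms)  # decreasing: stronger penalty, smaller coefficients
```
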
## Lasso regression

- Loss function = OLS loss function + $\alpha * \sum_{i=1}^{n} |a_i|$
## Lasso regression for feature selection

- Can be used to select important features of a dataset
- Shrinks the coefficients of less important features to exactly 0

```
ridge_boston = Ridge(alpha=.1, normalize=True)
ridge_boston.fit(X_train_boston, y_train_boston)
ridge_pred_boston = ridge_boston.predict(X_test_boston)
ridge_boston.score(X_test_boston, y_test_boston)
```

```
lasso_boston = Lasso(alpha=.1, normalize=True)
lasso_boston.fit(X_train_boston, y_train_boston)
lasso_pred_boston = lasso_boston.predict(X_test_boston)
lasso_boston.score(X_test_boston, y_test_boston)
```

```
names_boston = boston.feature_names
lasso_boston_2 = Lasso(alpha=.1)
lasso_coef_boston = lasso_boston_2.fit(X_boston, y_boston).coef_
_ = plt.plot(range(len(names_boston)), lasso_coef_boston)
_ = plt.xticks(range(len(names_boston)), names_boston, rotation=60)
_ = plt.ylabel("Coefficients")
plt.show()
```

### Regularization I: Lasso

We saw how Lasso selected out the 'RM' feature as being the most important for predicting Boston house prices, while shrinking the coefficients of certain other features to 0. Its ability to perform feature selection in this way becomes even more useful when you are dealing with data involving thousands of features.

We will fit a lasso regression to the Gapminder data we have been working with and plot the coefficients, just as with the Boston data.

```
df_columns_gapminder = pd.Index(['population', 'fertility', 'HIV', 'CO2', 'BMI_male', 'GDP',
'BMI_female', 'child_mortality'],
dtype='object')
```

```
# Instantiate a lasso regressor: lasso
lasso_gapminder = Lasso(alpha=.4, normalize=True)
# Fit the regressor to the data
lasso_gapminder.fit(X_gapminder,y_gapminder)
# Compute and print the coefficients
lasso_coef_gapminder = lasso_gapminder.fit(X_gapminder,y_gapminder).coef_
print(lasso_coef_gapminder)
# Plot the coefficients
plt.plot(range(len(df_columns_gapminder)), lasso_coef_gapminder)
plt.xticks(range(len(df_columns_gapminder)), df_columns_gapminder.values, rotation=60)
plt.margins(0.02)
plt.show()
```

According to the lasso algorithm, it seems like `'child_mortality'`

is the most important feature when predicting life expectancy.

### Regularization II: Ridge

Lasso is great for feature selection, but when building regression models, Ridge regression should be the first choice.

Lasso performs regularization by adding to the loss function a penalty term of the *absolute* value of each coefficient multiplied by some alpha. This is also known as $L1$ regularization because the regularization term is the $L1$ norm of the coefficients. This is not the only way to regularize, however.

```
def display_plot(cv_scores, cv_scores_std):
    """Plot the R^2 score as well as the standard error for each alpha."""
    # Convert lists to arrays so the arithmetic below works elementwise
    cv_scores = np.array(cv_scores)
    std_error = np.array(cv_scores_std) / np.sqrt(10)
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(alpha_space_gapminder, cv_scores)
    ax.fill_between(alpha_space_gapminder, cv_scores + std_error, cv_scores - std_error, alpha=0.2)
    ax.set_ylabel('CV Score +/- Std Error')
    ax.set_xlabel('Alpha')
    ax.axhline(np.max(cv_scores), linestyle='--', color='.5')
    ax.set_xlim([alpha_space_gapminder[0], alpha_space_gapminder[-1]])
    ax.set_xscale('log')
    plt.show()
```

If instead we took the sum of the *squared* values of the coefficients multiplied by some alpha - like in Ridge regression - we would be computing the $L2$ norm. We will fit ridge regression models over a range of different alphas, and plot cross-validated $R^2$ scores for each, using the `display_plot` function defined above, which plots the $R^2$ score as well as the standard error for each alpha:

```
# Setup the array of alphas and lists to store scores
alpha_space_gapminder = np.logspace(-4, 0, 50)
ridge_scores_gapminder = []
ridge_scores_std_gapminder = []
# Create a ridge regressor: ridge
ridge_gapminder = Ridge(normalize=True)
# Compute scores over range of alphas
for alpha in alpha_space_gapminder:
    # Specify the alpha value to use: ridge.alpha
    ridge_gapminder.alpha = alpha
    # Perform 10-fold CV: ridge_cv_scores
    ridge_cv_scores_gapminder = cross_val_score(ridge_gapminder, X_gapminder, y_gapminder, cv=10)
    # Append the mean of ridge_cv_scores to ridge_scores
    ridge_scores_gapminder.append(np.mean(ridge_cv_scores_gapminder))
    # Append the std of ridge_cv_scores to ridge_scores_std
    ridge_scores_std_gapminder.append(np.std(ridge_cv_scores_gapminder))
# Display the plot
display_plot(ridge_scores_gapminder, ridge_scores_std_gapminder)
```

Notice how the cross-validation scores change with different alphas.

## How good is your model?

## Classification metrics

- Measuring model performance with accuracy:
  - Fraction of correctly classified samples
  - Not always a useful metric

### Class imbalance example: Emails

- Spam classification
  - 99% of emails are real; 1% of emails are spam
- Could build a classifier that predicts ALL emails as real
  - 99% accurate!
  - But horrible at actually classifying spam
  - Fails at its original purpose
- Need more nuanced metrics

### Diagnosing classification predictions

- Confusion matrix
- Accuracy: $\frac{tp+tn}{tp+tn+fp+fn}$

### Metrics from the confusion matrix

- Precision: $\frac{tp}{tp+fp}$
- Recall: $\frac{tp}{tp+fn}$
- F1 score: $2 \cdot \frac{precision \cdot recall}{precision + recall}$
- High precision: Not many real emails predicted as spam
- High recall: Predicted most spam emails correctly
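
To tie these formulas to scikit-learn, here is a toy confusion matrix computed by hand and checked against the library's metric functions (the labels are made up for illustration):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score, recall_score, f1_score

y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])

# ravel() flattens the 2x2 confusion matrix into (tn, fp, fn, tp)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

precision = tp / (tp + fp)  # 3 / (3 + 1) = 0.75
recall = tp / (tp + fn)     # 3 / (3 + 1) = 0.75
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)
```
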

```
confusion_matrix(y_test_iris, y_pred_iris)
```

```
print(classification_report(y_test_iris, y_pred_iris))
```

```
X_train_votes, X_test_votes, y_train_votes, y_test_votes = train_test_split(X_votes, y_votes, test_size=.4, random_state=42)
knn_votes = KNeighborsClassifier(n_neighbors=8)
knn_votes.fit(X_train_votes, y_train_votes)
y_pred_votes = knn_votes.predict(X_test_votes)
```

```
confusion_matrix(y_test_votes, y_pred_votes)
```

```
print(classification_report(y_test_votes, y_pred_votes))
```

The support gives the number of samples of the true response that lie in each class; here, the support is the number of Republicans or Democrats in the test set on which the classification report was computed. The precision, recall, and f1-score columns then give the respective metrics for each class.

### Metrics for classification

We evaluated the performance of the k-NN classifier based on its accuracy. However, accuracy is not always an informative metric. We will dive more deeply into evaluating the performance of binary classifiers by computing a confusion matrix and generating a classification report.

We'll work with the PIMA Indians dataset obtained from the UCI Machine Learning Repository. The goal is to predict whether or not a given female patient will contract diabetes based on features such as BMI, age, and number of pregnancies. Therefore, it is a binary classification problem. A target value of 0 indicates that the patient does not have diabetes, while a value of 1 indicates that the patient does have diabetes.

```
pidd = pd.read_csv("datasets/pima_indians_diabetes_database.csv")
pidd.head()
```

We will train a k-NN classifier to the data and evaluate its performance by generating a confusion matrix and classification report.

```
y_pidd = pidd.diabetes.values
X_pidd = pidd.drop("diabetes", axis=1).values
```

```
# Create training and test set
X_train_pidd, X_test_pidd, y_train_pidd, y_test_pidd = train_test_split(X_pidd, y_pidd, test_size=.4, random_state=42)
# Instantiate a k-NN classifier: knn
knn_pidd = KNeighborsClassifier(n_neighbors=6)
# Fit the classifier to the training data
knn_pidd.fit(X_train_pidd, y_train_pidd)
# Predict the labels of the test data: y_pred
y_pred_pidd = knn_pidd.predict(X_test_pidd)
# Generate the confusion matrix and classification report
print(confusion_matrix(y_test_pidd, y_pred_pidd))
print(classification_report(y_test_pidd, y_pred_pidd))
```

By analyzing the confusion matrix and classification report, we can get a much better understanding of the classifier's performance.

## Logistic regression and the ROC curve

## Logistic regression for binary classification

- Logistic regression outputs probabilities
- If the probability ‘p’ is greater than 0.5:- The data is labeled ‘1’- If the probability ‘p’ is less than 0.5:

- The data is labeled ‘0’
## Probability thresholds

- By default, logistic regression threshold = 0.5
- Not specific to logistic regression

- k-NN classifiers also have thresholds
- What happens if we vary the threshold?
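
To see what varying the threshold does, we can threshold the output of `.predict_proba()` ourselves. A small sketch on synthetic data (the data and model here are illustrative, not the diabetes set):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic two-feature data (illustrative only)
rng = np.random.RandomState(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

logreg = LogisticRegression().fit(X, y)
proba = logreg.predict_proba(X)[:, 1]   # P(class = 1) for each sample

# .predict() is the same as thresholding these probabilities at 0.5
assert ((proba > 0.5).astype(int) == logreg.predict(X)).all()

# Lowering the threshold labels at least as many samples positive
assert (proba > 0.3).sum() >= (proba > 0.5).sum()
```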

```
# Create the classifier: logreg
logreg_pidd = LogisticRegression()
# Fit the classifier to the training data
logreg_pidd.fit(X_train_pidd, y_train_pidd)
# Predict the labels of the test set: y_pred
y_pred_logreg_pidd = logreg_pidd.predict(X_test_pidd)
# Compute and print the confusion matrix and classification report
print(confusion_matrix(y_test_pidd, y_pred_logreg_pidd))
print(classification_report(y_test_pidd, y_pred_logreg_pidd))
```

```
disp = plot_precision_recall_curve(logreg_pidd, X_test_pidd, y_test_pidd)
disp.ax_.set_title('Precision-Recall curve: ')
```

- A recall of 1 corresponds to a classifier with a low threshold in which all females who contract diabetes were correctly classified as such, at the expense of many misclassifications of those who did not have diabetes.
- Precision is undefined for a classifier which makes no positive predictions, that is, classifies everyone as not having diabetes.
- When the threshold is very close to 1, precision is also 1, because the classifier is absolutely certain about its predictions.
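
Since the curve above is built from precision/recall pairs at every threshold, those pairs can also be computed directly with `precision_recall_curve()`. A minimal sketch with toy labels and scores (made up, not the diabetes predictions):

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy labels and scores (made up); higher score = more confident the label is 1
y_true = np.array([0, 0, 1, 1])
y_scores = np.array([0.1, 0.4, 0.35, 0.8])

precision, recall, thresholds = precision_recall_curve(y_true, y_scores)
# At the lowest threshold everything is predicted positive, so recall starts at 1
assert recall[0] == 1.0
# The curve ends at recall 0 with precision set to 1 by convention
assert recall[-1] == 0.0 and precision[-1] == 1.0
```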

### Plotting an ROC curve

Classification reports and confusion matrices are great methods to quantitatively evaluate model performance, while ROC curves provide a way to visually evaluate models. Most classifiers in scikit-learn have a `.predict_proba()` method, which returns the probability of a given sample being in a particular class. Having built a logistic regression model, we'll now evaluate its performance by plotting an ROC curve. In doing so, we'll make use of the `.predict_proba()` method and become familiar with its functionality.

```
# Compute predicted probabilities: y_pred_prob
y_pred_prob_pidd = logreg_pidd.predict_proba(X_test_pidd)[:,1]
# Generate ROC curve values: fpr, tpr, thresholds
fpr_pidd, tpr_pidd, thresholds_pidd = roc_curve(y_test_pidd, y_pred_prob_pidd)
# Plot ROC curve
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_pidd, tpr_pidd)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.show()
```

### AUC computation

Say you have a binary classifier that in fact is just randomly making guesses. It would be correct approximately 50% of the time, and the resulting ROC curve would be a diagonal line in which the True Positive Rate and False Positive Rate are always equal. The Area under this ROC curve would be 0.5. This is one way in which the AUC is an informative metric to evaluate a model. If the AUC is greater than 0.5, the model is better than random guessing. Always a good sign!
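
This claim can be verified empirically: scoring random guesses against random labels should give an AUC near 0.5. A quick sketch with synthetic labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Random labels scored by random guesses: AUC should land near 0.5
rng = np.random.RandomState(42)
y_true = rng.randint(0, 2, size=10000)
random_scores = rng.rand(10000)

auc = roc_auc_score(y_true, random_scores)
assert abs(auc - 0.5) < 0.05
```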

We'll calculate AUC scores using the `roc_auc_score()` function from `sklearn.metrics`, as well as by performing cross-validation on the diabetes dataset.

```
# Compute and print AUC score
print("AUC: {}".format(roc_auc_score(y_test_pidd, y_pred_prob_pidd)))
# Compute cross-validated AUC scores: cv_auc
cv_auc_pidd = cross_val_score(logreg_pidd, X_pidd, y_pidd, cv=5, scoring="roc_auc")
# Print list of AUC scores
print("AUC scores computed using 5-fold cross-validation: {}".format(cv_auc_pidd))
```

## Hyperparameter tuning

## Hyperparameter tuning

- Linear regression: choosing parameters
- Ridge/lasso regression: choosing alpha
- k-Nearest Neighbors: choosing n_neighbors
- Parameters like alpha and k: hyperparameters
- Hyperparameters cannot be learned by fitting the model
## Choosing the correct hyperparameter

- Try a bunch of different hyperparameter values
- Fit all of them separately
- See how well each performs
- Choose the best performing one
- It is essential to use cross-validation
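
These steps can be written as a plain loop over candidate values scored with cross-validation, which is exactly what `GridSearchCV` automates. A minimal sketch using the built-in iris data rather than the votes set:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Try each candidate value, cross-validate it, keep the best:
# this is the loop that GridSearchCV automates
scores = {}
for k in range(1, 10):
    knn = KNeighborsClassifier(n_neighbors=k)
    scores[k] = cross_val_score(knn, X, y, cv=5).mean()

best_k = max(scores, key=scores.get)
print("best n_neighbors:", best_k)
```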

```
param_grid_votes = {"n_neighbors":np.arange(1,50)}
knn_votes = KNeighborsClassifier()
knn_cv_votes = GridSearchCV(knn_votes, param_grid=param_grid_votes, cv=5)
knn_cv_votes.fit(X_votes, y_votes)
knn_cv_votes.best_params_
```

```
knn_cv_votes.best_score_
```

```
# Setup the hyperparameter grid
c_space_pidd = np.logspace(-5, 8, 15)
param_grid_pidd = {'C': c_space_pidd}
# Instantiate the GridSearchCV object: logreg_cv
logreg_cv_pidd = GridSearchCV(logreg_pidd, param_grid_pidd, cv=5)
# Fit it to the data
logreg_cv_pidd.fit(X_pidd,y_pidd)
# Print the tuned parameters and score
print("Tuned Logistic Regression Parameters: {}".format(logreg_cv_pidd.best_params_))
print("Best score is {}".format(logreg_cv_pidd.best_score_))
```

### Hyperparameter tuning with RandomizedSearchCV

`GridSearchCV` can be computationally expensive, especially if you are searching over a large hyperparameter space and dealing with multiple hyperparameters. A solution to this is to use `RandomizedSearchCV`, in which not all hyperparameter values are tried out. Instead, a fixed number of hyperparameter settings is sampled from specified probability distributions.

Decision trees have many parameters that can be tuned, such as `max_features`, `max_depth`, and `min_samples_leaf`: this makes them an ideal use case for `RandomizedSearchCV`. Our goal is to use `RandomizedSearchCV` to find the optimal hyperparameters.

```
# Setup the parameters and distributions to sample from: param_dist
param_dist_pidd = {"max_depth": [3, None],
"max_features": randint(1, 9),
"min_samples_leaf": randint(1, 9),
"criterion": ["gini", "entropy"]}
# Instantiate a Decision Tree classifier: tree
tree_pidd = DecisionTreeClassifier()
# Instantiate the RandomizedSearchCV object: tree_cv
tree_cv_pidd = RandomizedSearchCV(tree_pidd, param_dist_pidd, cv=5)
# Fit it to the data
tree_cv_pidd.fit(X_pidd, y_pidd)
# Print the tuned parameters and score
print("Tuned Decision Tree Parameters: {}".format(tree_cv_pidd.best_params_))
print("Best score is {}".format(tree_cv_pidd.best_score_))
```

**Note:** `RandomizedSearchCV` will never outperform `GridSearchCV`. Instead, it is valuable because it saves on computation time.

## Hold-out set for final evaluation

## Hold-out set reasoning

- How well can the model perform on never before seen data?
- Using ALL data for cross-validation is not ideal
- Split data into training and hold-out set at the beginning
- Perform grid search cross-validation on training set
- Choose best hyperparameters and evaluate on hold-out set
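
The workflow above can be sketched end to end. This uses the built-in breast cancer data as a stand-in for the course datasets:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# 1. Split off a hold-out set before any tuning takes place
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 2. Grid search with cross-validation on the training portion only
grid = GridSearchCV(LogisticRegression(max_iter=5000),
                    {"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X_train, y_train)

# 3. Final, unbiased evaluation on the untouched hold-out set
holdout_acc = grid.score(X_hold, y_hold)
print(grid.best_params_, holdout_acc)
```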

### Hold-out set in practice I: Classification

You will now practice evaluating a model with tuned hyperparameters on a hold-out set. In addition to $C$, logistic regression has a `'penalty'` hyperparameter which specifies whether to use `'l1'` or `'l2'` regularization. Our job is to create a hold-out set, and to tune the `'C'` and `'penalty'` hyperparameters of a logistic regression classifier using `GridSearchCV` on the training set.

```
param_grid_pidd['penalty'] = ['l1', 'l2']
# Instantiate the GridSearchCV object: logreg_cv
logreg_cv_pidd = GridSearchCV(logreg_pidd, param_grid_pidd, cv=5)
# Fit it to the training data
logreg_cv_pidd.fit(X_train_pidd, y_train_pidd)
# Print the optimal parameters and best score
print("Tuned Logistic Regression Parameter: {}".format(logreg_cv_pidd.best_params_))
print("Tuned Logistic Regression Accuracy: {}".format(logreg_cv_pidd.best_score_))
```

### Hold-out set in practice II: Regression

Lasso used the $L1$ penalty to regularize, while ridge used the $L2$ penalty. There is another type of regularized regression known as the elastic net. In elastic net regularization, the penalty term is a linear combination of the $L1$ and $L2$ penalties:

$$ a \cdot L1 + b \cdot L2 $$

In scikit-learn, this term is controlled by the `'l1_ratio'` parameter: an `'l1_ratio'` of 1 corresponds to an $L1$ penalty, and anything lower is a combination of $L1$ and $L2$.
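
One way to check this mapping is to set `l1_ratio=1` and compare against `Lasso`, which fits the pure $L1$ penalty. A small sketch on synthetic data (the data is made up):

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso

# Synthetic regression data with a sparse true coefficient vector
rng = np.random.RandomState(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.5, 0.0, -2.0, 0.0, 0.5]) + rng.normal(scale=0.1, size=100)

# With l1_ratio=1 the elastic net penalty reduces to a pure L1 (lasso) penalty
enet = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)
assert np.allclose(enet.coef_, lasso.coef_)
```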

We will use `GridSearchCV` to tune the `'l1_ratio'` of an elastic net model trained on the Gapminder data.

```
# Create the hyperparameter grid
l1_space_gapminder = np.linspace(0, 1, 30)
param_grid_gapminder = {'l1_ratio': l1_space_gapminder}
# Instantiate the ElasticNet regressor: elastic_net
elastic_net_gapminder = ElasticNet()
# Setup the GridSearchCV object: gm_cv
gm_cv_gapminder = GridSearchCV(elastic_net_gapminder, param_grid_gapminder, cv=5)
# Fit it to the training data
gm_cv_gapminder.fit(X_train_gapminder, y_train_gapminder)
# Predict on the test set and compute metrics
y_pred_gapminder = gm_cv_gapminder.predict(X_test_gapminder)
r2_gapminder = gm_cv_gapminder.score(X_test_gapminder, y_test_gapminder)
mse_gapminder = mean_squared_error(y_test_gapminder, y_pred_gapminder)
print("Tuned ElasticNet l1 ratio: {}".format(gm_cv_gapminder.best_params_))
print("Tuned ElasticNet R squared: {}".format(r2_gapminder))
print("Tuned ElasticNet MSE: {}".format(mse_gapminder))
```

## Preprocessing data

## Dealing with categorical features

- Scikit-learn will not accept categorical features by default
- Need to encode categorical features numerically
- Convert to ‘dummy variables’

- 0: Observation was NOT that category
- 1: Observation was that category

### Dealing with categorical features in Python

- scikit-learn: `OneHotEncoder()`
- pandas: `get_dummies()`

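A minimal `get_dummies()` sketch on a toy frame that mirrors the autos `'origin'` column (the values here are made up):

```python
import pandas as pd

# Toy frame mirroring the autos 'origin' column (values are made up)
df = pd.DataFrame({"origin": ["US", "Europe", "Asia", "US"],
                   "mpg": [18, 30, 32, 21]})

# drop_first=True drops one category ('Asia') to avoid redundancy
dummies = pd.get_dummies(df, drop_first=True)
print(dummies.columns.tolist())  # ['mpg', 'origin_Europe', 'origin_US']
```
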
```
autos = pd.read_csv("datasets/autos.csv")
autos.head()
```

```
autos.info()
```

```
autos.describe()
```

```
autos.shape
```

```
_ = sns.boxplot(data=autos, x="origin", y="mpg", order=['Asia', 'US', 'Europe'])
plt.show()
```

```
autos_origin = pd.get_dummies(autos)
autos_origin.head()
```

```
autos_origin = autos_origin.drop("origin_Asia", axis=1)
autos_origin.head()
```

```
X_autos_origin = autos_origin[["origin_Europe", "origin_US"]].values
y_autos_origin = autos_origin['mpg'].values
```

```
X_train_autos_origin, X_test_autos_origin, y_train_autos_origin, y_test_autos_origin = train_test_split(X_autos_origin,
y_autos_origin,
test_size=.3,
random_state=42)
ridge_autos_origin = Ridge(alpha=.5, normalize=True).fit(X_train_autos_origin, y_train_autos_origin)
ridge_autos_origin.score(X_test_autos_origin, y_test_autos_origin)
```

### Exploring categorical features

The Gapminder dataset that we worked with in a previous section also contained a categorical `'Region'` feature, which we dropped since we did not have the tools to deal with it. Now, however, we do, so we have added it back in!

We will explore this feature. Boxplots are particularly useful for visualizing categorical features such as this.

```
gapminder.head()
```

```
gapminder_2 = pd.read_csv("datasets/gapminder_2.csv")
gapminder_2.head()
```

```
# Create a boxplot of life expectancy per region
gapminder_2.boxplot("life", "Region", rot=60)
# Show the plot
plt.show()
```

**Important:** Exploratory data analysis should always be the precursor to model building.

### Creating dummy variables

scikit-learn does not accept non-numerical features. The `'Region'` feature contains very useful information that can predict life expectancy. For example, Sub-Saharan Africa has a lower life expectancy compared to Europe and Central Asia. Therefore, if we are trying to predict life expectancy, it would be preferable to retain the `'Region'` feature. To do this, we need to binarize it by creating dummy variables, which is what we will do.

```
# Create dummy variables with drop_first=True: df_region
gapminder_region = pd.get_dummies(gapminder_2, drop_first=True)
# Print the new columns of df_region
print(gapminder_region.columns)
```

```
gapminder_region.head()
```

Now that we have created the dummy variables, we can use the `'Region'` feature to predict life expectancy!

```
X_gapminder_region = gapminder_region.drop("life", axis=1).values
y_gapminder_region = gapminder_region.life.values
```

```
# Instantiate a ridge regressor: ridge
ridge_gapminder_region = Ridge(alpha=.5, normalize=True)
# Perform 5-fold cross-validation: ridge_cv
ridge_cv_gapminder_region = cross_val_score(ridge_gapminder_region, X_gapminder_region, y_gapminder_region, cv=5)
# Print the cross-validated scores
print(ridge_cv_gapminder_region)
```

We now know how to build models using data that includes categorical features.

```
pidd.head()
```

```
pidd.info()
```

```
pidd.insulin.replace(0, np.nan, inplace=True)
pidd.bmi.replace(0, np.nan, inplace=True)
pidd.triceps.replace(0, np.nan, inplace=True)
pidd.info()
```

```
pidd.head()
```

```
votes2 = pd.read_csv("datasets/votes2.csv")
votes2.head()
```

There are certain data points labeled with a `'?'`. These denote missing values. We will convert the `'?'`s to `NaN`s, and then drop the rows that contain them from the DataFrame.

```
# Convert '?' to NaN
votes2[votes2 == "?"] = np.nan
# Print the number of NaNs
display(votes2.isnull().sum())
# Print shape of original DataFrame
print("Shape of Original DataFrame: {}".format(votes2.shape))
# Print shape of new DataFrame
print("Shape of DataFrame After Dropping All Rows with Missing Values: {}".format(votes2.dropna().shape))
```

When many values in a dataset are missing, if you drop them, you may end up throwing away valuable information along with the missing data. It's better instead to develop an imputation strategy. This is where domain knowledge is useful, but in the absence of it, you can impute missing values with the mean or the median of the row or column that the missing value is in.
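
A minimal sketch of mean imputation with `SimpleImputer` on a toy array (the values are made up):

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy array with missing entries (values are made up)
X = np.array([[1.0, np.nan],
              [3.0, 4.0],
              [np.nan, 8.0]])

imp = SimpleImputer(missing_values=np.nan, strategy="mean")
X_filled = imp.fit_transform(X)
# Each NaN is replaced by its column's mean:
# 2.0 in the first column, 6.0 in the second
```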

### Imputing missing data in a ML Pipeline I

There are many steps to building a model, from creating training and test sets, to fitting a classifier or regressor, to tuning its parameters, to evaluating its performance on new data. Imputation can be seen as the first step of this machine learning process, the entirety of which can be viewed within the context of a pipeline. Scikit-learn provides a pipeline constructor that allows you to piece together these steps into one process and thereby simplify your workflow.

We will be setting up a pipeline with two steps: the imputation step, followed by the instantiation of a classifier. We've seen three classifiers in this course so far: k-NN, logistic regression, and the decision tree. Here, we will be using an SVM (Support Vector Machine).

```
votes2.head()
```

```
votes2.info()
```

```
# Setup the Imputation transformer: imp
imp_votes = SimpleImputer(missing_values=np.nan, strategy="most_frequent")
# Instantiate the SVC classifier: clf
clf_votes = SVC()
# Setup the pipeline with the required steps: steps
steps_votes = [('imputation', imp_votes),
('SVM', clf_votes)]
```

Having set up the pipeline steps, we can now use it for classification.

```
X_votes[:5]
```

```
votes.head()
```

```
X_votes = votes.drop("party", axis=1)
y_votes = votes.party
```

```
X_train_votes, X_test_votes, y_train_votes, y_test_votes = train_test_split(X_votes, y_votes, test_size=.3, random_state=42)
```

```
# Create the pipeline: pipeline
pipeline_votes = Pipeline(steps_votes)
# Fit the pipeline to the train set
pipeline_votes.fit(X_train_votes, y_train_votes)
# Predict the labels of the test set
y_pred_votes = pipeline_votes.predict(X_test_votes)
# Compute metrics
print(classification_report(y_test_votes, y_pred_votes))
```

## Centering and scaling

## Why scale your data?

- Many models use some form of distance to inform them
- Features on larger scales can unduly influence the model
- Example: k-NN uses distance explicitly when making predictions
- We want features to be on a similar scale
- Normalizing (or scaling and centering)

### Ways to normalize your data

- Standardization: subtract the mean and divide by the standard deviation
- All features are centered around zero and have variance one
- Can also subtract the minimum and divide by the range
- Minimum zero and maximum one
- Can also normalize so the data ranges from -1 to +1

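The standardization described above can be checked by hand against `sklearn.preprocessing.scale` on a toy column (note that `scale` uses the population standard deviation, matching NumPy's default):

```python
import numpy as np
from sklearn.preprocessing import scale

x = np.array([[1.0], [2.0], [3.0], [4.0]])

# Standardization: subtract the mean, divide by the standard deviation
x_manual = (x - x.mean()) / x.std()
x_scaled = scale(x)

assert np.allclose(x_manual, x_scaled)
# The result is centered on zero with unit variance
assert np.isclose(x_scaled.mean(), 0) and np.isclose(x_scaled.std(), 1)
```
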
### Centering and scaling your data

The performance of a model can improve if the features are scaled. Note that this is not always the case: in the Congressional voting records dataset, for example, all of the features are binary. In such a situation, scaling will have minimal impact. We will explore scaling on the White Wine Quality dataset.

```
wwq = pd.read_csv("datasets/white_wine_quality.csv")
wwq.head()
```

```
X_wwq = pd.read_csv("datasets/X_wwq.csv").values
X_wwq[:5]
```

```
y_wwq = np.array([False, False, False, False, False, False, False, False, False,
False, True, True, True, False, True, False, False, False,
False, True, False, False, False, True, False, False, False,
False, False, False, False, False, False, False, True, True,
True, False, True, True, False, False, False, False, False,
False, True, True, False, True, False, False, False, False,
False, False, False, False, False, False, False, False, True,
False, False, True, False, True, False, True, False, True,
True, False, False, True, False, False, True, True, False,
False, True, False, True, False, False, False, True, False,
False, True, False, False, False, False, False, False, True,
False, True, True, True, True, True, False, True, False,
False, True, False, True, True, True, True, True, False,
False, True, True, True, True, True, False, False, False,
True, False, False, False, True, False, True, True, True,
True, False, True, False, False, True, True, False, False,
False, False, False, True, False, False, False, False, False,
True, False, False, False, False, False, False, False, True,
True, False, True, True, False, False, True, True, False,
False, True, False, True, False, True, True, True, False,
False, True, True, False, True, True, False, True, False,
True, False, True, False, True, True, False, True, True,
True, True, True, True, True, False, True, True, True,
True, True, False, True, False, True, False, False, True,
True, True, True, True, True, False, False, False, False,
True, False, False, False, True, True, False, False, False,
False, False, False, False, False, False, True, True, False,
False, True, False, False, False, False, True, True, True,
True, True, False, False, False, False, False, True, False,
True, True, False, False, True, False, True, False, False,
False, True, True, True, True, False, False, True, True,
False, False, False, True, True, True, True, False, False,
False, False, False, False, True, False, True, False, True,
False, False, False, False, False, False, False, False, False,
True, False, False, False, False, False, False, False, True,
False, False, True, False, False, False, True, False, False,
True, True, False, False, False, True, False, True, False,
True, True, False, False, False, True, False, False, False,
False, True, False, False, False, False, False, True, False,
False, False, False, False, False, False, False, False, False,
False, True, False, False, False, False, False, False, False,
True, False, False, True, False, False, False, False, False,
False, False, False, False, False, False, False, False, False,
True, False, False, False, True, False, False, True, True,
True, False, True, False, False, True, True, True, False,
True, False, True, False, True, False, False, True, True,
False, False, False, True, False, False, False, False, False,
False, False, False, False, True, True, True, True, True,
False, True, False, False, True, False, False, True, False,
False, False, False, False, True, True, False, False, False,
True, True, False, False, False, False, False, True, False,
True, True, True, True, False, True, True, False, False,
True, True, False, True, False, False, False, True, False,
False, False, False, True, False, True, True, True, False,
False, False, False, False, False, False, False, False, False,
False, True, False, True, True, False, False, False, True,
False, False, True, False, False, False, False, False, False,
False, False, False, True, False, False, True, True, True,
False, False, True, False, True, False, False, False, False,
True, False, False, False, True, True, False, True, False,
True, True, False, False, False, False, False, False, False,
True, False, False, False, False, False, False, True, False,
True, False, False, True, False, False, True, False, False,
True, False, False, True, False, True, False, False, False,
False, False, False, False, True, True, False, False, False,
False, False, False, False, False, True, False, True, True,
True, False, True, False, False, False, False, False, True,
True, False, False, True, True, True, False, False, False,
True, True, True, True, False, False, False, False, True,
True, False, True, True, False, True, False, False, False,
True, True, False, True, False, False, False, True, True,
True, False, True, False, True, True, True, True, False,
True, False, False, False, False, False, False, False, False,
True, True, True, True, False,
```