3.9 AutoML#

An automated workflow for hyperparameter tuning and model selection

In this tutorial, we will try a technique that is widely used to make AI/ML work less tedious and to boost the efficiency of your ML workflow.

If you have worked through 3.6, you might be impressed but also annoyed by all the parameter tuning and the many back-and-forth iterations needed to figure out which configuration is optimal for your case. This is known as a major source of low productivity in the AI/ML world. Since most of the work in that tuning-and-iteration loop is quite mechanical, people asked an obvious question: can we automate it? The answer is yes, and that is the technique we introduce here: AutoML.

There are many AutoML solutions on the market, e.g., AutoKeras, auto-sklearn, H2O, and Auto-WEKA. Here we will focus on PyCaret, which is popular in both academia and industry and is very easy to use.

In the following tutorial, we will use the PyCaret Docker image. In a terminal, use docker to pull the PyCaret image and start a Jupyter notebook:

docker pull pycaret/full
docker run -it -p 8888:8888 -e GRANT_SUDO=yes pycaret/full

Installation on an M1 Mac can be tricky, especially for the lightgbm library. If the Docker route gives you trouble, try installing both pycaret and lightgbm directly, as sketched below.
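If you are not using the Docker image, a minimal install sketch is below (assuming pip; on an M1/Apple Silicon Mac, lightgbm typically also needs an OpenMP runtime, for example Homebrew's libomp):

# Hypothetical local install instead of the Docker image (run in a notebook cell)
!pip install pycaret lightgbm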

You will then be able to edit a notebook with the following cells:

First, we get the data ready#

As usual, data collection is the first step. To better demonstrate the point of AutoML, we will use the same data as in 3.6 Random Forest.

!pip install wget
import wget
wget.download("https://docs.google.com/uc?export=download&id=1pko9oRmCllAxipZoa3aoztGZfPAD2iwj")

Display the data columns#

Show the columns and settle on the target variable and the input variables. In this chapter, we will use the following columns:

# Pandas is used for data manipulation
import pandas as pd
# Read in data and display first 5 rows
features = pd.read_csv('temps.csv')
features.columns
  • Temp_2: Maximum temperature 2 days prior to today.

  • Temp_1: Maximum temperature yesterday.

  • Average: Historical average temperature.

  • Actual: Actual measured temperature today.

  • Forecast_NOAA: Temperature forecast from NOAA.

  • Friend: Your friend's forecast (a randomly selected number within ±20 of the historical average temperature).

We will use actual as the label and all the other variables as features.
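Before moving on, it can help to glance at the label's distribution; a minimal sanity check, assuming the label column is named actual in the CSV (the same name passed to setup() later):

# Quick look at the label column we will predict (column name assumed to be 'actual')
features['actual'].describe()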

Check the data shape#

features.shape
# One-hot encode the categorical columns using pandas get_dummies
features = pd.get_dummies(features)
# Display the first 5 rows, skipping the first 5 columns
features.iloc[:,5:].head(5)
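As a standalone illustration of what pd.get_dummies does (the demo column and values below are made up, not from temps.csv): each text column is expanded into one binary indicator column per category, which is why the table gains extra columns after this step.

# Tiny made-up example of one-hot encoding with pd.get_dummies
import pandas as pd
demo = pd.DataFrame({'color': ['red', 'blue', 'red']})
pd.get_dummies(demo)  # the 'color' column becomes indicator columns color_blue and color_red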

Split training and testing#

As we already did all the quality checks in 3.6, we will not repeat them here and will go directly to the AutoML experiment. First, split the data into training and testing subsets.

train_df = features[:300]
test_df = features[300:]
print('Data for Modeling: ' + str(train_df.shape))
print('Unseen Data For Predictions: ' + str(test_df.shape))
train_df
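The split above is positional: the first 300 rows are used for modeling and the rest are held out. If your rows are not ordered in a meaningful way, a shuffled split is a common alternative; here is a sketch with an arbitrary fraction and seed, using separate variable names so it does not interfere with the rest of the tutorial.

# Alternative (not used below): a shuffled train/test split
train_alt = features.sample(frac=0.85, random_state=123)
test_alt = features.drop(train_alt.index)
print(train_alt.shape, test_alt.shape)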

Run PyCaret (no hassle)#

Let's get straight to the point. If something is wrong with the data, expect PyCaret to tell you. It should automatically recognize the columns and assign appropriate data types to them.

As a first step, PyCaret needs you to confirm that the data columns are correctly parsed and that their inferred data types match their values. If they do, press Enter in the text prompt that appears below the cell.

from pycaret.regression import *
exp_reg101 = setup(data = train_df, 
                   target = 'actual',
                   # imputation_type='iterative', 
                   fold_shuffle=True, 
                   session_id=123)
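Once setup() has finished, you can peek at what PyCaret prepared. The get_config helper returns internal objects such as the transformed training features; note that the available key names vary somewhat between PyCaret versions.

# Inspect the training features prepared by setup()
# ('X_train' is a common key, but names may differ across PyCaret versions)
X_train = get_config('X_train')
X_train.head()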

Compare Models#

Once you have confirmed the data types are correct, run the comparison with a single line of code:

best = compare_models(exclude = ['ransac'])
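The comparison grid printed by compare_models can also be captured as a plain DataFrame with pull(), which returns the last scoring table PyCaret displayed; this is handy if you want to sort or save the results yourself.

# Grab the last displayed scoring grid as a DataFrame and sort it by RMSE
results = pull()
results.sort_values('RMSE').head()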

Get Best Model#

It looks great! PyCaret automatically did all the work under the hood and handed us the best model. Look at the RMSE and R2 columns of the comparison table: the best RMSE and R2 are both achieved by Random Forest. Having all the candidates scored in one table is much clearer and saves you a lot of time. The scores are averaged over cross-validation folds rather than taken from a single fit, which guards against a model that merely overfits the training data, so the comparison is reasonably solid and reliable.

The next step is to extract the best model's hyperparameter configuration. At that point you can consider the hyperparameter tuning step done and go ahead and train your model.

best

If you are not sure the top-ranked model is also the most cost-effective one and want to examine more candidates, you can get several models at once with top3 = compare_models(exclude = ['ransac'], n_select = 3); top3 will then be a list containing the top 3 models.
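Whichever model you settle on, you can check how it behaves on the unseen split we set aside earlier. predict_model scores a fitted model on new data and appends a prediction column (the column's exact name differs between PyCaret versions):

# Score the chosen model on the held-out rows from the earlier split
holdout = predict_model(best, data = test_df)
holdout.head()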

Model Interpretation#

You can get more details about why the best model is the best. PyCaret provides a function called interpret_model, which produces a figure showing the influence of each input variable on the predictions. Under the hood it gives the same result as the SHAP library, which PyCaret integrates.

interpret_model(best)
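interpret_model also accepts a plot argument if you want something other than the default SHAP summary; for example, 'correlation' produces a SHAP dependence-style plot for a single feature. Keep in mind that SHAP-based interpretation is only available for tree-based models.

# Another interpretation view (tree-based models only)
interpret_model(best, plot = 'correlation')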

Evaluate More Metrics#

PyCaret provides some awesome widgets and plots that give you an easy way to visualize and check many other useful metrics from its training.

evaluate_model(best)
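If the interactive widget is inconvenient, for example when running outside Jupyter, the same diagnostics can be requested one at a time with plot_model; 'residuals' and 'error' are two of the built-in regression plots.

# Individual diagnostic plots, without the interactive widget
plot_model(best, plot = 'residuals')
plot_model(best, plot = 'error')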

Troubleshooting#

  1. First-time runners might hit this issue on an M1 Mac: https://github.com/microsoft/LightGBM/issues/1369. Please reinstall pycaret and lightgbm and see if the problem goes away. If not, please open a new issue on the GitHub repository's issue page.