mlcourse.ai – Open Machine Learning Course

Author: Yury Kashnitsky. Translated and edited by Christina Butsko, Nerses Bagiyan, Yulia Klimushina, and Yuanyuan Pao. This material is subject to the terms and conditions of the Creative Commons CC BY-NC-SA 4.0 license. Free use is permitted for any non-commercial purpose.

This is a static version of a Jupyter notebook. You can also check out the latest version in the course repository, the corresponding interactive web-based Kaggle Notebook or video lectures: theoretical part, practical part.

Topic 4. Linear Classification and Regression

Part 4. Where Logistic Regression Is Good and Where It's Not

Article outline

  1. Analysis of IMDB movie reviews
  2. A Simple Count of Words
  3. XOR-Problem
  4. Assignments
  5. Useful resources

1. Analysis of IMDB movie reviews

Now for a little practice! We want to solve the problem of binary classification of IMDB movie reviews. We have a training set with labeled reviews: 12500 marked as good and another 12500 marked as bad. Here, it's not easy to get started with machine learning right away because we don't have the matrix $X$; we need to prepare it. We will use a simple approach: the bag-of-words model. Each review is represented by features that indicate the presence of each word from the whole corpus in that review, where the corpus is the set of all user reviews. The idea is illustrated by the toy example below.
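To make this concrete, here is a tiny sketch with a made-up three-review corpus (not part of the IMDB data): CountVectorizer builds the shared vocabulary and turns each text into a row of word counts.

from sklearn.feature_extraction.text import CountVectorizer

# toy corpus of three "reviews" (made up for illustration)
toy_corpus = ["good movie", "not a good movie", "bad acting, bad movie"]
toy_cv = CountVectorizer()
toy_counts = toy_cv.fit_transform(toy_corpus)
print(toy_cv.get_feature_names())  # the vocabulary built from the toy corpus
print(toy_counts.toarray())        # one row per review, one column per word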

In [1]:
import os

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_files
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

To get started, we automatically download the dataset from here and unarchive it along with the rest of the datasets in the data folder. The dataset is briefly described here. Both the training and the test sets contain 12.5k good and 12.5k bad reviews.

In [2]:
import tarfile
from io import BytesIO

import requests

url = ""  # set this to the URL of the IMDB reviews archive (a .tar.gz that extracts to aclImdb/)

def load_imdb_dataset(extract_path="../../data", overwrite=False):
    # check whether the dataset has already been extracted
    if (
        os.path.isfile(os.path.join(extract_path, "aclImdb", "README"))
        and not overwrite
    ):
        print("IMDB dataset is already in place.")
        return

    print("Downloading the dataset from:  ", url)
    response = requests.get(url)

    # extract the downloaded archive into the data folder
    tar ="r:gz", fileobj=BytesIO(response.content))
    tar.extractall(extract_path)
    tar.close()

Calling load_imdb_dataset() will download the archive from the URL above and extract it into the data folder.
In [3]:
# change if you have it in alternative location
PATH_TO_IMDB = "../../data/aclImdb"

reviews_train = load_files(
    os.path.join(PATH_TO_IMDB, "train"), categories=["pos", "neg"]
)
text_train, y_train =,

reviews_test = load_files(os.path.join(PATH_TO_IMDB, "test"), categories=["pos", "neg"])
text_test, y_test =,
In [4]:
# # Alternatively, load data from previously pickled objects.
# import pickle
# with open('../../data/imdb_text_train.pkl', 'rb') as f:
#     text_train = pickle.load(f)
# with open('../../data/imdb_text_test.pkl', 'rb') as f:
#     text_test = pickle.load(f)
# with open('../../data/imdb_target_train.pkl', 'rb') as f:
#     y_train = pickle.load(f)
# with open('../../data/imdb_target_test.pkl', 'rb') as f:
#     y_test = pickle.load(f)
In [5]:
print("Number of documents in training data: %d" % len(text_train))
print("Number of documents in test data: %d" % len(text_test))
Number of documents in training data: 25000
[12500 12500]
Number of documents in test data: 25000
[12500 12500]

Here are a few examples of the reviews.

In [6]:
text_train[1]
b'Words can\'t describe how bad this movie is. I can\'t explain it by writing only. You have too see it for yourself to get at grip of how horrible a movie really can be. Not that I recommend you to do that. There are so many clich\xc3\xa9s, mistakes (and all other negative things you can imagine) here that will just make you cry. To start with the technical first, there are a LOT of mistakes regarding the airplane. I won\'t list them here, but just mention the coloring of the plane. They didn\'t even manage to show an airliner in the colors of a fictional airline, but instead used a 747 painted in the original Boeing livery. Very bad. The plot is stupid and has been done many times before, only much, much better. There are so many ridiculous moments here that i lost count of it really early. Also, I was on the bad guys\' side all the time in the movie, because the good guys were so stupid. "Executive Decision" should without a doubt be you\'re choice over this one, even the "Turbulence"-movies are better. In fact, every other movie in the world is better than this one.'
In [7]:
y_train[1]  # bad review
In [8]:
text_train[2]
b'Everyone plays their part pretty well in this "little nice movie". Belushi gets the chance to live part of his life differently, but ends up realizing that what he had was going to be just as good or maybe even better. The movie shows us that we ought to take advantage of the opportunities we have, not the ones we do not or cannot have. If U can get this movie on video for around $10, it\xc2\xb4d be an investment!'
In [9]:
y_train[2]  # good review
In [10]:
# import pickle
# with open('../../data/imdb_text_train.pkl', 'wb') as f:
#     pickle.dump(text_train, f)
# with open('../../data/imdb_text_test.pkl', 'wb') as f:
#     pickle.dump(text_test, f)
# with open('../../data/imdb_target_train.pkl', 'wb') as f:
#     pickle.dump(y_train, f)
# with open('../../data/imdb_target_test.pkl', 'wb') as f:
#     pickle.dump(y_test, f)

2. A Simple Count of Words

First, we will create a dictionary of all the words using CountVectorizer, fitting it on the training texts.

In [11]:
cv = CountVectorizer()

If you look at the examples of "words" (let's call them tokens), you can see that we have omitted many of the important steps in text processing (automatic text processing can itself be a completely separate series of articles).

In [12]:
# the first 50 tokens in the vocabulary and a slice from its middle
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[50000:50050])
['00', '000', '0000000000001', '00001', '00015', '000s', '001', '003830', '006', '007', '0079', '0080', '0083', '0093638', '00am', '00pm', '00s', '01', '01pm', '02', '020410', '029', '03', '04', '041', '05', '050', '06', '06th', '07', '08', '087', '089', '08th', '09', '0f', '0ne', '0r', '0s', '10', '100', '1000', '1000000', '10000000000000', '1000lb', '1000s', '1001', '100b', '100k', '100m']
['pincher', 'pinchers', 'pinches', 'pinching', 'pinchot', 'pinciotti', 'pine', 'pineal', 'pineapple', 'pineapples', 'pines', 'pinet', 'pinetrees', 'pineyro', 'pinfall', 'pinfold', 'ping', 'pingo', 'pinhead', 'pinheads', 'pinho', 'pining', 'pinjar', 'pink', 'pinkerton', 'pinkett', 'pinkie', 'pinkins', 'pinkish', 'pinko', 'pinks', 'pinku', 'pinkus', 'pinky', 'pinnacle', 'pinnacles', 'pinned', 'pinning', 'pinnings', 'pinnochio', 'pinnocioesque', 'pino', 'pinocchio', 'pinochet', 'pinochets', 'pinoy', 'pinpoint', 'pinpoints', 'pins', 'pinsent']

Second, we encode each text from the training set as a vector of word counts indexed by the vocabulary. We'll use the sparse format, since most words do not occur in any given review.

In [13]:
X_train = cv.transform(text_train)
X_train
<25000x74849 sparse matrix of type '<class 'numpy.int64'>'
	with 3445861 stored elements in Compressed Sparse Row format>

Let's see how our transformation worked

In [14]:
print(text_train[19726])  # one short training example
b'This movie is terrible but it has some good effects.'
In [15]:
X_train[19726].nonzero()[1]
array([ 9881, 21020, 28068, 29999, 34585, 34683, 44147, 61617, 66150,
       66562], dtype=int32)
In [16]:
X_train[19726].nonzero()
(array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=int32),
 array([ 9881, 21020, 28068, 29999, 34585, 34683, 44147, 61617, 66150,
        66562], dtype=int32))
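To connect these column indices back to actual words, one can index the fitted vocabulary with them; a small sketch (not from the original notebook), using the same short review:

# map the non-zero column indices of an encoded review back to tokens
feature_names = np.array(cv.get_feature_names())
doc = cv.transform(["This movie is terrible but it has some good effects."])
print(feature_names[doc.nonzero()[1]])
# expected (up to ordering): but, effects, good, has, is, it, movie, some, terrible, this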

Third, we will apply the same operations to the test set

In [17]:
X_test = cv.transform(text_test)

The next step is to train Logistic Regression.

In [18]:
%%time
logit = LogisticRegression(solver="lbfgs", n_jobs=-1, random_state=7), y_train)
CPU times: user 29.7 ms, sys: 69.7 ms, total: 99.4 ms
Wall time: 2.82 s

Let's look at accuracy on both the training and the test sets.

In [19]:
round(logit.score(X_train, y_train), 3), round(logit.score(X_test, y_test), 3),
(0.981, 0.864)

The coefficients of the model can be beautifully displayed.

In [20]:
def visualize_coefficients(classifier, feature_names, n_top_features=25):
    # get coefficients with the largest absolute values
    coef = classifier.coef_.ravel()
    positive_coefficients = np.argsort(coef)[-n_top_features:]
    negative_coefficients = np.argsort(coef)[:n_top_features]
    interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
    # plot them as a bar chart: negative weights in red, positive in blue
    plt.figure(figsize=(15, 5))
    colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]] * n_top_features), coef[interesting_coefficients], color=colors)
    feature_names = np.array(feature_names)
        np.arange(1, 1 + 2 * n_top_features),
        ha="right",
    );
In [21]:
def plot_grid_scores(grid, param_name):
    # plot mean cross-validation train/test scores vs. the values of the hyperparameter
    plt.plot(grid.param_grid[param_name], grid.cv_results_["mean_train_score"],
             color="green", label="train")
    plt.plot(grid.param_grid[param_name], grid.cv_results_["mean_test_score"],
             color="red", label="test")
    plt.legend();
In [22]:
visualize_coefficients(logit, cv.get_feature_names())

To make our model better, we can optimize the regularization coefficient of the Logistic Regression. We'll use sklearn.pipeline because CountVectorizer should only be fit on the training data (so as not to "peek" into the test set and count word frequencies there). In this case, the pipeline determines the correct sequence of actions: apply CountVectorizer, then train Logistic Regression.

In [23]:
%%time
from sklearn.pipeline import make_pipeline

text_pipe_logit = make_pipeline(
    CountVectorizer(),
    # for some reason n_jobs > 1 won't work
    # with GridSearchCV's n_jobs > 1
    LogisticRegression(solver="lbfgs", n_jobs=1, random_state=7),
), y_train)
print(text_pipe_logit.score(text_test, y_test))
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/ ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
  "of iterations.", ConvergenceWarning)
CPU times: user 19.7 s, sys: 6.57 s, total: 26.3 s
Wall time: 9.26 s
In [24]:
%%time
from sklearn.model_selection import GridSearchCV

param_grid_logit = {"logisticregression__C": np.logspace(-5, 0, 6)}
grid_logit = GridSearchCV(
    text_pipe_logit, param_grid_logit, return_train_score=True, cv=3, n_jobs=-1
), y_train)
CPU times: user 17.3 s, sys: 6.6 s, total: 23.9 s
Wall time: 39.5 s
/opt/conda/lib/python3.6/site-packages/sklearn/linear_model/ ConvergenceWarning: lbfgs failed to converge. Increase the number of iterations.
  "of iterations.", ConvergenceWarning)

Let's print the best $C$ and the corresponding mean cross-validation score:

In [25]:
grid_logit.best_params_, grid_logit.best_score_
({'logisticregression__C': 0.1}, 0.8848)
In [26]:
plot_grid_scores(grid_logit, "logisticregression__C")

And the score on the test set:

In [27]:
grid_logit.score(text_test, y_test)

Now let's do the same with random forest. We see that, with logistic regression, we achieve better accuracy with less effort.

In [28]:
from sklearn.ensemble import RandomForestClassifier
In [29]:
forest = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=17)
In [30]:
%%time, y_train)
CPU times: user 1min 27s, sys: 77.3 ms, total: 1min 27s
Wall time: 16.4 s
RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini',
            max_depth=None, max_features='auto', max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, n_estimators=200, n_jobs=-1,
            oob_score=False, random_state=17, verbose=0, warm_start=False)
In [31]:
round(forest.score(X_test, y_test), 3)

3. XOR-Problem

Let's now consider an example where linear models are worse.

Linear classification methods still define a very simple separating surface - a hyperplane. The most famous toy example of where classes cannot be divided by a hyperplane (or line) with no errors is "the XOR problem".

XOR is the "exclusive OR", a Boolean function with the following truth table:

XOR is the name given to a simple binary classification problem in which the classes are presented as diagonally extended intersecting point clouds.

In [32]:
# creating dataset
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = np.logical_xor(X[:, 0] > 0, X[:, 1] > 0)
In [33]:
plt.scatter(X[:, 0], X[:, 1], s=30, c=y,;

Obviously, one cannot draw a single straight line to separate one class from another without errors. Therefore, logistic regression performs poorly with this task.

In [34]:
def plot_boundary(clf, X, y, plot_title):
    xx, yy = np.meshgrid(np.linspace(-3, 3, 50), np.linspace(-3, 3, 50)), y)
    # plot the predicted probability of class 1 for each point on the grid
    Z = clf.predict_proba(np.vstack((xx.ravel(), yy.ravel())).T)[:, 1]
    Z = Z.reshape(xx.shape)

    image = plt.imshow(
        Z,
        interpolation="nearest",
        extent=(xx.min(), xx.max(), yy.min(), yy.max()),
        aspect="auto",
        origin="lower",
    contours = plt.contour(xx, yy, Z, levels=[0.5], linewidths=2, linestyles="--")
    plt.scatter(X[:, 0], X[:, 1], s=30, c=y,
    plt.colorbar(image)
    plt.axis([-3, 3, -3, 3])
    plt.title(plot_title, fontsize=12);
In [35]:
plot_boundary(
    LogisticRegression(solver="lbfgs"), X, y, "Logistic Regression, XOR problem"
)

But if one were to give polynomial features (here, up to degree 2) as an input, then the problem is solved.

In [36]:
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
In [37]:
logit_pipe = Pipeline(
    [
        ("poly", PolynomialFeatures(degree=2)),
        ("logit", LogisticRegression(solver="lbfgs")),
    ]
)
In [38]:
plot_boundary(logit_pipe, X, y, "Logistic Regression + quadratic features. XOR problem")

Here, logistic regression has still produced a hyperplane but in a 6-dimensional feature space $1, x_1, x_2, x_1^2, x_1x_2$ and $x_2^2$. When we project to the original feature space, $x_1, x_2$, the boundary is nonlinear.
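As a quick illustration of that feature space (a minimal sketch, not part of the original notebook), PolynomialFeatures expands a 2-dimensional point into exactly these six monomials:

from sklearn.preprocessing import PolynomialFeatures
import numpy as np

poly = PolynomialFeatures(degree=2)
point = np.array([[2.0, 3.0]])
print(poly.fit_transform(point))  # [[1. 2. 3. 4. 6. 9.]] -> 1, x1, x2, x1^2, x1*x2, x2^2
print(poly.get_feature_names())   # ['1', 'x0', 'x1', 'x0^2', 'x0 x1', 'x1^2']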

In practice, polynomial features do help, but it is computationally inefficient to build them explicitly. SVM with the kernel trick works much faster. In this approach, only the distance between the objects (defined by the kernel function) in a high dimensional space is computed, and there is no need to produce a combinatorially large number of features.
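For comparison, here is a hedged sketch (not part of the original notebook) of a kernel SVM on the same XOR data: an RBF kernel is assumed, and probability=True is set only so that plot_boundary's predict_proba call works.

from sklearn.svm import SVC

# RBF-kernel SVM: no explicit polynomial features are constructed
svm = SVC(kernel="rbf", probability=True, random_state=17), y)
plot_boundary(svm, X, y, "RBF-kernel SVM, XOR problem")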

4. Assignments


To practice with linear models, you can complete this demo assignment where you'll build a sarcasm detection model. The assignment is just for practice, and it comes with a solution.

Bonus version

You can also choose a "Bonus Assignments" tier (details are outlined on the main page) and get a non-demo version of the assignment where you'll be guided through working with sparse data, feature engineering, model validation, and the process of competing on Kaggle. The task will be to beat baselines in a Kaggle competition. That's a very useful assignment for anyone starting to practice Machine Learning, regardless of the desire to compete on Kaggle.

5. Useful resources

  • Medium "story" based on this notebook
  • Main course site, course repo, and YouTube channel
  • Course materials as a Kaggle Dataset
  • If you read Russian: an article on habr.com with roughly the same material, and a lecture on YouTube
  • A nice and concise overview of linear models is given in the book "Deep Learning" (I. Goodfellow, Y. Bengio, and A. Courville).
  • Linear models are covered practically in every ML book. We recommend "Pattern Recognition and Machine Learning" (C. Bishop) and "Machine Learning: A Probabilistic Perspective" (K. Murphy).
  • If you prefer a thorough overview of linear models from a statistician's viewpoint, then look at "The Elements of Statistical Learning" (T. Hastie, R. Tibshirani, and J. Friedman).
  • The book "Machine Learning in Action" (P. Harrington) will walk you through implementations of classic ML algorithms in pure Python.
  • Scikit-learn library. These guys work hard on writing really clear documentation.
  • Scipy 2017 scikit-learn tutorial by Alex Gramfort and Andreas Mueller.
  • One more ML course with very good materials.
  • Implementations of many ML algorithms. Search for linear regression and logistic regression.

Support course creators

You can make a monthly (Patreon) or one-time (Ko-Fi) donation ↓