How to Build a Logistic Regression Model: A Spam-filter Tutorial

Oluwadamisi Samuel Praise - May 5 - Dev Community

Ever wondered how spam filters know which emails to banish to the junk folder? Or how credit card companies decide whether to approve your transaction and how they detect fraud?
Logistic regression, a powerful tool in the data science world, can help answer those questions and many more! Often referred to as the "workhorse" of machine learning, it is known for its simplicity and effectiveness in tackling classification problems.

Statistician Andrew Gelman is often credited with saying, "Logistic regression is the duct tape of data science." It's not the flashiest tool, but it gets the job done cleanly and efficiently.

This guide is an introduction to logistic regression, peeling back the layers and showing you how to build your own logistic regression model, even if you're a complete beginner.

What is Logistic Regression?

Imagine you're getting flooded with emails, but some just seem...off. Strange sender addresses, suspicious attachments, promises of overnight riches – cue the spam filter! This is where logistic regression comes in.

Logistic regression is a powerful supervised machine learning technique that sorts outcomes into two groups by assuming a linear relationship between the features and the log-odds of the outcome. Think of it like a sorting hat for data, but instead of Gryffindor or Ravenclaw, it sorts things into two buckets: yes or no, 0 or 1, or, in the case of spam filtering, "spam" and "not spam". It thrives in situations where there are exactly two possible outcomes. Don't confuse it with linear regression, which also assumes a linear relationship between variables but makes a fundamentally different kind of prediction (a continuous value rather than a category). Check out my article on Linear Regression to read more about it.

Logistic regression doesn't just give you a simple yes or no answer. It actually calculates the probability of a result belonging to one category or the other. So, for an email, it might predict a 90% chance of being spam or a 2% chance of being important. Pretty cool, right?

This ability to estimate probabilities makes logistic regression incredibly useful in many real-world applications. Here are just a few examples:

  • Spam filtering: As we mentioned earlier, it helps sort your inbox, separating friend from foe with impressive accuracy.

  • Fraud detection: Banks use it to identify suspicious transactions and protect your finances.

  • Loan approvals: Lenders use it to estimate the probability that an applicant will default before approving a loan.

  • Medical diagnosis: Doctors can leverage it to assess the likelihood of a disease based on symptoms.

  • Customer churn prediction: Businesses can use it to estimate which customers are at risk of leaving.

How does Logistic Regression make predictions?

Now that you understand the idea behind logistic regression, let's dive into the core concepts you need to understand before building your own logistic regression model.

Data Preparation

Data preparation is vital for a successful logistic regression model. The quality of the data you feed the algorithm directly determines how well it will perform. If you train the model on disease-prediction data, it will perform poorly at spam filtering; and if your data lacks the key features and keywords, you will get sloppy, incorrect predictions. Here's a breakdown of the key data preparation steps:

- Features: These are the individual characteristics used for prediction. They are the key data points that teach the model right from wrong. For spam filtering, features might include:

  • Words in the subject line
  • Sender information
  • Presence of attachments

  It is important to choose the right features to feed the model, as irrelevant features will not produce the right results.

- Labels: These tell the model the correct classification for each data point. For spam filtering, labels would simply be "spam" or "not spam" for each email.

- Data Cleaning: Just like cleaning ingredients before cooking, we need to address issues like missing values, inconsistencies, and typos to ensure the model works effectively. Check out my article on Data Cleaning for an in-depth tutorial on this step.

The Logistic Regression Equation (Simplified):

Now, let's get into the math behind the magic. Logistic regression uses a formula to combine the features of our data. It assigns a weight or importance to each feature. The formula multiplies each feature by its weight and sums them all up. This gives us a score that reflects how likely a data point belongs to a particular class (e.g., spam).

Mathematical Equation:

Logistic regression uses an equation very similar to that of linear regression: inputs are combined linearly using weights (coefficient values) to predict an output. The difference is that the combined value is then transformed into a probability between 0 and 1, which we threshold to get a binary class (0 or 1).

Equation for logistic regression:

y = e^(b0 + b1*x) / (1 + e^(b0 + b1*x))

where:

  • x = input value

  • y = predicted output (the probability that the outcome is 1)

  • b0 = bias or intercept

  • b1 = coefficient for input (x)

The Sigmoid Function:

The model's raw score isn't a probability yet. Logistic regression passes it through the sigmoid function, which maps any real-valued score into the range between 0 and 1.
A threshold value (commonly 0.5) then turns that probability into a class:

Probabilities below the threshold are classified as 0 (probably not spam).
Probabilities above the threshold are classified as 1 (probably spam).

sigmoid(value) = 1 / (1 + e^(-value))

where:

  • e = base of natural logarithms

  • value = numerical value to be transformed

[Figure: the S-shaped sigmoid curve, mapping scores to values between 0 and 1]
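To make this concrete, here is a minimal Python sketch that combines made-up feature weights into a score and squashes it with the sigmoid. The bias, weights, and feature values below are invented purely for illustration:

import math

def sigmoid(value):
    # Squash any real number into the range (0, 1)
    return 1 / (1 + math.exp(-value))

# Invented bias and weights for two features, e.g. the count of the
# word "free" and the number of links in an email (illustrative only)
b0, b1, b2 = -3.0, 1.2, 0.8
x1, x2 = 4, 2  # feature values for one hypothetical email

score = b0 + b1 * x1 + b2 * x2  # linear combination: 3.4
probability = sigmoid(score)    # about 0.97

print(f"score = {score:.2f}, P(spam) = {probability:.3f}")
# With a 0.5 threshold, this email would be classified as spam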

Training the Model:

The model learns from experience. We provide it with labeled data, such as emails categorized as spam or not spam; it then analyzes this data, comparing its predictions with the actual labels. When it makes mistakes, the model adjusts its internal weights to improve accuracy. This process of learning and refining is called optimization.
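Under the hood, most implementations minimize the model's error (the log loss) with an iterative optimizer. The following deliberately simplified gradient-descent sketch shows the idea; the toy feature matrix and labels are invented, and real solvers (like the ones scikit-learn uses) are more sophisticated:

import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Toy data: each row is an email, each column a numeric feature;
# y holds the labels (1 = spam, 0 = not spam). All values are invented.
X = np.array([[4.0, 2.0], [0.0, 1.0], [3.0, 3.0], [1.0, 0.0]])
y = np.array([1, 0, 1, 0])

weights = np.zeros(X.shape[1])
bias = 0.0
learning_rate = 0.1

for _ in range(1000):
    probabilities = sigmoid(X @ weights + bias)  # current predictions
    error = probabilities - y                    # gradient of the log loss
    weights -= learning_rate * (X.T @ error) / len(y)
    bias -= learning_rate * error.mean()

print(weights, bias)  # the learned weights and bias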

Making Predictions:

Once trained, the model is ready to tackle new emails. It uses the same formula and sigmoid function: based on the features of a new email, the model calculates a score and transforms it into a probability. We can then set a threshold probability. For instance, if the probability of an email being spam is above 70%, we classify it as spam.
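In scikit-learn, which we use below, that looks something like this sketch. It assumes a trained model named model and a feature matrix X_new_features for incoming emails (hypothetical names here; we build the real equivalents later in this tutorial):

# One P(spam) per email; predict_proba's column order follows
# model.classes_, so check it to confirm which column is "spam"
probabilities = model.predict_proba(X_new_features)[:, 1]

threshold = 0.7  # stricter than the default 0.5
predictions = (probabilities >= threshold).astype(int)  # 1 = spam, 0 = not spam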

Evaluating the Model:

We need to assess the model's performance, like grading a student's work. Metrics like accuracy tell us the percentage of correct predictions. Precision measures what fraction of the emails classified as spam were truly spam, and recall tells us what fraction of the actual spam emails were correctly identified.

It's crucial to use a separate test set for evaluation, not the data used for training. Imagine testing a student on the same material they studied – it wouldn't be a fair assessment!
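All three metrics are available in scikit-learn. A quick sketch, assuming label vectors y_test and y_pred like the ones we produce later in this tutorial, with "spam" as the positive label:

from sklearn.metrics import accuracy_score, precision_score, recall_score

accuracy = accuracy_score(y_test, y_pred)
precision = precision_score(y_test, y_pred, pos_label="spam")
recall = recall_score(y_test, y_pred, pos_label="spam")
print(f"Accuracy: {accuracy:.2f}, Precision: {precision:.2f}, Recall: {recall:.2f}")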

Build Your Logistic Regression Model

Now that you've grasped the core concepts, let's put theory into practice. Here, we'll use Python's scikit-learn library to build a basic spam filter.

1. Import Libraries:



import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score



  • pandas (pd): Used for data manipulation and analysis.
  • train_test_split: Splits data into training and testing sets.
  • TfidfVectorizer: Converts text data into numerical features.
  • LogisticRegression: The model we'll be using for classification.
  • accuracy_score: Calculates the accuracy of the model's predictions.

2. Load and Prepare Data:

To keep this as simple and explanatory as possible, let us imagine we have a dataset with two columns: "Email", containing the email text, and "Label", indicating spam ("spam") or not spam ("not spam").



# Replace 'path/to/your/data.csv' with the actual path to your data
data = pd.read_csv("path/to/your/data.csv")

# Split data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(data["Email"], data["Label"], test_size=0.2, random_state=42)



We load the data from a CSV file using pandas.read_csv.
train_test_split splits the data into training and testing sets, so we can later evaluate the model on data it has never seen. The test_size parameter controls the size of the test set (20% in this case), and random_state makes the split reproducible.
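If you don't have a labeled CSV handy, you can follow along with a tiny hand-made DataFrame instead. The emails below are invented, and a real spam filter would need hundreds or thousands of labeled examples:

# A tiny invented dataset, just so the tutorial runs end to end
data = pd.DataFrame({
    "Email": [
        "Win a FREE prize now, click here!!!",
        "Meeting rescheduled to 3pm tomorrow",
        "Earn $$$ from home, limited time offer",
        "Here are the notes from today's lecture",
    ],
    "Label": ["spam", "not spam", "spam", "not spam"],
})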

3. Feature Engineering:

Since our model works with numerical data, we need to convert the text emails into features. We'll use a technique called TF-IDF (Term Frequency-Inverse Document Frequency), which considers the importance of each word in a document.



# Create a TF-IDF vectorizer
vectorizer = TfidfVectorizer()

# Transform training and testing data into TF-IDF features
X_train_features = vectorizer.fit_transform(X_train)
X_test_features = vectorizer.transform(X_test)




We create a TfidfVectorizer object and use it to fit (learn the vocabulary of) and transform the training data.
The transformed data (X_train_features) now contains numerical features representing the importance of each word in each email. For the testing data (X_test_features), we call only transform, not fit_transform, so the vocabulary is learned from the training set alone and the test set stays truly unseen.
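To peek at what the vectorizer learned, you can inspect the shape of the feature matrix and the vocabulary (get_feature_names_out is available in recent scikit-learn versions):

print(X_train_features.shape)  # (number of emails, vocabulary size)
print(vectorizer.get_feature_names_out()[:10])  # first few words it learned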

4. Train the Model:



# Create a logistic regression model
model = LogisticRegression()

# Train the model on the training data
model.fit(X_train_features, y_train)




We create a LogisticRegression object representing the model.
We use the fit method to train the model on the prepared training features (X_train_features) and labels (y_train). During this process, the model learns the relationships between features and spam/not spam labels, adjusting its internal weights.
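Those learned weights are exposed on the fitted model, so you can inspect which words push an email toward the spam class. A quick sketch:

import numpy as np

# Pair each word with its learned weight; larger (more positive) weights
# push the prediction toward the second class listed in model.classes_
words = vectorizer.get_feature_names_out()
weights = model.coef_[0]
top_indices = np.argsort(weights)[-5:]  # the 5 most spam-indicative words
for i in reversed(top_indices):
    print(words[i], round(weights[i], 3))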

5. Make Predictions:



# Predict labels for the test data
y_pred = model.predict(X_test_features)

# Calculate accuracy
accuracy = accuracy_score(y_test, y_pred)
print("Accuracy:", accuracy)



We use the trained model to predict labels (spam/not spam) for the unseen test data using the predict method.
We calculate the accuracy of the model's predictions using the accuracy_score function.

6. Interpret Results:

The output will display the model's accuracy, for example, "Accuracy: 0.85". This tells you that the model correctly classified 85% of the emails in the test set as spam or not spam. While this is a decent starting point, it's important to remember that accuracy alone might not be the most informative metric in all situations, especially when dealing with imbalanced datasets (where one class, like spam, might be much smaller than the other).
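For a fuller picture, especially on imbalanced data, scikit-learn's classification_report prints precision, recall, and F1-score broken down per class:

from sklearn.metrics import classification_report

print(classification_report(y_test, y_pred))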

Key properties and limitations of Logistic Regression

Logistic regression is a versatile tool, but it's essential to acknowledge its limitations:

- Assumptions: It assumes a linear relationship between the features and the log-odds of the outcome. If the data exhibits non-linearity, the model might struggle, a limitation it shares with linear regression. Non-parametric models like decision trees, or kernel methods like support vector machines, can handle such complexities.

- Overfitting: Overly complex models, or those trained on limited data, can become overly specific to the training data and perform poorly on unseen data. Regularization techniques like L1 or L2 regularization penalize large weights and help mitigate this (see the sketch after this list).

- Binary Classification: Logistic regression is designed for problems with two outcome categories (e.g., spam/not spam). For multi-class problems (e.g., classifying different types of flowers), you might need to explore models like multinomial logistic regression or random forests.
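As promised above, here is a sketch of how regularization is controlled in scikit-learn. The C parameter is the inverse of regularization strength, so smaller C means stronger regularization (the value 0.5 below is just an example):

from sklearn.linear_model import LogisticRegression

# L2 regularization is scikit-learn's default; smaller C = stronger penalty
l2_model = LogisticRegression(penalty="l2", C=0.5)

# L1 regularization needs a solver that supports it, such as liblinear or saga
l1_model = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")

# Either model is a drop-in replacement for the one we trained earlier:
# l1_model.fit(X_train_features, y_train)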

These limitations shouldn't deter you, as there are ways to address them and unlock logistic regression's full potential. Remember, this is just the first step in your machine learning journey! Here are some resources to fuel your exploration:

Online Courses:

  • Coursera: "Machine Learning" by Andrew Ng
  • edX: "Introduction to Machine Learning" by MIT

Tutorials:

  • Scikit-learn documentation: https://scikit-learn.org/
  • Kaggle Learn: https://www.kaggle.com/learn

Books:

  • "Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow" by Aurélien Géron
  • "The Elements of Statistical Learning" by Trevor Hastie, Robert Tibshirani, and Jerome Friedman

By understanding the core concepts of logistic regression and its limitations, and by exploring these resources, you'll be well-equipped to navigate the exciting world of machine learning!

Conclusion

Logistic regression is a powerful tool that, used well, supports informed decision-making in classification tasks with two possible outcomes, making it a valuable and beginner-friendly introduction to machine learning. It is versatile enough to tackle problems as diverse as spam filtering and medical diagnosis. However, remember its limitations: the assumption of linearity, susceptibility to overfitting, and a focus on binary classification. As you explore further, dive into techniques like regularization, feature selection, and non-parametric models to unlock logistic regression's full potential and continue your exciting data science journey!

Connect with me on LinkedIn and Twitter if you found this helpful.
