Naive Bayesian Text Classification

Spam filtering may be the best known use of naïve Bayesian text classification, but it's not the only application.


May 01, 2005
URL: http://drdobbs.com/architecture-and-design/naive-bayesian-text-classification/184406064

Paul Graham popularized the term "Bayesian Classification" (or more accurately "Naïve Bayesian Classification") after his "A Plan for Spam" article was published (http://www.paulgraham.com/spam.html). In fact, text classifiers based on naïve Bayesian and other techniques have been around for many years. Companies such as Autonomy and Interwoven incorporate machine-learning techniques to automatically classify documents of all kinds; one such machine-learning technique is naïve Bayesian text classification.

Naïve Bayesian text classifiers are fast, accurate, simple, and easy to implement. In this article, I present a complete naïve Bayesian text classifier written in 100 lines of commented, nonobfuscated Perl.

A text classifier is an automated means of determining some metadata about a document. Text classifiers are used for such diverse needs as spam filtering, suggesting categories for indexing a document created in a content management system, or automatically sorting help desk requests.

The classifier I present here determines which of a set of possible categories a document is most likely to fall into and can be used in any of the ways mentioned with appropriate training. Feed it samples of spam and nonspam e-mail and it learns the difference; feed it documents on various medical fields and it distinguishes an article on, say, "heart disease" from one on "influenza." Show it samples of different types of help desk requests and it should be able to sort them so that when 50 e-mails come in informing you that the laser printer is down, you'll quickly know that they are all the same.

The Math

You don't need to know any of the underlying mathematics to use the sample classifier presented here, but it helps.

The underlying theorem for naïve Bayesian text classification is the Bayes Rule:

P(A|B) = ( P(B|A) * P(A) ) / P(B)

The probability of A given B is computed from the probability of B given A, the probability of A, and the probability of B. You can think of the Bayes Rule as showing how to update the probability of event A once you've observed event B. In text classification, it is used to determine the probability that a document B belongs to a category A just by looking at the frequencies of words in the document.
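
As a toy illustration with invented numbers: suppose 20 percent of your e-mail is spam, the word "offer" appears in 30 percent of spam messages, and "offer" appears in 10 percent of all messages. Then the probability that a message containing "offer" is spam is:

P(spam|offer) = ( P(offer|spam) * P(spam) ) / P(offer)
              = ( 0.3 * 0.2 ) / 0.1
              = 0.6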

A far more extensive discussion of the Bayes Rule and its general implications can be found on Wikipedia (http://en.wikipedia.org/wiki/Bayes%27_Theorem). For the purposes of text classification, the Bayes Rule is used to find the most probable category for a document: Given this document with these words in it, which category does it most likely fall into?

A category is represented by a collection of words and their frequencies; the frequency is the number of times that each word has been seen in the documents used to train the classifier.

Suppose there are n categories C0 to Cn-1. Determining which category a document D is most associated with means calculating the probability that document D is in category Ci, written P(Ci|D), for each category Ci.

Using the Bayes Rule, you can calculate P(Ci|D) by computing:

P(Ci|D) = ( P(D|Ci) * P(Ci) ) / P(D)

P(Ci|D) is the probability that document D is in category Ci; this is the quantity you ultimately want for each category. P(D|Ci) is the probability of seeing the set of words in D given that the document is in category Ci.

P(Ci) is the probability of a given category; that is, the probability of a document being in category Ci without considering its contents. P(D) is the probability of that specific document occurring.

To calculate which category D should go in, you need to calculate P(Ci|D) for each of the categories and find the largest probability. Because each of those calculations involves the unknown but fixed value P(D), you just ignore it and calculate:

P(Ci|D) = P(D|Ci) * P(Ci)

Ignoring P(D) is safe because you are interested in the relative, not absolute, values of P(Ci|D), and P(D) acts as the same scaling factor on every category's score.

D is split into the set of words in the document, called W0 through Wm-1. Here's the "naïve" step: Assume that words appear independently of one another (which is clearly not true for natural language), so that P(D|Ci) is simply the product of the probabilities of each word appearing in Ci:

P(D|Ci) = P(W0|Ci) * P(W1|Ci) * ... * P(Wm-1|Ci)

For any category, P(Wj|Ci) is calculated as the number of times Wj appears in Ci divided by the total number of words in Ci. P(Ci) is calculated as the total number of words in Ci divided by the total number of words in all the categories put together. Hence, P(Ci|D) is:

P(W0|Ci) * P(W1|Ci) * ... * P(Wm-1|Ci) * P(Ci)

for each category, and picking the largest determines the category for document D.
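
As a small made-up example, suppose the classifier has been trained on two categories: "veggies" containing 100 words in total, 3 of them "potato", and "fruits" containing 300 words in total, 1 of them "potato". Then:

P(potato|veggies) = 3 / 100   = 0.03
P(potato|fruits)  = 1 / 300   ≈ 0.0033
P(veggies)        = 100 / 400 = 0.25
P(fruits)         = 300 / 400 = 0.75

For a one-word document containing just "potato", the veggies score is 0.03 * 0.25 = 0.0075, the fruits score is 0.0033 * 0.75 ≈ 0.0025, and the document is assigned to veggies.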

A common criticism of naïve Bayesian text classifiers is that they make the naïve assumption that words are independent of each other and are, therefore, less accurate than a more complex model. There are many more complex text classification techniques, such as Support Vector Machines, k-nearest neighbor, and so on. In practice, naïve Bayesian classifiers often perform well, and the current state of spam filtering indicates that they work very well for e-mail classification.

A useful toolkit that implements different algorithms is the freely available Bow toolkit from CMU (http://www-2.cs.cmu.edu/~mccallum/bow/). It makes a useful testbed for comparing the accuracy of different techniques. A good starting point for reading more about naïve Bayesian text classification is the Wikipedia article on the subject (http://en.wikipedia.org/wiki/Naïve_Bayesian_classification).

Implementation

The Perl implementation (Listing One) uses a single hash (associative array), %words, to store the word counts for each word in each category.

Listing One


use strict;
use DB_File;

# Hash with composite keys: $words{"category-word"} gives the count of
# 'word' in 'category'.  Tied to a DB_File to keep it persistent.

my %words;
tie %words, 'DB_File', 'words.db';

# Read a file and return a hash of the word counts in that file

sub parse_file
{
    my ( $file ) = @_;
    my %word_counts;

    # Grab all the words with between 3 and 44 letters

    open FILE, "<$file";
    while ( my $line = <FILE> ) {
        while ( $line =~ s/([[:alpha:]]{3,44})[ \t\n\r]// ) {
            $word_counts{lc($1)}++;
        }
    }
    close FILE;
    return %word_counts;
}

# Add words from a hash to the word counts for a category
sub add_words
{
    my ( $category, %words_in_file ) = @_;

    foreach my $word (keys %words_in_file) {
        $words{"$category-$word"} += $words_in_file{$word};
    }
}

# Get the classification of a file from word counts
sub classify
{
    my ( %words_in_file ) = @_;

    # Calculate the total number of words in each category and
    # the total number of words overall

    my %count;
    my $total = 0;
    foreach my $entry (keys %words) {
        $entry =~ /^(.+)-(.+)$/;
        $count{$1} += $words{$entry};
        $total += $words{$entry};
    }

    # Run through words and calculate the probability for each category

    my %score;
    foreach my $word (keys %words_in_file) {
        foreach my $category (keys %count) {
            if ( defined( $words{"$category-$word"} ) ) {
                $score{$category} += log( $words{"$category-$word"} /
                                          $count{$category} );
            } else {
                $score{$category} += log( 0.01 /
                                          $count{$category} );
            }
        }
    }
    # Add in the probability that the text is of a specific category

    foreach my $category (keys %count) {
        $score{$category} += log( $count{$category} / $total );
    }
    foreach my $category (sort { $score{$b} <=> $score{$a} } keys %count) {
        print "$category $score{$category}\n";
    }
}

# Supported commands are 'add' to add words to a category and
# 'classify' to get the classification of a file

if ( ( $ARGV[0] eq 'add' ) && ( $#ARGV == 2 ) ) {
    add_words( $ARGV[1], parse_file( $ARGV[2] ) );
} elsif ( ( $ARGV[0] eq 'classify' ) && ( $#ARGV == 1 ) ) {
    classify( parse_file( $ARGV[1] ) );
} else {
    print <<EOUSAGE;
Usage: add <category> <file> - Adds words from <file> to category <category>
       classify <file>       - Outputs classification of <file>
EOUSAGE
}

untie %words;

The hash is persisted to disk using a Perl construct called a "tie": when tied with the DB_File module, the hash is automatically backed by a file called "words.db" so that its contents persist between invocations.

use DB_File;
my %words;
tie %words, 'DB_File', 'words.db';

The hash keys are strings of the form category-word: For example, if the word "potato" appears in the category "veggies" with a count of three, there will be a hash entry with key "veggies-potato" and value "3." This data structure contains enough information to compute the probability of a document and do a naïve Bayesian classification.
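
For example, after training on one vegetables article and one fruits article, the tied hash might contain entries like these (the categories and counts here are invented purely for illustration):

$words{"veggies-potato"} = 3;
$words{"veggies-carrot"} = 2;
$words{"fruits-apple"}   = 5;
$words{"fruits-potato"}  = 1;

All categories share one flat set of keys, which is why classify later splits each key back into its category and word parts with a regular expression.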

The subroutine parse_file reads the document to be classified or trained on and fills in a hash called %words_in_file that maps words to the count of the number of times that word appeared in the document. It uses a simple regular expression to extract every 3- to 44-letter word that is followed by whitespace; in a real classifier, this word splitting could be made more complex by accounting for punctuation, digits, and hyphenated words.

sub parse_file
{
   my ( $file ) = @_;
   my %word_counts;
   open FILE, "<$file" or die "Can't open $file: $!\n";
   while ( my $line = <FILE> ) {
      while ( $line =~
          s/([[:alpha:]]{3,44})[ \t\n\r]// ){
        $word_counts{lc($1)}++;
      }
   }
   close FILE;
   return %word_counts;
}

The output of parse_file can be used in two ways: It can be used to train the classifier by learning the word counts for a particular category and updating the %words hash, or it can be used to determine the classification of a particular document.

To train the classifier, call the add_words subroutine with the output of parse_file and a category. In the Perl code, a category is any string and the classifier is trained by passing sample documents into parse_file and then into add_words: add_words( <category>, parse_file( <sample document>));

sub add_words
{
   my ( $category, %words_in_file ) = @_;
   foreach my $word (keys %words_in_file) {
      $words{"$category-$word"} +=
         $words_in_file{$word};
   }
}

Once document training has been done, the classify subroutine can be called with the output of parse_file on a document. classify will print out the possible categories for the document in order of most likely to least likely:

classify ( parse_file( <document to classify> ) );

sub classify
{
   my ( %words_in_file ) = @_;
   my %count;
   my $total = 0;
   foreach my $entry (keys %words) {
      $entry =~ /^(.+)-(.+)$/;
      $count{$1} += $words{$entry};
      $total += $words{$entry};
   }
   my %score;
   foreach my $word (keys %words_in_file) {
      foreach my $category (keys %count) {
         if (defined($words{"$category-$word"})) {
            $score{$category} +=
               log( $words{"$category-$word"} /
                  $count{$category} );
         } else {
            $score{$category} +=
               log( 0.01 /
                  $count{$category} );
         }
      }
   }
   foreach my $category (keys %count) {
      $score{$category} +=
         log( $count{$category} / $total );
   }
   foreach my $category (sort { $score{$b} <=> $score{$a} } keys %count) {
      print "$category $score{$category}\n";
   }
}

classify first calculates the total word count ($total) across all categories, which it needs to calculate P(Ci), and the word count for each category (%count, indexed by category name), which it needs to calculate P(Wj|Ci). Then classify calculates the score for each category. The score corresponds to P(Ci|D), but it's preferable to call it a score for two reasons: because P(D) is ignored, the value isn't, strictly speaking, a true probability; and classify works with logarithms to avoid numeric underflow and to replace multiplication with addition for speed. The score is in fact log P(Ci|D), which is:

log P(W0|Ci) + log P(W1|Ci) + ... + log P(Wm-1|Ci) + log P(Ci)

(Recall the identity log(A*B) = log A + log B.) In log form, the scores are still directly comparable. After summing the per-word terms, classify adds in log P(Ci) for each category and then sorts the scores in descending order to output the classifier's opinion of the document. For a word that doesn't appear at all in a particular category, classify substitutes a very small, nonzero probability based on the word count for that category:

$score{$category} += log( 0.01 / $count{$category} );
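
To see concretely why the logarithms matter (the numbers here are purely illustrative): for a 300-word document in which each word has a probability of about 0.01 in some category, the raw product would be

0.01 ** 300 = 1e-600      (underflows a double-precision float to zero)

whereas the equivalent sum of logs,

300 * log(0.01) ≈ -1381.6

is easily representable and still preserves the ordering of the categories.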


A small amount of Perl code wraps these three subroutines into a usable classifier that accepts commands to add a document to the word list for a category (and hence, train the classifier), and to classify a document.

if ( ( $ARGV[0] eq 'add' ) && ( $#ARGV == 2 ) ) {
   add_words( $ARGV[1], parse_file( $ARGV[2] ) );
} elsif ( ( $ARGV[0] eq 'classify' ) && ( $#ARGV == 1 ) ) {
   classify( parse_file( $ARGV[1] ) );
} else {
   print <<EOUSAGE;
Usage: add <category> <file> - Adds words from <file> to category <category>
       classify <file>       - Outputs classification of <file>
EOUSAGE
}
untie %words;

If the Perl code is stored in file bayes.pl, then the classifier is trained like this:

perl bayes.pl add veggies article-about-vegetables
perl bayes.pl add fruits article-about-fruits
perl bayes.pl add nuts article-about-nuts


to create three categories (veggies, fruits, and nuts). Asking bayes.pl to classify a document outputs a score for each category (a log-based value, so all the scores are negative and the least negative is the most likely), indicating whether the document is about vegetables, fruits, or nuts:

% perl bayes.pl classify article-I-just-wrote
fruits -4.11700258611469
nuts -6.60190923590268
veggies -11.9002266024507

Here, bayes.pl shows that the new article is most likely about fruits.

E-Mail Classification

If you are interested in classifying e-mail, there are a couple of tweaks that improve accuracy in practice: don't fold case on words from the headers, and count words differently depending on whether they appear in the subject or the body.

In the Perl implementation presented here, there is no difference between the words From, FROM, and fRoM: the parse_file subroutine lowercases each word before counting it, so they are all counted as instances of from. In practical e-mail classifiers, the names of e-mail headers turn out to be a better indicator of the type of an e-mail if their case is preserved. For example, the header MIME-Version was written MiME-Version by one piece of common spamming software.

Distinguishing words found in the subject versus the body also increases the accuracy of a naïve Bayesian text classifier on e-mail. The simplest way to do this is to store a word like forward as subject:forward when it comes from the subject line, and simply forward when it is seen in the body.
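
Here is a minimal sketch of how the word-extraction step might be adapted along those lines. It is not part of Listing One; the subroutine name parse_email and the "subject:" prefix are just illustrative choices. It keeps the case of header names, gives subject words their own prefix, and counts body words exactly as parse_file does.

sub parse_email
{
   my ( $file ) = @_;
   my %word_counts;
   my $in_headers = 1;
   my $subject = '';

   open MAIL, "<$file" or die "Can't open $file: $!\n";
   while ( my $line = <MAIL> ) {
      if ( $in_headers ) {
         # A blank line marks the end of the headers
         if ( $line =~ /^\s*$/ ) {
            $in_headers = 0;
            next;
         }
         # Count the header name itself with its case preserved (so
         # MiME-Version and MIME-Version stay distinct); '-' becomes '_'
         # so the name can't clash with the "category-word" key format
         if ( $line =~ /^([A-Za-z-]+):/ ) {
            ( my $header = $1 ) =~ tr/-/_/;
            $word_counts{$header}++;
         }
         $subject = $1 if $line =~ /^Subject:\s*(.*)/i;
      } else {
         # Body words: same 3- to 44-letter rule as parse_file
         while ( $line =~ s/([[:alpha:]]{3,44})[ \t\n\r]// ) {
            $word_counts{lc($1)}++;
         }
      }
   }
   close MAIL;

   # Subject words get a "subject:" prefix so that, say, "forward" in
   # the subject is counted separately from "forward" in the body
   while ( $subject =~ s/([[:alpha:]]{3,44})\b// ) {
      $word_counts{"subject:" . lc($1)}++;
   }
   return %word_counts;
}

Training and classification then work exactly as before; only the call to parse_file is replaced by a call to parse_email.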

Performance

The Perl code presented here isn't optimized at all. Each time classify is called, it has to walk the whole %words hash to recalculate the total word count for each category, and it would be easy to cache those totals and the log values between invocations (one possible approach is sketched below). The use of a Perl hash will not scale well in terms of memory usage.
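
One way to avoid the per-call recalculation is to keep running word totals for each category in a second tied hash that is updated during training. The sketch below assumes the %words hash from Listing One; the names %totals, totals.db, and add_words_cached are illustrative, not part of the listing.

# A possible optimization: maintain per-category word totals in a
# second tied hash so classify doesn't have to walk every key of %words.
my %totals;
tie %totals, 'DB_File', 'totals.db';

sub add_words_cached
{
   my ( $category, %words_in_file ) = @_;
   foreach my $word (keys %words_in_file) {
      $words{"$category-$word"} += $words_in_file{$word};
      $totals{$category}        += $words_in_file{$word};
   }
}

# classify could then begin with:
#
#   my %count = %totals;
#   my $total = 0;
#   $total += $_ foreach values %totals;
#
# instead of looping over every key in %words.

A similar trick works for caching the log values themselves, at the cost of recomputing them whenever a category's counts change.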

However, the algorithm is simple and can be implemented in any language. A highly optimized version of this code is used in the POPFile e-mail classifier to do automatic classification. It uses a combination of Perl and SQL queries. The Bow toolkit from CMU has a fast C implementation of naïve Bayesian classification.

Uses of Text Classification

Although spam filtering is the best-known use of naïve Bayesian text classification, there are a number of other interesting uses on the horizon. IBM researcher Martin Overton has published a paper concerning the use of naïve Bayesian e-mail classification to detect e-mail-borne malware (http://arachnid.homeip.net/papers/VB2004-Canning-more-than-SPAM-1.02.pdf). In Overton's paper, presented at the Virus Bulletin 2004 conference, he demonstrated that a text classifier could accurately identify worms and viruses, such as W32.Bagle, and that it was able to spot even mutated versions of the worms. All this was done without giving the classifier any special knowledge of viruses.

The POPFile Project is a general e-mail classifier that can classify incoming e-mail into any number of categories. Users of POPFile have reported using its naïve Bayesian engine to classify mail into up to 50 different categories with good accuracy, and one journalist uses it to sort "interesting" from "uninteresting" press releases.

At LISA 2004, four Norwegian researchers presented a paper on a system called DIGIMIMIR, which is capable of automatically classifying requests coming into a typical IT help desk and, in some cases, responding to them automatically (http://www.digimimir.org/). It uses a document-clustering approach that, while not naïve Bayesian, is of similar implementation complexity and allows "similar" e-mails to be clustered together without knowing the initial set of possible topics.


John is chief scientist at Electric Cloud, which focuses on reducing software build times. He is also the creator of POPFile. John can be contacted at [email protected].
