{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Natural Language Processing for Text Classification with NLTK and Scikit-learn\n", "\n", "Text classification using a simple support vector classifier on a dataset of positive and negative movie reviews.\n", "\n", "The data set we will be using comes from the UCI Machine Learning Repository. It contains over 5000 SMS labeled messages that have been collected for mobile phone spam research. It can be downloaded from the following URL:\n", "\n", "https://archive.ics.uci.edu/ml/datasets/sms+spam+collection\n", "\n", "## Import Necessary Libraries" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Python: 3.6.7 |Anaconda custom (64-bit)| (default, Oct 23 2018, 14:01:38) \n", "[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)]\n", "NLTK: 3.4\n", "Scikit-learn: 0.20.2\n", "Pandas: 0.23.4\n", "Numpy: 1.15.4\n" ] } ], "source": [ "import sys\n", "import nltk\n", "import sklearn\n", "import pandas as pd\n", "import numpy as np\n", "\n", "print('Python: {}'.format(sys.version))\n", "print('NLTK: {}'.format(nltk.__version__))\n", "print('Scikit-learn: {}'.format(sklearn.__version__))\n", "print('Pandas: {}'.format(pd.__version__))\n", "print('Numpy: {}'.format(np.__version__))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load the Dataset" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "df = pd.read_table('SMSSPamCollection', header=None, encoding='utf-8')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Print useful information about the dataset" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "RangeIndex: 5572 entries, 0 to 5571\n", "Data columns (total 2 columns):\n", "0 5572 non-null object\n", "1 5572 non-null object\n", "dtypes: 
object(2)\n", "memory usage: 87.1+ KB\n", "None\n", " 0 1\n", "0 ham Go until jurong point, crazy.. Available only ...\n", "1 ham Ok lar... Joking wif u oni...\n", "2 spam Free entry in 2 a wkly comp to win FA Cup fina...\n", "3 ham U dun say so early hor... U c already then say...\n", "4 ham Nah I don't think he goes to usf, he lives aro...\n" ] } ], "source": [ "print(df.info())\n", "print(df.head())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Check class distribution" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ham 4825\n", "spam 747\n", "Name: 0, dtype: int64\n" ] } ], "source": [ "classes = df[0]\n", "print(classes.value_counts())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preprocess the Data\n", "\n", "### Convert class labels to binary values, 0 = ham and 1 = spam" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "[0 0 1 0 0 1 0 0 1 1]\n" ] } ], "source": [ "from sklearn.preprocessing import LabelEncoder\n", "\n", "encoder = LabelEncoder()\n", "Y = encoder.fit_transform(classes)\n", "\n", "print(Y[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Store the SMS message data" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 Go until jurong point, crazy.. Available only ...\n", "1 Ok lar... Joking wif u oni...\n", "2 Free entry in 2 a wkly comp to win FA Cup fina...\n", "3 U dun say so early hor... U c already then say...\n", "4 Nah I don't think he goes to usf, he lives aro...\n", "5 FreeMsg Hey there darling it's been 3 week's n...\n", "6 Even my brother is not like to speak with me. ...\n", "7 As per your request 'Melle Melle (Oru Minnamin...\n", "8 WINNER!! 
As a valued network customer you have...\n", "9 Had your mobile 11 months or more? U R entitle...\n", "Name: 1, dtype: object\n" ] } ], "source": [ "text_messages = df[1]\n", "print(text_messages[:10])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Regular Expressions\n", "\n", "Some common regular expression metacharacters - adapted from Wikipedia\n", "\n", "^ Matches the starting position within the string. In line-based tools, it matches the starting position of any line.\n", "\n", ". Matches any single character (many applications exclude newlines, and exactly which characters are considered newlines is flavor-, character-encoding-, and platform-specific, but it is safe to assume that the line feed character is included). Within POSIX bracket expressions, the dot character matches a literal dot. For example, a.c matches \"abc\", etc., but [a.c] matches only \"a\", \".\", or \"c\".\n", "\n", "[ ] A bracket expression. Matches a single character that is contained within the brackets. For example, [abc] matches \"a\", \"b\", or \"c\". [a-z] specifies a range which matches any lowercase letter from \"a\" to \"z\". These forms can be mixed: [abcx-z] matches \"a\", \"b\", \"c\", \"x\", \"y\", or \"z\", as does [a-cx-z]. The - character is treated as a literal character if it is the last or the first (after the ^, if present) character within the brackets: [abc-], [-abc]. Note that backslash escapes are not allowed. The ] character can be included in a bracket expression if it is the first (after the ^) character: []abc].\n", "\n", "[^ ] Matches a single character that is not contained within the brackets. For example, [^abc] matches any character other than \"a\", \"b\", or \"c\". [^a-z] matches any single character that is not a lowercase letter from \"a\" to \"z\". Likewise, literal characters and ranges can be mixed.\n", "\n", "$ Matches the ending position of the string or the position just before a string-ending newline. 
In line-based tools, it matches the ending position of any line.\n", "\n", "( ) Defines a marked subexpression. The string matched within the parentheses can be recalled later (see the next entry, \n). A marked subexpression is also called a block or capturing group. BRE mode requires \( \).\n", "\n", "\n Matches what the nth marked subexpression matched, where n is a digit from 1 to 9. This construct is vaguely defined in the POSIX.2 standard. Some tools allow referencing more than nine capturing groups.\n", "\n", "* Matches the preceding element zero or more times. For example, ab*c matches \"ac\", \"abc\", \"abbbc\", etc. [xyz]* matches \"\", \"x\", \"y\", \"z\", \"zx\", \"zyx\", \"xyzzy\", and so on. (ab)* matches \"\", \"ab\", \"abab\", \"ababab\", and so on.\n", "\n", "{m,n} Matches the preceding element at least m and not more than n times. For example, a{3,5} matches only \"aaa\", \"aaaa\", and \"aaaaa\". This is not found in a few older instances of regexes. BRE mode requires \{m,n\}.\n", "\n", "### Use regular expressions to replace email addresses, URLs, phone numbers, and other numbers" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "# Replace email addresses with 'emailaddress'\n", "processed = text_messages.str.replace(r'^.+@[^\.].*\.[a-z]{2,}$',\n", "                                      'emailaddress')\n", "\n", "# Replace URLs with 'webaddress'\n", "processed = processed.str.replace(r'^http\://[a-zA-Z0-9\-\.]+\.[a-zA-Z]{2,3}(/\S*)?$',\n", "                                  'webaddress')\n", "\n", "# Replace money symbols with 'moneysymb' (£ can be typed with ALT key + 156)\n", "processed = processed.str.replace(r'£|\$', 'moneysymb')\n", "    \n", "# Replace 10 digit phone numbers (formats include parentheses, spaces, no spaces, dashes) with 'phonenumbr'\n", "processed = processed.str.replace(r'^\(?[\d]{3}\)?[\s-]?[\d]{3}[\s-]?[\d]{4}$',\n", "                                  'phonenumbr')\n", "    \n", "# Replace numbers with 'numbr'\n", "processed = processed.str.replace(r'\d+(\.\d+)?', 'numbr')" ] }, {
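"cell_type": "markdown", "metadata": {}, "source": [ "As a quick sanity check, the same substitutions can be applied to a single made-up example message with Python's built-in `re` module. This is a minimal sketch: the message is hypothetical (not from the dataset), and the simplified `http\S+` URL pattern here is an illustrative stand-in for the stricter anchored pattern used above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "# hypothetical example message (not taken from the dataset)\n", "sample = 'Call 555 123 4567 to claim your $500 prize at http://win.example.com'\n", "sample = re.sub(r'http\S+', 'webaddress', sample)\n", "sample = re.sub(r'£|\$', 'moneysymb', sample)\n", "sample = re.sub(r'\(?\d{3}\)?[\s-]?\d{3}[\s-]?\d{4}', 'phonenumbr', sample)\n", "sample = re.sub(r'\d+(\.\d+)?', 'numbr', sample)\n", "print(sample)" ] }, {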
"cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [], "source": [ "# Remove punctuation\n", "processed = processed.str.replace(r'[^\\w\\d\\s]', ' ')\n", "\n", "# Replace whitespace between terms with a single space\n", "processed = processed.str.replace(r'\\s+', ' ')\n", "\n", "# Remove leading and trailing whitespace\n", "processed = processed.str.replace(r'^\\s+|\\s+?$', '')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Change words to lower case - Hello, HELLO, hello are all the same word" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "0 go until jurong point crazy available only in ...\n", "1 ok lar joking wif u oni\n", "2 free entry in numbr a wkly comp to win fa cup ...\n", "3 u dun say so early hor u c already then say\n", "4 nah i don t think he goes to usf he lives arou...\n", "5 freemsg hey there darling it s been numbr week...\n", "6 even my brother is not like to speak with me t...\n", "7 as per your request melle melle oru minnaminun...\n", "8 winner as a valued network customer you have b...\n", "9 had your mobile numbr months or more u r entit...\n", "10 i m gonna be home soon and i don t want to tal...\n", "11 six chances to win cash from numbr to numbr nu...\n", "12 urgent you have won a numbr week free membersh...\n", "13 i ve been searching for the right words to tha...\n", "14 i have a date on sunday with will\n", "15 xxxmobilemovieclub to use your credit click th...\n", "16 oh k i m watching here\n", "17 eh u remember how numbr spell his name yes i d...\n", "18 fine if that s the way u feel that s the way i...\n", "19 england v macedonia dont miss the goals team n...\n", "20 is that seriously how you spell his name\n", "21 i m going to try for numbr months ha ha only j...\n", "22 so ü pay first lar then when is da stock comin\n", "23 aft i finish my lunch then i go str down lor a...\n", "24 ffffffffff alright no way i 
can meet up with y...\n", "25 just forced myself to eat a slice i m really n...\n", "26 lol your always so convincing\n", "27 did you catch the bus are you frying an egg di...\n", "28 i m back amp we re packing the car now i ll le...\n", "29 ahhh work i vaguely remember that what does it...\n", " ... \n", "5542 armand says get your ass over to epsilon\n", "5543 u still havent got urself a jacket ah\n", "5544 i m taking derek amp taylor to walmart if i m ...\n", "5545 hi its in durban are you still on this number\n", "5546 ic there are a lotta childporn cars then\n", "5547 had your contract mobile numbr mnths latest mo...\n", "5548 no i was trying it all weekend v\n", "5549 you know wot people wear t shirts jumpers hat ...\n", "5550 cool what time you think you can get here\n", "5551 wen did you get so spiritual and deep that s g...\n", "5552 have a safe trip to nigeria wish you happiness...\n", "5553 hahaha use your brain dear\n", "5554 well keep in mind i ve only got enough gas for...\n", "5555 yeh indians was nice tho it did kane me off a ...\n", "5556 yes i have so that s why u texted pshew missin...\n", "5557 no i meant the calculation is the same that lt...\n", "5558 sorry i ll call later\n", "5559 if you aren t here in the next lt gt hours imm...\n", "5560 anything lor juz both of us lor\n", "5561 get me out of this dump heap my mom decided to...\n", "5562 ok lor sony ericsson salesman i ask shuhui the...\n", "5563 ard numbr like dat lor\n", "5564 why don t you wait til at least wednesday to s...\n", "5565 huh y lei\n", "5566 reminder from onumbr to get numbr pounds free ...\n", "5567 this is the numbrnd time we have tried numbr c...\n", "5568 will ü b going to esplanade fr home\n", "5569 pity was in mood for that so any other suggest...\n", "5570 the guy did some bitching but i acted like i d...\n", "5571 rofl its true to its name\n", "Name: 1, Length: 5572, dtype: object\n" ] } ], "source": [ "processed = processed.str.lower()\n", "print(processed)" ] }, { 
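"cell_type": "markdown", "metadata": {}, "source": [ "The cleaning steps so far (symbol and number replacement, punctuation removal, whitespace collapsing, lower-casing) can be collected into one helper for new, unseen messages. This is a minimal sketch: `clean_message` is a hypothetical name, and the patterns simply mirror the ones applied above." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import re\n", "\n", "def clean_message(msg):\n", "    # mirror the normalisation applied to the training data\n", "    msg = re.sub(r'£|\$', 'moneysymb', msg)\n", "    msg = re.sub(r'\d+(\.\d+)?', 'numbr', msg)\n", "    msg = re.sub(r'[^\w\d\s]', ' ', msg)    # remove punctuation\n", "    msg = re.sub(r'\s+', ' ', msg).strip()  # collapse whitespace\n", "    return msg.lower()\n", "\n", "print(clean_message('WINNER!! Claim your $1000 prize now!!'))" ] }, {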
"cell_type": "markdown", "metadata": {}, "source": [ "### Remove stop words from text messages" ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "from nltk.corpus import stopwords\n", "\n", "stop_words = set(stopwords.words('english'))\n", "\n", "processed = processed.apply(lambda x: ' '.join(\n", " term for term in x.split() if term not in stop_words))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Remove word stems using a Porter stemmer" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [], "source": [ "ps = nltk.PorterStemmer()\n", "\n", "processed = processed.apply(lambda x: ' '.join(\n", " ps.stem(term) for term in x.split()))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Generating Features\n", "\n", "### Create bag-of-words" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [], "source": [ "from nltk.tokenize import word_tokenize\n", "\n", "all_words = []\n", "\n", "for message in processed:\n", " words = word_tokenize(message)\n", " for w in words:\n", " all_words.append(w)\n", " \n", "all_words = nltk.FreqDist(all_words)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Print the total number of words and the 15 most common words" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Number of words: 6579\n", "Most common words: [('numbr', 2648), ('u', 1207), ('call', 674), ('go', 456), ('get', 451), ('ur', 391), ('gt', 318), ('lt', 316), ('come', 304), ('moneysymbnumbr', 303), ('ok', 293), ('free', 284), ('day', 276), ('know', 275), ('love', 266)]\n" ] } ], "source": [ "print('Number of words: {}'.format(len(all_words)))\n", "print('Most common words: {}'.format(all_words.most_common(15)))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Use the 1500 most common words as features" ] }, { "cell_type": 
"code", "execution_count": 16, "metadata": {}, "outputs": [], "source": [ "word_features = list(all_words.keys())[:1500]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The find_features function will determine which of the 1500 word features are contained in the review" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "go\n", "jurong\n", "point\n", "crazi\n", "avail\n", "bugi\n", "n\n", "great\n", "world\n", "la\n", "e\n", "buffet\n", "cine\n", "got\n", "amor\n", "wat\n" ] } ], "source": [ "def find_features(message):\n", " words = word_tokenize(message)\n", " features = {}\n", " for word in word_features:\n", " features[word] = (word in words)\n", "\n", " return features\n", "\n", "features = find_features(processed[0])\n", "for key, value in features.items():\n", " if value == True:\n", " print(key)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Now lets do it for all the messages" ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [], "source": [ "messages = list(zip(processed, Y))\n", "\n", "# define a seed for reproducibility\n", "seed = 1\n", "np.random.seed = seed\n", "np.random.shuffle(messages)\n", "\n", "# call find_features function for each SMS message\n", "featuresets = [(find_features(text), label) for (text, label) in messages]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Training and Testing datasets using sklearn" ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "4179\n", "1393\n" ] } ], "source": [ "from sklearn import model_selection\n", "\n", "# split the data into training and testing datasets\n", "training, testing = model_selection.train_test_split(featuresets, test_size = 0.25, random_state=seed)\n", "print(len(training))\n", "print(len(testing))" ] }, { "cell_type": "markdown", 
"metadata": {}, "source": [ "## Scikit-Learn Classifiers with NLTK" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "SVC Accuracy: 98.27709978463747\n" ] } ], "source": [ "from nltk.classify.scikitlearn import SklearnClassifier\n", "from sklearn.svm import SVC\n", "\n", "model = SklearnClassifier(SVC(kernel = 'linear'))\n", "\n", "# train the model on the training data\n", "model.train(training)\n", "\n", "# and test on the testing dataset!\n", "accuracy = nltk.classify.accuracy(model, testing)*100\n", "print(\"SVC Accuracy: {}\".format(accuracy))" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "K Nearest Neighbors Accuracy: 93.96984924623115\n", "Decision Tree Accuracy: 97.48743718592965\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/anaconda3/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.\n", " \"10 in version 0.20 to 100 in 0.22.\", FutureWarning)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Random Forest Accuracy: 98.1335247666906\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/logistic.py:433: FutureWarning: Default solver will be changed to 'lbfgs' in 0.22. Specify a solver to silence this warning.\n", " FutureWarning)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "Logistic Regression Accuracy: 98.42067480258436\n" ] }, { "name": "stderr", "output_type": "stream", "text": [ "/anaconda3/lib/python3.6/site-packages/sklearn/linear_model/stochastic_gradient.py:183: FutureWarning: max_iter and tol parameters have been added in SGDClassifier in 0.19. 
If max_iter is set but tol is left unset, the default value for tol in 0.19 and 0.20 will be None (which is equivalent to -infinity, so it has no effect) but will change in 0.21 to 1e-3. Specify tol to silence this warning.\n", " FutureWarning)\n" ] }, { "name": "stdout", "output_type": "stream", "text": [ "SGD Classifier Accuracy: 97.48743718592965\n", "Naive Bayes Accuracy: 98.27709978463747\n", "SVM Linear Accuracy: 98.27709978463747\n" ] } ], "source": [ "from sklearn.neighbors import KNeighborsClassifier\n", "from sklearn.tree import DecisionTreeClassifier\n", "from sklearn.ensemble import RandomForestClassifier\n", "from sklearn.linear_model import LogisticRegression, SGDClassifier\n", "from sklearn.naive_bayes import MultinomialNB\n", "from sklearn.svm import SVC\n", "from sklearn.metrics import classification_report, accuracy_score, confusion_matrix\n", "\n", "# Define models to train\n", "names = [\"K Nearest Neighbors\", \"Decision Tree\", \"Random Forest\", \"Logistic Regression\", \"SGD Classifier\",\n", "         \"Naive Bayes\", \"SVM Linear\"]\n", "\n", "classifiers = [\n", "    KNeighborsClassifier(),\n", "    DecisionTreeClassifier(),\n", "    RandomForestClassifier(),\n", "    LogisticRegression(),\n", "    SGDClassifier(max_iter = 100),\n", "    MultinomialNB(),\n", "    SVC(kernel = 'linear')\n", "]\n", "\n", "models = zip(names, classifiers)\n", "\n", "for name, model in models:\n", "    nltk_model = SklearnClassifier(model)\n", "    nltk_model.train(training)\n", "    accuracy = nltk.classify.accuracy(nltk_model, testing)*100\n", "    print(\"{} Accuracy: {}\".format(name, accuracy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Ensemble method - Voting classifier" ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Voting Classifier: Accuracy: 98.27709978463747\n" ] } ], "source": [ "from sklearn.ensemble import VotingClassifier\n", "\n", "names = [\"K Nearest Neighbors\", 
\"Decision Tree\", \"Random Forest\", \"Logistic Regression\", \"SGD Classifier\",\n", " \"Naive Bayes\", \"SVM Linear\"]\n", "\n", "classifiers = [\n", " KNeighborsClassifier(),\n", " DecisionTreeClassifier(),\n", " RandomForestClassifier(),\n", " LogisticRegression(),\n", " SGDClassifier(max_iter = 100),\n", " MultinomialNB(),\n", " SVC(kernel = 'linear')\n", "]\n", "\n", "models = list(zip(names, classifiers))\n", "\n", "nltk_ensemble = SklearnClassifier(VotingClassifier(estimators = models, voting = 'hard', n_jobs = -1))\n", "nltk_ensemble.train(training)\n", "accuracy = nltk.classify.accuracy(nltk_model, testing)*100\n", "print(\"Voting Classifier: Accuracy: {}\".format(accuracy))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Make class label prediction for testing set" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [], "source": [ "txt_features, labels = zip(*testing)\n", "\n", "prediction = nltk_ensemble.classify_many(txt_features)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Print a confusion matrix and a classification report" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " precision recall f1-score support\n", "\n", " 0 0.98 1.00 0.99 1213\n", " 1 0.99 0.88 0.93 180\n", "\n", " micro avg 0.98 0.98 0.98 1393\n", " macro avg 0.99 0.94 0.96 1393\n", "weighted avg 0.98 0.98 0.98 1393\n", "\n" ] }, { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
predicted
hamspam
actualham12112
spam21159
\n", "
" ], "text/plain": [ " predicted \n", " ham spam\n", "actual ham 1211 2\n", " spam 21 159" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "print(classification_report(labels, prediction))\n", "\n", "pd.DataFrame(\n", " confusion_matrix(labels, prediction),\n", " index = [['actual', 'actual'], ['ham', 'spam']],\n", " columns = [['predicted', 'predicted'], ['ham', 'spam']])" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.6.7" } }, "nbformat": 4, "nbformat_minor": 2 }