Naive Bayes

Sahil Jilani
2 min read · Dec 9, 2020


Naïve Bayes Classifiers:

Naïve Bayes is a machine learning method you can use to predict the likelihood that an event will occur, given evidence that is present in your data. It is particularly useful for very large data sets, and it is known to outperform even highly complicated classification methods; for example, it was one of the earlier methods used for spam detection.

Conditional Probability:

P(c|x) — the posterior probability of class c (the target) given predictor x (the attributes).

P(c) — the prior probability of class.

P(x|c) — the likelihood, which is the probability of the predictor given the class.

P(x) — the prior probability of the predictor.

These quantities combine through Bayes’ theorem: P(c|x) = P(x|c) · P(c) / P(x). (e.g. figure 0.1)

Figure 0.1: probability
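The definitions above can be worked through numerically. The following sketch uses hypothetical numbers (the priors and likelihoods are made up for illustration, not from the article) to compute the posterior probability that a message is spam given that it contains the word “offer”:

```python
# Hypothetical numbers: 30% of mail is spam, "offer" appears in 80% of
# spam messages and 10% of non-spam messages.
p_spam = 0.3          # P(c): prior probability of the class "spam"
p_offer_spam = 0.8    # P(x|c): likelihood of the word given spam
p_offer_ham = 0.1     # likelihood of the word given non-spam

# P(x): prior probability of the predictor, by total probability
p_offer = p_offer_spam * p_spam + p_offer_ham * (1 - p_spam)

# P(c|x): posterior probability of spam given the word, via Bayes' theorem
p_spam_offer = p_offer_spam * p_spam / p_offer
print(round(p_spam_offer, 3))  # → 0.774
```

Even though only 30% of mail is spam, seeing the word shifts the posterior to roughly 77%.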

Three Types of Naïve Bayes Models:

Multinomial — good when your features (categorical or continuous) describe frequency counts (e.g. word counts).

Bernoulli — good for making predictions from binary features (like 0/1).

Gaussian — used in classification; it assumes that features follow a normal distribution.
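The three variants above map directly onto feature types. A minimal sketch, assuming scikit-learn and synthetic data (the arrays here are randomly generated placeholders, not real features):

```python
from sklearn.naive_bayes import MultinomialNB, BernoulliNB, GaussianNB
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=20)               # binary class labels

X_counts = rng.integers(0, 5, size=(20, 3))   # frequency counts -> Multinomial
X_binary = (X_counts > 0).astype(int)         # presence/absence -> Bernoulli
X_real = rng.normal(size=(20, 3))             # continuous values -> Gaussian

preds = {}
for name, model, X in [("multinomial", MultinomialNB(), X_counts),
                       ("bernoulli", BernoulliNB(), X_binary),
                       ("gaussian", GaussianNB(), X_real)]:
    preds[name] = model.fit(X, y).predict(X[:5])
```

Each variant exposes the same `fit`/`predict` interface; only the assumed feature distribution differs.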

Naïve Bayes Assumption:

Predictors are independent of each other.

A priori assumption: the assumption that past conditions still hold. When we make predictions from historic values, we will get incorrect results if present circumstances have changed. All regression models carry this a priori assumption as well.

Import Library: you need to import a library that provides the naïve Bayes model.

Figure 0.2: libraries
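As a minimal sketch of the import-and-fit workflow (assuming scikit-learn, with a tiny made-up dataset of one continuous feature):

```python
from sklearn.naive_bayes import GaussianNB

X = [[1.0], [1.2], [9.8], [10.1]]   # one continuous feature, two clear clusters
y = [0, 0, 1, 1]                    # class labels

model = GaussianNB()                # Gaussian variant for continuous data
model.fit(X, y)
print(model.predict([[1.1], [10.0]]))  # → [0 1]
```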

Spam Checking:

Step 1: Convert the data set into a frequency table

Step 2: Create a likelihood table by finding the probabilities, such as the probability that a given word appears in a spam message.

Figure 0.3: spam check
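The two steps above can be sketched from scratch in plain Python. The toy messages below are hypothetical; step 1 builds per-class word frequency tables, and step 2 turns them into Laplace-smoothed likelihoods that Bayes’ rule combines into a (log-space) posterior score per class:

```python
import math
from collections import Counter

messages = [("win money now", "spam"),
            ("limited offer win", "spam"),
            ("meeting at noon", "ham"),
            ("lunch at noon today", "ham")]

# Step 1: frequency table per class
freq = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for text, label in messages:
    class_counts[label] += 1
    freq[label].update(text.split())

vocab = {w for counts in freq.values() for w in counts}

def posterior_scores(text):
    """Step 2: likelihood table + Bayes' rule (unnormalized log scores)."""
    scores = {}
    for label in freq:
        total = sum(freq[label].values())
        score = math.log(class_counts[label] / len(messages))  # log prior P(c)
        for word in text.split():
            # Laplace-smoothed likelihood P(word | class)
            p = (freq[label][word] + 1) / (total + len(vocab))
            score += math.log(p)
        scores[label] = score
    return scores

scores = posterior_scores("win a prize now")
print(max(scores, key=scores.get))  # → spam
```

Working in log space avoids underflow when many word likelihoods are multiplied, and Laplace smoothing keeps unseen words (like “prize” here) from zeroing out a class.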
