Decision Tree Classification


Introduction

Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.

Entropy

The goal of training is to find the best splits at the nodes in order to build the best possible tree. Splits are chosen using a criterion such as entropy.

There are several ways to define entropy:

  • Entropy corresponds to the amount of information contained in a source of information.

  • Entropy can also be seen as the randomness or the measure of surprise in a set.

  • Entropy is a metric that measures the unpredictability or impurity in the system.


In decision trees, we will treat entropy as the measure of the impurity inside a node. The goal of the decision tree model is to reduce the entropy of the nodes at each split:

[Figure: entropy reduction from the parent node to the child nodes after a split]

Thus, we want to maximize the difference between the entropy of the parent node and the entropy of the child nodes. This difference is called the Information gain.

The entropy H of a set X is mathematically formulated as follows:

H(X) = - \sum_{x \in X} p(x) \log p(x)
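
As an illustration, here is a minimal Python sketch of this formula. The function name `entropy` and the choice of base-10 logarithms are our own; base 10 is the base that the numeric example later in this article is consistent with.

```python
from collections import Counter
from math import log10


def entropy(labels):
    """H(X) = -sum over x of p(x) * log(p(x)), with base-10 logarithms."""
    total = len(labels)
    return -sum(
        (count / total) * log10(count / total)
        for count in Counter(labels).values()
    )


print(entropy(["Blue"] * 5 + ["Yellow"] * 5))  # ~0.301 = log10(2), a perfectly mixed set
print(entropy(["Blue"] * 10))                  # -0.0, i.e. zero entropy for a pure set
```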

Information gain

Information gain is the difference between the entropy of the parent node and the weighted sum of the entropies of the child nodes, and thus it can be formulated as follows:

IG(Y, X) = H(Y) - \sum_{x \in unique(X)} P(x|X) \times H(Y | X = x)

= H(Y) - \sum_{x \in unique(X)} \frac{X.count(x)}{len(X)} \times H(Y[X == x])

where:

  • H(.) is the entropy.

  • Y is the population prior to the split; it represents the parent node.

  • X is the variable that we want to use for the splitting.

  • x is a unique value of X.

  • Y[X == x] is the subset of Y restricted to the samples where X equals x.
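
To make the count-based version above concrete, here is a minimal Python sketch. It reuses the `entropy` helper sketched earlier; the function name and signature are our own, not a library API.

```python
def information_gain(Y, X):
    """IG(Y, X) = H(Y) - sum over unique x of (X.count(x) / len(X)) * H(Y[X == x]).

    Y -- list of class labels in the parent node
    X -- list of the candidate feature's values, aligned with Y
    """
    gain = entropy(Y)
    for x in set(X):                                  # unique(X)
        subset = [y for y, v in zip(Y, X) if v == x]  # Y[X == x]
        gain -= (X.count(x) / len(X)) * entropy(subset)
    return gain
```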

Let's work through a concrete example:

[Figure: a parent node of 21 samples (11 blue, 10 yellow) split by shape into a Square child node (7 blue, 2 yellow) and a Circle child node (4 blue, 8 yellow)]

We are going to calculate the information gain obtained by splitting the parent node using the values of X:

IG(parent, X) = H(parent) - \sum_{x \in unique(X)} P(x|X) \times H(parent | X = x)


First, we calculate the entropy of the parent node:

H(parent) = - P(Y=Blue) \times \log P(Y=Blue) - P(Y=Yellow) \times \log P(Y=Yellow)

= - \frac{11}{21} \times \log\left(\frac{11}{21}\right) - \frac{10}{21} \times \log\left(\frac{10}{21}\right) = 0.3
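
The value 0.3 corresponds to base-10 logarithms (with natural or base-2 logarithms the numbers would differ); a quick check in Python:

```python
from math import log10

# Entropy of the parent node: 11 blue and 10 yellow samples out of 21.
h_parent = -(11 / 21) * log10(11 / 21) - (10 / 21) * log10(10 / 21)
print(round(h_parent, 2))  # 0.3
```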


Then, we calculate the weighted sum of the child nodes' entropies after splitting on the unique values of X:

unique(X) = [Circle, Square]

\sum_{x \in unique(X)} P(x|X) \times H(Y | X = x) = P(Square|X) \times H(Y | X = Square)

+ P(Circle|X) \times H(Y | X = Circle)

= \frac{9}{21} \times H(Y | X = Square) + \frac{12}{21} \times H(Y | X = Circle)

where:

  • H(Y | X = Square) is the entropy of the first child node (the Square node).

  • H(Y | X = Circle) is the entropy of the second child node (the Circle node).


We start with the first child node:

H(Y | X = Square) = - P(Y=Blue | X = Square) \times \log P(Y=Blue | X = Square)

- P(Y=Yellow | X = Square) \times \log P(Y=Yellow | X = Square)

= - \frac{7}{9} \times \log\frac{7}{9} - \frac{2}{9} \times \log\frac{2}{9} = 0.23


And then the second child node:

H(Y | X = Circle) = - P(Y=Blue | X = Circle) \times \log P(Y=Blue | X = Circle)

- P(Y=Yellow | X = Circle) \times \log P(Y=Yellow | X = Circle)

= - \frac{4}{12} \times \log\frac{4}{12} - \frac{8}{12} \times \log\frac{8}{12} = 0.28


Finally, we substitute the entropies in the Information Gain formula:

IG(parent, X) = H(parent) - \sum_{x \in unique(X)} P(x|X) \times H(parent | X = x)

= 0.3 - \frac{9}{21} \times 0.23 - \frac{12}{21} \times 0.28 = 0.041
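
As a sanity check, the whole example can be re-derived with a few lines of Python (our own snippet, again using base-10 logarithms):

```python
from math import log10

def h(p, q):
    """Entropy of a two-class node given its class proportions (base-10 logs)."""
    return -p * log10(p) - q * log10(q)

h_parent = h(11 / 21, 10 / 21)   # ~0.30
h_square = h(7 / 9, 2 / 9)       # ~0.23
h_circle = h(4 / 12, 8 / 12)     # ~0.28

ig = h_parent - (9 / 21) * h_square - (12 / 21) * h_circle
print(round(ig, 3))  # ~0.044 without intermediate rounding; rounding the
                     # entropies to 0.3, 0.23 and 0.28 first gives the 0.041 above
```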


As stated before, the objective of a node split is to maximize the information gain, and thus minimize the entropy in the resulting child nodes. To do this, we try splitting the node on different input variables X_1, X_2, \ldots, X_n and we only keep the split that maximizes the information gain:

X^{*} = \underset{X_i}{\operatorname{argmax}} \; IG(Y, X_i)
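
A minimal sketch of this selection step, reusing the `information_gain` helper sketched earlier (the `candidates` dictionary and the `border` feature are hypothetical; the `shape` column reproduces the example above):

```python
# Parent node: 11 blue then 10 yellow samples, and two candidate features.
Y = ["Blue"] * 11 + ["Yellow"] * 10
candidates = {
    "shape":  ["Square"] * 7 + ["Circle"] * 4 + ["Square"] * 2 + ["Circle"] * 8,
    "border": ["Thick"] * 5 + ["Thin"] * 6 + ["Thick"] * 4 + ["Thin"] * 6,
}

# argmax over the candidate variables: keep the split with the largest gain.
best = max(candidates, key=lambda name: information_gain(Y, candidates[name]))
print(best)  # "shape" wins here (IG ~0.044 vs ~0.001 for "border")
```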

When to stop splitting

Node splitting in decision trees is recursive, so there must be a criterion we can use to stop the splitting. These are some of the most commonly implemented criteria (a minimal sketch of how they combine is shown after the list):

  • When the node is pure: H(node) = 0. It's pointless to split the node any further.

  • Maximum depth: We can set a maximum depth that the tree is allowed to reach; once it is reached, splitting stops even if the node is not pure.

  • Minimum number of samples per node: We can also set a minimum number N of samples per node. If the number of samples in a node drops to N or below, we stop splitting even if the node is not pure.
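
Here is a minimal sketch of how these checks might be combined inside a recursive splitting routine (the function name and the threshold values are illustrative, not part of any specific library):

```python
def should_stop(node_labels, depth, max_depth=5, min_samples=2):
    """Return True when a node should become a leaf instead of being split."""
    is_pure = len(set(node_labels)) <= 1          # H(node) = 0
    too_deep = depth >= max_depth                 # maximum depth reached
    too_small = len(node_labels) <= min_samples   # minimum samples per node
    return is_pure or too_deep or too_small
```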

By the end of the training (the splitting), each node that lies at the end of the decision tree is called a "leaf", because it is not the root of any subtree. Each leaf predicts the class with the most samples among those it contains.
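
For example, a leaf's prediction can be read off as the majority class of the samples it holds (a small illustrative snippet, assuming the leaf stores its labels as a list):

```python
from collections import Counter

def leaf_prediction(leaf_labels):
    """Predict the class with the most samples in the leaf."""
    return Counter(leaf_labels).most_common(1)[0][0]

print(leaf_prediction(["Blue", "Blue", "Yellow"]))  # Blue
```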

Conclusion

The decision tree is one of the most popular machine learning algorithms thanks to its efficiency, intuitive foundations, and simple implementation. The algorithm can also be used with numerical independent variables (Gaussian Decision Tree), and it can be extended to solve regression tasks as well.

