Facts About Random Forest - And Why They Matter

Presentation Transcript


Random Forest:

Random forests, or random decision forests, are an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time and outputting the class that is the mode of the classes (for classification) or the mean prediction (for regression) of the individual trees.
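The two aggregation rules in the definition above (mode for classification, mean for regression) can be sketched in a few lines of plain Python. The function names here are made up for illustration; a real library handles this internally.

```python
from collections import Counter

def aggregate_classification(tree_predictions):
    # Majority vote: return the class that is the mode of the trees' predictions.
    return Counter(tree_predictions).most_common(1)[0][0]

def aggregate_regression(tree_predictions):
    # Return the mean of the individual trees' predictions.
    return sum(tree_predictions) / len(tree_predictions)

print(aggregate_classification(["spam", "ham", "spam"]))  # spam
print(aggregate_regression([2.0, 3.0, 4.0]))              # 3.0
```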

Table of Contents:

- How it Works
- Real Life Analogy
- Feature Importance
- Difference between Decision Trees and Random Forests
- Important Hyperparameters (predictive power, speed)
- Advantages and Disadvantages
- Use Cases
- Summary


Random forest builds multiple decision trees and merges them together to get a more accurate and stable prediction.

How it Works:

Random Forest is a supervised learning algorithm. As the name suggests, it builds a forest and makes it somehow random. The "forest" it builds is an ensemble of decision trees, most of the time trained with the "bagging" method. The general idea of the bagging method is that a combination of learning models increases the overall result.
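The bagging idea described above can be sketched with the standard library alone: each model sees its own bootstrap sample (rows drawn with replacement), and the ensemble votes. The "tree" here is a deliberately trivial stand-in that just memorises the majority label of its sample; it is a made-up stub, not a real decision tree.

```python
import random

def bootstrap_sample(data, rng):
    # Draw len(data) rows with replacement (a "bootstrap" sample).
    return [rng.choice(data) for _ in data]

def train_stub_tree(sample):
    # Hypothetical stand-in for fitting a decision tree: this "tree"
    # simply predicts the majority label of its bootstrap sample.
    labels = [label for _, label in sample]
    return max(set(labels), key=labels.count)

rng = random.Random(0)
data = [([1.0], "a"), ([2.0], "a"), ([3.0], "b"), ([4.0], "a")]

# Train 25 stub trees, each on its own bootstrap sample.
forest = [train_stub_tree(bootstrap_sample(data, rng)) for _ in range(25)]

# The ensemble prediction is the majority vote across all trees.
prediction = max(set(forest), key=forest.count)
print(prediction)
```

Because every tree sees a slightly different sample, individual trees can disagree, but the vote across the ensemble is far more stable than any single model.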

Real Life Analogy:

Imagine a girl named Jenny who wants to decide which places she should visit during a one-year vacation trip. She asks people who know her for advice. First, she goes to a friend, who asks Jenny where she has traveled in the past and whether she liked it or not. Based on the answers, the friend gives Jenny some advice.

Feature Importance:

Another great quality of the random forest algorithm is that it is very easy to measure the relative importance of each feature on the prediction. It computes this score automatically for each feature after training and scales the results so that the sum of all importances is equal to 1.
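In scikit-learn (assumed here as the implementation), the scores described above are exposed as the `feature_importances_` attribute after fitting, and they sum to 1 as stated. A minimal sketch on the built-in iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# One importance score per feature, scaled so the scores sum to 1.
for name, score in zip(data.feature_names, model.feature_importances_):
    print(f"{name}: {score:.3f}")
print(round(sum(model.feature_importances_), 6))  # -> 1.0
```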

Difference Between Decision Trees and Random Forests:

If you put the features and labels into a decision tree, it will generate some rules, which you can then use to predict, for example, whether an advertisement will be clicked or not. In comparison, the Random Forest algorithm randomly selects observations and features to build several decision trees and then averages the results.
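The contrast above can be made concrete by fitting both models on the same data. This is a sketch on a synthetic dataset (standing in for the ad-click example, which is not provided in the source), assuming scikit-learn is available; exact scores depend on the data and seeds.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a "will the ad be clicked?" dataset.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, random_state=0)

tree = DecisionTreeClassifier(random_state=0)
forest = RandomForestClassifier(n_estimators=100, random_state=0)

tree_score = cross_val_score(tree, X, y, cv=5).mean()
forest_score = cross_val_score(forest, X, y, cv=5).mean()
print("single decision tree:", round(tree_score, 3))
print("random forest       :", round(forest_score, 3))
```

On noisy data like this, averaging many randomized trees typically yields a higher cross-validated accuracy than one fully grown tree.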

Important Hyperparameters:

- Increasing the predictive power
- Increasing the model's speed
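The two hyperparameter groups listed above map onto concrete constructor arguments in scikit-learn (assumed here as the implementation): roughly, `n_estimators`, `max_features` and `min_samples_leaf` trade off predictive power, while `n_jobs` governs speed by parallelising across CPU cores.

```python
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(
    n_estimators=200,     # more trees: usually better predictions, slower fit
    max_features="sqrt",  # features considered at each split
    min_samples_leaf=2,   # larger leaves smooth each individual tree
    n_jobs=-1,            # use all CPU cores for fitting and prediction
    random_state=42,      # reproducible results
)
print(model.get_params()["n_estimators"])
```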

Advantages and Disadvantages:

An advantage of random forest is that it can be used for both regression and classification tasks, and that it is easy to view the relative importance it assigns to the input features. Random Forest is also considered a very handy and easy-to-use algorithm, because its default hyperparameters often produce a good prediction result.
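To illustrate the regression side mentioned above with default hyperparameters, here is a minimal sketch on synthetic data, assuming scikit-learn is available:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5,
                       noise=0.1, random_state=0)

# Default hyperparameters, as the text notes, often give a solid baseline.
reg = RandomForestRegressor(random_state=0).fit(X, y)
r2 = reg.score(X, y)  # R^2 on the training data
print(round(r2, 3))
```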

Use Cases:

The random forest algorithm is used in a lot of different fields, like banking, the stock market, medicine and e-commerce. In banking, for example, it is used to detect customers who will use the bank's services more frequently than others and repay their debt on time. In this domain it is also used to detect fraudulent customers who want to scam the bank.


Summary:

Random Forests are also very hard to beat in terms of performance. Of course, you can probably always find a model that performs better, like a neural network, but such models usually take much more time to develop. On top of that, Random Forests can handle many different feature types, such as binary, categorical and numerical.


Thanks! Any questions? You can find me at Jigsaw Academy [email protected]
