Computer Oriented Numerical Analysis

Introduction

The general method of random decision forests was first proposed by Ho in 1995. Ho established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, as long as the forests are randomly restricted to be sensitive to only selected feature dimensions. Subsequent work along the same lines concluded that other splitting methods behave similarly, as long as they are randomly forced to be insensitive to some feature dimensions.

Note that this observation, that a more complex classifier (a larger forest) becomes more accurate nearly monotonically, is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level of accuracy before being hurt by overfitting.

The explanation for the forest method's resistance to overtraining can be found in Kleinberg's theory of stochastic discrimination.

Random forests, or random decision forests, are an ensemble learning method for classification, regression, and other tasks. They operate by constructing a large number of decision trees at training time and outputting the class that is the mode of the classes (classification) or the mean prediction (regression) of the individual trees.
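
To make the aggregation rule concrete, here is a minimal sketch (the per-tree outputs are invented for illustration) of how a forest combines its trees' predictions:

import numpy as np

# Hypothetical outputs from five individual trees for a single sample
tree_votes = np.array([1, 0, 1, 1, 0])             # classification: one class vote per tree
tree_values = np.array([3.2, 2.9, 3.5, 3.1, 3.0])  # regression: one numeric prediction per tree

# Classification: the forest returns the mode (majority vote) of the class votes
classes, counts = np.unique(tree_votes, return_counts=True)
forest_class = classes[np.argmax(counts)]          # -> 1

# Regression: the forest returns the mean of the tree predictions
forest_value = tree_values.mean()                  # -> 3.14

print(forest_class, forest_value)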

Random decision forests correct for decision trees' tendency to overfit their training set.

Random forests give considerably more accurate predictions than simple regression models in many situations, typically those with a large number of predictive variables and a very large sample size. This is because a forest captures the variance of several input variables at the same time and enables a large number of observations to participate in the prediction.
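
As an illustrative (not definitive) check of this claim, one can compare a random forest against a plain logistic regression on synthetic data with many predictors; the dataset parameters below are arbitrary assumptions:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic problem with many predictive variables and a large sample
X, y = make_classification(n_samples=5000, n_features=50,
                           n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [('logistic regression', LogisticRegression(max_iter=1000)),
                  ('random forest', RandomForestClassifier(n_estimators=100, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, clf.score(X_te, y_te))

On data of this shape the forest often scores higher because each tree can capture interactions among several input variables, though the outcome depends on the problem.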

In the sections that follow, we discuss the algorithm in more detail and walk through how to build a simple random forest in code.

Applications

Here is a real-world example to make the Random Forest algorithm easy to understand. Suppose Mady wants to find places he might like for his two-week vacation, and he asks a friend for advice. His friend asks where he has already been and whether he liked the places he visited. Based on Mady's answers, his friend starts to make recommendations. Here, his friend forms a decision tree.

Mady then wants to ask more friends for advice, because he thinks a single friend cannot help him make an accurate decision. So his other friends also ask him random questions, and each finally offers a recommendation. He takes the place with the most votes as his vacation choice. Here is an analysis of this example.

One friend asked him several questions and recommended the best place based on the answers. This is a typical decision tree approach: the friend formed rules based on the answers and used those rules to find an answer that matched them.

Mady's friends also asked him different, randomly chosen questions and each provided an answer, which for Mady is a vote for a place. In the end, the place with the highest number of votes is the one Mady chooses. This is the typical Random Forest algorithm.

In banking, the Random Forest algorithm is used to identify loyal customers, meaning customers who can take out large loans and pay interest to the bank properly, and fraudulent customers, meaning customers with bad records such as failing to pay back a loan on time or engaging in risky activities.

In medicine, the Random Forest algorithm can be used both to identify the correct combination of components in a medication and to recognize diseases by analyzing a patient's medical records.

In the stock market, the Random Forest algorithm can be used to identify a stock's behavior and the expected loss or profit.

In e-commerce, the Random Forest algorithm can be used to predict whether a customer will like the recommended products, based on the experience of similar customers.

Algorithm

Let the number of training cases be N, and the number of variables in the classifier be M. A number m of input variables is used to determine the decision at each node of the tree; m should be much less than M. Choose a training set for each tree by sampling N times with replacement from all N available training cases, and use the remaining (out-of-bag) cases to estimate the tree's error by predicting their classes. At each node of the tree, randomly choose m variables on which to base the decision at that node, and calculate the best split based on these m variables in the training set. Each tree is fully grown and not pruned. A typical implementation decomposes this procedure into the routines listed below; a runnable sketch follows the list.

  1. Build Tree
  2. Split
  3. Can Split
  4. Split Is Valid
  5. Must Split
  6. Should Split
  7. Best Split
  8. Information Gain
  9. Update Estimation Statistics
  10. Update Structural Statistics
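
Below is a minimal sketch of the training procedure described above, assuming scikit-learn's DecisionTreeClassifier as the per-tree learner; the function names here are illustrative and are not the routines in the list:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_random_forest(X, y, n_trees=100, m_features='sqrt', seed=0):
    # X, y are assumed to be numpy arrays with non-negative integer labels
    rng = np.random.default_rng(seed)
    N = len(X)
    trees = []
    for _ in range(n_trees):
        # Choose a training set for this tree: N draws with replacement (bootstrap);
        # the cases left out could be used to estimate this tree's error
        idx = rng.integers(0, N, size=N)
        # max_features restricts each split to a random subset of m variables;
        # the tree is fully grown (no pruning) by default
        tree = DecisionTreeClassifier(max_features=m_features,
                                      random_state=int(rng.integers(1 << 30)))
        tree.fit(X[idx], y[idx])
        trees.append(tree)
    return trees

def forest_predict(trees, X):
    # Each tree votes; the mode of the votes is the forest's prediction
    votes = np.stack([t.predict(X) for t in trees])
    return np.apply_along_axis(
        lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)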

Code:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

RSEED = 50

# Load in data
df = pd.read_csv('https://s3.amazonaws.com/projects-rf/clean_data.csv')
# Full dataset: https://www.kaggle.com/cdc/behavioral-risk-factor-surveillance-system

# Extract the labels
labels = np.array(df.pop('label'))

# 30% of examples in test data
train, test, train_labels, test_labels = train_test_split(df, labels,
                                                          stratify=labels,
                                                          test_size=0.3,
                                                          random_state=RSEED)

# Imputation of missing values (training-set means for both splits,
# so no test-set statistics leak into the pipeline)
train = train.fillna(train.mean())
test = test.fillna(train.mean())

# Features for feature importances
features = list(train.columns)

# Create the model with 100 trees
model = RandomForestClassifier(n_estimators=100,
                               random_state=RSEED,
                               max_features='sqrt',
                               n_jobs=-1,
                               verbose=1)

# Fit on training data
model.fit(train, train_labels)

n_nodes = []
max_depths = []

# Stats about the trees in the random forest
for ind_tree in model.estimators_:
    n_nodes.append(ind_tree.tree_.node_count)
    max_depths.append(ind_tree.tree_.max_depth)

print(f'Average number of nodes {int(np.mean(n_nodes))}')
print(f'Average maximum depth {int(np.mean(max_depths))}')

# Training predictions (to demonstrate overfitting)
train_rf_predictions = model.predict(train)
train_rf_probs = model.predict_proba(train)[:, 1]

# Testing predictions (to determine performance)
rf_predictions = model.predict(test)
rf_probs = model.predict_proba(test)[:, 1]

from sklearn.metrics import precision_score, recall_score, roc_auc_score, roc_curve
import matplotlib.pyplot as plt

# Plot formatting
plt.style.use('fivethirtyeight')
plt.rcParams['font.size'] = 18

def evaluate_model(predictions, probs, train_predictions, train_probs):
    '''Compare machine learning model to baseline performance.
    Computes statistics and shows ROC curve.'''

    # Baseline: always predict the positive class
    baseline = {}
    baseline['recall'] = recall_score(test_labels,
                                      [1 for _ in range(len(test_labels))])
    baseline['precision'] = precision_score(test_labels,
                                            [1 for _ in range(len(test_labels))])
    baseline['roc'] = 0.5

    results = {}
    results['recall'] = recall_score(test_labels, predictions)
    results['precision'] = precision_score(test_labels, predictions)
    results['roc'] = roc_auc_score(test_labels, probs)

    train_results = {}
    train_results['recall'] = recall_score(train_labels, train_predictions)
    train_results['precision'] = precision_score(train_labels, train_predictions)
    train_results['roc'] = roc_auc_score(train_labels, train_probs)

    for metric in ['recall', 'precision', 'roc']:
        print(f'{metric.capitalize()} '
              f'Baseline: {round(baseline[metric], 2)} '
              f'Test: {round(results[metric], 2)} '
              f'Train: {round(train_results[metric], 2)}')

    # Calculate false positive rates and true positive rates
    base_fpr, base_tpr, _ = roc_curve(test_labels, [1 for _ in range(len(test_labels))])
    model_fpr, model_tpr, _ = roc_curve(test_labels, probs)

    plt.figure(figsize=(8, 6))
    plt.rcParams['font.size'] = 16

    # Plot both curves
    plt.plot(base_fpr, base_tpr, 'b', label='baseline')
    plt.plot(model_fpr, model_tpr, 'r', label='model')
    plt.legend()
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('ROC Curves')
    # Save before show; saving afterwards would write a blank figure
    plt.savefig('roc_auc_curve.png')
    plt.show()

evaluate_model(rf_predictions, rf_probs, train_rf_predictions, train_rf_probs)

from sklearn.metrics import confusion_matrix
import itertools

def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Oranges):
    '''This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.'''

    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print('Normalized confusion matrix')
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    # Plot the confusion matrix
    plt.figure(figsize=(10, 10))
    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title, size=24)
    plt.colorbar(aspect=4)
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45, size=14)
    plt.yticks(tick_marks, classes, size=14)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.

    # Labeling the plot
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt), fontsize=20,
                 horizontalalignment='center',
                 color='white' if cm[i, j] > thresh else 'black')

    plt.grid(None)
    plt.tight_layout()
    plt.ylabel('True label', size=18)
    plt.xlabel('Predicted label', size=18)

# Confusion matrix
cm = confusion_matrix(test_labels, rf_predictions)
plot_confusion_matrix(cm, classes=['Poor Health', 'Good Health'],
                      title='Health Confusion Matrix')
plt.savefig('cm.png')
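
The features list collected earlier is never used in the script above; continuing the same script, this short illustrative addition pairs it with the fitted model's feature_importances_ attribute to rank predictors:

# Rank predictors by the forest's impurity-based importances
fi = pd.DataFrame({'feature': features,
                   'importance': model.feature_importances_})
print(fi.sort_values('importance', ascending=False).head(10))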

Conclusion

Random Forest stands out as a powerful analytical tool with wide-ranging applications across banking, healthcare, e-commerce, and the stock market. Its resistance to overfitting, ability to handle large datasets with numerous variables, and provision of insights into feature importance distinguish it from traditional regression models. As data complexity grows, Random Forest will remain a critical asset in the data scientist's toolkit for predictive modeling and numerical analysis.
