Principle and Implementation of Random Forest in Python

Introduction

Using a random forest to extract the main features of a dataset.

1. Theory

Random forest is a highly flexible machine learning method with broad application potential, from marketing to healthcare. It can be used to model marketing simulations, statistically analyze customer origins, and study customer retention and churn, as well as to predict disease risk and patient susceptibility.

Grouped by how the individual learners are generated, current ensemble methods fall into two broad categories: methods in which strong dependencies exist between the individual learners, so they must be generated sequentially, and methods in which no strong dependencies exist between the individual learners, so they can be generated in parallel.

The former is represented by Boosting, the latter by Bagging and the random forest (Random Forest).

Random forest builds on a Bagging ensemble with decision trees as the base learners, and additionally introduces random attribute selection (i.e., random feature selection) into the training of each decision tree.

Simply put, a random forest is an ensemble of decision trees, with two differences:

(1) Sampling differences: each tree is trained on a bootstrap sample drawn from the full training set rather than on the original data.
(2) Feature-selection differences: the n candidate split features of each decision tree are chosen at random from the full feature set (n is a parameter we have to tune ourselves).

To understand this intuitively, take predicting salary as an example: build several decision trees over features such as job, age, and house; then, for the sample to be predicted (teacher, 39, suburb), each tree yields probabilities over the target values (salary<5000, salary>=5000), and combining them gives the predicted probability of the event, e.g. P(salary<5000)=0.3, as sketched below.
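
The voting idea can be sketched in a few lines of plain Python. This is only an illustration with made-up per-tree outputs for the (teacher, 39, suburb) sample, not part of the demos below:

# A minimal sketch of how a forest combines tree votes (hypothetical outputs)
tree_votes = ['salary<5000', 'salary>=5000', 'salary>=5000',
              'salary>=5000', 'salary<5000', 'salary>=5000',
              'salary>=5000', 'salary<5000', 'salary>=5000', 'salary>=5000']
# Majority vote for classification ...
majority = max(set(tree_votes), key=tree_votes.count)
# ... and the vote share doubles as an estimated probability
p_low = tree_votes.count('salary<5000') / len(tree_votes)
print(majority)  # salary>=5000
print(p_low)     # 0.3, i.e. P(salary<5000)=0.3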

Random forest can be used for both regression and classification. It handles large datasets well, and it helps estimate which variables are most important in the underlying data being modeled.

Parameter notes:

The two most important parameters are n_estimators and max_features.

n_estimators: the number of trees in the forest. In theory, larger is better, but computation time grows accordingly. Beyond a certain point, adding more trees stops helping: the best predictions appear at a reasonable number of trees.

max_features: the size of the random subset of features used to split a node. The smaller the subset, the faster the variance drops, but the faster the bias grows. As a good rule of thumb: for regression problems use max_features=n_features, and for classification problems use max_features=sqrt(n_features).

To get good results, set max_depth=None together with min_samples_split=2 (i.e., grow the trees fully). Also remember to cross-validate, and note that in a random forest bootstrap=True, while in extra-trees bootstrap=False.
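
Put together, a setup following these rules of thumb might look like the following sketch (the parameter values are illustrative, not tuned for any particular dataset):

from math import sqrt
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor, ExtraTreesClassifier

n_features = 10  # assume the data has 10 features

# Classification: max_features ~ sqrt(n_features), trees grown fully
clf = RandomForestClassifier(n_estimators=100,
                             max_features=int(sqrt(n_features)),
                             max_depth=None,
                             min_samples_split=2,
                             bootstrap=True)   # bootstrap sampling is the random forest default

# Regression: max_features = n_features
reg = RandomForestRegressor(n_estimators=100,
                            max_features=n_features,
                            max_depth=None,
                            min_samples_split=2)

# Extra-trees: no bootstrap sampling by default
et = ExtraTreesClassifier(n_estimators=100, bootstrap=False)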

2. Random Forest Implementation in Python

2.1 Demo 1

Implementing basic random forest functionality.

# Random forest
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_iris
iris = load_iris()
# The 4 iris attributes are sepal length, sepal width, petal length, petal width;
# the label is the flower species: setosa, versicolour, virginica
print(iris['target'].shape)
rf = RandomForestRegressor()  # Here default parameter settings are used
rf.fit(iris.data[:150], iris.target[:150])  # Train the model
# Select two samples and predict them
instance = iris.data[[100, 109]]
print(instance)
print('instance 0 prediction:', rf.predict(instance[[0]]))
print('instance 1 prediction:', rf.predict(instance[[1]]))
print(iris.target[100], iris.target[109])

Running output:

(150,)
[[ 6.3  3.3  6.   2.5]
 [ 7.2  3.6  6.1  2.5]]
instance 0 prediction: [ 2.]
instance 1 prediction: [ 2.]
2 2
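
Since iris is really a classification task, the same experiment arguably fits RandomForestClassifier better; a sketch (with indicative outputs in comments) that also shows predict_proba, i.e. the per-class vote shares discussed above:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
clf = RandomForestClassifier()
clf.fit(iris.data, iris.target)
# For each sample, predict_proba returns the fraction of trees voting for each class
print(clf.predict(iris.data[[100, 109]]))        # expected: [2 2]
print(clf.predict_proba(iris.data[[100, 109]]))  # e.g. rows close to [0., 0., 1.]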

2.2 Demo 2

Method comparison: a single decision tree vs. a random forest vs. extra-trees.

# Random forest test: compare three tree-based methods on the same synthetic data
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_blobs
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.tree import DecisionTreeClassifier
X, y = make_blobs(n_samples=10000, n_features=10, centers=100, random_state=0)
# Baseline: one fully grown decision tree
clf = DecisionTreeClassifier(max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y)
print(scores.mean())
# Bagged trees with random feature selection
clf = RandomForestClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y)
print(scores.mean())
# Extra-trees: additionally randomizes the split thresholds
clf = ExtraTreesClassifier(n_estimators=10, max_depth=None, min_samples_split=2, random_state=0)
scores = cross_val_score(clf, X, y)
print(scores.mean())

Running output:

0.979408793821
0.999607843137
0.999898989899

2.3 Demo 3: Feature Selection

# Random forest: score each feature separately by cross-validated R^2
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score, ShuffleSplit
iris = load_iris()
X = iris["data"]
Y = iris["target"]
names = iris["feature_names"]
rf = RandomForestRegressor()
scores = []
for i in range(X.shape[1]):
    # Train on a single feature and record its cross-validated R^2 score
    score = cross_val_score(rf, X[:, i:i+1], Y, scoring="r2",
                            cv=ShuffleSplit(n_splits=3, test_size=0.3))
    scores.append((round(np.mean(score), 3), names[i]))
print(sorted(scores, reverse=True))

Running output:

[(0.89300000000000002, 'petal width (cm)'), (0.82099999999999995, 'petal length (cm)'), (0.13, 'sepal length (cm)'), (-0.79100000000000004, 'sepal width (cm)')]
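
An alternative to scoring one feature at a time is the forest's built-in impurity-based measure; a minimal sketch using the standard feature_importances_ attribute of a fitted sklearn forest:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestRegressor

iris = load_iris()
X, Y, names = iris["data"], iris["target"], iris["feature_names"]
rf = RandomForestRegressor()
rf.fit(X, Y)
# Importances sum to 1; the petal features should again dominate
print(sorted(zip(np.round(rf.feature_importances_, 3), names), reverse=True))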

2.4 Demo 4: A Random Forest from Scratch

I originally intended to use the following code to build a random forest of decision trees, but ran into the problem that the program kept running without responding; it still needs debugging.

# Random forest from scratch
# coding: utf-8
import csv
from random import seed
from random import randrange
from math import sqrt

# Load the data, storing it row by row in a list
def loadCSV(filename):
    dataSet = []
    with open(filename, 'r') as file:
        csvReader = csv.reader(file)
        for line in csvReader:
            dataSet.append(line)
    return dataSet

# Convert every column except the label column to float
def column_to_float(dataSet):
    featLen = len(dataSet[0]) - 1
    for data in dataSet:
        for column in range(featLen):
            data[column] = float(data[column].strip())

# Randomly split the dataset into n folds for cross-validation:
# one fold is the test set, the other four form the training set
def spiltDataSet(dataSet, n_folds):
    fold_size = int(len(dataSet) / n_folds)
    dataSet_copy = list(dataSet)
    dataSet_spilt = []
    for i in range(n_folds):
        fold = []
        while len(fold) < fold_size:  # must be while, not if: if only tests once, while loops until the condition fails
            index = randrange(len(dataSet_copy))
            fold.append(dataSet_copy.pop(index))  # pop() removes an element from the list (the last by default) and returns its value
        dataSet_spilt.append(fold)
    return dataSet_spilt

# Construct a data subsample
def get_subsample(dataSet, ratio):
    subdataSet = []
    lenSubdata = round(len(dataSet) * ratio)  # returns a float
    while len(subdataSet) < lenSubdata:
        index = randrange(len(dataSet) - 1)
        subdataSet.append(dataSet[index])
    return subdataSet

# Split the dataset
def data_spilt(dataSet, index, value):
    left = []
    right = []
    for row in dataSet:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

# Compute the split cost
def spilt_loss(left, right, class_values):
    loss = 0.0
    for class_value in class_values:
        left_size = len(left)
        if left_size != 0:  # avoid division by zero
            prop = [row[-1] for row in left].count(class_value) / float(left_size)
            loss += (prop * (1.0 - prop))
        right_size = len(right)
        if right_size != 0:
            prop = [row[-1] for row in right].count(class_value) / float(right_size)
            loss += (prop * (1.0 - prop))
    return loss

# Randomly pick n features, and among them choose the best feature to split on
def get_best_spilt(dataSet, n_features):
    features = []
    class_values = list(set(row[-1] for row in dataSet))
    b_index, b_value, b_loss, b_left, b_right = 999, 999, 999, None, None
    while len(features) < n_features:
        index = randrange(len(dataSet[0]) - 1)
        if index not in features:
            features.append(index)
    for index in features:  # find the column index with the smallest split cost
        for row in dataSet:
            left, right = data_spilt(dataSet, index, row[index])  # left and right branches leaving this node
            loss = spilt_loss(left, right, class_values)
            if loss < b_loss:  # keep the split with the smallest cost
                b_index, b_value, b_loss, b_left, b_right = index, row[index], loss, left, right
    return {'index': b_index, 'value': b_value, 'left': b_left, 'right': b_right}

# Decide the output label
def decide_label(data):
    output = [row[-1] for row in data]
    return max(set(output), key=output.count)

# Sub-split: the process of recursively building leaf nodes
def sub_spilt(root, n_features, max_depth, min_size, depth):
    left = root['left']
    right = root['right']
    del (root['left'])
    del (root['right'])
    if not left or not right:
        root['left'] = root['right'] = decide_label(left + right)
        return
    if depth > max_depth:
        root['left'] = decide_label(left)
        root['right'] = decide_label(right)
        return
    if len(left) < min_size:
        root['left'] = decide_label(left)
    else:
        root['left'] = get_best_spilt(left, n_features)
        sub_spilt(root['left'], n_features, max_depth, min_size, depth + 1)
    if len(right) < min_size:
        root['right'] = decide_label(right)
    else:
        root['right'] = get_best_spilt(right, n_features)
        sub_spilt(root['right'], n_features, max_depth, min_size, depth + 1)

# Construct a decision tree
def build_tree(dataSet, n_features, max_depth, min_size):
    root = get_best_spilt(dataSet, n_features)
    sub_spilt(root, n_features, max_depth, min_size, 1)
    return root

# Predict a test row
def predict(tree, row):
    if row[tree['index']] < tree['value']:
        if isinstance(tree['left'], dict):
            return predict(tree['left'], row)
        else:
            return tree['left']
    else:
        if isinstance(tree['right'], dict):
            return predict(tree['right'], row)
        else:
            return tree['right']

# Combine the trees' votes by majority
def bagging_predict(trees, row):
    predictions = [predict(tree, row) for tree in trees]
    return max(set(predictions), key=predictions.count)

# Create the random forest
def random_forest(train, test, ratio, n_features, max_depth, min_size, n_trees):
    trees = []
    for i in range(n_trees):
        subsample = get_subsample(train, ratio)  # draw a subset of the training data
        tree = build_tree(subsample, n_features, max_depth, min_size)
        trees.append(tree)
    predict_values = [bagging_predict(trees, row) for row in test]
    return predict_values

# Calculate accuracy
def accuracy(predict_values, actual):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predict_values[i]:
            correct += 1
    return correct / float(len(actual))

if __name__ == '__main__':
    seed(1)
    dataSet = loadCSV(r'G:\0研究生\tianchiCompetition\训练小样本2.csv')
    column_to_float(dataSet)
    n_folds = 5
    max_depth = 15
    min_size = 1
    ratio = 1.0
    # n_features = int(sqrt(len(dataSet[0]) - 1))
    n_features = 15
    n_trees = 10
    folds = spiltDataSet(dataSet, n_folds)  # split the dataset first
    scores = []
    for fold in folds:
        # train_set = folds would only copy a reference, so changing train_set
        # would also change folds; L[:] copies a sequence, D.copy() copies a
        # dictionary, and list(L) also produces a copy
        train_set = folds[:]
        train_set.remove(fold)  # select the training folds
        train_set = sum(train_set, [])  # merge the remaining folds into a single training list
        test_set = []
        for row in fold:
            row_copy = list(row)
            row_copy[-1] = None
            test_set.append(row_copy)
        actual = [row[-1] for row in fold]
        predict_values = random_forest(train_set, test_set, ratio, n_features, max_depth, min_size, n_trees)
        accur = accuracy(predict_values, actual)
        scores.append(accur)
    print('Trees is %d' % n_trees)
    print('scores: %s' % scores)
    print('mean score: %s' % (sum(scores) / float(len(scores))))

2.5 CART Classification on the Sonar Dataset

# CART on the Sonar dataset
from random import seed
from random import randrange
from csv import reader

# Load a CSV file
def load_csv(filename):
    file = open(filename, "r")
    lines = reader(file)
    dataset = list(lines)
    return dataset

# Convert string column to float
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())

# Split a dataset into k folds
def cross_validation_split(dataset, n_folds):
    dataset_split = list()
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)
    for i in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split

# Calculate accuracy percentage
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0

# Evaluate an algorithm using a cross-validation split
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        train_set = sum(train_set, [])
        test_set = list()
        for row in fold:
            row_copy = list(row)
            test_set.append(row_copy)
            row_copy[-1] = None
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores

# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
    left, right = list(), list()
    for row in dataset:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right

# Calculate the Gini index for a split dataset
def gini_index(groups, class_values):
    gini = 0.0
    for class_value in class_values:
        for group in groups:
            size = len(group)
            if size == 0:
                continue
            proportion = [row[-1] for row in group].count(class_value) / float(size)
            gini += (proportion * (1.0 - proportion))
    return gini

# Select the best split point for a dataset
def get_split(dataset):
    class_values = list(set(row[-1] for row in dataset))
    b_index, b_value, b_score, b_groups = 999, 999, 999, None
    for index in range(len(dataset[0]) - 1):
        for row in dataset:
            groups = test_split(index, row[index], dataset)
            gini = gini_index(groups, class_values)
            if gini < b_score:
                b_index, b_value, b_score, b_groups = index, row[index], gini, groups
    print({'index': b_index, 'value': b_value})
    return {'index': b_index, 'value': b_value, 'groups': b_groups}

# Create a terminal node value
def to_terminal(group):
    outcomes = [row[-1] for row in group]
    return max(set(outcomes), key=outcomes.count)

# Create child splits for a node or make terminal
def split(node, max_depth, min_size, depth):
    left, right = node['groups']
    del(node['groups'])
    # check for a no split
    if not left or not right:
        node['left'] = node['right'] = to_terminal(left + right)
        return
    # check for max depth
    if depth >= max_depth:
        node['left'], node['right'] = to_terminal(left), to_terminal(right)
        return
    # process left child
    if len(left) <= min_size:
        node['left'] = to_terminal(left)
    else:
        node['left'] = get_split(left)
        split(node['left'], max_depth, min_size, depth + 1)
    # process right child
    if len(right) <= min_size:
        node['right'] = to_terminal(right)
    else:
        node['right'] = get_split(right)
        split(node['right'], max_depth, min_size, depth + 1)

# Build a decision tree
def build_tree(train, max_depth, min_size):
    root = get_split(train)
    split(root, max_depth, min_size, 1)
    return root

# Make a prediction with a decision tree
def predict(node, row):
    if row[node['index']] < node['value']:
        if isinstance(node['left'], dict):
            return predict(node['left'], row)
        else:
            return node['left']
    else:
        if isinstance(node['right'], dict):
            return predict(node['right'], row)
        else:
            return node['right']

# Classification and Regression Tree algorithm
def decision_tree(train, test, max_depth, min_size):
    tree = build_tree(train, max_depth, min_size)
    predictions = list()
    for row in test:
        prediction = predict(tree, row)
        predictions.append(prediction)
    return predictions

# Test CART on the Sonar dataset
seed(1)
# load and prepare data
filename = r'G:\0pythonstudy\决策树\sonar.all-data.csv'
dataset = load_csv(filename)
# convert string attributes to float
for i in range(len(dataset[0]) - 1):
    str_column_to_float(dataset, i)
# evaluate algorithm
n_folds = 5
max_depth = 5
min_size = 10
scores = evaluate_algorithm(dataset, decision_tree, n_folds, max_depth, min_size)
print('Scores: %s' % scores)
print('Mean Accuracy: %.3f%%' % (sum(scores) / float(len(scores))))

Running output:

{'index': 38, 'value': 0.0894}
{'index': 36, 'value': 0.8459}
{'index': 50, 'value': 0.0024}
{'index': 15, 'value': 0.0906}
{'index': 16, 'value': 0.9819}
{'index': 10, 'value': 0.0785}
{'index': 16, 'value': 0.0886}
{'index': 38, 'value': 0.0621}
{'index': 5, 'value': 0.0226}
{'index': 8, 'value': 0.0368}
{'index': 11, 'value': 0.0754}
{'index': 0, 'value': 0.0239}
{'index': 8, 'value': 0.0368}
{'index': 29, 'value': 0.1671}
{'index': 46, 'value': 0.0237}
{'index': 38, 'value': 0.0621}
{'index': 14, 'value': 0.0668}
{'index': 4, 'value': 0.0167}
{'index': 37, 'value': 0.0836}
{'index': 12, 'value': 0.0616}
{'index': 7, 'value': 0.0333}
{'index': 33, 'value': 0.8741}
{'index': 16, 'value': 0.0886}
{'index': 8, 'value': 0.0368}
{'index': 33, 'value': 0.0798}
{'index': 44, 'value': 0.0298}
Scores: [48.78048780487805, 70.73170731707317, 58.536585365853654, 51.21951219512195, 39.02439024390244]
Mean Accuracy: 53.659%

Key points:

1. Load a CSV file

from csv import reader
# Load a CSV file
def load_csv(filename):
    file = open(filename, "r")
    lines = reader(file)
    dataset = list(lines)
    return dataset
filename = r'G:\0pythonstudy\决策树\sonar.all-data.csv'
dataset = load_csv(filename)
print(dataset)

2. Convert the data to float format

# Convert string column to float
def str_column_to_float(dataset, column):
    for row in dataset:
        row[column] = float(row[column].strip())
        # print(row[column])
# convert string attributes to float
for i in range(len(dataset[0]) - 1):
    str_column_to_float(dataset, i)

3. Convert the class label strings in the last column to the integers 0 and 1

def str_column_to_int(dataset, column):
    class_values = [row[column] for row in dataset]  # build a list of the class labels
    # print(class_values)
    unique = set(class_values)  # set() extracts the distinct elements of the list
    print(unique)
    lookup = dict()  # define a dictionary
    # print(enumerate(unique))
    for i, value in enumerate(unique):
        lookup[value] = i
    # print(lookup)
    for row in dataset:
        row[column] = lookup[row[column]]
    print(lookup['M'])
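
A quick check of str_column_to_int on a hypothetical mini-dataset (the 'M'/'R' labels mirror the sonar file):

# Hypothetical mini-dataset: two feature columns plus a class label
mini = [[0.02, 0.37, 'M'],
        [0.45, 0.11, 'R'],
        [0.33, 0.29, 'M']]
str_column_to_int(mini, 2)  # map the labels in column 2 to integers
print(mini)  # labels replaced by 0/1; which label gets which integer depends on set order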

4. Split the dataset into k folds

# Split a dataset into k folds
def cross_validation_split(dataset, n_folds):
    dataset_split = list()  # create an empty list
    dataset_copy = list(dataset)
    print(len(dataset_copy))
    print(len(dataset))
    # print(dataset_copy)
    fold_size = int(len(dataset) / n_folds)
    for i in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            # print(index)
            fold.append(dataset_copy.pop(index))  # .pop() removes the chosen element, so the k folds share no rows
        dataset_split.append(fold)
    return dataset_split
n_folds = 5
folds = cross_validation_split(dataset, n_folds)  # k mutually distinct folds

5. Calculate the accuracy

# Calculate accuracy percentage
def accuracy_metric(actual, predicted):
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0  # the accuracy expression for binary classification
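
For example, with a hypothetical fold of four labels where three predictions match:

print(accuracy_metric(['M', 'R', 'M', 'M'], ['M', 'R', 'R', 'M']))  # 75.0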

6. Split the dataset in two on a column value

# Split a dataset based on an attribute and an attribute value
def test_split(index, value, dataset):
    left, right = list(), list()  # initialize two empty lists
    for row in dataset:
        if row[index] < value:
            left.append(row)
        else:
            right.append(row)
    return left, right  # return two lists, grouping the rows by the given column (index) against the value
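
A worked call on a hypothetical three-row dataset:

# Split on column 0 at value 0.3: rows with row[0] < 0.3 go left, the rest go right
rows = [[0.1, 'M'], [0.5, 'R'], [0.2, 'M']]
left, right = test_split(0, 0.3, rows)
print(left)   # [[0.1, 'M'], [0.2, 'M']]
print(right)  # [[0.5, 'R']]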

7. Use the Gini index to obtain the best split point

# Calculate the Gini index for a split dataset
def gini_index(groups, class_values):
    gini = 0.0
    for class_value in class_values:
        for group in groups:
            size = len(group)
            if size == 0:
                continue
            proportion = [row[-1] for row in group].count(class_value) / float(size)
            gini += (proportion * (1.0 - proportion))
    return gini
# Select the best split point for a dataset
def get_split(dataset):
    class_values = list(set(row[-1] for row in dataset))
    b_index, b_value, b_score, b_groups = 999, 999, 999, None
    for index in range(len(dataset[0]) - 1):
        for row in dataset:
            groups = test_split(index, row[index], dataset)
            gini = gini_index(groups, class_values)
            if gini < b_score:
                b_index, b_value, b_score, b_groups = index, row[index], gini, groups
    # print(groups)
    print({'index': b_index, 'value': b_value, 'score': b_score})
    return {'index': b_index, 'value': b_value, 'groups': b_groups}

This code segment computes the Gini index by applying its definition directly, so it is easy to follow. Selecting the best split point is harder to follow, because two levels of iteration are used: an outer loop over the columns and an inner loop over the rows, and the best score is updated whenever a smaller Gini value is found. A worked example follows.
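
For instance, a split that isolates two 'M' rows on the left and leaves a mixed 'M'/'R' pair on the right scores as follows:

left = [[0.1, 'M'], [0.2, 'M']]   # pure group: contributes 0 for both classes
right = [[0.5, 'R'], [0.4, 'M']]  # 50/50 group: contributes 0.25 + 0.25
print(gini_index([left, right], ['M', 'R']))  # 0.5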

8. Decision tree generation

# Create child splits for a node or make terminal
def split(node, max_depth, min_size, depth):
    left, right = node['groups']
    del(node['groups'])
    # check for a no split
    if not left or not right:
        node['left'] = node['right'] = to_terminal(left + right)
        return
    # check for max depth
    if depth >= max_depth:
        node['left'], node['right'] = to_terminal(left), to_terminal(right)
        return
    # process left child
    if len(left) <= min_size:
        node['left'] = to_terminal(left)
    else:
        node['left'] = get_split(left)
        split(node['left'], max_depth, min_size, depth + 1)
    # process right child
    if len(right) <= min_size:
        node['right'] = to_terminal(right)
    else:
        node['right'] = get_split(right)
        split(node['right'], max_depth, min_size, depth + 1)

Recursion is used here to keep generating left and right branches.

9. Build a decision tree

# Build a decision tree
def build_tree(train, max_depth, min_size):
    root = get_split(train)
    split(root, max_depth, min_size, 1)
    return root
tree = build_tree(train_set, max_depth, min_size)
print(tree)

10. Predict the test set

# Build a decision tree
def build_tree(train, max_depth, min_size):
    root = get_split(train)  # get the best split point: index, value, groups
    split(root, max_depth, min_size, 1)
    return root
# tree = build_tree(train_set, max_depth, min_size)
# print(tree)
# Make a prediction with a decision tree
def predict(node, row):
    print(row[node['index']])
    print(node['value'])
    # Apply the split point learned from the training set to the test row;
    # at an internal split point, keep comparing by searching down the left or right branch
    if row[node['index']] < node['value']:
        if isinstance(node['left'], dict):  # if it is a dictionary, recurse into it
            return predict(node['left'], row)
        else:
            return node['left']
    else:
        if isinstance(node['right'], dict):
            return predict(node['right'], row)
        else:
            return node['right']
tree = build_tree(train_set, max_depth, min_size)
predictions = list()
for row in test_set:
    prediction = predict(tree, row)
    predictions.append(prediction)

11. Evaluate the decision tree

# Evaluate an algorithm using a cross-validation split
def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for fold in folds:
        train_set = list(folds)
        train_set.remove(fold)
        train_set = sum(train_set, [])
        test_set = list()
        for row in fold:
            row_copy = list(row)
            test_set.append(row_copy)
            row_copy[-1] = None
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in fold]
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores

That is all the content of this article. I hope it helps your study.
