
Decision Trees in Practice

scikit-learn provides two kinds of decision trees, both built on an optimized version of the CART algorithm.

Regression Decision Tree (DecisionTreeRegressor)

DecisionTreeRegressor implements a regression decision tree, used for regression problems.

DecisionTreeRegressor(criterion='mse', splitter='best', max_depth=None,
                      min_samples_split=2, min_samples_leaf=1,
                      min_weight_fraction_leaf=0.0, max_features=None,
                      random_state=None, max_leaf_nodes=None, presort=False)

Parameters

criterion: a string specifying the measure of split quality. The default, and the only supported value, is 'mse', the mean squared error.
splitter: a string specifying the splitting strategy.
  - 'best': choose the best split.
  - 'random': choose a random split.
max_features: an int, float, string, or None, specifying how many features to consider when looking for the best split.
  - An int: consider max_features features at each split.
  - A float: consider max_features * n_features features at each split (max_features gives the fraction).
  - 'auto': max_features = n_features.
  - 'sqrt': max_features = sqrt(n_features).
  - 'log2': max_features = log2(n_features).
  - None: max_features = n_features.
max_depth: an int or None, the maximum depth of the tree.
  - None: the depth of the tree is unlimited.
  - Ignored when max_leaf_nodes is not None.
min_samples_split: an int, the minimum number of samples an internal node must contain.
min_samples_leaf: an int, the minimum number of samples a leaf node must contain.
min_weight_fraction_leaf: a float, the minimum weighted fraction of samples required at a leaf node.
max_leaf_nodes: an int or None, the maximum number of leaf nodes.
  - None: the number of leaf nodes is unlimited.
  - Not None: max_depth is ignored.
class_weight: a dict, a list of dicts, the string 'balanced', or None, specifying class weights of the form {class_label: weight}. (This parameter belongs to DecisionTreeClassifier; the regressor does not accept it.)
  - None: every class gets weight 1.
  - 'balanced': weights are set inversely proportional to the class frequencies in the input data.
random_state: an int, a RandomState instance, or None.
  - An int: seeds the random number generator.
  - A RandomState instance: used as the random number generator.
  - None: the default random number generator is used.
presort: a boolean specifying whether to presort the data to speed up the search for the best splits. Setting it to True slows down overall training on large datasets, but speeds it up on small datasets or when max_depth is capped.
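As a minimal sketch of how these knobs combine, the constructor call below uses arbitrary illustrative values, not tuned recommendations:

from sklearn.tree import DecisionTreeRegressor

# Illustrative values only: cap the depth, require 5 samples per leaf,
# and consider a random sqrt-sized feature subset at each split.
regr = DecisionTreeRegressor(
    criterion='mse',
    splitter='random',
    max_depth=4,
    min_samples_split=10,
    min_samples_leaf=5,
    max_features='sqrt',
    random_state=0,
)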

Attributes

feature_importances_: the importance of each feature; the higher the value, the more important the feature.
max_features_: the inferred value of max_features.
n_features_: the number of features, available after fit.
n_outputs_: the number of outputs, available after fit.
tree_: a Tree object, the underlying decision tree.
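A quick sketch of inspecting these attributes after fitting; the toy data here is made up purely for illustration:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.random.rand(100, 3)            # made-up data with 3 features
y = X[:, 0] + 0.1 * np.random.rand(100)

regr = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(regr.feature_importances_)      # importance of each of the 3 features
print(regr.max_features_)             # inferred value of max_features
print(regr.n_features_)               # 3
print(regr.tree_.node_count)          # size of the underlying Tree object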

Methods

fit(X, y[, sample_weight, check_input, ...]): train the model.
predict(X[, check_input]): predict with the trained model, returning the predicted values.
score(X, y): return the prediction performance score (the R^2 coefficient for regression).

# -*- coding: utf-8 -*-
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn import model_selection
import matplotlib.pyplot as plt

# Build a randomly generated dataset
def create_data(n):
    """
    :param n: size of the dataset
    :return: a tuple (training samples, test samples, training targets, test targets)
    create_data builds the dataset from sin(x) plus some random noise:
    X is drawn uniformly from [0, 5), y is sin(x), and every 5th y gets a noise term.
    """
    np.random.seed(0)
    X = 5 * np.random.rand(n, 1)
    y = np.sin(X).ravel()
    noise_num = int(n / 5)
    y[::5] += 3 * (0.5 - np.random.rand(noise_num))
    return model_selection.train_test_split(X, y, test_size=0.25, random_state=1)

def test_DecisionTreeRegressor(*data):
    X_train, X_test, y_train, y_test = data
    regr = DecisionTreeRegressor()
    regr.fit(X_train, y_train)
    print("Training Score: %f" % (regr.score(X_train, y_train)))
    print("Testing Score: %f" % (regr.score(X_test, y_test)))
    # plot
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    X = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
    Y = regr.predict(X)
    ax.scatter(X_train, y_train, label="train sample", c='g')
    ax.scatter(X_test, y_test, label="test sample", c='r')
    ax.plot(X, Y, label="predict_value", linewidth=2, alpha=0.5)
    ax.set_xlabel("data")
    ax.set_ylabel("target")
    ax.set_title("Decision Tree Regression")
    ax.legend(framealpha=0.5)
    plt.show()

if __name__ == '__main__':
    X_train, X_test, y_train, y_test = create_data(100)
    test_DecisionTreeRegressor(X_train, X_test, y_train, y_test)


[Figure: Decision Tree Regression — train/test samples scattered around sin(x), with the tree's predicted curve]

Output:

Training Score: 1.000000
Testing Score: 0.789107

The model fits the training samples perfectly, but its performance on the test samples is noticeably worse — a classic sign of overfitting, since an unconstrained tree keeps splitting until it memorizes the training data.
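A minimal sketch of taming the overfit by capping tree depth; max_depth=3 is an arbitrary illustrative choice, and the data is assumed to come from create_data above:

# Assumes X_train, X_test, y_train, y_test from create_data(100) above.
from sklearn.tree import DecisionTreeRegressor

shallow = DecisionTreeRegressor(max_depth=3)   # illustrative cap, not tuned
shallow.fit(X_train, y_train)
print("Training Score: %f" % shallow.score(X_train, y_train))  # drops below 1.0
print("Testing Score: %f" % shallow.score(X_test, y_test))     # typically improves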

Next, examine the effect of random splits versus best splits.

def test_DecisionTreeRegressor_splitter(*data):
    X_train, X_test, y_train, y_test = data
    splitters = ['best', 'random']
    for splitter in splitters:
        regr = DecisionTreeRegressor(splitter=splitter)
        regr.fit(X_train, y_train)
        print("Splitter %s" % splitter)
        print("Training score: %f" % (regr.score(X_train, y_train)))
        print("Testing score: %f" % (regr.score(X_test, y_test)))

Splitter best
Training score: 1.000000
Testing score: 0.789107
Splitter random
Training score: 1.000000
Testing score: 0.778989

The best-split strategy predicts somewhat better, though the gap is small; both strategies fit the training set perfectly.
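Since splitter='random' depends on the random seed, a single run can be misleading. A small sketch that averages test scores over several seeds (the seed range is an arbitrary choice) gives a fairer comparison:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Assumes X_train, X_test, y_train, y_test from create_data above.
scores = []
for seed in range(10):                 # 10 seeds, chosen arbitrarily
    regr = DecisionTreeRegressor(splitter='random', random_state=seed)
    regr.fit(X_train, y_train)
    scores.append(regr.score(X_test, y_test))
print("mean testing score: %f" % np.mean(scores))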

Classification Decision Tree (DecisionTreeClassifier)

DecisionTreeClassifier implements a classification decision tree, used for classification problems.

DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None,
                       min_samples_split=2, min_samples_leaf=1,
                       min_weight_fraction_leaf=0.0, max_features=None,
                       random_state=None, max_leaf_nodes=None,
                       class_weight=None, presort=False)

criterion: a string specifying the measure of split quality.
  - 'gini': splits are evaluated with the Gini index.
  - 'entropy': splits are evaluated with the information entropy.
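For reference, for a node with class proportions p_k the two criteria are $\mathrm{Gini} = 1 - \sum_k p_k^2$ and $\mathrm{Entropy} = -\sum_k p_k \log_2 p_k$. A small sketch computing both for a label array (these helpers are illustrative, not part of scikit-learn's API):

import numpy as np

def gini(labels):
    """Gini index: 1 minus the sum of squared class proportions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    """Information entropy: -sum of p * log2(p) over the classes present."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

print(gini([0, 0, 1, 1]))     # 0.5, maximally impure for two classes
print(entropy([0, 0, 1, 1]))  # 1.0 bit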

The remaining parameters, attributes, and methods are analogous to those of DecisionTreeRegressor.
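One classifier-specific parameter is class_weight, described in the parameter list above. A minimal sketch of its use (the explicit weights mentioned in the comment are illustrative):

from sklearn.tree import DecisionTreeClassifier

# 'balanced' weights classes inversely to their frequencies in the data;
# an explicit dict such as {0: 1, 1: 5} would be an illustrative alternative.
clf = DecisionTreeClassifier(class_weight='balanced', random_state=0)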

The experiments below use the Iris dataset.

# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn import model_selection

def load_data():
    iris = datasets.load_iris()
    X_train = iris.data
    y_train = iris.target
    return model_selection.train_test_split(X_train, y_train, test_size=0.25,
                                            random_state=0, stratify=y_train)

def test_DecisionTreeClassifier(*data):
    X_train, X_test, y_train, y_test = data
    clf = DecisionTreeClassifier()
    clf.fit(X_train, y_train)
    print("Training Score : %f" % (clf.score(X_train, y_train)))
    print("Testing Score : %f" % (clf.score(X_test, y_test)))

X_train, X_test, y_train, y_test = load_data()
test_DecisionTreeClassifier(X_train, X_test, y_train, y_test)


Output:

Training Score : 1.000000
Testing Score : 0.973684

Now examine how the split-quality criterion affects classification performance.

def test_DecisionTreeClassifier_Criterion(*data):
    X_train, X_test, y_train, y_test = data
    criterions = ['gini', 'entropy']
    for criterion in criterions:
        clf = DecisionTreeClassifier(criterion=criterion)
        clf.fit(X_train, y_train)
        print("Criterion : %s" % criterion)
        print("Training Score : %f" % (clf.score(X_train, y_train)))
        print("Testing Score : %f" % (clf.score(X_test, y_test)))

Output:

Criterion : gini
Training Score : 1.000000
Testing Score : 0.973684
Criterion : entropy
Training Score : 1.000000
Testing Score : 0.921053

On this split, the gini criterion yields the higher test score, though a single train/test split is a weak basis for the comparison.
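A sketch of a sturdier comparison using 5-fold cross-validation (the fold count is an arbitrary choice):

from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

iris = datasets.load_iris()
for criterion in ['gini', 'entropy']:
    clf = DecisionTreeClassifier(criterion=criterion, random_state=0)
    scores = cross_val_score(clf, iris.data, iris.target, cv=5)
    print("Criterion %s : mean CV score %f" % (criterion, scores.mean()))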

Next, check the effect of random splits versus best splits.

def test_DecisionTreeClassifier_Splitter(*data):
    X_train, X_test, y_train, y_test = data
    splitters = ['best', 'random']
    for splitter in splitters:
        clf = DecisionTreeClassifier(splitter=splitter)
        clf.fit(X_train, y_train)
        print("Splitter : %s" % splitter)
        print("Training Score : %f" % (clf.score(X_train, y_train)))
        print("Testing Score : %f" % (clf.score(X_test, y_test)))

Splitter : best
Training Score : 1.000000
Testing Score : 0.947368
Splitter : random
Training Score : 1.000000
Testing Score : 0.894737

Both strategies fit the training set perfectly, but on the test set the best strategy predicts better.

Finally, examine the effect of tree depth.

def test_DecisionTreeClassifier_depth(*data, maxdepth):
    X_train, X_test, y_train, y_test = data
    depths = np.arange(1, maxdepth)
    training_scores = []
    testing_scores = []
    for depth in depths:
        clf = DecisionTreeClassifier(max_depth=depth)
        clf.fit(X_train, y_train)
        training_scores.append(clf.score(X_train, y_train))
        testing_scores.append(clf.score(X_test, y_test))
    # plot
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    ax.plot(depths, training_scores, label="training score", marker='o')
    ax.plot(depths, testing_scores, label="testing score", marker='*')
    ax.set_xlabel("maxdepth")
    ax.set_ylabel("Score")
    ax.set_title("Decision Tree Classification")
    ax.legend(framealpha=0.5, loc="best")
    plt.show()
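The function above is never invoked in the original listing; a call along these lines, reusing the data from load_data above (maxdepth=10 is an arbitrary choice for Iris), produces the figure below:

test_DecisionTreeClassifier_depth(X_train, X_test, y_train, y_test, maxdepth=10)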


[Figure: Decision Tree Classification — training and testing score versus maxdepth]

