Decision Trees
A decision tree models a decision process as a tree structure: each internal (non-leaf) node is a test on a feature, each branch is the outcome of that test over some range of values, and each leaf holds a class label.
For example, a decision tree for deciding whether to date someone:
Constructing a Decision Tree
The core problem in constructing a decision tree is deciding which feature is most decisive for splitting the data into classes, so we need a criterion for evaluating each feature.
Two such criteria are introduced here: information gain and Gini impurity.
Information Gain
Information gain is the change in entropy before and after splitting the dataset; the feature that yields the highest information gain is the best one to split on.
The measure of information in a set is called entropy, or Shannon entropy. For a dataset D containing k classes, the entropy is

H(D) = -\sum_{i=1}^{k} p_i \log_2 p_i

where p_i is the probability that a sample belongs to class i.
Now split the dataset D into subsets D_1, D_2, D_3, ...; the entropy after the split is

H_{split}(D) = \sum_{j} \frac{|D_j|}{|D|} H(D_j)

and the information gain is therefore:

Gain = H(D) - H_{split}(D)
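As a quick sanity check on these formulas, take a hypothetical toy dataset (the numbers are made up for illustration) with 5 samples, 2 labeled "yes" and 3 labeled "no":

H(D) = -\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5} \approx 0.971

If some feature splits D perfectly into D_1 (the 2 "yes" samples) and D_2 (the 3 "no" samples), then H(D_1) = H(D_2) = 0, so the split entropy is 0 and the gain is 0.971, the largest possible for this dataset; a feature that leaves both subsets mixed would score lower.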
Implementation
def chooseBestFeatureToSplit(dataSet):
    numFeatures = len(dataSet[0]) - 1        # the last column is used for the labels
    baseEntropy = calcShannonEnt(dataSet)
    bestInfoGain = 0.0
    bestFeature = -1
    for i in range(numFeatures):             # iterate over all the features
        featList = [example[i] for example in dataSet]  # all values of this feature
        uniqueVals = set(featList)           # unique values of the feature
        newEntropy = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            newEntropy += prob * calcShannonEnt(subDataSet)
        infoGain = baseEntropy - newEntropy  # information gain, i.e. reduction in entropy
        if infoGain > bestInfoGain:          # keep the best gain seen so far
            bestInfoGain = infoGain
            bestFeature = i
    return bestFeature                       # index of the best feature
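The function above relies on two helpers, calcShannonEnt and splitDataSet, that are not shown in this section. Below is a minimal sketch of both, assuming (as the code above implies) that each row of dataSet is a list of feature values with the class label as its last element:

from math import log

def calcShannonEnt(dataSet):
    """Shannon entropy of the class labels (last column) in dataSet."""
    labelCounts = {}
    for featVec in dataSet:
        currentLabel = featVec[-1]
        labelCounts[currentLabel] = labelCounts.get(currentLabel, 0) + 1
    numEntries = float(len(dataSet))
    shannonEnt = 0.0
    for count in labelCounts.values():
        prob = count / numEntries
        shannonEnt -= prob * log(prob, 2)    # H(D) = -sum(p_i * log2(p_i))
    return shannonEnt

def splitDataSet(dataSet, axis, value):
    """Rows whose feature `axis` equals `value`, with that feature column removed."""
    retDataSet = []
    for featVec in dataSet:
        if featVec[axis] == value:
            reducedFeatVec = featVec[:axis] + featVec[axis+1:]
            retDataSet.append(reducedFeatVec)
    return retDataSet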
Gini Impurity
The dataset can equally be split using Gini impurity.
Gini impurity is defined as:

Gini(D) = 1 - \sum_{i=1}^{k} p_i^2

After splitting the dataset into k subsets D_1, ..., D_k, the impurity of the split is

Gini_{split}(D) = \sum_{j=1}^{k} \frac{|D_j|}{|D|} Gini(D_j)

and the change before and after the split is:

\Delta Gini = Gini(D) - Gini_{split}(D)
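For comparison, here is a minimal sketch of feature selection with Gini impurity; it mirrors chooseBestFeatureToSplit above and reuses the sketched splitDataSet helper. The function names calcGini and chooseBestFeatureByGini are made up for illustration:

def calcGini(dataSet):
    """Gini impurity of the class labels (last column): 1 - sum(p_i^2)."""
    labelCounts = {}
    for featVec in dataSet:
        labelCounts[featVec[-1]] = labelCounts.get(featVec[-1], 0) + 1
    numEntries = float(len(dataSet))
    return 1.0 - sum((count / numEntries) ** 2 for count in labelCounts.values())

def chooseBestFeatureByGini(dataSet):
    """Pick the feature whose split gives the largest drop in Gini impurity."""
    numFeatures = len(dataSet[0]) - 1
    baseGini = calcGini(dataSet)
    bestDelta = 0.0
    bestFeature = -1
    for i in range(numFeatures):
        uniqueVals = set(example[i] for example in dataSet)
        splitGini = 0.0
        for value in uniqueVals:
            subDataSet = splitDataSet(dataSet, i, value)
            prob = len(subDataSet) / float(len(dataSet))
            splitGini += prob * calcGini(subDataSet)
        delta = baseGini - splitGini         # change in impurity before vs. after the split
        if delta > bestDelta:
            bestDelta = delta
            bestFeature = i
    return bestFeature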
Recursively Building the Tree
Here information gain is used to choose the feature at each node, and the tree is built recursively.
def createTree(dataSet, labels):
    classList = [example[-1] for example in dataSet]
    if classList.count(classList[0]) == len(classList):
        return classList[0]            # stop splitting when all classes are equal
    if len(dataSet[0]) == 1:           # stop splitting when there are no more features
        return majorityCnt(classList)
    bestFeat = chooseBestFeatureToSplit(dataSet)
    bestFeatLabel = labels[bestFeat]
    myTree = {bestFeatLabel: {}}
    del(labels[bestFeat])
    featValues = [example[bestFeat] for example in dataSet]
    uniqueVals = set(featValues)
    for value in uniqueVals:
        subLabels = labels[:]          # copy labels so recursion doesn't modify the caller's list
        myTree[bestFeatLabel][value] = createTree(splitDataSet(dataSet, bestFeat, value), subLabels)
    return myTree
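createTree also calls majorityCnt, which is not shown in this section. Below is a minimal sketch of it, followed by a toy usage example; the small dataset is made up for illustration:

import operator

def majorityCnt(classList):
    """Return the class label that occurs most often in classList."""
    classCount = {}
    for vote in classList:
        classCount[vote] = classCount.get(vote, 0) + 1
    sortedClassCount = sorted(classCount.items(), key=operator.itemgetter(1), reverse=True)
    return sortedClassCount[0][0]

# Toy usage example: two features ("no surfacing", "flippers") and a yes/no label.
myDat = [[1, 1, 'yes'],
         [1, 1, 'yes'],
         [1, 0, 'no'],
         [0, 1, 'no'],
         [0, 1, 'no']]
labels = ['no surfacing', 'flippers']
print(createTree(myDat, labels))
# Expected output (key order may vary):
# {'no surfacing': {0: 'no', 1: {'flippers': {0: 'no', 1: 'yes'}}}}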
Visualizing the Decision Tree
The decision tree is returned as a nested dictionary; graphviz can be used to draw it, as sketched below.
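A minimal rendering sketch, assuming the graphviz Python package is installed; it walks the nested dict and emits one box per feature test and one ellipse per leaf (the function name treeToGraph is made up here):

from graphviz import Digraph
from itertools import count

def treeToGraph(tree, dot=None, parent=None, edgeLabel=None, ids=None):
    """Recursively add a nested-dict decision tree to a graphviz Digraph."""
    if dot is None:
        dot = Digraph()
        ids = count()
    nodeId = str(next(ids))
    if isinstance(tree, dict):
        featLabel = next(iter(tree))                  # feature tested at this node
        dot.node(nodeId, featLabel, shape='box')
        for value, subtree in tree[featLabel].items():
            treeToGraph(subtree, dot, parent=nodeId, edgeLabel=str(value), ids=ids)
    else:
        dot.node(nodeId, str(tree), shape='ellipse')  # leaf node: class label
    if parent is not None:
        dot.edge(parent, nodeId, label=edgeLabel)
    return dot

# Usage (lensesTree is the dict returned by createTree on the lenses data):
# treeToGraph(lensesTree).render('lenses', view=False)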
Below is the decision tree built from the contact-lenses dataset.