Name: Cui Shaojie  Student ID: 16040510021
Reposted from //www.greatytc.com/p/5314834f9f8e, with modifications.
【Introduction】: K-means is a classic distance-based clustering algorithm. It uses distance as its similarity measure: the closer two objects are, the more similar they are considered to be. The algorithm treats a cluster as a group of objects that lie close together, so its goal is to produce compact, well-separated clusters.
【Keywords】: iris, Euclidean distance, random centroids, k-means clustering
【Question】: How can the k-means algorithm be implemented in Python?
【Main Text】: Algorithm steps:
Create k points as the initial centroids (chosen at random)
While the cluster assignment of any point still changes:
    For each point in the data set:
        For each centroid:
            Compute the distance between the centroid and the point
        Assign the point to the cluster of its nearest centroid
    For each cluster, compute the mean of all its points and use that mean as the new centroid
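Before the full implementation later in the post, the two core steps (assignment and centroid update) can be illustrated on a tiny hand-made data set; the four points and the two starting centroids below are made up purely for illustration.

```python
import numpy as np

# Four 2-D points forming two obvious pairs, and two starting centroids.
data = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])

# Assignment step: each point goes to the cluster of its nearest centroid.
dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
label = dists.argmin(axis=1)
print(label)  # [0 0 1 1]

# Update step: each centroid becomes the mean of the points assigned to it.
centroids = np.array([data[label == m].mean(axis=0) for m in range(2)])
print(centroids)  # cluster means (0, 0.5) and (5, 5.5)
```

Repeating these two steps until the assignments stop changing is the whole algorithm.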
iris
We use the well-known iris data set.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target
data = X[:, [1, 3]]  # keep only two features so the result is easy to visualize
plt.scatter(data[:, 0], data[:, 1])
Euclidean distance
To find the nearest centroid for each point, we need to compute Euclidean distances; the helper function below does that.
def distance(p1, p2):
    """
    Return the Euclidean distance between two points.

    distance(np.array([0, 0]), np.array([1, 1])) => 1.414...
    """
    tmp = np.sum((p1 - p2) ** 2)
    return np.sqrt(tmp)

distance(np.array([0, 0]), np.array([1, 1]))
1.4142135623730951
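For reference, NumPy already ships this computation: `np.linalg.norm` of the difference vector gives the same result, and it also generalizes to whole arrays of points through its `axis` argument.

```python
import numpy as np

def distance_norm(p1, p2):
    """Euclidean distance via np.linalg.norm; equivalent to sqrt(sum((p1-p2)**2))."""
    return np.linalg.norm(p1 - p2)

print(distance_norm(np.array([0, 0]), np.array([1, 1])))  # 1.4142135623730951
```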
Random centroids
Generate k centroids at random within the range of the data set, so that they serve as the initial clusters. Each coordinate is kept inside the data range by drawing it as dmin + (dmax - dmin) * np.random.rand(k).
def rand_center(data, k):
    """Generate k centers within the range of the data set."""
    n = data.shape[1]  # number of features
    centroids = np.zeros((k, n))  # initialize with zeros
    for i in range(n):
        dmin, dmax = np.min(data[:, i]), np.max(data[:, i])
        centroids[:, i] = dmin + (dmax - dmin) * np.random.rand(k)
    return centroids

centroids = rand_center(data, 2)
centroids
array([[ 2.15198267, 2.42476808],
[ 2.77985426, 0.57839675]])
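A common alternative initialization, not used in the original post, is the Forgy method: pick k distinct data points as the starting centroids, which guarantees that every centroid begins in a populated region of the data. A minimal sketch; the function name `rand_center_forgy` is made up here.

```python
import numpy as np

def rand_center_forgy(data, k, seed=None):
    """Forgy initialization: choose k distinct data points as initial centroids."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(data.shape[0], size=k, replace=False)
    return data[idx].copy()

pts = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
centers = rand_center_forgy(pts, 2, seed=0)
```

Because `replace=False` is passed to `rng.choice`, the k chosen indices, and hence the k starting centroids, are guaranteed to be distinct rows of the data.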
k-means clustering
The basic algorithm rests on two ideas:
Given a set of centroids, the clusters are updated: every point is assigned to its nearest centroid.
Given k clusters, the centroids are updated: each centroid is replaced by the mean of its cluster.
Iteration stops when the clusters no longer change. k-means does have a well-known drawback: it can get stuck in a local minimum. Improved variants exist, such as bisecting k-means; alternatively, simply run the algorithm several times and keep the best result.
def kmeans(data, k=2):
    def _distance(p1, p2):
        """Return the Euclidean distance between two points."""
        tmp = np.sum((p1 - p2) ** 2)
        return np.sqrt(tmp)

    def _rand_center(data, k):
        """Generate k centers within the range of the data set."""
        n = data.shape[1]  # number of features
        centroids = np.zeros((k, n))
        for i in range(n):
            dmin, dmax = np.min(data[:, i]), np.max(data[:, i])
            centroids[:, i] = dmin + (dmax - dmin) * np.random.rand(k)
        return centroids

    def _converged(centroids1, centroids2):
        # if the centroids did not change, we say the algorithm has converged
        set1 = set(tuple(c) for c in centroids1)
        set2 = set(tuple(c) for c in centroids2)
        return set1 == set2

    n = data.shape[0]  # number of samples
    centroids = _rand_center(data, k)
    label = np.zeros(n, dtype=int)  # nearest centroid of each point
    assement = np.zeros(n)  # squared distance to the assigned centroid, for model assessment
    converged = False

    while not converged:
        old_centroids = np.copy(centroids)
        for i in range(n):
            # find the nearest centroid and record it in label
            min_dist, min_index = np.inf, -1
            for j in range(k):
                dist = _distance(data[i], centroids[j])
                if dist < min_dist:
                    min_dist, min_index = dist, j
            label[i] = min_index
            assement[i] = _distance(data[i], centroids[label[i]]) ** 2
        # update the centroids
        for m in range(k):
            centroids[m] = np.mean(data[label == m], axis=0)
        converged = _converged(old_centroids, centroids)
    return centroids, label, np.sum(assement)
Because the algorithm may converge to a local minimum, run it several times and keep the best result:
best_assement = np.inf
best_centroids = None
best_label = None

for i in range(10):
    centroids, label, assement = kmeans(data, 2)
    if assement < best_assement:
        best_assement = assement
        best_centroids = centroids
        best_label = label

data0 = data[best_label == 0]
data1 = data[best_label == 1]
As the figure below shows, the data are split into two clusters; the large blue points are the cluster centroids. Judging from the plot, the clustering looks quite good.
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
ax1.scatter(data[:, 0], data[:, 1], c='c', s=30, marker='o')
ax2.scatter(data0[:, 0], data0[:, 1], c='r')
ax2.scatter(data1[:, 0], data1[:, 1], c='c')
ax2.scatter(best_centroids[:, 0], best_centroids[:, 1], c='b', s=120, marker='o')
plt.show()