
Machine Learning Algorithms in Python: k-means Text Clustering

Posted: 2024-01-14 13:53:17


The full script below (originally run in a Jupyter notebook) segments the raw titles and summaries with jieba, builds a TF-IDF matrix, and clusters the documents with k-means:

# -*- coding: utf-8 -*-
# Written in a Jupyter notebook. Author: huzhifei, create time: /8/14
# This script implements Chinese text clustering in Python with k-means.

# Import the required packages
import numpy as np
import pandas as pd
import re
import os
import codecs
from sklearn import feature_extraction
import jieba

# Segment the title text with jieba
f1 = open("title.txt", "r", encoding='utf-8', errors='ignore')
f2 = open("title_fenci.txt", 'w', encoding='utf-8', errors='ignore')
for line in f1:
    seg_list = jieba.cut(line, cut_all=False)
    f2.write((" ".join(seg_list)).replace("\t\t\t", "\t"))
f1.close()
f2.close()

# Segment the summary text (stored here in content.txt) the same way
f1 = open("content.txt", "r", encoding='utf-8', errors='ignore')
f2 = open("content_fenci.txt", 'w', encoding='utf-8', errors='ignore')
for line in f1:
    seg_list = jieba.cut(line, cut_all=False)
    f2.write((" ".join(seg_list)).replace("\t\t\t", "\t"))
f1.close()
f2.close()

# Load the segmented titles and contents
titles = open('title_fenci.txt', encoding='utf-8', errors='ignore').read().split('\n')
print(str(len(titles)) + ' titles')
contents = open('content_fenci.txt', encoding='utf-8', errors='ignore').read().split('\n')
contents = contents[:len(titles)]
print(str(len(contents)) + ' contents')

# Load the Chinese stop-word list
def get_custom_stopwords(stop_words_file):
    with open(stop_words_file, encoding='utf-8') as f:
        stopwords = f.read()
    return stopwords.split('\n')

stop_words_file = "stopwordsHIT.txt"
stopwords = get_custom_stopwords(stop_words_file)

# Build the TF-IDF matrix, passing the custom Chinese stop-word list
from sklearn.feature_extraction.text import TfidfVectorizer
max_df = 0.8
min_df = 2
tfidf_vectorizer = TfidfVectorizer(max_df=max_df, min_df=min_df,
                                   max_features=200000, stop_words=stopwords,
                                   use_idf=True,
                                   token_pattern=u'(?u)\\b[^\\d\\W]\\w+\\b',
                                   ngram_range=(1, 2))
tfidf_matrix = tfidf_vectorizer.fit_transform(contents)
print(tfidf_matrix.shape)

# Get the feature terms (use get_feature_names_out() on scikit-learn >= 1.0)
terms = tfidf_vectorizer.get_feature_names()

# k-means clustering
from sklearn.cluster import KMeans
num_clusters = 6
km = KMeans(n_clusters=num_clusters)
km.fit(tfidf_matrix)
clusters = km.labels_.tolist()

# Persist the k-means model as a pkl file and reload it
import joblib  # sklearn.externals.joblib was removed from newer scikit-learn
joblib.dump(km, 'y_cluster.pkl')
km = joblib.load('y_cluster.pkl')
clusters = km.labels_.tolist()
print(len(clusters))

# Store the results in a pandas DataFrame
docs = {'title': titles, 'synopsis': contents, 'cluster': clusters}
frame = pd.DataFrame(docs, index=[clusters], columns=['cluster', 'title', 'synopsis'])

# Count how many documents fall into each cluster
print(frame['cluster'].value_counts())

# Print the most representative terms for each cluster
print("Top terms per cluster:")
print()
order_centroids = km.cluster_centers_.argsort()[:, ::-1]
for i in range(num_clusters):
    print("Cluster %d words:" % i, end=' ')
    for ind in order_centroids[i, :50]:
        print(terms[ind], end=', ')
    print()
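For a sense of what the segmentation step produces, jieba's precise mode splits a Chinese string into tokens that can then be space-joined for the vectorizer. A tiny illustration (the sample sentence is hypothetical, not from the corpus):

# Quick illustration of jieba precise-mode segmentation
# (sample sentence is hypothetical; the exact split depends on jieba's dictionary)
import jieba
print(" ".join(jieba.cut("机器学习算法实现文本聚类", cut_all=False)))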
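The script fixes num_clusters at 6. If the right number of clusters for your corpus is unknown, one common sanity check is to sweep k and compare silhouette scores. A minimal sketch, assuming the tfidf_matrix built above:

# Sweep candidate cluster counts and report silhouette scores
# (a sketch, not part of the original script; higher scores are better)
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

for k in range(2, 11):
    km_k = KMeans(n_clusters=k, random_state=42).fit(tfidf_matrix)
    # silhouette is O(n^2); pass sample_size=... on a large corpus
    score = silhouette_score(tfidf_matrix, km_k.labels_)
    print('k=%d  silhouette=%.4f' % (k, score))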
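Once the labels are stored in the DataFrame, pulling up the documents assigned to any one cluster is a single filter. For example, to eyeball a few titles from cluster 0 (using the frame built above):

# Show the first ten titles assigned to cluster 0
print(frame[frame['cluster'] == 0]['title'].head(10))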
