gensim is a Python toolkit. I came across it in a paper I was reading, so here are some notes from looking into it.
Introduction
gensim is a free Python NLP library that aims to automatically extract semantic topics from documents, as efficiently and painlessly as possible.
- Gensim is designed to process raw, unstructured digital text (plain text).
- It implements unsupervised algorithms such as Word2Vec, FastText, Latent Semantic Analysis (LSI/LSA, see LsiModel), and Latent Dirichlet Allocation (LDA, see LdaModel).
Installation
<!-- install with pip -->
pip install --upgrade gensim
Dependencies
- Python >= 2.7 (tested with versions 2.7, 3.5 and 3.6)
- NumPy >= 1.11.3
- SciPy >= 0.18.1
- Six >= 1.5.0
- smart_open >= 1.2.1
Usage
- String to vectors: assign each word an integer id, filter out stop words and low-frequency words, then represent each document by its word ids — converting a string into a vector
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> from pprint import pprint
>>> from collections import defaultdict
>>> from gensim import corpora
>>> documents = ["Human machine interface for lab abc computer applications",
>>> "A survey of user opinion of computer system response time",
>>> "The EPS user interface management system",
>>> "System and human system engineering testing of EPS",
>>> "Relation of user perceived response time to error measurement",
>>> "The generation of random binary unordered trees",
>>> "The intersection graph of paths in trees",
>>> "Graph minors IV Widths of trees and well quasi ordering",
>>> "Graph minors A survey"]
# remove common words and tokenize
>>> stoplist = set('for a of the and to in'.split())
>>> texts = [[word for word in document.lower().split() if word not in stoplist]
>>> for document in documents]
>>>
>>> # remove words that appear only once
>>> frequency = defaultdict(int)
>>> for text in texts:
>>> for token in text:
>>> frequency[token] += 1
>>>
>>> texts = [[token for token in text if frequency[token] > 1]
>>> for text in texts]
>>>
>>> pprint(texts)
[['human', 'interface', 'computer'],
['survey', 'user', 'computer', 'system', 'response', 'time'],
['eps', 'user', 'interface', 'system'],
['system', 'human', 'system', 'eps'],
['user', 'response', 'time'],
['trees'],
['graph', 'trees'],
['graph', 'minors', 'trees'],
['graph', 'minors', 'survey']]
>>> dictionary = corpora.Dictionary(texts)
>>> dictionary.save('/tmp/deerwester.dict') # store the dictionary, for future reference
>>> print(dictionary)
Dictionary(12 unique tokens)
>>> print(dictionary.token2id)
{'minors': 11, 'graph': 10, 'system': 5, 'trees': 9, 'eps': 8, 'computer': 0,
'survey': 4, 'user': 7, 'human': 1, 'time': 6, 'interface': 2, 'response': 3}
>>> new_doc = "Human computer interaction"
>>> new_vec = dictionary.doc2bow(new_doc.lower().split())
>>> print(new_vec) # the word "interaction" does not appear in the dictionary and is ignored
[(0, 1), (1, 1)]
>>> corpus = [dictionary.doc2bow(text) for text in texts]
>>> corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus) # store to disk, for later use
>>> print(corpus)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]
For a small corpus we can load all the text into memory at once and process it there, but with large amounts of data this wastes memory badly. gensim supports streaming: documents are loaded one at a time, only when they are actually needed.
>>> class MyCorpus(object):
>>> def __iter__(self):
>>> for line in open('mycorpus.txt'):
>>> # assume there's one document per line, tokens separated by whitespace
>>> yield dictionary.doc2bow(line.lower().split())
>>> corpus_memory_friendly = MyCorpus() # doesn't load the corpus into memory!
>>> print(corpus_memory_friendly)
<__main__.MyCorpus object at 0x10d5690>
>>> for vector in corpus_memory_friendly: # load one vector into memory at a time
... print(vector)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]
Likewise, when counting words to build the dictionary, the full corpus does not need to be loaded at once:
>>> from six import iteritems
>>> # collect statistics about all tokens
>>> dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
>>> # remove stop words and words that appear only once
>>> stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
>>> if stopword in dictionary.token2id]
>>> once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
>>> dictionary.filter_tokens(stop_ids + once_ids) # remove stop words and words that appear only once
>>> dictionary.compactify() # remove gaps in id sequence after words that were removed
>>> print(dictionary)
Dictionary(12 unique tokens)
- Topic transformations
Transforming document vectors is the core of Gensim. By mining the latent semantic structure of a corpus, we can derive a compact, efficient vector representation for each document.
- Transforming vectors: first, convert the sparse bag-of-words vectors from the previous section into TF-IDF
>>> from gensim import models
>>> tfidf = models.TfidfModel(corpus) # step 1 -- initialize a model
2019-04-12 11:05:09,654 : INFO : collecting document frequencies
2019-04-12 11:05:09,655 : INFO : PROGRESS: processing document #0
2019-04-12 11:05:09,655 : INFO : calculating IDF weights for 9 documents and 11 features (28 matrix non-zeros)
>>> print(tfidf)
TfidfModel(num_docs=9, num_nnz=28)
>>> doc_bow = [(0, 1), (1, 1)] # test tf-idf
>>> print(tfidf[doc_bow])
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
>>> corpus_tfidf = tfidf[corpus] # corpus tf-idf
>>> for doc in corpus_tfidf:
... print(doc)
...
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(0, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.44424552527467476), (5, 0.3244870206138555), (6, 0.44424552527467476), (7, 0.3244870206138555)]
[(2, 0.5710059809418182), (5, 0.4170757362022777), (7, 0.4170757362022777), (8, 0.5710059809418182)]
[(1, 0.49182558987264147), (5, 0.7184811607083769), (8, 0.49182558987264147)]
[(3, 0.6282580468670046), (6, 0.6282580468670046), (7, 0.45889394536615247)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(4, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
- With the TF-IDF corpus built, compute an LSI (Latent Semantic Indexing) model
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
2019-04-12 11:11:38,070 : INFO : using serial LSI version on this node
2019-04-12 11:11:38,071 : INFO : updating model with new documents
2019-04-12 11:11:38,072 : INFO : preparing a new chunk of documents
2019-04-12 11:11:38,073 : INFO : using 100 extra samples and 2 power iterations
2019-04-12 11:11:38,073 : INFO : 1st phase: constructing (12, 102) action matrix
2019-04-12 11:11:38,183 : INFO : orthonormalizing (12, 102) action matrix
2019-04-12 11:11:38,295 : INFO : 2nd phase: running dense svd on (12, 9) matrix
2019-04-12 11:11:38,330 : INFO : computing the final decomposition
2019-04-12 11:11:38,330 : INFO : keeping 2 factors (discarding 47.565% of energy spectrum)
2019-04-12 11:11:38,358 : INFO : processed documents up to #9
2019-04-12 11:11:38,379 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:11:38,379 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
>>> corpus_lsi = lsi[corpus_tfidf]
>>> lsi.print_topics(2)
2019-04-12 11:12:31,602 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:12:31,602 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
[(0, '0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"'), (1, '-0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"')]
The output above shows that the corpus has been mapped onto two latent topics; the first topic is strongly associated with "trees", "graph", and "minors". Finally, inspect the topic weights of each document in the corpus:
>>> for doc in corpus_lsi:
... print(doc)
...
[(0, 0.0660078339609052), (1, -0.5200703306361847)]
[(0, 0.19667592859142907), (1, -0.7609563167700036)]
[(0, 0.08992639972446646), (1, -0.7241860626752505)]
[(0, 0.07585847652178296), (1, -0.6320551586003427)]
[(0, 0.10150299184980459), (1, -0.5737308483002947)]
[(0, 0.7032108939378302), (1, 0.16115180214026098)]
[(0, 0.8774787673119822), (1, 0.16758906864659778)]
[(0, 0.909862468681857), (1, 0.14086553628719395)]
[(0, 0.6165825350569285), (1, -0.05392907566389119)]
>>> for doc in documents:
... print(doc)
...
Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey
Besides LSI, Gensim also provides Random Projections (RP), Latent Dirichlet Allocation (LDA), and Hierarchical Dirichlet Process (HDP), all of which are models for extracting latent topics.
- Computing document similarity
Once each document has a topic vector, we can compute similarities between documents, which enables tasks such as text clustering and information retrieval. Gensim provides APIs for these tasks as well.
Take information retrieval as an example: given a query, the goal is to retrieve from the collection the documents whose topics are most similar to it.
First, the query and the documents must be expressed in the same vector space (here, the LSI space):
# build an LSI model and map both the query and the documents to LSI topic vectors
# before the transformation, corpus and query are both BOW vectors
lsi_model = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
documents = lsi_model[corpus]
query_vec = lsi_model[query]  # query is the BOW vector of the query string
Next, initialize a similarity-computation object with the document vectors to be searched:
from gensim import similarities
index = similarities.MatrixSimilarity(documents)
The similarity index can also be persisted with the save() and load() methods:
index.save('/tmp/test.index')
index = similarities.MatrixSimilarity.load('/tmp/test.index')
Note that when there are many target documents, similarities.MatrixSimilarity can run out of memory, because it holds the whole index in RAM. In that case, switch to similarities.Similarity, which stores the index on disk in shards; the two classes share essentially the same interface.
Finally, use the index object to compute the (cosine) similarity between a query and every document:
sims = index[query_vec]
# sims is an array of similarities; enumerate(sims) yields (idx, sim) pairs