Gensim Learning Notes

Gensim is a Python toolkit. A paper I was reading mentioned it, so here are some notes from getting to know it.

Introduction

Gensim is a free Python NLP library that aims to automatically extract semantic topics from documents, as efficiently and effortlessly as possible.

  • Gensim is designed to process raw, unstructured digital text ("plain text").
  • It implements unsupervised algorithms such as Word2Vec, FastText, Latent Semantic Analysis (LSI, LSA, see LsiModel), and Latent Dirichlet Allocation (LDA, see LdaModel); a quick Word2Vec sketch follows this list.
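
To get an early feel for the API, here is a minimal Word2Vec sketch on a toy corpus. The sentences and parameter values are illustrative, not from the original tutorial, and the parameter names follow gensim 4.x (3.x used size= instead of vector_size=):

from gensim.models import Word2Vec

# a toy corpus: each document is a list of tokens
sentences = [["human", "interface", "computer"],
             ["survey", "user", "computer", "system"],
             ["graph", "trees", "minors"]]

# min_count=1 keeps every token; the default (5) would drop all of these rare words
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

print(model.wv["computer"])                       # the learned embedding for "computer"
print(model.wv.most_similar("computer", topn=3))  # nearest neighbours in vector space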

Installation

Official installation page

# install with pip
pip install --upgrade gensim

List of dependencies

Usage

  • Strings to vectors: assign each word an integer ID, count the words and filter out stop words and low-frequency ones, then represent each document by its word IDs, turning strings into vectors:
>>> import logging
>>> logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
>>> from pprint import pprint
>>> from collections import defaultdict
>>> from gensim import corpora
>>> documents = ["Human machine interface for lab abc computer applications",
>>>              "A survey of user opinion of computer system response time",
>>>              "The EPS user interface management system",
>>>              "System and human system engineering testing of EPS",
>>>              "Relation of user perceived response time to error measurement",
>>>              "The generation of random binary unordered trees",
>>>              "The intersection graph of paths in trees",
>>>              "Graph minors IV Widths of trees and well quasi ordering",
>>>              "Graph minors A survey"]
# remove common words and tokenize
>>> stoplist = set('for a of the and to in'.split())
>>> texts = [[word for word in document.lower().split() if word not in stoplist]
>>>          for document in documents]
>>>
>>> # remove words that appear only once
>>> frequency = defaultdict(int)
>>> for text in texts:
>>>     for token in text:
>>>         frequency[token] += 1
>>>
>>> texts = [[token for token in text if frequency[token] > 1]
>>>          for text in texts]
>>>
>>> pprint(texts)
[['human', 'interface', 'computer'],
 ['survey', 'user', 'computer', 'system', 'response', 'time'],
 ['eps', 'user', 'interface', 'system'],
 ['system', 'human', 'system', 'eps'],
 ['user', 'response', 'time'],
 ['trees'],
 ['graph', 'trees'],
 ['graph', 'minors', 'trees'],
 ['graph', 'minors', 'survey']]
>>> dictionary = corpora.Dictionary(texts)
>>> dictionary.save('/tmp/deerwester.dict')  # store the dictionary, for future reference
>>> print(dictionary)
Dictionary(12 unique tokens)
>>> print(dictionary.token2id)
{'minors': 11, 'graph': 10, 'system': 5, 'trees': 9, 'eps': 8, 'computer': 0,
'survey': 4, 'user': 7, 'human': 1, 'time': 6, 'interface': 2, 'response': 3}
>>> new_doc = "Human computer interaction"
>>> new_vec = dictionary.doc2bow(new_doc.lower().split())
>>> print(new_vec)  # the word "interaction" does not appear in the dictionary and is ignored
[(0, 1), (1, 1)]
>>> corpus = [dictionary.doc2bow(text) for text in texts]
>>> corpora.MmCorpus.serialize('/tmp/deerwester.mm', corpus)  # store to disk, for later use
>>> for c in corpus:
...     print(c)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]

A small corpus can be loaded into memory in one go and processed there, but when the data is large, loading everything up front wastes a lot of memory. Gensim supports streaming: documents are read one at a time, only when they are needed.

>>> class MyCorpus(object):
>>>     def __iter__(self):
>>>         for line in open('mycorpus.txt'):
>>>             # assume there's one document per line, tokens separated by whitespace
>>>             yield dictionary.doc2bow(line.lower().split())
>>> corpus_memory_friendly = MyCorpus()  # doesn't load the corpus into memory!
>>> print(corpus_memory_friendly)
<__main__.MyCorpus object at 0x10d5690>
>>> for vector in corpus_memory_friendly:  # load one vector into memory at a time
...     print(vector)
[(0, 1), (1, 1), (2, 1)]
[(0, 1), (3, 1), (4, 1), (5, 1), (6, 1), (7, 1)]
[(2, 1), (5, 1), (7, 1), (8, 1)]
[(1, 1), (5, 2), (8, 1)]
[(3, 1), (6, 1), (7, 1)]
[(9, 1)]
[(9, 1), (10, 1)]
[(9, 1), (10, 1), (11, 1)]
[(4, 1), (10, 1), (11, 1)]

The word statistics needed to build the dictionary can likewise be collected without loading the entire corpus at once:

>>> from six import iteritems
>>> # collect statistics about all tokens
>>> dictionary = corpora.Dictionary(line.lower().split() for line in open('mycorpus.txt'))
>>> # remove stop words and words that appear only once
>>> stop_ids = [dictionary.token2id[stopword] for stopword in stoplist
>>>             if stopword in dictionary.token2id]
>>> once_ids = [tokenid for tokenid, docfreq in iteritems(dictionary.dfs) if docfreq == 1]
>>> dictionary.filter_tokens(stop_ids + once_ids)  # remove stop words and words that appear only once
>>> dictionary.compactify()  # remove gaps in id sequence after words that were removed
>>> print(dictionary)
Dictionary(12 unique tokens)
  • Topic-vector transformations
    Transforming document vectors is at the core of Gensim. By mining the latent semantic structure of the corpus, we can eventually derive a compact and efficient vector representation for each document.
    • Transforming vectors: first, convert the sparse bag-of-words vectors from the previous section to TF-IDF
>>> from gensim import models
>>> tfidf = models.TfidfModel(corpus)  # step 1 -- initialize a model
2019-04-12 11:05:09,654 : INFO : collecting document frequencies
2019-04-12 11:05:09,655 : INFO : PROGRESS: processing document #0
2019-04-12 11:05:09,655 : INFO : calculating IDF weights for 9 documents and 11 features (28 matrix non-zeros)
>>> print(tfidf)
TfidfModel(num_docs=9, num_nnz=28)
>>> doc_bow = [(0, 1), (1, 1)] # test tf-idf
>>> print(tfidf[doc_bow])
[(0, 0.7071067811865476), (1, 0.7071067811865476)]
>>> corpus_tfidf = tfidf[corpus] # corpus tf-idf
>>> for doc in corpus_tfidf:
...     print(doc)
...
[(0, 0.5773502691896257), (1, 0.5773502691896257), (2, 0.5773502691896257)]
[(0, 0.44424552527467476), (3, 0.44424552527467476), (4, 0.44424552527467476), (5, 0.3244870206138555), (6, 0.44424552527467476), (7, 0.3244870206138555)]
[(2, 0.5710059809418182), (5, 0.4170757362022777), (7, 0.4170757362022777), (8, 0.5710059809418182)]
[(1, 0.49182558987264147), (5, 0.7184811607083769), (8, 0.49182558987264147)]
[(3, 0.6282580468670046), (6, 0.6282580468670046), (7, 0.45889394536615247)]
[(9, 1.0)]
[(9, 0.7071067811865475), (10, 0.7071067811865475)]
[(9, 0.5080429008916749), (10, 0.5080429008916749), (11, 0.695546419520037)]
[(4, 0.6282580468670046), (10, 0.45889394536615247), (11, 0.6282580468670046)]
  • After building the TF-IDF model, compute LSI (Latent Semantic Indexing)
>>> lsi = models.LsiModel(corpus_tfidf, id2word=dictionary, num_topics=2)
2019-04-12 11:11:38,070 : INFO : using serial LSI version on this node
2019-04-12 11:11:38,071 : INFO : updating model with new documents
2019-04-12 11:11:38,072 : INFO : preparing a new chunk of documents
2019-04-12 11:11:38,073 : INFO : using 100 extra samples and 2 power iterations
2019-04-12 11:11:38,073 : INFO : 1st phase: constructing (12, 102) action matrix
2019-04-12 11:11:38,183 : INFO : orthonormalizing (12, 102) action matrix
2019-04-12 11:11:38,295 : INFO : 2nd phase: running dense svd on (12, 9) matrix
2019-04-12 11:11:38,330 : INFO : computing the final decomposition
2019-04-12 11:11:38,330 : INFO : keeping 2 factors (discarding 47.565% of energy spectrum)
2019-04-12 11:11:38,358 : INFO : processed documents up to #9
2019-04-12 11:11:38,379 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:11:38,379 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
>>> corpus_lsi = lsi[corpus_tfidf]
>>> lsi.print_topics(2)
2019-04-12 11:12:31,602 : INFO : topic #0(1.594): 0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"
2019-04-12 11:12:31,602 : INFO : topic #1(1.476): -0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"
[(0, '0.703*"trees" + 0.538*"graph" + 0.402*"minors" + 0.187*"survey" + 0.061*"system" + 0.060*"time" + 0.060*"response" + 0.058*"user" + 0.049*"computer" + 0.035*"interface"'), (1, '-0.460*"system" + -0.373*"user" + -0.332*"eps" + -0.328*"interface" + -0.320*"time" + -0.320*"response" + -0.293*"computer" + -0.280*"human" + -0.171*"survey" + 0.161*"trees"')]

The output above shows that the corpus has been organized into two latent topics, the first of which is most strongly associated with "trees", "graph", and "minors". Finally, let's look at the topic assignment of each document in the corpus:

>>> for doc in corpus_lsi:
...     print(doc)
...
[(0, 0.0660078339609052), (1, -0.5200703306361847)]
[(0, 0.19667592859142907), (1, -0.7609563167700036)]
[(0, 0.08992639972446646), (1, -0.7241860626752505)]
[(0, 0.07585847652178296), (1, -0.6320551586003427)]
[(0, 0.10150299184980459), (1, -0.5737308483002947)]
[(0, 0.7032108939378302), (1, 0.16115180214026098)]
[(0, 0.8774787673119822), (1, 0.16758906864659778)]
[(0, 0.909862468681857), (1, 0.14086553628719395)]
[(0, 0.6165825350569285), (1, -0.05392907566389119)]
>>> for doc in documents:
...     print(doc)
...
Human machine interface for lab abc computer applications
A survey of user opinion of computer system response time
The EPS user interface management system
System and human system engineering testing of EPS
Relation of user perceived response time to error measurement
The generation of random binary unordered trees
The intersection graph of paths in trees
Graph minors IV Widths of trees and well quasi ordering
Graph minors A survey

Besides LSI, Gensim also offers Random Projections (RP), Latent Dirichlet Allocation (LDA), and the Hierarchical Dirichlet Process (HDP), all of which extract latent topic models; a hedged LDA sketch is shown below.
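
As a hedged sketch (not part of the original notes), an LdaModel can be trained on the same bag-of-words corpus in much the same way as LSI; the parameter values are illustrative:

>>> # train an LDA model on the bag-of-words corpus built earlier
>>> lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2)
>>> lda.print_topics(2)    # inspect the word distribution of each topic
>>> print(lda[corpus[0]])  # topic distribution of the first document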

  • Computing document similarity
    Once every document has a topic vector, we can compute similarities between documents and build on them for tasks such as text clustering and information retrieval. Gensim provides an API for this class of tasks as well.

Take information retrieval as an example. Given a query, the goal is to retrieve from the collection the documents whose topics are most similar to it.
First, the query and the documents must be expressed in the same vector space (here, the LSI space):

# build the LSI model and map the query and the documents into LSI topic space
# before the transformation, both corpus and query are bag-of-words vectors
lsi_model = models.LsiModel(corpus, id2word=dictionary, num_topics=2)
documents = lsi_model[corpus]
query = dictionary.doc2bow("human computer interaction".lower().split())
query_vec = lsi_model[query]

Next, we initialize a similarity index with the document vectors to be searched:

from gensim import similarities

index = similarities.MatrixSimilarity(documents)

The similarity index can also be persisted via the save() and load() methods:

index.save('/tmp/test.index')
index = similarities.MatrixSimilarity.load('/tmp/test.index')

Note that when there are many target documents, similarities.MatrixSimilarity can run out of memory, since it keeps the entire index in RAM. In that case, switch to similarities.Similarity, which shards the index to disk; the two classes share essentially the same interface, as the sketch below shows.
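
A hedged sketch of the disk-backed variant, assuming documents, lsi_model, and query_vec are defined as above (the index prefix path is illustrative):

from gensim import similarities

# Similarity shards the index to disk under the given prefix instead of
# holding everything in RAM; num_features must match the vector-space size
index = similarities.Similarity('/tmp/lsi_index', documents,
                                num_features=lsi_model.num_topics)
sims = index[query_vec]  # same query interface as MatrixSimilarity
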
Finally, we use the index object to compute the (cosine) similarity between any query and all indexed documents:

sims = index[query_vec]
# sims holds one similarity score per indexed document;
# enumerate(sims) yields (doc_idx, sim) pairs
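
To turn the raw scores into a ranked hit list, a common pattern (illustrative, not part of the original text):

# sort document indices by descending similarity to the query
ranked = sorted(enumerate(sims), key=lambda pair: -pair[1])
for doc_idx, score in ranked:
    print(doc_idx, score)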