
Python LDA Implementation Explained


LDA (Latent Dirichlet Allocation) is a widely used probabilistic topic model. It is usually fit with either variational inference or Gibbs sampling. The authors of LDA released C source code for the variational-inference version alongside the original paper (a C++ adaptation of that code will be posted later); what follows here is an LDA wrapper class built on a third-party Python module.
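The class below wraps the third-party lda package, which fits the model by collapsed Gibbs sampling. As a warm-up, here is a minimal sketch of the package's core API on the Reuters data it ships with, covering the three things the wrapper relies on (fit, topic_word_, doc_topic_):

import numpy as np
import lda
import lda.datasets

# document-term count matrix, vocabulary and titles bundled with the package
X = lda.datasets.load_reuters()             # (n_docs, n_words) integer counts
vocab = lda.datasets.load_reuters_vocab()   # words, in the column order of X
titles = lda.datasets.load_reuters_titles()

model = lda.LDA(n_topics=20, n_iter=500, random_state=1)
model.fit(X)  # Gibbs sampling; X must hold non-negative integer counts

# topic_word_: one row per topic, a probability distribution over the vocabulary
for i, topic_dist in enumerate(model.topic_word_[:3]):
    top_words = np.array(vocab)[np.argsort(topic_dist)][:-9:-1]
    print("Topic %d: %s" % (i, " ".join(top_words)))

# doc_topic_: one row per document, a probability distribution over topics
print("%s -> top topic %d" % (titles[0], model.doc_topic_[0].argmax()))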

# coding: utf-8
import codecs

import numpy as np
import lda
import lda.datasets
import jieba


class LDA_v20161130():
    def __init__(self, topics=2):
        self.n_topic = topics
        self.corpus = None
        self.vocab = None
        self.ppCountMatrix = None
        self.stop_words = [u',', u'。', u'、', u'(', u')', u'·', u'!', u' ', u':', u'“', u'”', u'\n']
        self.model = None

    def loadCorpusFromFile(self, fn):
        # segment the whole file with jieba (Chinese word segmentation)
        f = codecs.open(fn, 'r', 'utf-8')
        text = f.readlines()
        text = ' '.join(text)
        seg_generator = jieba.cut(text)
        seg_list = [i for i in seg_generator if i not in self.stop_words]
        seg_list = ' '.join(seg_list)
        # collect every distinct token into the vocabulary
        seglist = seg_list.split(" ")
        self.vocab = []
        for word in seglist:
            if word and word not in self.vocab:
                self.vocab.append(word)
        CountMatrix = []
        f.seek(0, 0)
        # count term frequencies per document (one document per line)
        for line in f:
            # fresh zero vector for this document
            count = np.zeros(len(self.vocab), dtype=int)
            text = line.strip()
            # each line still has to be segmented first
            seg_generator = jieba.cut(text)
            seg_list = [i for i in seg_generator if i not in self.stop_words]
            seg_list = ' '.join(seg_list)
            seglist = seg_list.split(" ")
            # count the occurrences of each vocabulary word
            for word in seglist:
                if word in self.vocab:
                    count[self.vocab.index(word)] += 1
            CountMatrix.append(count)
        f.close()
        self.ppCountMatrix = np.array(CountMatrix)
        print("load corpus from %s success!" % fn)

    def setStopWords(self, word_list):
        self.stop_words = word_list

    def fitModel(self, n_iter=1500, _alpha=0.1, _eta=0.01):
        self.model = lda.LDA(n_topics=self.n_topic, n_iter=n_iter, alpha=_alpha, eta=_eta, random_state=1)
        self.model.fit(self.ppCountMatrix)

    def printTopic_Word(self, n_top_word=8):
        for i, topic_dist in enumerate(self.model.topic_word_):
            # the n_top_word highest-probability words of topic i
            topic_words = np.array(self.vocab)[np.argsort(topic_dist)][:-(n_top_word + 1):-1]
            print("Topic: %d\t%s" % (i, " ".join(topic_words)))

    def printDoc_Topic(self):
        for i in range(len(self.ppCountMatrix)):
            print("Doc %d:((top topic:%s) topic distribution:%s)" % (i, self.model.doc_topic_[i].argmax(), self.model.doc_topic_[i]))

    def printVocabulary(self):
        print("vocabulary:")
        print(" ".join(self.vocab))

    def saveVocabulary(self, fn):
        f = codecs.open(fn, 'w', 'utf-8')
        for word in self.vocab:
            f.write("%s\n" % word)
        f.close()

    def saveTopic_Words(self, fn, n_top_word=-1):
        # n_top_word == -1 means: write the full word ranking for each topic
        if n_top_word == -1:
            n_top_word = len(self.vocab)
        f = codecs.open(fn, 'w', 'utf-8')
        for i, topic_dist in enumerate(self.model.topic_word_):
            topic_words = np.array(self.vocab)[np.argsort(topic_dist)][:-(n_top_word + 1):-1]
            f.write("Topic:%d\t" % i)
            for word in topic_words:
                f.write("%s " % word)
            f.write("\n")
        f.close()

    def saveDoc_Topic(self, fn):
        f = codecs.open(fn, 'w', 'utf-8')
        for i in range(len(self.ppCountMatrix)):
            f.write("Doc %d:((top topic:%s) topic distribution:%s)\n" % (i, self.model.doc_topic_[i].argmax(), self.model.doc_topic_[i]))
        f.close()
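One performance note on loadCorpusFromFile: it resolves each token with self.vocab.index(word), a linear scan over the vocabulary, so building the count matrix costs O(tokens × vocabulary size). For larger corpora the usual fix is a word-to-column dict; a sketch of the same counting step with that change (build_count_matrix is a hypothetical helper, not part of the class above):

import numpy as np

def build_count_matrix(docs, vocab):
    """docs: list of token lists; vocab: list of distinct words.
    Returns an (n_docs, n_vocab) integer count matrix."""
    # map each word to its column once, so each lookup is O(1)
    word2id = {w: j for j, w in enumerate(vocab)}
    X = np.zeros((len(docs), len(vocab)), dtype=int)
    for i, tokens in enumerate(docs):
        for t in tokens:
            j = word2id.get(t)
            if j is not None:
                X[i, j] += 1
    return X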

A demo of the implementation:

For example, take scraped BBC news coverage of Trump's election victory as the corpus and run the following:

if __name__ == "__main__":
    _lda = LDA_v20161130(topics=20)
    stop = [u'!', u'@', u'#', u',', u'.', u'/', u';', u' ', u'[', u']', u'$', u'%', u'^', u'&', u'*', u'(', u')',
            u'"', u':', u'<', u'>', u'?', u'{', u'}', u'=', u'+', u'_', u'-', u"'"]
    _lda.setStopWords(stop)
    _lda.loadCorpusFromFile(u'C:/Users/Administrator/Desktop/BBC.txt')
    _lda.fitModel(n_iter=1500)
    _lda.printTopic_Word(n_top_word=10)
    _lda.printDoc_Topic()
    _lda.saveVocabulary(u'C:/Users/Administrator/Desktop/vocab.txt')
    _lda.saveTopic_Words(u'C:/Users/Administrator/Desktop/topic_word.txt')
    _lda.saveDoc_Topic(u'C:/Users/Administrator/Desktop/doc_topic.txt')

Because the corpus is entirely in English, the stop_words here are all English punctuation marks; the number of topics is set to 20 and the sampler runs for 1500 iterations. The run covers 148 documents with a 1347-word vocabulary and 4174 tokens in total, and takes about 17 s on an i3 machine.
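To check whether 1500 iterations are actually enough, the lda package reports the model log likelihood through Python's logging module while it samples (every 10 iterations by default, if memory serves), and the fitted model should expose a loglikelihood() method. A minimal sketch, assuming the _lda wrapper from above has already been constructed:

import logging

# lda logs its progress at INFO level; enable it to watch the
# log-likelihood trace rise and flatten out during fitting
logging.basicConfig(level=logging.INFO)

_lda.fitModel(n_iter=1500)
print(_lda.model.loglikelihood())  # final complete log likelihood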
Part of the topic_word_ output:

Topic: 0 to will and of he be trumps the what policy
Topic: 1 he would in said not no with mr this but
Topic: 2 for or can some whether have change health obamacare insurance
Topic: 3 the to that president as of us also first all
Topic: 4 trump to when with now were republican mr office presidential
Topic: 5 the his trump from uk who president to american house
Topic: 6 a to that was it by issue vote while marriage
Topic: 7 the to of an are they which by could from
Topic: 8 of the states one votes planned won two new clinton
Topic: 9 in us a use for obama law entry new interview
Topic: 10 and on immigration has that there website vetting action given

Part of the doc_topic_ output:

Doc 0:((top topic:4) topic distribution:[ 0.02972973 0.0027027 0.0027027 0.16486486 0.32702703 0.19189189
0.0027027 0.0027027 0.02972973 0.0027027 0.02972973 0.0027027
0.0027027 0.0027027 0.02972973 0.0027027 0.02972973 0.0027027
0.13783784 0.0027027 ])
Doc 1:((top topic:18) topic distribution:[ 0.21 0.01 0.01 0.01 0.01 0.01 0.01 0.01 0.11 0.01 0.01 0.01
0.01 0.01 0.01 0.01 0.01 0.01 0.31 0.21])
Doc 2:((top topic:18) topic distribution:[ 0.02075472 0.00188679 0.03962264 0.00188679 0.00188679 0.00188679
0.00188679 0.15283019 0.00188679 0.02075472 0.00188679 0.24716981
0.00188679 0.07735849 0.00188679 0.00188679 0.00188679 0.00188679
0.41698113 0.00188679])

Of course, with an English corpus most function words and other common, low-content words (it, this, there, that, ...) also need to be excluded; in practice both the stop list and the model parameters have to be chosen sensibly.
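One option is to extend the punctuation list with a ready-made English stop-word list before loading the corpus; the sketch below borrows scikit-learn's built-in ENGLISH_STOP_WORDS, but any word list would do:

from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

punctuation = [u'!', u'@', u'#', u',', u'.', u'/', u';', u':', u'?', u' ']
# ENGLISH_STOP_WORDS is a frozenset covering it, this, there, that, ...
stop = punctuation + list(ENGLISH_STOP_WORDS)
_lda.setStopWords(stop)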

To try a Chinese corpus, I used Xi Jinping's condolence article on Fidel Castro's death and a news report on Park Geun-hye's resignation, one document each.

Topic: 0 的 同志 和 人民 卡斯特罗 菲德尔 古巴 他 了 我
Topic: 1 在 朴槿惠 向 表示 总统 对 将 的 月 国民
Doc 0:((top topic:0) topic distribution:[ 0.91714123 0.08285877])
Doc 1:((top topic:1) topic distribution:[ 0.09200666 0.90799334])

Function words such as "的", "和", "了" and "对" still leak through, but on the whole the topic distributions of the two news items separate cleanly, and the result is decent.
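The same fix works for Chinese: load a stop-word list from a UTF-8 file (one word per line; the file names below are hypothetical) and set it before reading the corpus:

import codecs

# stopwords_zh.txt: hypothetical UTF-8 file with one stop word per line,
# listing function words such as 的, 和, 了, 对, 在, ...
with codecs.open('stopwords_zh.txt', 'r', 'utf-8') as f:
    zh_stop = [line.strip() for line in f if line.strip()]

_lda2 = LDA_v20161130(topics=2)
_lda2.setStopWords(zh_stop)
_lda2.loadCorpusFromFile(u'news_zh.txt')  # hypothetical two-document corpus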

Summary

That is all for this article on implementing LDA in Python; I hope it is useful. Feel free to leave a comment with any questions, and discussion is welcome. Thanks for supporting this site!

