
How to save scraped data to MongoDB with a custom Scrapy pipeline class

2019-11-25 17:43:22

This article walks through a working example of a custom Scrapy pipeline class that saves scraped items to MongoDB. It is shared here for reference; the details follow.

# Standard Python library imports
# 3rd party modules
import pymongo
from scrapy.exceptions import DropItem


class MongoDBPipeline(object):
    def __init__(self, server, port, db, col):
        self.server = server
        self.port = port
        self.db = db
        self.col = col
        # MongoClient replaces pymongo.Connection, which was removed in PyMongo 3.0
        client = pymongo.MongoClient(self.server, self.port)
        self.collection = client[self.db][self.col]

    @classmethod
    def from_crawler(cls, crawler):
        # Read the connection parameters from the project settings;
        # the old scrapy.conf.settings import no longer exists in modern Scrapy
        s = crawler.settings
        return cls(s['MONGODB_SERVER'], s['MONGODB_PORT'],
                   s['MONGODB_DB'], s['MONGODB_COLLECTION'])

    def process_item(self, item, spider):
        # Refuse to store incomplete records: collect every empty field
        # into one message and drop the item if anything is missing
        err_msg = ''
        for field, data in item.items():
            if not data:
                err_msg += 'Missing %s of poem from %s\n' % (field, item['url'])
        if err_msg:
            raise DropItem(err_msg)
        # insert_one replaces the deprecated Collection.insert method
        self.collection.insert_one(dict(item))
        spider.logger.debug('Item written to MongoDB database %s/%s',
                            self.db, self.col)
        return item
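For the pipeline to run, the four MONGODB_* keys must exist in the project's settings.py and the pipeline must be registered in ITEM_PIPELINES. The sketch below shows one way to do this; the host, port, database and collection values are placeholders, and the module path myproject.pipelines is an assumption, not part of the original example, so adjust both to your own project.

# settings.py (minimal sketch; all values and the "myproject" module
# path are placeholder assumptions for illustration)
MONGODB_SERVER = 'localhost'
MONGODB_PORT = 27017
MONGODB_DB = 'scrapy_db'
MONGODB_COLLECTION = 'items'

# Register the pipeline so Scrapy actually runs it; the number (0-1000)
# controls the order relative to other pipelines
ITEM_PIPELINES = {
    'myproject.pipelines.MongoDBPipeline': 300,
}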

Hopefully this article is of some help to readers working on Python programming.
