
Implementing asynchronous crawling in Scrapy with PhantomJS

2020-01-04 13:47:32

Selenium makes it very convenient to fetch a page's Ajax-rendered content, and it can simulate user actions such as clicking and typing text. This is extremely useful when crawling pages with Scrapy.

There are many articles online about integrating Selenium into Scrapy, but very few of them keep the crawl asynchronous. The code below rewrites Scrapy's downloader, integrating Selenium while preserving asynchronous downloads.
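The core idea, reusing a bounded pool of browser instances guarded by a semaphore, can be sketched with the standard library alone. This is a minimal illustration of the pattern, not the handler itself; `FakeDriver` is a hypothetical stand-in for a PhantomJS driver:

```python
import queue
import threading

class FakeDriver(object):
    """Hypothetical stand-in for webdriver.PhantomJS; counts instantiations."""
    created = 0
    def __init__(self):
        FakeDriver.created += 1
    def close(self):
        pass

MAX_RUN = 2
pool = queue.LifoQueue(MAX_RUN)      # idle drivers, most recently used on top
sem = threading.Semaphore(MAX_RUN)   # caps concurrent use, like DeferredSemaphore

def fetch(url):
    with sem:
        try:
            driver = pool.get_nowait()   # reuse an idle driver if one exists
        except queue.Empty:
            driver = FakeDriver()        # otherwise spin up a new one
        # ... driver.get(url), extract the body here ...
        pool.put(driver)                 # return the driver to the pool for reuse

for u in ["a", "b", "c"]:
    fetch(u)
print(FakeDriver.created)  # 1: sequential calls keep reusing the same driver
```

In the real handler the semaphore is Twisted's `defer.DeferredSemaphore`, so waiting callers are parked as Deferreds instead of blocked threads.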

To use it, add PhantomJSDownloadHandler to the DOWNLOAD_HANDLERS setting in your project's configuration file.
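For reference, the registration in settings.py might look like the following. The module path `myproject.handlers` is an assumption; point it at wherever the class actually lives:

```python
# settings.py -- the module path 'myproject.handlers' is hypothetical
DOWNLOAD_HANDLERS = {
    'http': 'myproject.handlers.PhantomJSDownloadHandler',
    'https': 'myproject.handlers.PhantomJSDownloadHandler',
}
PHANTOMJS_OPTIONS = {}   # keyword arguments passed through to webdriver.PhantomJS
PHANTOMJS_MAXRUN = 10    # maximum number of concurrent PhantomJS instances
```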

# encoding: utf-8
from __future__ import unicode_literals

from scrapy import signals
from scrapy.signalmanager import SignalManager
from scrapy.responsetypes import responsetypes
from scrapy.xlib.pydispatch import dispatcher
from selenium import webdriver
from six.moves import queue
from twisted.internet import defer, threads
from twisted.python.failure import Failure


class PhantomJSDownloadHandler(object):

    def __init__(self, settings):
        self.options = settings.get('PHANTOMJS_OPTIONS', {})

        max_run = settings.get('PHANTOMJS_MAXRUN', 10)
        self.sem = defer.DeferredSemaphore(max_run)
        self.queue = queue.LifoQueue(max_run)

        SignalManager(dispatcher.Any).connect(self._close, signal=signals.spider_closed)

    def download_request(self, request, spider):
        """use semaphore to guard a phantomjs pool"""
        return self.sem.run(self._wait_request, request, spider)

    def _wait_request(self, request, spider):
        try:
            driver = self.queue.get_nowait()
        except queue.Empty:
            driver = webdriver.PhantomJS(**self.options)

        driver.get(request.url)
        # ghostdriver won't respond to a window switch until the page is loaded
        dfd = threads.deferToThread(
            lambda: driver.switch_to.window(driver.current_window_handle))
        dfd.addCallback(self._response, driver, spider)
        return dfd

    def _response(self, _, driver, spider):
        body = driver.execute_script("return document.documentElement.innerHTML")
        if body.startswith("<head></head>"):  # cannot access response header in Selenium
            body = driver.execute_script("return document.documentElement.textContent")
        url = driver.current_url
        respcls = responsetypes.from_args(url=url, body=body[:100].encode('utf8'))
        resp = respcls(url=url, body=body, encoding="utf-8")

        response_failed = getattr(spider, "response_failed", None)
        if response_failed and callable(response_failed) and response_failed(resp, driver):
            driver.close()
            return defer.fail(Failure())
        else:
            self.queue.put(driver)
            return defer.succeed(resp)

    def _close(self):
        while not self.queue.empty():
            driver = self.queue.get_nowait()
            driver.close()
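Note that `_response` probes the spider for an optional `response_failed(resp, driver)` hook: if the spider defines one and it returns true, the driver is discarded instead of being returned to the pool. The check can be illustrated in isolation; `FakeSpider` and `should_discard` are hypothetical names used only for this sketch:

```python
class FakeSpider(object):
    """Hypothetical spider defining the optional response_failed hook."""
    def response_failed(self, resp, driver):
        # Return True when the rendered page should be discarded,
        # e.g. an empty body or a login wall.
        return not resp

def should_discard(spider, resp, driver):
    # Mirrors the getattr/callable check inside _response above.
    hook = getattr(spider, "response_failed", None)
    return bool(hook and callable(hook) and hook(resp, driver))

print(should_discard(FakeSpider(), "", None))   # True: empty body -> discard driver
print(should_discard(object(), "", None))       # False: no hook, driver is reused
```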

That is all of this article on implementing asynchronous crawling in Scrapy with PhantomJS. We hope it serves as a useful reference, and we hope you will continue to support VEVB (武林网).

