
Python Scraping with the BeautifulSoup Package, Example (3)

2020-02-15 21:53:51

Step by step, we build a scraper that grabs jokes from Qiushibaike (qiushibaike.com).

For now, we parse the page without the BeautifulSoup package.

Step 1: request the URL and fetch the page source

# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:  2016-12-22 16:16:08
# @Last Modified by:  HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib2

if __name__ == '__main__':
  # Request the URL and fetch the page source
  url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
  user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
  headers = {'User-Agent': user_agent}
  try:
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read()
  except urllib2.HTTPError as e:
    print e
    exit()
  except urllib2.URLError as e:
    print e
    exit()
  print content.decode('utf-8')

Step 2: extract the content with a regular expression

First, inspect the page source to find where the content you want lives and what markup identifies it, then write a regular expression that matches and captures it.
Note that by default the . in a regular expression does not match a newline, so the matching mode must be adjusted (the re.S flag).
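The effect of re.S can be seen on a tiny made-up HTML snippet (this check is written for Python 3, while the scraper code in this article targets Python 2):

```python
# Minimal demonstration of the re.S (DOTALL) flag: without it, '.' stops
# at newlines, so a pattern cannot span a multi-line block.
import re

html = '<div class="content">\n<span>first line\nsecond line</span>\n</div>'

# Default mode: '.' does not match '\n', so nothing is found.
print(re.findall('<span>(.*?)</span>', html))         # []

# With re.S, '.' also matches '\n' and the whole block is captured.
print(re.findall('<span>(.*?)</span>', html, re.S))   # ['first line\nsecond line']
```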

# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:  2016-12-22 16:16:08
# @Last Modified by:  HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import re
import urllib2

if __name__ == '__main__':
  # Request the URL and fetch the page source
  url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
  user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
  headers = {'User-Agent': user_agent}
  try:
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read()
  except urllib2.HTTPError as e:
    print e
    exit()
  except urllib2.URLError as e:
    print e
    exit()

  # Extract the data; re.S is needed so that . can match newlines
  regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
  items = re.findall(regex, content)
  for item in items:
    print item
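The same extraction can also be done without regular expressions. As a point of comparison before we reach BeautifulSoup, here is a sketch using the standard library's html.parser (Python 3; the HTML snippet below is a made-up stand-in for the real page):

```python
# Regex-free variant of the step-2 extraction: collect the text of <span>
# tags that sit inside <div class="content"> blocks.
from html.parser import HTMLParser

class ContentExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_content = False   # currently inside a <div class="content">
        self.in_span = False      # currently inside a <span> within that div
        self.items = []

    def handle_starttag(self, tag, attrs):
        if tag == 'div' and ('class', 'content') in attrs:
            self.in_content = True
        elif tag == 'span' and self.in_content:
            self.in_span = True
            self.items.append('')

    def handle_endtag(self, tag):
        if tag == 'div':
            self.in_content = False
        elif tag == 'span':
            self.in_span = False

    def handle_data(self, data):
        if self.in_span:
            self.items[-1] += data

html = ('<div class="content"><span>joke one</span></div>'
        '<div class="other"><span>skip me</span></div>'
        '<div class="content"><span>joke two</span></div>')
parser = ContentExtractor()
parser.feed(html)
print(parser.items)   # ['joke one', 'joke two']
```

Unlike the regex, this approach does not depend on the exact order of tags inside the div, at the cost of more code.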

Step 3: clean up the data and save it to files

# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:  2016-12-22 16:16:08
# @Last Modified by:  HaonanWu
# @Last Modified time: 2016-12-22 21:41:32
import os
import re
import urllib2

if __name__ == '__main__':
  # Request the URL and fetch the page source
  url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
  user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
  headers = {'User-Agent': user_agent}
  try:
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    content = response.read()
  except urllib2.HTTPError as e:
    print e
    exit()
  except urllib2.URLError as e:
    print e
    exit()

  # Extract the data; re.S is needed so that . can match newlines
  regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
  items = re.findall(regex, content)

  path = './qiubai'
  if not os.path.exists(path):
    os.makedirs(path)
  count = 1
  for item in items:
    # Clean up: drop the raw \n characters, then turn <br/> tags into \n
    item = item.replace('\n', '').replace('<br/>', '\n')
    filepath = path + '/' + str(count) + '.txt'
    f = open(filepath, 'w')
    f.write(item)
    f.close()
    count += 1
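The cleanup step is easy to get backwards, so here it is isolated as a small function that can be checked on its own (Python 3; the sample string is made up):

```python
# Normalize a captured snippet the way step 3 does: the raw newlines in the
# page source are noise and are removed first; the <br/> tags mark the real
# line breaks and become \n afterwards. Order matters: swapping the two
# replace() calls would delete the line breaks we just inserted.
def clean(item):
    return item.replace('\n', '').replace('<br/>', '\n')

sample = 'line one<br/>line\ntwo'
print(clean(sample))   # 'line one\nlinetwo' -> printed as two lines
```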