
Getting Started with Python 3 Crawlers: Scraping the Douban Top 250 Movie Titles

2019-11-06 07:02:03
Source: reposted, contributed by a reader


Tools

- Python 3.5
- requests
- BeautifulSoup
- lxml

Final result

[screenshot: the list of scraped movie titles]

First, look at the structure of the page. [screenshot: the page's HTML source] You can see clearly that each movie sits in its own <li> tag, so we only need to search downward step by step from the <ol> to reach each movie's title, which lives in the line <span class="title">肖申克的救赎</span>.

Next, look at the markup behind the "后页" (next page) button. [screenshot: the next-page link's markup] The next page's URL is https://movie.douban.com/top250?start=25&filter=. On the last page this tag is absent, so the lookup returns None, and that is how we know when to stop paging.

Fetching the HTML

The requests module makes this very easy:

```python
import requests

# Fetch the target page's HTML
def download_page(url):
    # Pretend to be a browser
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
    }
    data = requests.get(url, headers=headers).content
    return data
```

Parsing the HTML

Once we have the HTML source, we need to parse it; here we use the BeautifulSoup module.

```python
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Parse the HTML, method 1 (adapted from another blogger's code)
def parse_html(html):
    # Build the BeautifulSoup object
    soup = BeautifulSoup(html, 'lxml')
    movie_name_list = []
    # Start from the outermost <ol>
    movie_list_soup = soup.find('ol', attrs={'class': 'grid_view'})
    # Walk over each <li> in the list
    for movie_li in movie_list_soup.find_all('li'):
        detail = movie_li.find('div', attrs={'class': 'hd'})
        # Use getText() to pull out the tag's text content
        movie_name = detail.find('span', attrs={'class': 'title'}).getText()
        movie_name_list.append(movie_name)
    next_page = soup.find('span', attrs={'class': 'next'}).find('a')
    if next_page:
        return movie_name_list, URL + next_page['href']
    return movie_name_list, None
```

Method 2 uses CSS selectors via select(), one of BeautifulSoup's more convenient features:

```python
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Parse the HTML, method 2: CSS selectors via select()
def parse_html1(html):
    soup = BeautifulSoup(html, 'lxml')
    movie_names = []
    movie_list = soup.select('ol.grid_view li div.item div.info div.hd a')
    for movie_title in movie_list:
        movie_name = movie_title.find('span', class_='title')
        movie_names.append(movie_name.getText())
    next_page = soup.find('span', class_='next').find('a')
    if next_page:
        return movie_names, URL + next_page['href']
    return movie_names, None
```

Putting it all together, and writing the collected names to a file:

```python
import codecs

import requests
from bs4 import BeautifulSoup

URL = 'https://movie.douban.com/top250'

# Fetch the target page's HTML
def download_page(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'
    }
    data = requests.get(url, headers=headers).content
    return data

# Parse the HTML
def parse_html1(html):
    soup = BeautifulSoup(html, 'lxml')
    movie_names = []
    movie_list = soup.select('ol.grid_view li div.item div.info div.hd a')
    for movie_title in movie_list:
        movie_name = movie_title.find('span', class_='title')
        movie_names.append(movie_name.getText())
    next_page = soup.find('span', class_='next').find('a')
    if next_page:
        return movie_names, URL + next_page['href']
    return movie_names, None

def main():
    url = URL
    with codecs.open('e:/movies.txt', 'w', encoding='utf-8') as fp:
        while url:
            html = download_page(url)
            movies, url = parse_html1(html)
            for movie_name in movies:
                fp.write(movie_name)
                fp.write('\r\n')

if __name__ == '__main__':
    main()
```
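Since the next-page links follow the start=25&filter= pattern shown above, the ten page URLs can also be generated up front instead of following the "后页" link each time. A minimal sketch, assuming start=0 returns the same first page as the bare URL (the helper name page_urls is illustrative, not from the original post):

```python
BASE = 'https://movie.douban.com/top250'

def page_urls(total=250, per_page=25):
    # Yield one URL per 25-movie page: start=0, 25, ..., 225
    for start in range(0, total, per_page):
        yield '{}?start={}&filter='.format(BASE, start)

# usage: feed each URL to download_page() / parse_html1() in turn
urls = list(page_urls())
```

This trades the stop-when-None logic for a fixed page count, which is fine here because the Top 250 list size is known.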
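On a side note, the final program reaches for codecs.open, but in Python 3 the built-in open() accepts an encoding argument directly, so the codecs module is no longer needed for this. A small sketch (the helper name write_names and the temp-file path are illustrative, not from the original post):

```python
import os
import tempfile

def write_names(names, path):
    # Python 3's built-in open() handles text encoding itself
    with open(path, 'w', encoding='utf-8') as fp:
        for name in names:
            fp.write(name + '\n')

# usage: write a couple of titles to a temp file
path = os.path.join(tempfile.gettempdir(), 'movies.txt')
write_names(['肖申克的救赎', '霸王别姬'], path)
```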