This article presents a Python crawler that downloads the images from every floor (reply) of a Baidu Tieba post. It is shared here for reference; the details are as follows:
The script downloads all images from a Baidu Tieba post.
Python 2.7 version:
#coding=utf-8
import re
import requests
import urllib
from bs4 import BeautifulSoup
import time

time1 = time.time()

def getHtml(url):
    # Fetch a page and return its HTML text
    page = requests.get(url)
    html = page.text
    return html

def getImg(html):
    # Find every post image (class BDE_Image) and save it to disk
    soup = BeautifulSoup(html, 'html.parser')
    img_info = soup.find_all('img', class_='BDE_Image')
    global index
    for index, img in enumerate(img_info, index + 1):
        print("Downloading image {}".format(index))
        urllib.urlretrieve(img.get("src"), 'C:/pic4/%s.jpg' % index)

def getMaxPage(url):
    # Read the post's page count from the max-page attribute
    html = getHtml(url)
    reg = re.compile(r'max-page="(\d+)"')  # note: \d+, not /d+
    page = re.findall(reg, html)
    page = int(page[0])
    return page

if __name__ == '__main__':
    url = "https://tieba.baidu.com/p/5113603072"
    page = getMaxPage(url)
    index = 0
    for i in range(1, page + 1):  # page + 1 so the last page is included
        url = "%s%s" % ("https://tieba.baidu.com/p/5113603072?pn=", str(i))
        html = getHtml(url)
        getImg(html)
    print("OK! All downloaded!")
    time2 = time.time()
    print u'Total time: ' + str(time2 - time1) + 's'
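The parsing steps above (reading the page count from the max-page attribute, then collecting BDE_Image image URLs) can be tried offline. The following is a minimal Python 3 sketch of that logic; the HTML snippet is a made-up sample, not real Tieba markup, and the image lookup uses a simple regex stand-in for BeautifulSoup:

```python
import re

# Made-up sample HTML imitating the attributes the crawler relies on
SAMPLE_HTML = '''
<div max-page="3"></div>
<img class="BDE_Image" src="https://example.com/a.jpg">
<img class="BDE_Image" src="https://example.com/b.jpg">
<img class="other" src="https://example.com/skip.jpg">
'''

def get_max_page(html):
    # \d+ captures the digits inside max-page="..."; default to 1 page
    match = re.search(r'max-page="(\d+)"', html)
    return int(match.group(1)) if match else 1

def get_image_urls(html):
    # Regex stand-in for soup.find_all('img', class_='BDE_Image')
    return re.findall(r'<img class="BDE_Image" src="([^"]+)"', html)

print(get_max_page(SAMPLE_HTML))   # 3
print(get_image_urls(SAMPLE_HTML))
```

Only the two BDE_Image tags are collected; the third img is skipped, matching what the BeautifulSoup class filter does in the full script.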
Hopefully this article is helpful to readers working on Python programming.