
Python 3 crawling: fetching a web page with urllib and a cookie

2020-01-04 13:40:28
Source: reprint, contributed by a reader

The full example is shown below:

import urllib.request
import urllib.parse

url = 'https://weibo.cn/5273088553/info'

# Normal (anonymous) access:
# headers = {
#     'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36'
# }

# Access with the session cookie attached. Note: the request line
# ("GET https://... HTTP/1.1") is NOT a header and must not appear
# in this dict, and header values must not carry leading spaces.
headers = {
    'Host': 'weibo.cn',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.84 Safari/537.36',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    # 'Referer': 'https://weibo.cn/',
    'Accept-Language': 'zh-CN,zh;q=0.9',
    'Cookie': '_T_WM=c1913301844388de10cba9d0bb7bbf1e; SUB=_2A253Wy_dDeRhGeNM7FER-CbJzj-IHXVUp7GVrDV6PUJbkdANLXPdkW1NSesPJZ6v1GA5MyW2HEUb9ytQW3NYy19U; SUHB=0bt8SpepeGz439; SCF=Aua-HpSw5-z78-02NmUv8CTwXZCMN4XJ91qYSHkDXH4W9W0fCBpEI6Hy5E6vObeDqTXtfqobcD2D32r0O_5jSRk.; SSOLoginState=1516199821',
}

request = urllib.request.Request(url=url, headers=headers)
response = urllib.request.urlopen(request)

# Print the whole page:
# print(response.read().decode('gbk'))

# Write the content to a file
with open('weibo.html', 'wb') as fp:
    fp.write(response.read())
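Pasting a Cookie header by hand works, but it goes stale when the session expires. As an alternative, the standard library's http.cookiejar module can capture Set-Cookie headers from responses and replay them on later requests automatically. A minimal sketch (the opener replaces urllib.request.urlopen; the target URL from the article is reused for illustration):

```python
import urllib.request
import http.cookiejar

# A CookieJar stores cookies received in responses; the
# HTTPCookieProcessor handler re-sends them on subsequent requests
# made through the same opener.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Use the opener in place of urllib.request.urlopen, e.g.:
# response = opener.open('https://weibo.cn/5273088553/info')
# After a request, cookies set by the server are held in `jar`
# and attached to every later opener.open(...) call.
```

With this approach the Cookie header never has to be copied out of the browser; logging in once through the opener populates the jar for the rest of the session.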

That is the whole of this article on fetching a web page with urllib and a cookie in Python 3. I hope it serves as a useful reference, and I hope you will continue to support VEVB武林网.

