Python Web Scraping, Chapter 1: The urllib Library (Part 2). Sending POST Requests with urllib
1. Building a Request Object
First, take a look at the default request headers that Python's urllib sends:
```python
import urllib.request

url = r"http://www.baidu.com"
response = urllib.request.urlopen(url)
print(response.read())
```
Run the code above and capture the traffic with Charles:
| GET / HTTP/1.1 | |
| --- | --- |
| Accept-Encoding | identity |
| Host | www.baidu.com |
| User-Agent | Python-urllib/3.6 |
| Connection | close |
`User-Agent: Python-urllib/3.6` is no disguise at all: it tells the server outright that the client is a Python program, and whether the server will bother to respond with data is anyone's guess.

So what can be done? Disguise the User-Agent (UA) so the server believes a browser is making the request.
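As a side note, when a server does refuse a request, urllib raises an exception rather than returning data. A minimal sketch of catching both failure modes (the `fetch` helper name is made up for illustration):

```python
import urllib.request
import urllib.error

def fetch(url):
    try:
        return urllib.request.urlopen(url, timeout=10).read()
    except urllib.error.HTTPError as e:
        # The server answered but refused, e.g. 403 for an undisguised UA
        print("HTTP error:", e.code, e.reason)
    except urllib.error.URLError as e:
        # Network-level failure: DNS, refused connection, SSL, unknown scheme, ...
        print("URL error:", e.reason)
```

Note that `HTTPError` is a subclass of `URLError`, so it must be caught first.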
Build the request object with `urllib.request.Request` (this is a class):
```python
import urllib.request
import ssl

def crawle(url):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.4.3.4000 Chrome/30.0.1599.101 Safari/537.36"
    }
    request = urllib.request.Request(url=url, headers=headers)
    # Create an unverified SSL context.
    # This argument is required here; without it, crawling the https page fails with:
    # urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)>
    context = ssl._create_unverified_context()
    response = urllib.request.urlopen(request, context=context)
    print(response.read().decode())

url = r"https://www.jd.com/"
crawle(url)
```
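Before firing a real request, it can help to verify which headers a `Request` object will carry. A small local check, using only the standard library and no network call (the UA string here is a placeholder):

```python
import urllib.request

headers = {"User-Agent": "Mozilla/5.0 (test)"}
req = urllib.request.Request("https://www.jd.com/", headers=headers)

# urllib stores header names with only the first letter capitalized
print(req.get_header("User-agent"))  # Mozilla/5.0 (test)
print(req.host)                      # www.jd.com
print(req.get_method())              # GET, because no data= was attached
```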
Capture the traffic again:
| GET / HTTP/1.1 | |
| --- | --- |
| Accept-Encoding | identity |
| Host | www.jd.com |
| User-Agent | Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Maxthon/4.4.3.4000 Chrome/30.0.1599.101 Safari/537.36 |
| Connection | close |
2. POST Request Example: Looking Up English Words via Baidu Translate's sug Endpoint
How do you find this endpoint? Open https://fanyi.baidu.com/ in a browser, press F12 to open the developer tools, and type a word such as cat into the input box on the page; the request it sends then shows up in the Network tab of the developer tools.
```python
import urllib.request
import urllib.parse

# URL of Baidu Translate's POST endpoint
post_url = r"https://fanyi.baidu.com/sug"
word = input("Enter the English word to look up: ")
form_data = {
    "kw": word
}
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36"
}
# URL-encode the POST parameters.
# The data must be bytes, otherwise:
# TypeError: POST data should be bytes, an iterable of bytes, or a file object. It cannot be of type str.
form_data = urllib.parse.urlencode(form_data).encode()
# Build the Request object from the URL and headers
request = urllib.request.Request(url=post_url, headers=headers)
# Passing data= makes urlopen send a POST request
response = urllib.request.urlopen(request, data=form_data)
print(response.read().decode())
```
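The `urlencode` plus `encode` step can be seen in isolation. A quick sketch (the extra `from` key is purely illustrative, not a parameter of the sug endpoint):

```python
import urllib.parse

form = {"kw": "cat", "from": "en"}
encoded = urllib.parse.urlencode(form)
print(encoded)           # kw=cat&from=en
print(encoded.encode())  # b'kw=cat&from=en', the bytes that urlopen's data= expects
```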
The response comes back with Unicode escape sequences:

```json
{"errno":0,"data":[{"k":"cat","v":"n. \u732b; \u732b\u79d1\u52a8\u7269;"},{"k":"catch","v":"v. \u63a5\u4f4f; \u622a\u4f4f; \u62e6\u4f4f; \u63a5(\u843d\u4e0b\u7684\u6db2\u4f53); \u6293\u4f4f; \u63e1\u4f4f; n. \u63a5(\u7403\u7b49); \u603b\u6355\u83b7\u91cf; \u6263"},{"k":"category","v":"n. (\u4eba\u6216\u4e8b\u7269\u7684)\u7c7b\u522b\uff0c\u79cd\u7c7b;"},{"k":"cattle","v":"n. \u725b;"},{"k":"categories","v":"n. (\u4eba\u6216\u4e8b\u7269\u7684)\u7c7b\u522b\uff0c\u79cd\u7c7b; category\u7684\u590d\u6570;"}]}
```
After converting the Unicode escapes back to characters (online JSON tools such as http://www.bejson.com/ can do this):
```json
{
    "errno": 0,
    "data": [{
        "k": "cat",
        "v": "n. 猫; 猫科动物;"
    }, {
        "k": "catch",
        "v": "v. 接住; 截住; 拦住; 接(落下的液体); 抓住; 握住; n. 接(球等); 总捕获量; 扣"
    }, {
        "k": "category",
        "v": "n. (人或事物的)类别,种类;"
    }, {
        "k": "cattle",
        "v": "n. 牛;"
    }, {
        "k": "categories",
        "v": "n. (人或事物的)类别,种类; category的复数;"
    }]
}
```
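Rather than an online converter, Python's own `json` module decodes the `\uXXXX` escapes automatically when it parses the response. A minimal sketch using the first entry from the output above as sample data:

```python
import json

# A shortened copy of the sug response; json.loads turns \uXXXX into characters
raw = '{"errno":0,"data":[{"k":"cat","v":"n. \\u732b; \\u732b\\u79d1\\u52a8\\u7269;"}]}'
parsed = json.loads(raw)
print(parsed["data"][0]["k"])  # cat
print(parsed["data"][0]["v"])  # n. 猫; 猫科动物;
```

In the actual script you would call `json.loads(response.read().decode())` instead of printing the raw body.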