Breaking Down the Targeted Crawler for WeChat Official Account Articles
To crawl articles from a particular WeChat Official Account, we first need the URLs of those articles. A short Python script can collect these links for us. The script is shown below so you can see how the article links of an Official Account are obtained.
# -*- coding: UTF-8 -*-
import requests
import time
import pandas as pd
import math
import random
# Pool of User-Agent strings to rotate between requests
user_agent_list = [
    'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11',
    'Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)',
    'Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1',
    "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.75 Mobile Safari/537.36",
]
# Target URL: the article-list interface of the Official Account platform
url = "https://mp.weixin.qq.com/cgi-bin/appmsg"
# Use the Cookie to skip the login step
cookie = "replace this with the Cookie value you copied"
data = {
    "token": "replace this with your token",
    "lang": "zh_CN",
    "f": "json",
    "ajax": "1",
    "action": "list_ex",
    "begin": "0",
    "count": "5",
    "query": "",
    "fakeid": "replace this with the target account's fakeid",
    "type": "9",
}
headers = {
    "Cookie": cookie,
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.75 Mobile Safari/537.36",
}
# First request: only used to read how many articles the account has published
content_json = requests.get(url, headers=headers, params=data).json()
# Total number of articles
count = int(content_json["app_msg_cnt"])
print(count)
# 5 articles per page, so compute the number of pages
page = int(math.ceil(count / 5))
print(page)
content_list = []
# Collect the title, link, and publish time of every article, page by page
# The first argument of range sets the page to start collecting from (0 = the first page)
for i in range(0, page):
    data["begin"] = i * 5
    # Rotate the User-Agent for each request
    user_agent = random.choice(user_agent_list)
    headers = {
        "Cookie": cookie,
        "User-Agent": user_agent,
    }
    # Submit the request with GET; the response is a JSON object with one page of data
    content_json = requests.get(url, headers=headers, params=data).json()
    for item in content_json["app_msg_list"]:
        # Extract the title, URL, and publish time of each article on this page
        items = []
        items.append(item["title"])
        items.append(item["link"])
        t = time.localtime(item["create_time"])
        items.append(time.strftime("%Y-%m-%d %H:%M:%S", t))
        content_list.append(items)
    print(i)
    if (i > 0) and (i % 10 == 0):
        # Every 10 pages, append the collected rows to url.csv and take a longer pause
        name = ['title', 'link', 'create_time']
        test = pd.DataFrame(columns=name, data=content_list)
        test.to_csv("url.csv", mode='a', encoding='utf-8')
        print("Batch " + str(i) + " saved successfully")
        content_list = []
        time.sleep(random.randint(60, 90))
    else:
        time.sleep(random.randint(15, 25))
# Save whatever is still in memory after the loop finishes
name = ['title', 'link', 'create_time']
test = pd.DataFrame(columns=name, data=content_list)
test.to_csv("url.csv", mode='a', encoding='utf-8')
print("Final save completed successfully")
In practice, for the crawler to fetch article links from a specific WeChat Official Account, you must prepare three parameters: the cookie of your own Official Account, its token, and the fakeid of the target account. All three are required. In the script above they correspond to the three placeholder strings in the cookie variable and the data dictionary.
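If you prefer not to hard-code these values in the source file, one option is to read them from environment variables. The snippet below is only a minimal sketch under that assumption; the variable names WECHAT_COOKIE, WECHAT_TOKEN, and WECHAT_FAKEID are made up for this example and are not part of the original script. It would go right after the data and headers definitions:

import os

# Hypothetical environment variable names; adapt them to your own setup
cookie = os.environ["WECHAT_COOKIE"]
data["token"] = os.environ["WECHAT_TOKEN"]
data["fakeid"] = os.environ["WECHAT_FAKEID"]
headers["Cookie"] = cookie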
5. Obtaining the Three Key Parameters of the Targeted Crawler
To obtain the three parameters mentioned above, you first need a WeChat Official Account of your own; if you do not have one, register one. After logging in to the Official Account platform at mp.weixin.qq.com, the token can be read from the URL of the admin page, and the cookie can be copied from the request headers shown in the browser's developer tools. The fakeid of the target account appears in the parameters of the appmsg request that is issued when you search for that account in the article editor's insert-hyperlink dialog.
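Before launching the full crawl, it may be worth verifying that the three values actually work. Below is a minimal sanity-check sketch; it simply reuses the url, headers, and data variables defined in the script above, requests the first page, and prints the reported article count:

# Quick sanity check: one request with begin=0, then inspect the response
test_json = requests.get(url, headers=headers, params=data).json()
if "app_msg_cnt" in test_json:
    print("Parameters look valid; total articles:", test_json["app_msg_cnt"])
else:
    # An invalid cookie/token/fakeid usually returns an error structure instead
    print("Check cookie/token/fakeid; response was:", test_json)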
Running the Python Crawler Program
Once the three parameters of the crawler are filled in, you can run the program. Before doing so, make sure a Python environment is installed on your computer. With Python in place, you also need to install the dependencies the script relies on. The installation command is as follows:
pip install requests pandas
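Save the script (here assumed, purely for illustration, to be named wechat_crawler.py) and start it with python wechat_crawler.py; the collected links are appended to url.csv. Since the script writes in batches with mode='a', the header row may appear several times in the file. The following sketch, under those assumptions, reads the file back and filters out the repeated headers to check the result:

import pandas as pd

# url.csv is written in appended batches, so the header row can repeat; drop those rows
df = pd.read_csv("url.csv", header=None, names=['index', 'title', 'link', 'create_time'])
df = df[df['title'] != 'title']
print(len(df), "article links collected")
print(df[['title', 'link']].head())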