Scraping the Douban Top 250 movies with Scrapy

Creating a project

scrapy startproject top_250
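
This generates the standard Scrapy project skeleton (layout as of Scrapy 1.x; very old versions may lack middlewares.py). The files edited below all live inside the inner top_250/ package:

```
top_250/
├── scrapy.cfg          # deploy configuration
└── top_250/            # project package
    ├── __init__.py
    ├── items.py        # item definitions (edited below)
    ├── middlewares.py
    ├── pipelines.py    # item pipelines (edited below)
    ├── settings.py     # project settings (edited below)
    └── spiders/        # spider modules (top_250.py goes here)
        └── __init__.py
```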

Source

  • in spiders/top_250.py
# -*- coding: utf-8 -*-
import scrapy
from scrapy import Request, Spider
from top_250.items import Top250Item

class MovieSpider(Spider):
    name = 'top_250'
    allowed_domains = ['movie.douban.com'] # domain only; do not include the http/https scheme
    headers = {
        'User-Agent' : 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/53.0.2785.143 Safari/537.36' 
    }
    def start_requests(self):
        url = 'https://movie.douban.com/top250'
        yield Request(url, headers=self.headers)
    

    def parse(self, response):
        for movie in response.xpath("//div[@class='item']"):
            # create a fresh item per movie; reusing a single instance would
            # mutate items that may still be passing through the pipeline
            item = Top250Item()
            item['movie_name'] = movie.xpath(".//a/span[@class='title']/text()").extract_first()
            item['movie_url'] = movie.xpath(".//a/@href").extract_first()
            item['movie_rank'] = movie.xpath(".//div[@class='star']/span[@class='rating_num']/text()").extract_first()
            yield item

        next_page_url = response.xpath("//div[@class='paginator']/span[@class='next']/a/@href").extract_first()
        if next_page_url:
            # the href is a relative query string, so prepend the base URL
            next_page_url = 'https://movie.douban.com/top250' + next_page_url
            yield Request(next_page_url, headers=self.headers)
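
The pagination step relies on the "next" link's href being a relative query string (assumed here to look like `?start=25&filter=`), which is why plain string concatenation with the base URL works. A minimal stdlib sketch, also showing `urljoin` (what Scrapy's `response.urljoin()` uses under the hood) as the more general option:

```python
from urllib.parse import urljoin

base = 'https://movie.douban.com/top250'
href = '?start=25&filter='  # assumed shape of the "next" link's href

# plain concatenation, as the spider does:
print(base + href)           # https://movie.douban.com/top250?start=25&filter=

# urljoin resolves any relative href (query-only, path, or absolute):
print(urljoin(base, href))   # https://movie.douban.com/top250?start=25&filter=
```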
  • in items.py
import scrapy

class Top250Item(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    movie_name = scrapy.Field()
    movie_url = scrapy.Field()
    movie_rank = scrapy.Field()
  • in pipelines.py
# -*- coding: utf-8 -*-
import codecs
import json
import os
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html


class Top250Pipeline(object):
    def __init__(self):
        # open the output file and start the JSON array
        self.file = codecs.open('top_250.json', 'w', encoding='utf-8')
        self.file.write('[')

    def process_item(self, item, spider):
        # one JSON object per line, each followed by a separating comma
        line = json.dumps(dict(item), ensure_ascii=False) + '\n'
        self.file.write(line + ',')
        return item

    def close_spider(self, spider):
        # remove the trailing comma after the last item, then close the array
        self.file.seek(-1, os.SEEK_END)
        self.file.truncate()
        self.file.write(']')
        self.file.close()
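
The comma-then-truncate trick above can be checked with plain file I/O, independent of Scrapy — a minimal sketch using made-up items in place of what the spider yields:

```python
import json
import os
import tempfile

# hypothetical items standing in for what the spider yields
items = [{'movie_name': '肖申克的救赎', 'movie_rank': '9.7'},
         {'movie_name': '霸王别姬', 'movie_rank': '9.6'}]

path = os.path.join(tempfile.mkdtemp(), 'top_250.json')

# what __init__ and process_item do: open the array,
# then write "<json>\n," for every item
with open(path, 'w', encoding='utf-8') as f:
    f.write('[')
    for item in items:
        f.write(json.dumps(item, ensure_ascii=False) + '\n,')

# what close_spider does: chop off the trailing comma, close the array
with open(path, 'rb+') as f:
    f.seek(-1, os.SEEK_END)
    f.truncate()
    f.write(b']')

# the file now parses as a valid JSON array
with open(path, encoding='utf-8') as f:
    assert json.load(f) == items
```

Note that if the spider yields no items at all, the truncate step would eat the opening `[` instead of a comma and leave invalid JSON; the pipeline assumes at least one item is scraped.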
  • in settings.py
...
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
...
...
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'top_250.pipelines.Top250Pipeline': 300,
}
...
FEED_EXPORT_ENCODING = 'utf-8' # export Chinese characters as UTF-8 instead of \u escapes

Run our spider

scrapy crawl top_250

The output file is written to the directory the command is run from: top_250.json
