Python Web Scraping in Practice: Crawling Douban Images with Scrapy
Author: 濯君  Posted: 2023-06-08 10:56:20
Tags: Python, Scrapy
Goal: use Scrapy to download all of the personal photos of a given movie star on Douban,
using Monica Bellucci (莫妮卡·贝鲁奇) as the example.
1. First, open a command line in the directory where the project should live and run

scrapy startproject banciyuan

to create the Scrapy project. The generated project structure looks like this:
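(The original screenshot is reproduced here as text; this is the standard layout scrapy startproject generates.)

banciyuan/
    scrapy.cfg            # deploy configuration
    banciyuan/            # the project's Python package
        __init__.py
        items.py          # item definitions
        middlewares.py
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spiders live here
            __init__.py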
2. To run the Scrapy project conveniently from PyCharm, create a main.py at the project root:

from scrapy import cmdline

# Equivalent to running "scrapy crawl banciyuan" on the command line
cmdline.execute("scrapy crawl banciyuan".split())

Then open Edit Configurations in PyCharm and point the run configuration at main.py. After that, running main.py runs the whole Scrapy project.
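If you prefer not to go through the cmdline helper, Scrapy also lets you drive the crawl programmatically with CrawlerProcess. A minimal sketch; the spider's import path is an assumption based on this project's layout:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

# Import path assumes the spider file is banciyuan/spiders/banciyuan.py
from banciyuan.spiders.banciyuan import BanciyuanSpider

process = CrawlerProcess(get_project_settings())  # loads settings.py
process.crawl(BanciyuanSpider)
process.start()  # blocks until the crawl finishes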
3. Analyze the HTML of the photo pages and write the corresponding spider:
from scrapy import Spider
import scrapy
from banciyuan.items import BanciyuanItem


class BanciyuanSpider(Spider):
    name = 'banciyuan'
    allowed_domains = ['movie.douban.com']
    start_urls = ["https://movie.douban.com/celebrity/1025156/photos/"]
    url = "https://movie.douban.com/celebrity/1025156/photos/"

    def parse(self, response):
        # The last link in the paginator holds the total number of pages
        num = response.xpath('//div[@class="paginator"]/a[last()]/text()').extract_first('')
        print(num)
        for i in range(int(num)):
            # Each list page shows 30 photos; build the query string for page i
            suffix = '?type=C&start=' + str(i * 30) + '&sortby=like&size=a&subtype=a'
            yield scrapy.Request(url=self.url + suffix, callback=self.get_page)

    def get_page(self, response):
        # Collect the detail-page link of every photo on this list page
        href_list = response.xpath('//div[@class="article"]//div[@class="cover"]/a/@href').extract()
        for href in href_list:
            yield scrapy.Request(url=href, callback=self.get_info)

    def get_info(self, response):
        # On the detail page, grab the full-size image URL and the page title
        src = response.xpath(
            '//div[@class="article"]//div[@class="photo-show"]//div[@class="photo-wp"]/a[1]/img/@src').extract_first('')
        title = response.xpath('//div[@id="content"]/h1/text()').extract_first('')
        item = BanciyuanItem()
        item['title'] = title
        item['src'] = [src]  # the pipeline expects a list of URLs
        yield item
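One fragile spot: if Douban serves a page without the paginator (for example an anti-bot interstitial), int(num) raises a ValueError and the crawl dies. A slightly more defensive variant of parse, as a sketch:

    def parse(self, response):
        num = response.xpath('//div[@class="paginator"]/a[last()]/text()').extract_first('')
        # Fall back to a single page when the paginator is missing or non-numeric
        total_pages = int(num) if num.isdigit() else 1
        for i in range(total_pages):
            suffix = '?type=C&start=%d&sortby=like&size=a&subtype=a' % (i * 30)
            yield scrapy.Request(url=self.url + suffix, callback=self.get_page)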
4. items.py declares the two fields the spider fills in:
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class BanciyuanItem(scrapy.Item):
    # define the fields for your item here like:
    src = scrapy.Field()    # list of image URLs for the pipeline to download
    title = scrapy.Field()  # page title, used to build the output folder name
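For a sense of what flows into the pipeline, a populated item looks roughly like this (both values are illustrative, not taken from a real crawl):

item = BanciyuanItem()
item['title'] = '莫妮卡·贝鲁奇 Monica Bellucci的图片'  # hypothetical page title
item['src'] = ['https://img1.doubanio.com/view/photo/raw/public/p123.jpg']  # hypothetical URL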
pipelines.py subclasses ImagesPipeline to download each image and decide where to store it:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html
from scrapy.pipelines.images import ImagesPipeline
import scrapy


class BanciyuanPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):
        # Request the image itself, passing the item along via meta
        yield scrapy.Request(url=item['src'][0], meta={'item': item})

    def file_path(self, request, response=None, info=None, *, item=None):
        item = request.meta['item']
        # File name: the last path segment of the image URL;
        # folder: the first space-separated token of the page title
        image_name = item['src'][0].split('/')[-1]
        path = '%s/%s' % (item['title'].split(' ')[0], image_name)
        return path
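To make the naming concrete, here is the same path logic run on hypothetical values (the URL and title are made up for illustration):

src = 'https://img1.doubanio.com/view/photo/raw/public/p2620825873.jpg'  # hypothetical
title = '莫妮卡·贝鲁奇 Monica Bellucci的图片'                              # hypothetical
image_name = src.split('/')[-1]                     # 'p2620825873.jpg'
path = '%s/%s' % (title.split(' ')[0], image_name)
print(path)  # 莫妮卡·贝鲁奇/p2620825873.jpg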
settings.py (the changes from the default template are USER_AGENT, ROBOTSTXT_OBEY, ITEM_PIPELINES, and IMAGES_STORE):
# Scrapy settings for banciyuan project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'banciyuan'
SPIDER_MODULES = ['banciyuan.spiders']
NEWSPIDER_MODULE = 'banciyuan.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT must be a plain string, not a dict
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
# 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
# 'Accept-Language': 'en',
#}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'banciyuan.middlewares.BanciyuanSpiderMiddleware': 543,
#}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
# 'banciyuan.middlewares.BanciyuanDownloaderMiddleware': 543,
#}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'banciyuan.pipelines.BanciyuanPipeline': 1,
}
IMAGES_STORE = './images'
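# Because the pipeline above overrides file_path, images are written to
# IMAGES_STORE/<title prefix>/<image name> rather than ImagesPipeline's
# default IMAGES_STORE/full/<sha1>.jpg layout.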
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
5. Crawl results: once the run finishes, the downloaded images sit under ./images/, one sub-folder per celebrity name.
Reference
Source code
Source: https://blog.csdn.net/zzldm/article/details/117425949