Python: A Detailed Walkthrough of Crawling Baidu COVID-19 Data with the Scrapy Framework
Author: 剑客阿良_ALiang  Published: 2023-04-03 12:13:33
Tags: Python, Scrapy framework, crawler case study
Foreword
With some time on my hands, I wrote a crawler to fetch Baidu's COVID-19 data. To be clear up front: this is for research purposes only. Also, the page will likely keep changing as anti-scraping measures are applied, so the XPath expressions may need adjusting.
GitHub repository: code repository
This article mainly uses the Scrapy framework.
Environment setup
Just a couple of quick recommendations.
Plugin recommendation
First, I recommend a Google Chrome extension called XPath Helper, which lets you check whether an XPath expression is correct directly in the browser.
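If you prefer checking outside the browser, here is a minimal sketch (my addition, not part of the original project) that tests the same kind of XPath against a locally saved copy of the rendered page, assuming lxml is installed and the page source was saved as page.html:

from lxml import html

# Parse a local copy of the rendered page (hypothetical file name).
tree = html.parse("page.html")
# The same table XPath used later in the spider, here for row 1.
print(tree.xpath("//*[@id='nationTable']/table/tbody/tr[1]/td/text()"))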
Crawl target
Page to crawl: 实时更新:新型冠状病毒肺炎疫情地图 (the live-updating COVID-19 map).
The crawl targets are the nationwide figures plus the figures for each province.
Project creation
Create the project with the scrapy command:
scrapy startproject yqsj
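The command generates the standard Scrapy project skeleton, which the rest of this article fills in:

yqsj/
    scrapy.cfg
    yqsj/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py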
Webdriver deployment
I won't repeat the setup here; see the deployment steps in my earlier article: Python 详解通过Scrapy框架实现爬取CSDN全站热榜标题热词流程
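Once chromedriver is in place, a quick smoke test like the following (my own sketch, using the same Selenium 3-era API and executable path as the spider below; adjust the path to your install) confirms the setup works before wiring it into Scrapy:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

opts = Options()
opts.add_argument('--headless')
driver = webdriver.Chrome(chrome_options=opts,
                          executable_path="E:\\chromedriver_win32\\chromedriver.exe")
driver.get("https://www.baidu.com")
print(driver.title)  # a page title means the driver is working
driver.quit()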
Project code
Time to write some code. First, a look at the quirk of Baidu's provincial data page.
The page only shows the full table after you click an "expand all" span, so when grabbing the page source we have to open the page in a driven browser and click that button first. With that in mind, let's go step by step.
Item definitions
Define two classes, YqsjProvinceItem and YqsjChinaItem, for the per-province data and the nationwide data respectively.
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html
import scrapy


class YqsjProvinceItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    location = scrapy.Field()
    new = scrapy.Field()
    exist = scrapy.Field()
    total = scrapy.Field()
    cure = scrapy.Field()
    dead = scrapy.Field()


class YqsjChinaItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    # currently confirmed
    exist_diagnosis = scrapy.Field()
    # asymptomatic
    asymptomatic = scrapy.Field()
    # currently suspected
    exist_suspecte = scrapy.Field()
    # currently severe
    exist_severe = scrapy.Field()
    # cumulative confirmed
    cumulative_diagnosis = scrapy.Field()
    # imported from overseas
    overseas_input = scrapy.Field()
    # cumulative cured
    cumulative_cure = scrapy.Field()
    # cumulative deaths
    cumulative_dead = scrapy.Field()
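As a quick illustration (my own snippet, not from the repo): Scrapy items behave like dicts, so you can sanity-check the fields interactively like this:

from yqsj.items import YqsjProvinceItem

item = YqsjProvinceItem()
item['location'] = '北京'  # sample value, for illustration only
item['new'] = '1'
print(dict(item))  # {'location': '北京', 'new': '1'}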
Middleware definition
After the page opens, the downloader middleware needs to click "expand all" once.
Complete code:
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter
from scrapy.http import HtmlResponse
from selenium.common.exceptions import TimeoutException
from selenium.webdriver import ActionChains
import time


class YqsjSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class YqsjDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        # return None
        try:
            spider.browser.get(request.url)
            spider.browser.maximize_window()
            time.sleep(2)
            # Click the "expand all" span so the full province table renders.
            spider.browser.find_element_by_xpath("//*[@id='nationTable']/div/span").click()
            # ActionChains(spider.browser).click(searchButtonElement)
            time.sleep(5)
            return HtmlResponse(url=spider.browser.current_url, body=spider.browser.page_source,
                                encoding="utf-8", request=request)
        except TimeoutException as e:
            print('Timeout exception: {}'.format(e))
            spider.browser.execute_script('window.stop()')
        finally:
            # Close the browser window once the (single) page has been captured.
            spider.browser.close()

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
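One caveat worth flagging: find_element_by_xpath was removed in Selenium 4, so the middleware as written targets the Selenium 3 API. If your environment runs Selenium 4 or later, the click would look roughly like this instead (my adaptation, assuming the same XPath still matches the page):

from selenium.webdriver.common.by import By

# Selenium 4 replacement for the deprecated find_element_by_xpath call.
spider.browser.find_element(By.XPATH, "//*[@id='nationTable']/div/span").click()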
Spider definition
The spider collects the nationwide figures and then the per-province figures. Complete code:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Time   : 2021/11/7 22:05
# @Author : 至尊宝
# @Site   :
# @File   : baidu_yq.py
import scrapy
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

from yqsj.items import YqsjChinaItem, YqsjProvinceItem


class YqsjSpider(scrapy.Spider):
    name = 'yqsj'
    # allowed_domains = ['blog.csdn.net']
    start_urls = ['https://voice.baidu.com/act/newpneumonia/newpneumonia#tab0']
    china_xpath = "//div[contains(@class, 'VirusSummarySix_1-1-317_2ZJJBJ')]/text()"
    province_xpath = "//*[@id='nationTable']/table/tbody/tr[{}]/td/text()"
    province_xpath_1 = "//*[@id='nationTable']/table/tbody/tr[{}]/td/div/span/text()"

    def __init__(self):
        chrome_options = Options()
        chrome_options.add_argument('--headless')  # run Chrome in headless mode
        chrome_options.add_argument('--disable-gpu')
        chrome_options.add_argument('--no-sandbox')
        self.browser = webdriver.Chrome(chrome_options=chrome_options,
                                        executable_path="E:\\chromedriver_win32\\chromedriver.exe")
        self.browser.set_page_load_timeout(30)

    def parse(self, response, **kwargs):
        country_info = response.xpath(self.china_xpath)
        yq_china = YqsjChinaItem()
        yq_china['exist_diagnosis'] = country_info[0].get()
        yq_china['asymptomatic'] = country_info[1].get()
        yq_china['exist_suspecte'] = country_info[2].get()
        yq_china['exist_severe'] = country_info[3].get()
        yq_china['cumulative_diagnosis'] = country_info[4].get()
        yq_china['overseas_input'] = country_info[5].get()
        yq_china['cumulative_cure'] = country_info[6].get()
        yq_china['cumulative_dead'] = country_info[7].get()
        yield yq_china

        # iterate over the 34 province-level regions (table rows 1 through 34)
        for x in range(1, 35):
            path = self.province_xpath.format(x)
            path1 = self.province_xpath_1.format(x)
            province_info = response.xpath(path)
            province_name = response.xpath(path1)
            yq_province = YqsjProvinceItem()
            yq_province['location'] = province_name.get()
            yq_province['new'] = province_info[0].get()
            yq_province['exist'] = province_info[1].get()
            yq_province['total'] = province_info[2].get()
            yq_province['cure'] = province_info[3].get()
            yq_province['dead'] = province_info[4].get()
            yield yq_province
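One detail to be aware of: the spider starts a Chrome instance in __init__ but never calls quit(), and the downloader middleware only closes the window. A small addition (mine, not in the original code) that cleans up when the crawl ends is Scrapy's closed hook, added inside YqsjSpider:

    def closed(self, reason):
        # Called by Scrapy when the spider finishes; shut Chrome down fully.
        self.browser.quit()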
Pipeline: writing the results as text
The pipeline writes the scraped items to a text file in a fixed format. Complete code:
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# useful for handling different item types with a single interface
from itemadapter import ItemAdapter

from yqsj.items import YqsjChinaItem, YqsjProvinceItem


class YqsjPipeline:
    def __init__(self):
        self.file = open('result.txt', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        if isinstance(item, YqsjChinaItem):
            self.file.write(
                "国内疫情\n现有确诊\t{}\n无症状\t{}\n现有疑似\t{}\n现有重症\t{}\n累计确诊\t{}\n境外输入\t{}\n累计治愈\t{}\n累计死亡\t{}\n".format(
                    item['exist_diagnosis'],
                    item['asymptomatic'],
                    item['exist_suspecte'],
                    item['exist_severe'],
                    item['cumulative_diagnosis'],
                    item['overseas_input'],
                    item['cumulative_cure'],
                    item['cumulative_dead']))
        if isinstance(item, YqsjProvinceItem):
            self.file.write(
                "省份:{}\t新增:{}\t现有:{}\t累计:{}\t治愈:{}\t死亡:{}\n".format(
                    item['location'],
                    item['new'],
                    item['exist'],
                    item['total'],
                    item['cure'],
                    item['dead']))
        return item

    def close_spider(self, spider):
        self.file.close()
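A design note: opening the file in __init__ works here, but Scrapy pipelines also get an open_spider hook, which ties the file handle's lifetime to the crawl. A sketch of that variant (my suggestion; process_item stays the same):

class YqsjPipeline:
    def open_spider(self, spider):
        # Open the output file when the crawl starts rather than at construction.
        self.file = open('result.txt', 'w', encoding='utf-8')

    def close_spider(self, spider):
        self.file.close()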
Settings changes
Use the following as a reference and adjust to taste:
# Scrapy settings for yqsj project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html
BOT_NAME = 'yqsj'
SPIDER_MODULES = ['yqsj.spiders']
NEWSPIDER_MODULE = 'yqsj.spiders'
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'yqsj (+http://www.yourdomain.com)'
USER_AGENT = 'Mozilla/5.0'
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = False
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# Override the default request headers:
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.94 Safari/537.36'
}
# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'yqsj.middlewares.YqsjSpiderMiddleware': 543,
}
# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'yqsj.middlewares.YqsjDownloaderMiddleware': 543,
}
# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}
# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'yqsj.pipelines.YqsjPipeline': 300,
}
# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
Verifying the results
Run the spider, then check the result file.
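From the project root, the crawl is launched with the standard Scrapy command, after which result.txt appears alongside scrapy.cfg:

scrapy crawl yqsj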
Source: https://huyi-aliang.blog.csdn.net/article/details/121200319