A Walkthrough of a Python Script for Batch-Inflating cnblogs (博客园) Page Views
Author: 卿先生  Published: 2023-11-23 21:30:14
Tags: python, batch, view-count inflation, script
Bored this morning... got up at 7 and suddenly felt like writing a view-count booster... so let's get to it.
For testing only; I don't recommend actually inflating view counts~~
The idea is simple: step one, harvest proxy IPs; step two, simulate visits through them.
Harvesting HTTP proxies
There are plenty of paid and free proxy sites online; this script uses the free list at kuaidaili.com.
Whichever site you pick, the job is the same: scrape the IPs and port numbers off it and gather them into one list.
Scrape each site according to its own structure; on the site above, the IP and port sit in td tags.
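On the listing page assumed here, each table cell carries a data-title attribute naming its column. A minimal bs4 sketch against made-up sample HTML (the real page may of course differ):

from bs4 import BeautifulSoup

# Made-up sample mimicking the table structure the scraper below relies on
sample = '''
<table>
  <tr><td data-title="IP">1.2.3.4</td><td data-title="PORT">8080</td></tr>
</table>
'''
soup = BeautifulSoup(sample, 'html.parser')
for td in soup.find_all('td'):
    print(td.get('data-title'), td.get_text())
# prints: IP 1.2.3.4
#         PORT 8080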
bs4 handles this easily. Here is the script:
## Fetch the proxy IPs
def Get_proxy_ip():
    print("==========Batch-harvest IPs to inflate cnblogs views  By 卿=========")
    print("         Blogs:https://www.cnblogs.com/-qing-/")
    print("                     Started!                  ")
    global proxy_list
    proxy_list = []
    headers = {
        "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding":"gzip, deflate, sdch, br",
        "Accept-Language":"zh-CN,zh;q=0.8",
        "Cache-Control":"max-age=0",
        "Connection":"keep-alive",
        "Cookie":"channelid=0; sid=1561681200472193; _ga=GA1.2.762166746.1561681203; _gid=GA1.2.971407760.1561681203; _gat=1; Hm_lvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203; Hm_lpvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203",
        "Host":"www.kuaidaili.com",
        "Upgrade-Insecure-Requests":"1",
        "User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0",
        "Referrer Policy":"no-referrer-when-downgrade",  # copied from devtools; not a real request header, but harmless
    }
    for i in range(1, 100):
        url = "https://www.kuaidaili.com/free/inha/" + str(i)  # one listing page per iteration
        html = requests.get(url=url, headers=headers).content
        soup = BeautifulSoup(html, 'html.parser')
        ip_list = ''
        port_list = ''
        for ip in soup.find_all('td'):
            data_title = ip.get('data-title') or ''  # get() returns None for cells without the attribute
            if "IP" in data_title:
                ip_list = ip.get_text()      # grab the IP
            if "PORT" in data_title:
                port_list = ip.get_text()    # grab the port
            if ip_list != '' and port_list != '':
                proxy_list.append(ip_list + ":" + port_list)
                ip_list = ''
                port_list = ''
        iv_main()        # visit the target post through every proxy on this page
        time.sleep(2)
        proxy_list = []  # reset before scraping the next page
With that, the harvested IPs and ports all land in proxy_list.
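Free proxies die quickly, so many of these entries will fail. If you want to filter the list first, a quick liveness probe could look like this (my own sketch; check_proxy and the httpbin.org test URL are not part of the original script):

import requests

def check_proxy(proxy, timeout=5):
    """Hypothetical helper: True if `ip:port` can fetch a test URL within the timeout."""
    proxies = {'http': 'http://' + proxy, 'https': 'http://' + proxy}
    try:
        requests.get('http://httpbin.org/ip', proxies=proxies, timeout=timeout)
        return True
    except requests.RequestException:
        return False

# e.g. keep only the proxies that answer:
# proxy_list = [p for p in proxy_list if check_proxy(p)]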
Simulating visits to the cnblogs post
This part is simple: iterate over the proxy list built above and request the page with the requests module.
def iv_main():
    proxies = {}
    requests.packages.urllib3.disable_warnings()
    #proxy_ip = random.choice(proxy_list)
    url = 'https://www.cnblogs.com/-qing-/p/11080845.html'
    for proxy_ip in proxy_list:
        headers2 = {
            'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'accept-encoding':'gzip, deflate, sdch, br',
            'accept-language':'zh-CN,zh;q=0.8',
            'cache-control':'max-age=0',
            'cookie':'__gads=ID=8c6fd85d91262bb1:T=1561554219:S=ALNI_MZwz0CMKQJK-L19DrX5DPDtYvp63Q; _gat=1; _ga=GA1.2.359634670.1561535095; _gid=GA1.2.1087331661.1561535095',
            'if-modified-since':'Fri, 28 Jun 2019 02:10:23 GMT',
            'referer':'https://www.cnblogs.com/',
            'upgrade-insecure-requests':'1',
            'user-agent':random.choice(user_agent_list),  # random UA per request
        }
        # requests expects lowercase scheme keys; map both so the https URL is actually proxied
        proxies['http'] = 'http://' + proxy_ip
        proxies['https'] = 'http://' + proxy_ip
        #user_agent = random.choice(user_agent_list)
        try:
            # verify=False skips SSL-certificate validation; the timeout stops dead proxies from hanging
            r = requests.get(url, headers=headers2, proxies=proxies, verify=False, timeout=10)
            print("[*]" + proxy_ip + " visit OK!")
        except requests.RequestException:
            print("[-]" + proxy_ip + " visit failed!")
It's best to send a random User-Agent header along with each request:
user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3",
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
]
Optimization and integration
We can tidy this up a little by adding a queue/thread layer (though with Python's GIL this buys little here).
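The final script below just starts ten bare threading.Thread workers. The queue variant alluded to above might look roughly like this (a sketch with a placeholder visit(); not the code actually used):

import threading
from queue import Queue

proxy_q = Queue()

def visit(proxy):
    # placeholder for the real request logic in iv_main()
    print("visiting via", proxy)

def worker():
    while True:                       # each worker keeps draining the queue
        proxy = proxy_q.get()
        try:
            visit(proxy)
        finally:
            proxy_q.task_done()

for p in ["1.2.3.4:8080", "5.6.7.8:3128"]:   # sample proxies
    proxy_q.put(p)
for _ in range(10):                          # ten workers, matching th_num below
    threading.Thread(target=worker, daemon=True).start()
proxy_q.join()                               # blocks until every queued proxy is handled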
The final, integrated code:
# -*- coding:utf-8 -*-
# By 卿
# Blog:https://www.cnblogs.com/-qing-/
import requests
from bs4 import BeautifulSoup
import re
import time
import random
import threading

print("==========Batch-harvest IPs to inflate cnblogs views  By 卿=========")
print("         Blogs:https://www.cnblogs.com/-qing-/")
print("                     Started!                  ")

user_agent_list = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3",
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
]

def iv_main():
    proxies = {}
    requests.packages.urllib3.disable_warnings()
    #proxy_ip = random.choice(proxy_list)
    url = 'https://www.cnblogs.com/-qing-/p/11080845.html'
    for proxy_ip in proxy_list:
        headers2 = {
            'accept':'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
            'accept-encoding':'gzip, deflate, sdch, br',
            'accept-language':'zh-CN,zh;q=0.8',
            'cache-control':'max-age=0',
            'cookie':'__gads=ID=8c6fd85d91262bb1:T=1561554219:S=ALNI_MZwz0CMKQJK-L19DrX5DPDtYvp63Q; _gat=1; _ga=GA1.2.359634670.1561535095; _gid=GA1.2.1087331661.1561535095',
            'if-modified-since':'Fri, 28 Jun 2019 02:10:23 GMT',
            'referer':'https://www.cnblogs.com/',
            'upgrade-insecure-requests':'1',
            'user-agent':random.choice(user_agent_list),  # random UA per request
        }
        # requests expects lowercase scheme keys; map both so the https URL is actually proxied
        proxies['http'] = 'http://' + proxy_ip
        proxies['https'] = 'http://' + proxy_ip
        #user_agent = random.choice(user_agent_list)
        try:
            # verify=False skips SSL-certificate validation; the timeout stops dead proxies from hanging
            r = requests.get(url, headers=headers2, proxies=proxies, verify=False, timeout=10)
            print("[*]" + proxy_ip + " visit OK!")
        except requests.RequestException:
            print("[-]" + proxy_ip + " visit failed!")

## Fetch the proxy IPs
def Get_proxy_ip():
    global proxy_list
    proxy_list = []
    headers = {
        "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Encoding":"gzip, deflate, sdch, br",
        "Accept-Language":"zh-CN,zh;q=0.8",
        "Cache-Control":"max-age=0",
        "Connection":"keep-alive",
        "Cookie":"channelid=0; sid=1561681200472193; _ga=GA1.2.762166746.1561681203; _gid=GA1.2.971407760.1561681203; _gat=1; Hm_lvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203; Hm_lpvt_7ed65b1cc4b810e9fd37959c9bb51b31=1561681203",
        "Host":"www.kuaidaili.com",
        "Upgrade-Insecure-Requests":"1",
        "User-Agent":"Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36 SE 2.X MetaSr 1.0",
        "Referrer Policy":"no-referrer-when-downgrade",  # copied from devtools; not a real request header, but harmless
    }
    for i in range(1, 100):
        url = "https://www.kuaidaili.com/free/inha/" + str(i)  # one listing page per iteration
        html = requests.get(url=url, headers=headers).content
        soup = BeautifulSoup(html, 'html.parser')
        ip_list = ''
        port_list = ''
        for ip in soup.find_all('td'):
            data_title = ip.get('data-title') or ''  # get() returns None for cells without the attribute
            if "IP" in data_title:
                ip_list = ip.get_text()      # grab the IP
            if "PORT" in data_title:
                port_list = ip.get_text()    # grab the port
            if ip_list != '' and port_list != '':
                proxy_list.append(ip_list + ":" + port_list)
                ip_list = ''
                port_list = ''
        iv_main()        # visit the target post through every proxy on this page
        time.sleep(2)
        proxy_list = []  # reset before scraping the next page

# Note: all ten threads run the same Get_proxy_ip and share the global proxy_list, so their work overlaps
th = []
th_num = 10
for x in range(th_num):
    t = threading.Thread(target=Get_proxy_ip)
    th.append(t)
for x in range(th_num):
    th[x].start()
for x in range(th_num):
    th[x].join()
Results
Source: https://www.cnblogs.com/-qing-/p/11101414.html

