
[Python Repost] Sharing the source code of a Baidu Images batch-download crawler

fqy2022 posted on 2022-7-26 12:16
In response to forum members' requests, here is the source code of the Baidu Images batch-download tool:
[Python]
# @风清扬(fqy2022)
import requests
import time
import os
# Create the output folder if it does not already exist
if os.path.isdir(r'./保存'):
    print('已存在文件夹!')
else:
    os.mkdir('./保存')
    print('已为您创建文件夹!')

class Image(object):
    def __init__(self):
        # Baidu image-search JSON endpoint (acjson interface)
        self.url = 'https://image.baidu.com/search/acjson?'
        # Request headers; Cookie and User-Agent were copied from a real browser session
        self.headers = {
            'Cookie': 'BDqhfp=%E7%8B%97%26%260-10-1undefined%26%260%26%261; BIDUPSID=A063B6D6CC13957DA917CAA433A26251; PSTM=1583301079; MCITY=-315%3A; BDUSS=TBSSlRRQU9QbmR-MGt6NUFQa01iR3VQWHBUbnNacW9zMnJUN0N-QndGSzNkMkJnSVFBQUFBJCQAAAAAAAAAAAEAAADuVM9dw~vX1tPQybbIobXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALfqOGC36jhgS; BDUSS_BFESS=TBSSlRRQU9QbmR-MGt6NUFQa01iR3VQWHBUbnNacW9zMnJUN0N-QndGSzNkMkJnSVFBQUFBJCQAAAAAAAAAAAEAAADuVM9dw~vX1tPQybbIobXEAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAALfqOGC36jhgS; BAIDUID=857FDC525D72D7899014BED3AB7A9EFF:FG=1; __yjs_duid=1_bd666ba46de51678e9fb98774eb68df71616750528301; BDORZ=FFFB88E999055A3F8A630C64834BD6D0; BDSFRCVID_BFESS=0X0OJeCmHlQJPareecEsuUw4D2KK0gOTHllnm4-TLeKNvakVJeC6EG0Ptf8g0KubFTPRogKK0gOTH6KF_2uxOjjg8UtVJeC6EG0Ptf8g0M5; H_BDCLCKID_SF_BFESS=fRkfoKPKfCv8qTrmbtOhq4tHePPLexRZ5mAqoJIXQCjvDR5eD4TD3J-0jhbhtPvLtnTnaIQhtqQnqnQTXPoYBpku5bOR2f743bRT2MKy5KJvfj6gjj7qhP-UyPkHWh37aGOlMKoaMp78jR093JO4y4Ldj4oxJpOJ5JbMonLafD_bhD-4Djt2eP00-xQja--XKKj2WROeajrjDnCrDhA2XUI8LUc72poZLI6H0R5J34OhSt0mQ55vyT8sXnO72P7XaRPL-pRHWhr-HJvKy4oTjxL1Db3JKjvMtg3t3qQmLUooepvoD-Jc3MvByPjdJJQOBKQB0KnGbUQkeq8CQft20b0EeMtjW6LEK5r2SCDMtC0b3D; indexPageSugList=%5B%22%E7%8B%97%22%2C%22%E4%BA%8C%E5%93%88%22%2C%22%E9%87%87%E8%80%B3%E5%9B%BE%E7%89%87%20%E5%94%AF%E7%BE%8E%22%2C%22%E9%87%87%E8%80%B3%E5%9B%BE%E7%89%87%E9%AB%98%E6%B8%85%22%2C%22%E9%87%87%E8%80%B3%E5%AE%A3%E4%BC%A0%E5%9B%BE%E7%89%87%22%2C%22%E9%87%87%E8%80%B3%22%2C%22%E5%96%9D%E5%80%92%E4%BA%86%E7%9A%84%E8%A1%A8%E6%83%85%E5%8C%85%22%2C%22%E8%A5%BF%E6%B8%B8%E8%AE%B0%20%E8%AF%8D%E4%BA%91%22%2C%22%E5%AD%99%E6%82%9F%E7%A9%BA%20%E8%AF%8D%E4%BA%91%22%5D; delPer=0; PSINO=7; BDRCVFR[dG2JNJb_ajR]=mk3SLVN4HKm; BDRCVFR[-pGxjrCMryR]=mk3SLVN4HKm; BDRCVFR[EJrvrN3l0S0]=pDgu-4B3j7tIZ-EIy7GQhPEUf; H_PS_PSSID=; BDRCVFR[X_XKQks0S63]=mk3SLVN4HKm; firstShowTip=1; ZD_ENTRY=baidu; cleanHistoryStatus=0; BA_HECTOR=a401010ka584240lm51g6r0320r; userFrom=www.baidu.com; ab_sr=1.0.0_YjAxODJmMjA1MDU3YTUyZjIyMzk2MGQ4YjM3MTQ5OGNjNDI5NWFkNjkxOTA0YjkxMDBlYjY0Y2JmMDU5NzY5MDY1NDAxZDY0ZDhhYjUzZDhkNGY4ZDUwOWVhMzkwMGMxYzQ5OTA1MjE3OTViYzZmN2QxNzMyN2M2ZjYxMzBkYTE=',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36'
        }
        # Query-string parameters for the acjson interface; queryWord, word, pn and time are filled in later
        self.params = {
            'tn': 'resultjson_com',
            'logid': '11625870838566749778',
            'ipn': 'rj',
            'ct': '201326592',
            'is': '',
            'fp': 'result',
            'queryWord': '',
            'cl': '2',
            'lm': '-1',
            'ie': 'utf-8',
            'oe': 'utf-8',
            'adpicid': '',
            'st': '-1',
            'z': '',
            'ic': '0',
            'hd': '',
            'latest': '',
            'copyright': '',
            'word': '',
            's': '',
            'se': '',
            'tab': '',
            'width': '',
            'height': '',
            'face': '0',
            'istype': '2',
            'qc': '',
            'nc': '1',
            'fr': '',
            'expermode': '',
            'force': '',
            'pn': '',
            'rn': '30',
            'gsm': '',
            'time': ''
        }
        # Thumbnail URLs collected from the search results
        self.image_list = []
        # Search keyword entered by the user
        a = input('请输入要爬取的图片名称:')
        self.params['queryWord'] = a
        self.params['word'] = a
    # Request `num` result pages (30 items each) and collect the thumbnail URLs
    def get_image(self, num):
        for i in range(0, num):
            self.params['time'] = int(time.time() * 1000)
            self.params['pn'] = i * 30
            response = requests.get(url=self.url, headers=self.headers, params=self.params)
            data = response.json()['data']
            # The last element of 'data' is typically an empty placeholder with no thumbURL, so skip it
            for j in range(0, len(data) - 1):
                self.image_list.append(data[j]['thumbURL'])
    # Download every collected thumbnail and save it as a numbered .jpg
    def save_image(self):
        n = 1
        for i in self.image_list:
            image = requests.get(url=i)
            print('正在下载第{}张'.format(n))
            with open('./保存/{}.jpg'.format(n), 'wb') as f:
                f.write(image.content)
            n += 1


if __name__ == '__main__':
    c = int(input('请输入要爬取的页数(每页有30张图片):'))
    image = Image()
    image.get_image(c)
    image.save_image()


lingyi01 posted on 2022-7-26 16:14
Newbie question for the OP: when crawling Baidu it is very easy to trigger the CAPTCHA, especially from the mobile pages. I found a headers-plus-params setup online; with it the PC side basically never triggers the check, but the mobile side still does. How is this kind of problem usually handled?
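
A common first step against the verification page is to look less like a burst of scripted traffic: reuse one requests.Session, send a complete and consistent header set, and pause randomly between page requests. Below is a minimal sketch along those lines; the header values, pause lengths, and the fetch_page helper are illustrative assumptions, not a guaranteed way around Baidu's checks.
[Python]
import random
import time

import requests

# Reuse a single Session so cookies and the underlying connection persist across pages.
session = requests.Session()
session.headers.update({
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                  '(KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36',
    'Referer': 'https://image.baidu.com/',
})

def fetch_page(url, params, page):
    """Fetch one 30-item result page, then sleep a random interval."""
    page_params = dict(params, pn=page * 30, time=int(time.time() * 1000))
    resp = session.get(url, params=page_params, timeout=10)
    resp.raise_for_status()
    time.sleep(random.uniform(1.5, 4.0))  # pause length is an illustrative guess
    return resp.json()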
cdsgg posted on 2022-7-26 12:23
These are only the thumbnails being downloaded. For Baidu images, the real original address still has to be pieced together with JS.
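
For reference, the obfuscated objURL field in the same JSON response has historically been a simple substitution cipher, so that "JS assembly" can also be reproduced in Python. A minimal sketch, assuming the objURL field name and the decoding table that circulated in older community write-ups; both are assumptions and may no longer match what Baidu currently returns.
[Python]
# Substring and single-character substitution tables (assumed, from older community write-ups).
OBJ_STR_MAP = {'_z2C$q': ':', '_z&e3B': '.', 'AzdH3F': '/'}
OBJ_CHAR_MAP = {
    'w': 'a', 'k': 'b', 'v': 'c', '1': 'd', 'j': 'e', 'u': 'f', '2': 'g',
    'i': 'h', 't': 'i', '3': 'j', 'h': 'k', 's': 'l', '4': 'm', 'g': 'n',
    '5': 'o', 'r': 'p', 'q': 'q', '6': 'r', 'f': 's', 'p': 't', '7': 'u',
    'e': 'v', 'o': 'w', '8': '1', 'd': '2', 'n': '3', '9': '4', 'c': '5',
    'm': '6', '0': '7', 'b': '8', 'l': '9', 'a': '0',
}

def decode_obj_url(obj_url: str) -> str:
    """Undo the substitution cipher on an obfuscated objURL string."""
    for key, value in OBJ_STR_MAP.items():
        obj_url = obj_url.replace(key, value)
    return ''.join(OBJ_CHAR_MAP.get(ch, ch) for ch in obj_url)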
wpan26 posted on 2022-7-26 12:24
xulei226 posted on 2022-7-26 12:27
New here, learning from this.
zcdwazhl posted on 2022-7-26 12:33
An image crawler, nice. Thanks.
charleschai posted on 2022-7-26 12:34
Looks useful, will study it.
zxyzdcfy posted on 2022-7-26 12:42
Thanks, OP.
cdk005 posted on 2022-7-26 12:47
Thanks to the OP for sharing, will study it.
haowb posted on 2022-7-26 12:54
Nice, showing my support~
rinfintiy posted on 2022-7-26 12:59
Thanks for sharing.