吾爱破解 - LCG - LSG | Android Cracking | Virus Analysis | www.52pojie.cn

Views: 4502 | Replies: 43

[Python Repost] Found a nice wallpaper site

话痨司机啊 posted on 2022-7-5 10:33
Last edited by 话痨司机啊 on 2022-7-5 19:27

First, a look at the results:


2.png

1.png

I downloaded them all; the wallpaper rotates every 30 minutes, and they are all 4K/2K wallpapers.

Source code:

[Python]
import requests
from pathlib import Path
from lxml import etree
from rich import print
from loguru import logger
from requests.adapters import HTTPAdapter

logpath = Path(__file__).parent.joinpath('img.log')
logger.add(str(logpath))

def get_res(url):
    """Fetch a URL, retrying up to 5 times on connection errors."""
    headers = {
        "user-agent":
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36"
    }
    s = requests.Session()
    s.mount('https://', HTTPAdapter(max_retries=5))
    res = s.get(url, headers=headers, timeout=30)
    return res


def parse_src(res):
    """Parse a listing page and return the list of full-size image URLs."""
    try:
        et = etree.HTML(res.text)
        masonry = et.xpath("//div[@class='masonry']")[-1]
        src = masonry.xpath("//article//a[@class='entry-thumbnail']/img/@data-src")
        img_url_list = []
        for s in src:
            # Strip the "-WxH" thumbnail size suffix to get the original image URL
            img_url_list.append("-".join(s.split('x')[0].split('-')[:-1]) + "." +
                                s.split('x')[1].split('.')[-1])
        return img_url_list
    except Exception:
        logger.error(f"Failed to parse page {res.url}, please retry!")
        return []  # always return a list so callers can iterate safely
        
def download_img(img_url_list):
    """Download each image, skipping files that already exist locally."""
    if not isinstance(img_url_list, list):
        return
    path = Path(__file__).parent.joinpath('images')
    path.mkdir(parents=True, exist_ok=True)
    for imgurl in img_url_list:
        file_name = imgurl.split('/')[-1].replace("?", "")
        target = path.joinpath(file_name)
        if target.exists():
            print(f"File {file_name} already downloaded, skipping")
            continue
        res = get_res(imgurl)
        if res:
            with open(str(target), 'wb') as f:
                f.write(res.content)
            print(f"Downloaded {file_name} to {str(path)}")

def main(startnum=1, endnum=20):
    """Main driver: fetch each listing page, parse image URLs, download them."""
    urls = [f"https://bz.qinggongju.com/page/{num}/" for num in range(startnum, endnum + 1)]
    for res in map(get_res, urls):
        download_img(parse_src(res))


if __name__ == "__main__":
    startnum = input('There are 20 pages of popular images. Enter the start page number: ')
    endnum = input('Enter the end page number (at most 20): ')
    if int(startnum) >= 1 and int(endnum) <= 20:
        main(int(startnum), int(endnum))
    else:
        print('[red]Error: restart the program and enter valid numbers!')
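The thumbnail-to-original URL rewrite inside parse_src is dense: it drops the trailing "-WxH" size suffix from a WordPress-style thumbnail URL. Note the split('x') trick only works because this site's URLs contain no other 'x'. A regex version of the same idea (my sketch, not the original post's code) is less fragile:

```python
import re


def full_size_url(thumb_src):
    """Turn a WordPress-style thumbnail URL into the original image URL by
    removing the trailing "-WIDTHxHEIGHT" size suffix before the extension."""
    return re.sub(r'-\d+x\d+(?=\.\w+$)', '', thumb_src)
```

For example, `full_size_url("https://bz.qinggongju.com/up/pic-1920x1080.jpg")` yields the URL with "-1920x1080" removed, and it still works when an 'x' appears elsewhere in the URL.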


If you suspect these are just thumbnails, check the file properties (I downloaded them again on my home computer~):

Snipaste_2022-07-05_18-55-34.jpg
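The 30-minute wallpaper rotation mentioned above happens outside the script. As a sketch of how it could be automated on Windows (this is my addition, not part of the post; the `images` folder name and the ctypes call are assumptions), one could cycle through the downloaded folder:

```python
import ctypes
import time
from itertools import cycle
from pathlib import Path

SPI_SETDESKWALLPAPER = 20  # SystemParametersInfo action code for setting the wallpaper


def wallpaper_cycle(folder):
    """Return an endless iterator over the image files in `folder`, in sorted order."""
    images = sorted(p for p in Path(folder).iterdir()
                    if p.suffix.lower() in ('.jpg', '.jpeg', '.png'))
    return cycle(images)


def set_wallpaper(path):
    """Set the desktop wallpaper (Windows only, via user32)."""
    ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, str(path), 3)


# Usage (Windows):
#     for img in wallpaper_cycle('images'):
#         set_wallpaper(img)
#         time.sleep(30 * 60)  # change every 30 minutes
```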



Comments

I think these might just be thumbnails (posted 2022-7-5 12:35)



cdsgg posted on 2022-7-5 12:34
[Python]
import requests
from bs4 import BeautifulSoup


HEADERS = {
    "cookie": "Hm_lvt_618c9e04ccc77a6b8c744b5199bd3c3b=1656994115; Hm_lpvt_618c9e04ccc77a6b8c744b5199bd3c3b=1656994762",
    "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5060.66 Safari/537.36 Edg/103.0.1264.44"
}


def get_pic_real_url_list():
    # Fetch the category page and collect the per-image detail-page links
    url = 'https://bz.qinggongju.com/category/%e4%ba%8c%e6%ac%a1%e5%85%83/page/1/'
    html = requests.get(url, headers=HEADERS).content.decode()
    return [x.get('href') for x in BeautifulSoup(html, 'lxml').select('div.entry-top>a')]


def download_pic(links):
    # Visit each detail page and print the real download link (element id "xiazai")
    for link in links:
        r = requests.get(link, headers=HEADERS).content.decode()
        real_url = BeautifulSoup(r, 'lxml').select('#xiazai')[0].get('href')
        print(real_url)


if __name__ == '__main__':
    s = get_pic_real_url_list()
    download_pic(s)
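cdsgg's snippet stops at printing the real download links. A minimal follow-up that saves each link to disk might look like this (a sketch; `filename_from_url`, `save_pic`, and the `images` folder are my additions, not cdsgg's code):

```python
import os

import requests


def filename_from_url(url):
    """Derive a local filename from the URL tail, dropping any query string."""
    return url.rstrip('/').split('/')[-1].split('?')[0] or 'image.jpg'


def save_pic(real_url, dest_dir='images', headers=None):
    """Download one image URL and write it under dest_dir; return the file path."""
    os.makedirs(dest_dir, exist_ok=True)
    target = os.path.join(dest_dir, filename_from_url(real_url))
    resp = requests.get(real_url, headers=headers, timeout=30)
    if resp.status_code == 200:
        with open(target, 'wb') as f:
            f.write(resp.content)
    return target
```

Each URL printed by the loop above could then be passed to `save_pic` instead.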
luxingyu329 posted on 2022-7-6 11:11
[Python]
"""

日期:2022年 07月 06日  10:46 
"""
import os
import time

import requests
from bs4 import BeautifulSoup


def get_pic_real_url_list():

    url = 'https://bz.qinggongju.com/category/%e4%ba%8c%e6%ac%a1%e5%85%83/page/1/'

    html = requests.get(url, headers=headers).content.decode()

    soup = [x.get('href') for x in BeautifulSoup(html, 'lxml').select('div.entry-top>a')]

    return soup


def download_pic(soup):
    urls = []
    for i in soup:
        r = requests.get(i, headers=headers).content.decode()
        real_url = BeautifulSoup(r, 'lxml').select('#xiazai')[0].get('href')
        print(real_url)
        urls.append(real_url)

    return urls


# Request each image URL and save the image to disk
def down_save_img(url_pic):
    img_name = os.path.split(url_pic)[1]
    time.sleep(4)  # throttle requests to avoid tripping anti-scraping measures
    resp = requests.get(url_pic, headers=headers)
    if resp.status_code == 200:
        print(f'Request for {img_name} succeeded')
        save(resp.content, url_pic)
    else:
        print(f'Request failed with status {resp.status_code}; possibly blocked by anti-scraping...')


def save(data, url_pic):
    save_dir = 'D:/新建文件夹'
    os.makedirs(save_dir, exist_ok=True)  # make sure the target folder exists
    target = f'{save_dir}/{os.path.split(url_pic)[1]}'
    if not os.path.exists(target):
        with open(target, 'wb') as f:
            f.write(data)
            print(f'Image {os.path.split(url_pic)[1]} saved')
    else:
        print(f'Image {os.path.split(url_pic)[1]} already exists......')


if __name__ == '__main__':
    headers = {
        "cookie": "Hm_lvt_618c9e04ccc77a6b8c744b5199bd3c3b=1656994115; "
                  "Hm_lpvt_618c9e04ccc77a6b8c744b5199bd3c3b=1656994762",
        "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) "
                      "Chrome/103.0.5060.66 Safari/537.36 Edg/103.0.1264.44"
    }
    s = get_pic_real_url_list()
    da = download_pic(s)
    for ur in da:
        down_save_img(ur)
xiaoshan1818 posted on 2022-7-5 10:46
netpeng posted on 2022-7-5 11:01
The wallpapers are gorgeous, love them. Thanks for sharing.
AndersonChan posted on 2022-7-5 11:06
Nice, very nice
Ritsu_Namine posted on 2022-7-5 11:09
Traceback (most recent call last):
  File "C:\Users\Ritsu_Namine\Desktop\pic.py", line 70, in <module>
    main(int(startnum),int(endnum))
  File "C:\Users\Ritsu_Namine\Desktop\pic.py", line 63, in main
    list(map(download_img, [image_url_list for image_url_list in map(parse_src, [res for res in map(get_res, urls)])]))
  File "C:\Users\Ritsu_Namine\Desktop\pic.py", line 44, in download_img
    file_name = [imgurl.split('/')[-1].replace("?","") for imgurl in img_url_list]
TypeError: 'NoneType' object is not iterable
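That traceback occurs when parse_src hits its except branch without returning a list: the caller then tries to iterate None. One defensive pattern (a sketch of mine, not the OP's code) is to normalize any parser result before iterating; equivalently, parse_src can simply `return []` in its except branch:

```python
def as_list(value):
    """Normalize a possibly-None result to a list, so iterating it never
    raises TypeError: 'NoneType' object is not iterable."""
    return value if isinstance(value, list) else []


# Usage: for url in as_list(parse_src(res)): ...
```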
zeng16 posted on 2022-7-5 11:17
Learning, thanks
达摩院的老巢 posted on 2022-7-5 11:18
They don't look as sharp as actual photos, but that's how free ones go
pangpang02 posted on 2022-7-5 11:18
Thanks for sharing; the site is pretty good, I'll give it a try too
龍謹 posted on 2022-7-5 11:19
Going for it. Thanks OP for sharing the Python source.
你是我的人 posted on 2022-7-5 11:24
Thanks for sharing, OP