吾爱破解 (52pojie) - LCG - LSG | Android cracking | virus analysis | www.52pojie.cn

Views: 1667 | Replies: 14

[Python repost] Grab all the site templates from Chinaz (站长素材)

小奥2014 posted on 2022-6-17 16:27
A man of few words...
Straight to the code. If you like it, just show some support; hope it helps whoever needs it.
[Python]
import urllib.request
import urllib.error
from lxml import etree
import wget
import os
import time

def make_url():
    global get_page_url
    url = "https://sc.chinaz.com/moban/"
    n = 1
    # Listing-page URL pattern; the bare URL is page 1
    urllist = ['https://sc.chinaz.com/moban/']
    get_page_url = []
    # Build pages 2 through 200 (index_2.html .. index_200.html)
    for i in range(0,199):
        n += 1
        name = 'index_' + str(n) + '.html'
        urllist.append(url + name)
    # Inspect the assembled listing URLs
    # print(urllist)
    for x in urllist:
        url = x
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36',
            'Cookie': 'UM_distinctid=17ffdf74bf1300-0a9bd18437839e-1f343371-1fa400-17ffdf74bf29d3; toolbox_words=mi.fiime.cn; CNZZDATA300636=cnzz_eid%3D1939574942-1649307441-%26ntime%3D1651196126; user-temp=ff83bd25-fcbc-ae79-f517-80e679c62fba; qHistory=aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/nu7zlkIjmnYPph43mn6Xor6J8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbV/nq5nplb/lt6Xlhbd8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbS93ZWJzY2FuL1/nvZHnq5nlronlhajmo4DmtYt8aHR0cDovL3Nlby5jaGluYXouY29tX1NFT+e7vOWQiOafpeivonxodHRwOi8vd2hvaXMuY2hpbmF6LmNvbS9yZXZlcnNlP2RkbFNlYXJjaE1vZGU9MF/mibnph4/mn6Xor6J8Ly9udG9vbC5jaGluYXouY29tL3Rvb2xzL2xpbmtzX+atu+mTvuaOpeajgOa1iy/lhajnq5lQUuafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25zbG9va3VwL19uc2xvb2t1cOafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25vdGlmaWNhdGlvbl/mm7TmlrDlhazlkYp8aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/mnYPph43ljoblj7Lmn6Xor6J8aHR0cDovL3dob2lzLmNoaW5hei5jb20vX1dob2lz5p+l6K+ifGh0dHA6Ly93aG9pcy5jaGluYXouY29tL3JldmVyc2U/ZGRsU2VhcmNoTW9kZT0yX+azqOWGjOS6uuWPjeafpQ==; inputbox_urls=%5B%22mi.fiime.cn%22%5D; auth-token=a496d647-d0b7-4745-9bb0-cf54708f5730; toolbox_urls=www.szmgwx.com|mi.fiime.cn|br.hemumeirong.cn|y.hemumeirong.cn|u.hemumeirong.cn|s.hemumeirong.cn|www.geligw.com|g.5ewl.com|a.5ewl.com|ar.cqdajinkt.com; Hm_lvt_ca96c3507ee04e182fb6d097cb2a1a4c=1653015614,1655276039,1655429327; Hm_lvt_398913ed58c9e7dfe9695953fb7b6799=1652966343,1654433064,1655082610,1655444268; ASP.NET_SessionId=0125qj2fj05anx2ogj1jm4e2; Hm_lpvt_ca96c3507ee04e182fb6d097cb2a1a4c=1655446442; Hm_lpvt_398913ed58c9e7dfe9695953fb7b6799=1655447097'
        }
        request = urllib.request.Request(url=url, headers=headers)
        handler = urllib.request.HTTPHandler()
        opener = urllib.request.build_opener(handler)
        response = opener.open(request)
        content = etree.HTML(response.read().decode("utf-8"))
        content = content.xpath('//div[@id="container"]//p/a[@target="_blank" and @alt]/@href') # template detail-page links
        for add in content:
            get_page_url.append(add)
    # Inspect the detail-page URLs
    # print(get_page_url)
    # Inspect how many were collected
    # print(len(get_page_url))

def make_download() :
    for url in get_page_url:
        try :
            url = 'https:' + url
            headers = {
                'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36',
                'Cookie': 'UM_distinctid=17ffdf74bf1300-0a9bd18437839e-1f343371-1fa400-17ffdf74bf29d3; toolbox_words=mi.fiime.cn; CNZZDATA300636=cnzz_eid%3D1939574942-1649307441-%26ntime%3D1651196126; user-temp=ff83bd25-fcbc-ae79-f517-80e679c62fba; qHistory=aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/nu7zlkIjmnYPph43mn6Xor6J8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbV/nq5nplb/lt6Xlhbd8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbS93ZWJzY2FuL1/nvZHnq5nlronlhajmo4DmtYt8aHR0cDovL3Nlby5jaGluYXouY29tX1NFT+e7vOWQiOafpeivonxodHRwOi8vd2hvaXMuY2hpbmF6LmNvbS9yZXZlcnNlP2RkbFNlYXJjaE1vZGU9MF/mibnph4/mn6Xor6J8Ly9udG9vbC5jaGluYXouY29tL3Rvb2xzL2xpbmtzX+atu+mTvuaOpeajgOa1iy/lhajnq5lQUuafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25zbG9va3VwL19uc2xvb2t1cOafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25vdGlmaWNhdGlvbl/mm7TmlrDlhazlkYp8aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/mnYPph43ljoblj7Lmn6Xor6J8aHR0cDovL3dob2lzLmNoaW5hei5jb20vX1dob2lz5p+l6K+ifGh0dHA6Ly93aG9pcy5jaGluYXouY29tL3JldmVyc2U/ZGRsU2VhcmNoTW9kZT0yX+azqOWGjOS6uuWPjeafpQ==; inputbox_urls=%5B%22mi.fiime.cn%22%5D; auth-token=a496d647-d0b7-4745-9bb0-cf54708f5730; toolbox_urls=www.szmgwx.com|mi.fiime.cn|br.hemumeirong.cn|y.hemumeirong.cn|u.hemumeirong.cn|s.hemumeirong.cn|www.geligw.com|g.5ewl.com|a.5ewl.com|ar.cqdajinkt.com; Hm_lvt_ca96c3507ee04e182fb6d097cb2a1a4c=1653015614,1655276039,1655429327; Hm_lvt_398913ed58c9e7dfe9695953fb7b6799=1652966343,1654433064,1655082610,1655444268; ASP.NET_SessionId=0125qj2fj05anx2ogj1jm4e2; Hm_lpvt_ca96c3507ee04e182fb6d097cb2a1a4c=1655446442; Hm_lpvt_398913ed58c9e7dfe9695953fb7b6799=1655447097'
            }
            request = urllib.request.Request(url=url, headers=headers)
            handler = urllib.request.HTTPHandler()
            opener = urllib.request.build_opener(handler)
            response = opener.open(request)
            content = etree.HTML(response.read().decode("utf-8"))
            download_pack_url = content.xpath('//div[@class="dian"]/a/@href') # download-link nodes
            get_download_relurl = download_pack_url[0] # take the first download node
            path = os.getcwd()
            filepath = os.path.join(path, "muban") # save directory
            if not os.path.exists(filepath):
                os.mkdir(filepath)
            print("Downloading: %s" % get_download_relurl)
            time.sleep(3)
            wget.download(get_download_relurl, filepath)  # download the template archive
        except urllib.error.HTTPError:
            print("Resource does not exist!")
        except urllib.error.URLError:
            print("Broken link!")


if __name__ == '__main__':
    # 组合页面
    make_url()
    # 获取并下载
    make_download()
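The pagination scheme the script relies on (the bare `/moban/` URL is page 1, later pages are `index_N.html`) can be sketched as a small pure function — a sketch that assumes the site keeps this URL pattern:

```python
def build_page_urls(base="https://sc.chinaz.com/moban/", pages=200):
    """Return the listing-page URLs: the bare base URL for page 1,
    then index_2.html .. index_<pages>.html for the rest."""
    urls = [base]
    for n in range(2, pages + 1):
        urls.append("{}index_{}.html".format(base, n))
    return urls

print(build_page_urls(pages=3))
# ['https://sc.chinaz.com/moban/',
#  'https://sc.chinaz.com/moban/index_2.html',
#  'https://sc.chinaz.com/moban/index_3.html']
```

Keeping this as a pure function (instead of mutating `url`/`n` inside `make_url`) also makes the page range easy to test without touching the network.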



For easier reading, or to submit improvements and new features:
https://github.com/aoaoemoji/Html_spider_ChinaZ


话痨司机啊 posted on 2022-6-17 20:18
Last edited by 话痨司机啊 on 2022-6-17 20:48

Here's an async version, so the site admin will thank you even more, haha!
[Python]
import asyncio
import aiohttp
import aiofiles
from lxml import etree
import os
from loguru import logger as log


headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/102.0.5005.63 Safari/537.36',
    'Cookie': 'UM_distinctid=17ffdf74bf1300-0a9bd18437839e-1f343371-1fa400-17ffdf74bf29d3; toolbox_words=mi.fiime.cn; CNZZDATA300636=cnzz_eid%3D1939574942-1649307441-%26ntime%3D1651196126; user-temp=ff83bd25-fcbc-ae79-f517-80e679c62fba; qHistory=aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/nu7zlkIjmnYPph43mn6Xor6J8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbV/nq5nplb/lt6Xlhbd8aHR0cDovL3Rvb2wuY2hpbmF6LmNvbS93ZWJzY2FuL1/nvZHnq5nlronlhajmo4DmtYt8aHR0cDovL3Nlby5jaGluYXouY29tX1NFT+e7vOWQiOafpeivonxodHRwOi8vd2hvaXMuY2hpbmF6LmNvbS9yZXZlcnNlP2RkbFNlYXJjaE1vZGU9MF/mibnph4/mn6Xor6J8Ly9udG9vbC5jaGluYXouY29tL3Rvb2xzL2xpbmtzX+atu+mTvuaOpeajgOa1iy/lhajnq5lQUuafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25zbG9va3VwL19uc2xvb2t1cOafpeivonxodHRwOi8vdG9vbC5jaGluYXouY29tL25vdGlmaWNhdGlvbl/mm7TmlrDlhazlkYp8aHR0cDovL3JhbmsuY2hpbmF6LmNvbV/mnYPph43ljoblj7Lmn6Xor6J8aHR0cDovL3dob2lzLmNoaW5hei5jb20vX1dob2lz5p+l6K+ifGh0dHA6Ly93aG9pcy5jaGluYXouY29tL3JldmVyc2U/ZGRsU2VhcmNoTW9kZT0yX+azqOWGjOS6uuWPjeafpQ==; inputbox_urls=%5B%22mi.fiime.cn%22%5D; auth-token=a496d647-d0b7-4745-9bb0-cf54708f5730; toolbox_urls=www.szmgwx.com|mi.fiime.cn|br.hemumeirong.cn|y.hemumeirong.cn|u.hemumeirong.cn|s.hemumeirong.cn|www.geligw.com|g.5ewl.com|a.5ewl.com|ar.cqdajinkt.com; Hm_lvt_ca96c3507ee04e182fb6d097cb2a1a4c=1653015614,1655276039,1655429327; Hm_lvt_398913ed58c9e7dfe9695953fb7b6799=1652966343,1654433064,1655082610,1655444268; ASP.NET_SessionId=0125qj2fj05anx2ogj1jm4e2; Hm_lpvt_ca96c3507ee04e182fb6d097cb2a1a4c=1655446442; Hm_lpvt_398913ed58c9e7dfe9695953fb7b6799=1655447097'
}

# Shared semaphore: created once so all fetches contend on the same limit.
# (A fresh Semaphore created inside fetch() would never actually cap concurrency.)
sem = asyncio.Semaphore(10)

async def fetch(url,files=False):
    '''
    Async request helper
    '''
    async with sem:
        async with aiohttp.ClientSession(headers=headers) as session:
            async with session.get(url) as response:
                if files:
                    return await response.read(),response.url
                else:
                    return await response.text()

async def get_info_url():
    '''
    Fetch the listing pages and collect the detail-page URLs
    '''
    page_url = lambda x :"https://sc.chinaz.com/moban/index_{}.html".format(x)
    all_page_url = [fetch(page_url(i)) for i in range(1,199)]
    res_list = await asyncio.gather(*all_page_url)
    # renamed the loop variable to href so it doesn't shadow the page_url lambda
    return map(lambda res:['https:' + href for href in etree.HTML(res).xpath('//div[@id="container"]//p/a[@target="_blank" and @alt]/@href')],res_list)

async def get_download_url():
    '''
    Collect the download links from each detail page
    '''
    url_list = []
    for info_url in await get_info_url():
        if info_url !=[]:
            for url in info_url:
                url_list.append(fetch(url))
    _res_list = await asyncio.gather(*url_list)
    return map(lambda res:etree.HTML(res).xpath('//div[@class="dian"]/a/@href'),_res_list)

async def save_file(path, data):
    '''Write one downloaded archive to disk.
    aiofiles.open() is an async context manager, so the write has to happen
    inside `async with` (calling .write() on the bare open() object fails).'''
    async with aiofiles.open(path, 'wb') as f:
        await f.write(data)

async def download_file():
    '''
    Download the files (skipping detail pages whose xpath found no link)
    '''
    file_contents = await asyncio.gather(*[fetch(url[0],files=True) for url in await get_download_url() if url])
    # content[1] is a yarl.URL, so convert to str before splitting out the filename
    await asyncio.gather(*[save_file(os.path.join(os.getcwd(),'{}.rar'.format(str(content[1]).split('/')[-1])),content[0]) for content in file_contents])


async def async_main():
    try:
        await download_file()
    except Exception as e:
        log.exception(e)
    
if __name__ == '__main__':
    asyncio.run(async_main())
sssguo posted on 2022-6-17 20:37
Thanks for sharing!
bobxie posted on 2022-6-17 20:42
Emmm......
OP | 小奥2014 posted on 2022-6-17 21:10
Quoting 话痨司机啊 (2022-6-17 20:18): "Here's an async version, so the site admin will thank you even more, haha!"

Awesome, learned something new
文西思密达 posted on 2022-6-17 21:21
Has it stopped working?

[Python]
Traceback (most recent call last):
  File "E:\站长之家模板.py", line 71, in <module>
    make_url()
  File "E:\站长之家模板.py", line 31, in make_url
    response = opener.open(request)
  File "D:\Python3.10.5\lib\urllib\request.py", line 519, in open
    response = self._open(req, data)
  File "D:\Python3.10.5\lib\urllib\request.py", line 536, in _open
    result = self._call_chain(self.handle_open, protocol, protocol +
  File "D:\Python3.10.5\lib\urllib\request.py", line 496, in _call_chain
    result = func(*args)
  File "D:\Python3.10.5\lib\urllib\request.py", line 1391, in https_open
    return self.do_open(http.client.HTTPSConnection, req,
  File "D:\Python3.10.5\lib\urllib\request.py", line 1352, in do_open
    r = h.getresponse()
  File "D:\Python3.10.5\lib\http\client.py", line 1374, in getresponse
    response.begin()
  File "D:\Python3.10.5\lib\http\client.py", line 318, in begin
    version, status, reason = self._read_status()
  File "D:\Python3.10.5\lib\http\client.py", line 287, in _read_status
    raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
OP | 小奥2014 posted on 2022-6-18 12:46
Quoting 文西思密达 (2022-6-17 21:21): "Has it stopped working?" (traceback ending in http.client.RemoteDisconnected: Remote end closed connection without response)

No response came back, so it can't go any further.
Check the opener object.
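A pragmatic way to cope with `RemoteDisconnected` (the server dropping the connection, often because of rate limiting) is to retry the request a few times with a backoff. A minimal sketch — demonstrated here with a stand-in flaky function; in the real script `do_open` would be `lambda: opener.open(request)`:

```python
import time
from http.client import RemoteDisconnected

def open_with_retry(do_open, retries=3, delay=1.0):
    """Call do_open(); on RemoteDisconnected, wait and retry up to `retries` times."""
    for attempt in range(retries):
        try:
            return do_open()
        except RemoteDisconnected:
            if attempt == retries - 1:
                raise                        # out of retries, re-raise
            time.sleep(delay * (attempt + 1))  # simple linear backoff

# Stand-in that fails twice, then succeeds
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RemoteDisconnected("server closed connection")
    return "response"

print(open_with_retry(flaky, retries=3, delay=0))   # prints response
```

Adding a `time.sleep` between listing-page requests in `make_url` (as `make_download` already does) would also reduce how often the server cuts the connection in the first place.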