吾爱破解 - 52pojie.cn

[Python repost] A Python crawler for scraping beauty photo galleries

lang6i5u623 posted on 2017-11-13 10:03
import requests
from lxml import html


def get_page_number(num):
    # Find the detail-page address of every gallery on one index page.
    # An index page currently holds 15 galleries, so this should return
    # a list of 15 links.
    url = 'http://www.mmjpg.com/home/' + num
    # Build the URL of this index page
    response = requests.get(url).content
    # Fetch the response as raw bytes. Note: using .text here makes the
    # HTML parsing below fail (try it yourself). This is the difference
    # between .content and .text: roughly, use .text for textual content
    # such as text and links, and .content for binary content such as
    # video, audio and images.
    selector = html.fromstring(response)
    # Build a selector with lxml.html, which turns the raw response bytes
    # into a parseable element tree. lxml's etree module also builds
    # element trees; lxml.html.fromstring is the variant for HTML strings,
    # as the names suggest.
    urls = []
    # Prepare a container
    for i in selector.xpath("//ul/li/a/@href"):
        # XPath locates the detail-page address of every gallery
        urls.append(i)
        # Append each address to the container
    return urls
    # Return the list as the function's result


def get_image_title(url):
    # We are now on a gallery's detail page; extract the gallery title
    response = requests.get(url).content
    selector = html.fromstring(response)
    image_title = selector.xpath("//h2/text()")[0]
    # Note that xpath() always returns a list, hence the [0]
    return image_title

def get_image_amount(url):
    # This reinvents the wheel: the logic is identical to the function
    # above. A leaner design would fetch the title, the links and the
    # image count in one function and return the three as a tuple, but
    # for a beginner tutorial it is clearer to keep each step separate.
    # Readers who want a challenge can try the tuple version.
    response = requests.get(url).content
    selector = html.fromstring(response)
    image_amount = selector.xpath("//div[@class='page']/a[last()-1]/text()")[0]
    # The second-to-last <a> in the pager points at the gallery's last
    # page, i.e. the total number of images, so we can read it directly
    return image_amount


def get_image_detail_website(url):
    # More wheel-reinventing: fetch the gallery page yet again.
    response = requests.get(url).content
    selector = html.fromstring(response)
    image_detail_websites = []
    # Container for the per-image page addresses
    image_amount = selector.xpath("//div[@class='page']/a[last()-1]/text()")[0]
    # The count is re-derived here just to get the total; a cleaner way
    # is to pass the value in from get_image_amount(), but this clumsy
    # version works. Improvements welcome.
    for i in range(int(image_amount)):
        image_detail_link = '{}/{}'.format(url, i+1)
        response = requests.get(image_detail_link).content
        sel = html.fromstring(response)
        image_download_link = sel.xpath("//div[@class='content']/a/img/@src")[0]
        # The final download URL of a single image
        image_detail_websites.append(image_download_link)
    return image_detail_websites


def download_image(image_title, image_detail_websites):
    # Save the images locally. The two parameters are the gallery title
    # and the list of download URLs.
    num = 1
    amount = len(image_detail_websites)
    # Total number of images
    for i in image_detail_websites:
        filename = '%s%s.jpg' % (image_title, num)
        print('Downloading image: %s, %s/%s' % (image_title, num, amount))
        with open(filename, 'wb') as f:
            f.write(requests.get(i).content)
        num += 1


if __name__ == '__main__':
    page_number = input('Enter the page number to crawl: ')
    for link in get_page_number(page_number):
        download_image(get_image_title(link), get_image_detail_website(link))
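The "tuple mode" the author mentions — getting the title, the image count and the links in one pass instead of three separate fetches — can be sketched roughly like this. `parse_image_info` is a name I made up, and it takes the already-fetched HTML rather than a URL so the parsing logic can be tried without hitting the site:

```python
from lxml import html


def parse_image_info(page_html, base_url):
    # Parse a gallery's detail page once and return (title, amount, links),
    # folding get_image_title, get_image_amount and the link-building loop
    # into a single parse.
    selector = html.fromstring(page_html)
    title = selector.xpath("//h2/text()")[0]
    # The second-to-last <a> in the pager holds the total image count
    amount = int(selector.xpath("//div[@class='page']/a[last()-1]/text()")[0])
    links = ['{}/{}'.format(base_url, i + 1) for i in range(amount)]
    return title, amount, links
```

A caller would then unpack it once per gallery, e.g. `title, amount, links = parse_image_info(requests.get(url).content, url)`, so each detail page is fetched only once.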


rhinorhino posted on 2017-11-24 15:56
But then the following error appeared later on:
Downloading image: 容貌似杨幂的美女馨怡美腿极致诱惑图, 43/44
Downloading image: 容貌似杨幂的美女馨怡美腿极致诱惑图, 44/44
Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\urllib3\connection.py", line 141, in _new_conn
    (self.host, self.port), self.timeout, **extra_kw)
  File "C:\Python34\lib\site-packages\urllib3\util\connection.py", line 83, in create_connection
    raise err
  File "C:\Python34\lib\site-packages\urllib3\util\connection.py", line 73, in create_connection
    sock.connect(sa)
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\urllib3\connectionpool.py", line 601, in urlopen
    chunked=chunked)
  File "C:\Python34\lib\site-packages\urllib3\connectionpool.py", line 357, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "C:\Python34\lib\http\client.py", line 1137, in request
    self._send_request(method, url, body, headers)
  File "C:\Python34\lib\http\client.py", line 1182, in _send_request
    self.endheaders(body)
  File "C:\Python34\lib\http\client.py", line 1133, in endheaders
    self._send_output(message_body)
  File "C:\Python34\lib\http\client.py", line 963, in _send_output
    self.send(msg)
  File "C:\Python34\lib\http\client.py", line 898, in send
    self.connect()
  File "C:\Python34\lib\site-packages\urllib3\connection.py", line 166, in connect
    conn = self._new_conn()
  File "C:\Python34\lib\site-packages\urllib3\connection.py", line 150, in _new_conn
    self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x0233A070>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Python34\lib\site-packages\requests\adapters.py", line 440, in send
    timeout=timeout
  File "C:\Python34\lib\site-packages\urllib3\connectionpool.py", line 639, in urlopen
    _stacktrace=sys.exc_info()[2])
  File "C:\Python34\lib\site-packages\urllib3\util\retry.py", line 388, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='www.mmjpg.com', port=80): Max retries exceeded with url: /mm/1147/24 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0233A070>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.',))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "D:\My Documents\Python\SpiderBeauty.py", line 79, in <module>
    download_image(get_image_title(link), get_image_detail_website(link))
  File "D:\My Documents\Python\SpiderBeauty.py", line 53, in get_image_detail_website
    response = requests.get(image_detail_link).content
  File "C:\Python34\lib\site-packages\requests\api.py", line 72, in get
    return request('get', url, params=params, **kwargs)
  File "C:\Python34\lib\site-packages\requests\api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "C:\Python34\lib\site-packages\requests\sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "C:\Python34\lib\site-packages\requests\adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='www.mmjpg.com', port=80): Max retries exceeded with url: /mm/1147/24 (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x0233A070>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.',))
>>>
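The WinError 10060 above means the remote host simply stopped answering; by default `requests` has no timeout and gives up for good on the first connection failure. One common workaround (a sketch, not part of the original script — the retry count, backoff and timeout values here are arbitrary choices) is a `Session` with a retrying transport adapter plus an explicit timeout on each request:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_session(retries=3, backoff=1):
    # Build a Session whose HTTP(S) adapters retry failed connections
    # with exponential backoff instead of dying on the first failure.
    session = requests.Session()
    retry = Retry(total=retries, backoff_factor=backoff,
                  status_forcelist=[500, 502, 503, 504])
    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session


# Usage: always pass a timeout so a dead host fails fast, e.g.
# session = make_session()
# response = session.get('http://www.mmjpg.com/mm/1147/24', timeout=10)
```

Replacing the bare `requests.get(...)` calls in the script with `session.get(..., timeout=10)` would let a stalled download recover instead of aborting the whole run.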
rhinorhino posted on 2017-11-24 14:34
Last edited by rhinorhino on 2017-11-24 15:00
lang6i5u623 posted on 2017-11-24 14:09
You probably haven't installed the requests module. Are you on Python 2.7 or Python 3?

I have already installed the module from the command line with pip install requests.
I am running Python 2.7 under Windows XP.

Now it reports the following when run:
>>>
============== RESTART: D:\My Documents\Python\SpiderBeauty.py ==============
Enter the page number to crawl: 1

Traceback (most recent call last):
  File "D:\My Documents\Python\SpiderBeauty.py", line 75, in <module>
    for link in get_page_number(page_number):
  File "D:\My Documents\Python\SpiderBeauty.py", line 7, in get_page_number
    url = 'http://www.mmjpg.com/home/' + num
TypeError: cannot concatenate 'str' and 'int' objects
>>>
============== RESTART: D:\My Documents\Python\SpiderBeauty.py ==============
Enter the page number to crawl: 2

Traceback (most recent call last):
  File "D:\My Documents\Python\SpiderBeauty.py", line 75, in <module>
    for link in get_page_number(page_number):
  File "D:\My Documents\Python\SpiderBeauty.py", line 7, in get_page_number
    url = 'http://www.mmjpg.com/home/' + num
TypeError: cannot concatenate 'str' and 'int' objects
>>>
Also, testing against the actual site shows that page 1 is not http://www.mmjpg.com/home/1 but simply http://www.mmjpg.com;
only from page 2 onward is the URL of the form http://www.mmjpg.com/home/2.
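Both issues in this reply — Python 2's `input()` returning an int so the string concatenation fails, and page 1 living at the site root rather than under /home/1 — can be handled by one small helper. A sketch; `build_page_url` is a name I made up:

```python
def build_page_url(num):
    # Page 1 is the site root; pages 2 and up live under /home/<n>.
    num = str(num)  # Python 2's input() returns an int for numeric input
    if num == '1':
        return 'http://www.mmjpg.com'
    return 'http://www.mmjpg.com/home/' + num
```

In the original script, `get_page_number` could call this instead of concatenating `'http://www.mmjpg.com/home/' + num` directly.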
songshuyl posted on 2017-11-13 10:18
哈喽,你好 posted on 2017-11-13 10:19 from mobile
Thanks for the tutorial, OP. Learned a lot.
qqyyh posted on 2017-11-13 10:24
I really want to learn this, it just feels so hard.
heqi2014 posted on 2017-11-13 10:27
I can follow the first part, but how do you write the later functions? Any pointers, master?
daban2009 posted on 2017-11-13 10:28
A bit of a struggle to read.
jacksun posted on 2017-11-13 10:29
Already on the road to learning this.
roturier posted on 2017-11-13 10:29
The girls are attractive, but the source code is a real pain. @@
宿墨 posted on 2017-11-13 10:41
Very detailed write-up, thanks OP.
supervision posted on 2017-11-13 10:46
Learning it now, thanks for sharing.