Downloading a file from the web with Python 3
- 2025-01-07 08:44:00
- Original post by admin
Problem:
I am creating a program that will download a .jar (Java) file from a web server by reading the URL specified in the .jad file of the same game/application. I am using Python 3.2.1.
I have managed to extract the JAR file's URL from the JAD file (every JAD file contains the URL to its JAR file), but as you may imagine, the extracted value is a type() string.
Here is the relevant function:
def downloadFile(URL=None):
    import httplib2
    h = httplib2.Http(".cache")
    resp, content = h.request(URL, "GET")
    return content

downloadFile(URL_from_file)
However, I always get an error saying that the type in the function above must be bytes, not string. I have tried using URL.encode('utf-8') as well as bytes(URL, encoding='utf-8'), but I always get the same or a similar error.
So basically my question is: how do I download a file from a server when the URL is stored as a string type?
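As for the extraction part of the question: the URL is plain text on the `MIDlet-Jar-URL:` line of the JAD file (a standard JAD attribute), so it can be pulled out with ordinary string handling. A minimal sketch; the function name and the sample text are made up, and the result is an ordinary `str`, which the solutions below accept as-is:

```python
def jar_url_from_jad(jad_text):
    # JAD files are plain-text key/value pairs; the JAR location
    # lives on the standard "MIDlet-Jar-URL:" line.
    for line in jad_text.splitlines():
        if line.startswith('MIDlet-Jar-URL:'):
            return line.split(':', 1)[1].strip()
    return None

sample = 'MIDlet-Name: Demo\nMIDlet-Jar-URL: http://example.com/demo.jar\n'
print(jar_url_from_jad(sample))  # -> http://example.com/demo.jar
```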
Solution 1:
If you want to read the contents of a web page into a variable, just read the response of urllib.request.urlopen:
import urllib.request
...
url = 'http://example.com/'
response = urllib.request.urlopen(url)
data = response.read() # a `bytes` object
text = data.decode('utf-8') # a `str`; this step can't be used if data is binary
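As an aside, `urlopen` also handles `data:` URLs (since Python 3.4), which makes it easy to try the `read()`/`decode()` steps above without a network connection; the URL below is just an inline stand-in for a real page:

```python
import urllib.request

# An inline data: URL stands in for a real web page here.
with urllib.request.urlopen('data:text/plain;charset=utf-8,hello%20world') as response:
    data = response.read()       # a `bytes` object
    text = data.decode('utf-8')  # a `str`

print(text)  # -> hello world
```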
The easiest way to download and save a file is to use the urllib.request.urlretrieve function:
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
urllib.request.urlretrieve(url, file_name)
import urllib.request
...
# Download the file from `url`, save it in a temporary directory and get the
# path to it (e.g. '/tmp/tmpb48zma.txt') in the `file_name` variable:
file_name, headers = urllib.request.urlretrieve(url)
But keep in mind that urlretrieve is considered legacy and might become deprecated (not sure why, though).
So the most correct way to do this would be to use the urllib.request.urlopen function to return a file-like object that represents an HTTP response, and then copy it to a real file using shutil.copyfileobj.
import urllib.request
import shutil
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    shutil.copyfileobj(response, out_file)
If this seems too complicated, you may want to go simpler and store the whole download in a bytes object, then write it to a file. But this works well only for small files.
import urllib.request
...
# Download the file from `url` and save it locally under `file_name`:
with urllib.request.urlopen(url) as response, open(file_name, 'wb') as out_file:
    data = response.read()  # a `bytes` object
    out_file.write(data)
It is possible to extract .gz (and maybe other formats) compressed data on the fly, but such an operation probably requires the HTTP server to support random access to the file.
import urllib.request
import gzip
...
# Read the first 64 bytes of the file inside the .gz archive located at `url`
url = 'http://example.com/something.gz'
with urllib.request.urlopen(url) as response:
    with gzip.GzipFile(fileobj=response) as uncompressed:
        file_header = uncompressed.read(64)  # a `bytes` object
        # Or do anything shown above using `uncompressed` instead of `response`.
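The `GzipFile(fileobj=...)` trick is not specific to HTTP responses: it wraps any file-like object. A self-contained sketch with in-memory data (the payload is made up):

```python
import gzip
from io import BytesIO

payload = b'hello from a .gz stream'
compressed = gzip.compress(payload)

# BytesIO plays the role of `response` in the snippet above.
with gzip.GzipFile(fileobj=BytesIO(compressed)) as uncompressed:
    restored = uncompressed.read()

print(restored == payload)  # -> True
```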
Solution 2:
Whenever I need something related to HTTP requests, I use the requests package, because its API is very easy to start with.
First, install requests:
$ pip install requests
Then the code:
from requests import get  # to make GET request

def download(url, file_name):
    # open in binary mode
    with open(file_name, "wb") as file:
        # get request
        response = get(url)
        # write to file
        file.write(response.content)
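Note that `response.content` holds the entire body in memory. For large files, requests' own `stream=True` mode avoids that by pulling the body in chunks; a sketch, where the function name and chunk size are my choice:

```python
import requests

def download_stream(url, file_name, chunk_size=8192):
    # stream=True defers the body download until iter_content()
    # pulls it in chunks, so memory use stays small.
    with requests.get(url, stream=True, timeout=30) as response:
        response.raise_for_status()
        with open(file_name, 'wb') as file:
            for chunk in response.iter_content(chunk_size=chunk_size):
                file.write(chunk)  # each chunk is a `bytes` object
```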
Solution 3:
I hope I understood the question correctly, which is: how to download a file from a server when the URL is stored as a string type?
I download files and save them locally using the code below:
import requests

url = 'https://www.python.org/static/img/python-logo.png'
fileName = r'D:\Python\dwnld\PythonLogo.png'

req = requests.get(url)
with open(fileName, 'wb') as file:
    for chunk in req.iter_content(100000):
        file.write(chunk)
Solution 4:
You can use wget, a popular downloading shell tool, through its Python package: https://pypi.python.org/pypi/wget
This is the simplest method, since it does not even require opening the destination file. Here is an example.
import wget
url = 'https://i1.wp.com/python3.codes/wp-content/uploads/2015/06/Python3-powered.png?fit=650%2C350'
wget.download(url, '/Users/scott/Downloads/cat4.jpg')
Solution 5:
Here we can use urllib's legacy interface in Python 3:
The following functions and classes are ported from the Python 2 module urllib (as opposed to urllib2). They might become deprecated at some point in the future.
Example (2 lines of code):
import urllib.request
url = 'https://www.python.org/static/img/python-logo.png'
urllib.request.urlretrieve(url, "logo.png")
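urlretrieve also accepts a reporthook callback (called with the block number, block size, and total size), which is handy for showing progress. A sketch, with the percentage math factored out into a helper of my own naming:

```python
import urllib.request

def percent(done, total):
    # Clamp to 100, since the final block can overshoot total_size.
    return min(100, done * 100 // total) if total > 0 else 0

def report(block_num, block_size, total_size):
    # Signature required by urlretrieve's reporthook parameter.
    print(f'\r{percent(block_num * block_size, total_size)}%', end='', flush=True)

# Hypothetical usage (makes a network call):
# urllib.request.urlretrieve(url, 'logo.png', reporthook=report)
```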
Solution 6:
Yes, requests is definitely a great package to use for anything related to HTTP requests. But we also need to be careful with the encoding type of the incoming data; below is an example that explains the difference.
from requests import get

# case when the response is a byte array
url = 'some_image_url'
response = get(url)
with open('output', 'wb') as file:
    file.write(response.content)

# case when the response is text
# If the response content happens to be of type **iso-8859-1**, we will have to override the response encoding
url = 'some_page_url'
response = get(url)
# override the encoding with an educated guess, as provided by chardet
response.encoding = response.apparent_encoding
with open('output', 'w', encoding='utf-8') as file:
    file.write(response.text)
Solution 7:
Motivation
Sometimes we want to fetch a picture without downloading it to an actual file, i.e., download the data and keep it in memory.
For example, say I use machine learning to train a model that can recognize images containing numbers (barcodes).
When I crawl websites that contain those images, I can use the model to recognize them on the fly, and I don't want to save the pictures to my disk drive.
In that case, you can try the approach below to keep the downloaded data in memory.
Credits
import requests
from io import BytesIO

response = requests.get(url)
with BytesIO() as io_obj:
    for chunk in response.iter_content(chunk_size=4096):
        io_obj.write(chunk)
Basically, this is the same idea as @Ranvijay Kumar's answer.
An example:
import requests
from typing import NewType, TypeVar
from io import StringIO, BytesIO
import matplotlib.pyplot as plt
import imageio

URL = NewType('URL', str)
T_IO = TypeVar('T_IO', StringIO, BytesIO)


def download_and_keep_on_memory(url: URL, headers=None, timeout=None, **option) -> T_IO:
    chunk_size = option.get('chunk_size', 4096)  # default 4KB
    max_size = 1024 ** 2 * option.get('max_size', -1)  # MB, default will ignore.
    response = requests.get(url, headers=headers, timeout=timeout)
    if response.status_code != 200:
        raise requests.ConnectionError(f'{response.status_code}')

    instance_io = StringIO if isinstance(next(response.iter_content(chunk_size=1)), str) else BytesIO
    io_obj = instance_io()
    cur_size = 0
    for chunk in response.iter_content(chunk_size=chunk_size):
        cur_size += chunk_size
        if 0 < max_size < cur_size:
            break
        io_obj.write(chunk)
    io_obj.seek(0)
    """ save it to a real file.
    with open('temp.png', mode='wb') as out_f:
        out_f.write(io_obj.read())
    """
    return io_obj


def main():
    headers = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3',
        'Accept-Encoding': 'gzip, deflate',
        'Accept-Language': 'zh-TW,zh;q=0.9,en-US;q=0.8,en;q=0.7',
        'Cache-Control': 'max-age=0',
        'Connection': 'keep-alive',
        'Host': 'statics.591.com.tw',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.87 Safari/537.36'
    }
    io_img = download_and_keep_on_memory(
        URL('http://statics.591.com.tw/tools/showPhone.php?info_data=rLsGZe4U%2FbphHOimi2PT%2FhxTPqI&type=rLEFMu4XrrpgEw'),
        headers,     # You may need this. Otherwise, some websites will send a 404 error.
        max_size=4)  # max loading < 4MB

    with io_img:
        plt.rc('axes.spines', top=False, bottom=False, left=False, right=False)
        plt.rc(('xtick', 'ytick'), color=(1, 1, 1, 0))  # same as plt.axis('off')
        plt.imshow(imageio.imread(io_img, as_gray=False, pilmode="RGB"))
        plt.show()


if __name__ == '__main__':
    main()
Solution 8:
from urllib import request


def get(url):
    with request.urlopen(url) as r:
        return r.read()


def download(url, file=None):
    if not file:
        file = url.split('/')[-1]
    with open(file, 'wb') as f:
        f.write(get(url))
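One caveat with `url.split('/')[-1]` as the default file name: any query string ends up in the name (e.g. `foo.png?fit=650`). A slightly more robust default using only the standard library; the helper name is mine:

```python
import os.path
from urllib.parse import urlparse

def filename_from_url(url):
    # urlparse separates out the query string; basename keeps
    # only the last path component.
    return os.path.basename(urlparse(url).path)

print(filename_from_url('http://example.com/a/b.zip?x=1'))  # -> b.zip
```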
Solution 9:
If you are on Linux, you can use the wget tool of Linux from the Python shell. Here is a sample code snippet:

import os

url = 'http://www.example.com/foo.zip'
os.system('wget %s' % url)
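Interpolating a URL into a shell string with os.system is fragile: shell metacharacters in the URL (such as `&`) break the command. Passing an argument list to subprocess.run avoids the shell entirely; a sketch, using wget's real -O output-file flag (the helper name is mine):

```python
import subprocess

def wget_command(url, out_file=None):
    # An argument list never goes through the shell, so the URL
    # cannot inject shell syntax.
    command = ['wget', url]
    if out_file:
        command += ['-O', out_file]  # wget's output-file flag
    return command

# subprocess.run(wget_command('http://www.example.com/foo.zip'), check=True)
```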