How to download files from a URL in Python


Python provides several modules, such as urllib and requests, for downloading files from the web. In this article I am going to use the requests library to download files from URLs.

Let's look at the step-by-step procedure for downloading files from URLs using the requests library:

1. Import the module.

import requests

2. Get the link or URL.

url = 'https://www.facebook.com/favicon.ico'
r = requests.get(url, allow_redirects=True)

3. Save the content with a name.

open('facebook.ico', 'wb').write(r.content)

This saves the file as facebook.ico.

Example

import requests

url = 'https://www.facebook.com/favicon.ico'
r = requests.get(url, allow_redirects=True)

open('facebook.ico', 'wb').write(r.content)

Result


We can see that the downloaded file (the icon) is now in our current working directory.

But we may need to download different kinds of files, such as images, text, or video, from the web. So let's first find out the type of data the URL is linking to:

>>> r = requests.get(url, allow_redirects=True)
>>> print(r.headers.get('content-type'))
image/png

However, there is a smarter way, which involves fetching just the headers of a URL before actually downloading it. This allows us to skip files that weren't meant to be downloaded.
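
A minimal sketch of such a helper, assuming we treat text and HTML content types as pages rather than downloadable files:

import requests

def is_downloadable(url):
    """
    Does the URL point to a downloadable resource?
    Fetch only the headers, not the body.
    """
    h = requests.head(url, allow_redirects=True)
    content_type = h.headers.get('content-type', '').lower()
    if 'text' in content_type or 'html' in content_type:
        return False
    return True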

>>> print(is_downloadable('https://www.youtube.com/watch?v=xCglV_dqFGI'))
False
>>> print(is_downloadable('https://www.facebook.com/favicon.ico'))
True

To restrict the download by file size, we can read the size from the Content-Length header (it arrives as a string, so convert it to an integer) and then proceed as our requirement dictates. For instance, inside is_downloadable(), after the HEAD request:

content_length = h.headers.get('content-length', None)
if content_length and int(content_length) > 2e8:  # 200 MB, approximately
    return False

Get filename from a URL

To get the filename, we can parse the URL. Below is a sample routine which fetches the last string after the slash (/).

url = "http://www.computersolution.tech/wp-content/uploads/2016/05/tutorialspoint-logo.png"
if url.find('/') != -1:
    print(url.rsplit('/', 1)[1])

The above will print the filename part of the URL. However, there are many cases where the filename information is not present in the URL, for example http://url.com/download. In such a case, we need to read the Content-Disposition header, which contains the filename information.

import requests
import re

def getFilename_fromCd(cd):
    """
    Get filename from content-disposition
    """
    if not cd:
        return None
    fname = re.findall('filename=(.+)', cd)
    if len(fname) == 0:
        return None
    return fname[0]


url = 'http://google.com/favicon.ico'
r = requests.get(url, allow_redirects=True)
filename = getFilename_fromCd(r.headers.get('content-disposition'))
if filename:  # the header may be absent, in which case filename is None
    open(filename, 'wb').write(r.content)

The URL-parsing code above, used in conjunction with this program, will get you a filename most of the time: try the Content-Disposition header first, then fall back to parsing the URL.
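
Combining the two approaches into one helper, a sketch (the name get_filename and the quote-stripping are my own additions):

import re
import requests

def get_filename(url, cd):
    """Prefer the Content-Disposition header, else the URL's last path segment."""
    if cd:
        fname = re.findall('filename=(.+)', cd)
        if fname:
            return fname[0].strip('"')  # strip optional surrounding quotes
    return url.rsplit('/', 1)[1]

url = 'http://google.com/favicon.ico'
r = requests.get(url, allow_redirects=True)
filename = get_filename(url, r.headers.get('content-disposition'))
open(filename, 'wb').write(r.content)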


I wanted to download all the files from a webpage. I tried wget, but it was failing, so I decided to go the Python route and found this thread.

After reading it, I made a little command-line application, soupget, expanding on the excellent answers of PabloG and Stan and adding some useful options.

It uses BeautifulSoup to collect all the URLs on the page and then downloads the ones with the desired extension(s). Finally, it can download multiple files in parallel.

Here it is:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import (division, absolute_import, print_function, unicode_literals)
import sys, os, argparse
from bs4 import BeautifulSoup

# --- insert Stan's script here ---
# (assumed to define download_file(url, dest=None) and, on Python 3,
#  to alias urllib.request as urllib2 and urllib.parse as urlparse,
#  which collect_all_url() below relies on)
# if sys.version_info >= (3,):
#...
#...
# def download_file(url, dest=None):
#...
#...

# --- new stuff ---
def collect_all_url(page_url, extensions):
    """
    Recovers all links in page_url checking for all the desired extensions
    """
    conn = urllib2.urlopen(page_url)
    html = conn.read()
    soup = BeautifulSoup(html, 'lxml')
    links = soup.find_all('a')

    results = []    
    for tag in links:
        link = tag.get('href', None)
        if link is not None: 
            for e in extensions:
                if e in link:
                    # Fallback for badly defined links
                    # checks for missing scheme or netloc
                    if bool(urlparse.urlparse(link).scheme) and bool(urlparse.urlparse(link).netloc):
                        results.append(link)
                    else:
                        new_url=urlparse.urljoin(page_url,link)                        
                        results.append(new_url)
    return results

if __name__ == "__main__":  # Only run if this file is called directly
    # Command line arguments
    parser = argparse.ArgumentParser(
        description='Download all files from a webpage.')
    parser.add_argument(
        '-u', '--url', 
        help='Page url to request')
    parser.add_argument(
        '-e', '--ext', 
        nargs='+',
        help='Extension(s) to find')    
    parser.add_argument(
        '-d', '--dest', 
        default=None,
        help='Destination where to save the files')
    parser.add_argument(
        '-p', '--par', 
        action='store_true', default=False, 
        help="Turns on parallel download")
    args = parser.parse_args()

    # Recover files to download
    all_links = collect_all_url(args.url, args.ext)

    # Download
    if not args.par:
        for l in all_links:
            try:
                filename = download_file(l, args.dest)
                print(l)
            except Exception as e:
                print("Error while downloading: {}".format(e))
    else:
        from multiprocessing.pool import ThreadPool
        results = ThreadPool(10).imap_unordered(
            lambda x: download_file(x, args.dest), all_links)
        for p in results:
            print(p)

An example of its usage is:

python3 soupget.py -p -e <list of extensions> -d <destination_folder> -u <target_webpage>

And an actual example if you want to see it in action:

python3 soupget.py -p -e .xlsx .pdf .csv -u https://healthdata.gov/dataset/chemicals-cosmetics

Can I use Python to download files from a website?

Requests is a versatile HTTP library in Python with various applications. One of its applications is downloading a file from the web using the file's URL.

How do I download multiple files from a website using Python?

Apply the same steps as above in a loop over the URLs; see the sketch after this list.

1. Import the module: import requests
2. Get the link or URL: url = 'https://www.facebook.com/favicon.ico' and r = requests.get(url, allow_redirects=True)
3. Save the content with a name: open('facebook.ico', 'wb').write(r.content) saves the file as facebook.ico
4. Get the filename from the URL: parse the URL, as described earlier
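
A minimal sketch of that loop; the URL list is illustrative, and the filename is prefixed with the host so identically named files don't collide:

from urllib.parse import urlparse

import requests

urls = [
    'https://www.facebook.com/favicon.ico',
    'https://www.google.com/favicon.ico',
]

for url in urls:
    r = requests.get(url, allow_redirects=True)
    parsed = urlparse(url)
    # e.g. 'www.facebook.com_favicon.ico'
    filename = parsed.netloc + '_' + parsed.path.rsplit('/', 1)[1]
    open(filename, 'wb').write(r.content)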

How do I download an Excel file from a website using Python?

One option is to drive a real browser with Selenium; see the sketch after this list. How to download a file using Selenium and Python:

Prerequisites: Selenium installed, plus a ChromeDriver matching your Chrome version.
Step 1: Import the required packages into the Python test script.
Step 2: Set Chrome options.
Step 3: Create a Chrome driver object with the options.
Step 4: Create a script to navigate to the website and click on download .csv.
Step 5: Run the test.
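
A hedged sketch of those steps; the page URL, link text, and download folder are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By

# Step 2: set Chrome options so downloads land in a known folder
options = webdriver.ChromeOptions()
options.add_experimental_option('prefs', {
    'download.default_directory': '/tmp/downloads',  # placeholder folder
})

# Step 3: create the Chrome driver object with those options
driver = webdriver.Chrome(options=options)

# Step 4: navigate to the website and click the download link
driver.get('https://example.com/reports')  # placeholder URL
driver.find_element(By.LINK_TEXT, 'Download .csv').click()  # placeholder link text

# Step 5: in a real test, assert the file appeared, then clean up
driver.quit()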

How do you download an image from a URL in Python?

How to download an image using requests in Python:

import requests

response = requests.get("https://i.imgur.com/ExdKOOz.png")
file = open("sample_image.png", "wb")
file.write(response.content)
file.close()