Newer isn’t always better, and the
wget command is proof. First released back in 1996, this application is still one of the best download managers on the planet. Whether you want to download a single file, an entire folder, or even mirror an entire website, wget lets you do it with just a few keystrokes.
Of course, there’s a reason not everyone uses wget: it’s a command line application, and as such takes a bit of time for beginners to learn. Here are the basics, so you can get started.
Before you can use wget, you need to install it. How to do so varies depending on your computer: most Linux distributions ship with wget preinstalled (or offer it through their package manager), Windows users can run it through Bash, and on macOS you can install it with Homebrew by running
brew install wget in the Terminal.
Once you’ve installed wget, you can start using it immediately from the command line. Let’s download some files!
Let’s start with something simple. In your browser, copy the URL of a file you’d like to download.
Now head back to the Terminal and type
wget followed by the pasted URL. The file will download, and you’ll see progress in real time as it does.
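For example, with a hypothetical file URL:
wget https://example.com/files/report.pdf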
Note that the file will download to your Terminal’s current folder, so you’ll want to
cd to a different folder first if you want it stored elsewhere. If you’re not sure what that means, check out our guide to managing files from the command line. The article mentions Linux, but the concepts are the same on macOS and on Windows systems running Bash.
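For instance, to drop the hypothetical file from above into your Downloads folder, you could run:
cd ~/Downloads
wget https://example.com/files/report.pdf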
If, for whatever reason, you stopped a download before it could finish, don’t worry: wget can pick up right where it left off. Just use this command:
wget -c file
The key here is
-c, which is an “option” in command line parlance. This particular option tells wget that you’d like to continue an existing download.
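So, to resume the hypothetical download from earlier, you’d run:
wget -c https://example.com/files/report.pdf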
If you want to download an entire website, wget can do the job.
wget -m http://example.com
By default, this will download everything on the site example.com, but you’re probably going to want to use a few more options for a usable mirror.
--convert-links changes links inside each downloaded page so that they point to each other, not the web.
--page-requisites downloads things like style sheets, so pages will look correct offline.
--no-parent stops wget from downloading parent pages. So if you want to download http://example.com/subexample, you won’t end up with the parent page.
Combine these options to taste, and you’ll end up with a copy of any website that you can browse on your computer.
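For example, a more complete mirroring command, using the same example site, might look like this:
wget -m --convert-links --page-requisites --no-parent http://example.com/subexample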
Note that mirroring an entire website on the modern Internet is going to take up a massive amount of space, so limit this to small sites unless you have near-unlimited storage.
If you’re browsing an FTP server and find an entire folder you’d like to download, just run:
wget -r ftp://example.com/folder
-r in this case tells wget you want a recursive download. You can also include
--no-parent if you want to avoid downloading folders and files above the current level.
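Putting those together:
wget -r --no-parent ftp://example.com/folder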
If you can’t find an entire folder of the downloads you want, wget can still help. Just put all of the download URLs into a single TXT file, then point wget to that document with the
-i option, like this:
wget -i download.txt
Do this and your computer will download all files listed in the text document, which is handy if you want to leave a bunch of downloads running overnight.
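The text file itself is just one URL per line. A hypothetical download.txt might look like this:
https://example.com/files/one.zip
https://example.com/files/two.zip
https://example.com/files/three.zip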
We could go on: wget offers a lot of options. But this tutorial is just intended to give you a jumping-off point. To learn more about what wget can do, type
man wget in the Terminal and read what comes up. You’ll learn a lot.
Having said that, here are a few other options I think are neat:
-t 10 will attempt the download up to 10 times before giving up; you can use whatever number you like.
--limit-rate=200k will cap your download speed at 200KB/s. Change the number to change the rate.
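For example, to combine the two on a hypothetical large file:
wget -t 10 --limit-rate=200k https://example.com/files/big-file.zip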