wget

download files from the web without opening a browser.

The wget man page is 2,400 lines long. You need about five flags and you’ll never right-click “Save As” again.

You needed to download a file. So you opened a browser. You navigated to the URL. You right-clicked the link. You clicked “Save As.” You chose a folder. You waited. The browser showed a progress bar at the bottom of the screen that may or may not have been accurate. The download finished. You navigated to the Downloads folder. You moved the file where you actually wanted it.

Then you needed to download twelve more files. So you did that eleven more times.

Or — you needed to download something on a remote server that doesn’t have a browser. Because it’s a server. It doesn’t have a desktop environment. It doesn’t have Chrome. It has wget and a URL.

Unless you’re running Windows, in which case, wtf, none of this applies to you. But hey, come to the dark side: install WSL2 and you can follow along. We’ll wait. Impatiently.

If you’re lazy like me (all sysadmins are!), skip straight to the wget cheat sheet at the bottom.


Download a file

wget https://example.com/file.tar.gz

That’s it. File downloaded. Saved in the current directory with the original filename. Progress bar included — one that actually shows speed, size, and ETA.

No browser. No “Save As” dialog. No navigating to your Downloads folder afterwards.


Save with a different name

wget -O report.pdf https://example.com/downloads/q4-2025-financial-report-final-v3-FINAL.pdf

-O (capital O) sets the output filename. Because sometimes the URL has a name that makes you question the sender’s organizational skills.


Download to a specific directory

wget -P /home/owner/documents/ https://example.com/file.pdf

-P sets the download directory. The file goes straight where you want it. No download-then-move.
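
One gotcha: in my experience -O overrides -P when you pass both, so if you want to rename the file and pick the directory in one go, put the full path in -O:

# rename AND choose the directory: the path goes in -O
wget -O /home/owner/documents/report.pdf https://example.com/downloads/q4-2025-financial-report-final-v3-FINAL.pdf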


Resume an interrupted download

wget -c https://example.com/large-file.iso

-c continues a partial download. Your internet dropped at 87%. You don’t start over. You run the same command with -c and it picks up where it left off. Your browser would have made you start from zero.
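
For a connection that keeps dropping, you can wrap -c in a shell loop so every attempt resumes the partial file instead of starting over. A minimal sketch (the URL is a placeholder):

# keep retrying until wget exits successfully, resuming each time
until wget -c https://example.com/large-file.iso; do
    echo "download interrupted, retrying in 5 seconds..." >&2
    sleep 5
done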


Download multiple files

From a list

wget -i urls.txt

Create a text file with one URL per line. wget downloads all of them. No clicking. No tabs. No “download manager” application.
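
The whole flow, sketched with placeholder URLs:

# one URL per line
cat > urls.txt <<'EOF'
https://example.com/file1.tar.gz
https://example.com/file2.tar.gz
https://example.com/file3.tar.gz
EOF

# -P drops everything in one directory; -c resumes any partial files
wget -c -P downloads/ -i urls.txt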

Multiple URLs inline

wget https://example.com/file1.tar.gz https://example.com/file2.tar.gz https://example.com/file3.tar.gz

Just list them. wget downloads them sequentially. One command, three files.
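
If the names follow a pattern, let the shell build the list for you. This is bash/zsh brace expansion, not a wget feature; the shell expands the braces into three separate URLs before wget ever sees them:

# expands to file1.tar.gz, file2.tar.gz, file3.tar.gz
wget https://example.com/file{1..3}.tar.gz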


Run in the background

wget -b https://example.com/huge-file.iso

-b backgrounds the download. It writes progress to wget-log in the current directory. Check on it with:

tail -f wget-log

For large downloads on a remote server, combine with tmux and you can disconnect entirely.
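
A minimal tmux recipe (the session name is arbitrary):

tmux new -s downloads                        # start a named session
wget -c https://example.com/huge-file.iso    # kick off the download
# press Ctrl-b then d to detach; the session keeps running
# later, from a fresh SSH connection:
tmux attach -t downloads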


Mirror a website

wget --mirror --convert-links --page-requisites --no-parent https://example.com/docs/

Downloads an entire website for offline viewing.

  • --mirror — recursive download with timestamps
  • --convert-links — rewrite links to work locally
  • --page-requisites — grab CSS, images, JS — everything the page needs
  • --no-parent — don’t crawl up to the parent directory

Or the short version:

wget -mkpnp https://example.com/docs/

Now you have an offline copy of the documentation. For when the internet is unreliable, the site goes down, or you’re about to board a flight.
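
If it’s someone else’s server, be polite about it. One variant that spaces out requests and caps bandwidth:

# --wait=1 pauses a second between requests; --limit-rate keeps you
# from saturating their uplink
wget -mkpnp --wait=1 --limit-rate=500k https://example.com/docs/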


Limit download speed

wget --limit-rate=1m https://example.com/large.iso

Caps the download at 1 megabyte per second. For when you’re sharing bandwidth and don’t want to be the person who saturated the office connection to download an ISO.
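
The suffix sets the unit: k for kilobytes per second, m for megabytes. For example:

wget --limit-rate=500k https://example.com/large.iso    # 500 KB/s
wget --limit-rate=2m https://example.com/large.iso      # 2 MB/s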


Download with authentication

Basic HTTP auth

wget --user=admin --password=secret https://example.com/protected/file.zip

Avoid putting passwords in your shell history

wget --user=admin --ask-password https://example.com/protected/file.zip

--ask-password prompts you to type it. Doesn’t show up in history. Better.
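
For repeated downloads from the same host, a ~/.netrc file keeps credentials out of your history and off the command line entirely; wget consults it when you don’t pass auth flags. A sketch with placeholder credentials:

# ~/.netrc: one line per host
machine example.com login admin password secret

# must be readable only by you
chmod 600 ~/.netrc

# no auth flags needed; wget finds the credentials itself
wget https://example.com/protected/file.zip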


Retry on failure

wget --tries=10 --retry-connrefused https://example.com/file.tar.gz

Retries up to 10 times if the connection fails. --retry-connrefused retries even when the server actively refuses the connection. For unstable networks or servers that occasionally return errors.

wget --waitretry=30 --tries=0 https://example.com/file.tar.gz

--tries=0 means retry forever. --waitretry=30 waits up to 30 seconds between retries. For when you absolutely need that file and the server is being difficult.
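
These flags compose. If you want one stubborn never-give-up command, combine them with -c so every attempt resumes the partial file:

wget -c --tries=0 --waitretry=30 --retry-connrefused https://example.com/file.tar.gz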


Quiet mode

wget -q https://example.com/file.tar.gz

No progress bar. No output. Just downloads the file silently. Useful in scripts where you don’t need the visual feedback.

wget -q -O - https://example.com/api/status

-O - outputs to stdout instead of a file. Combined with -q, this turns wget into a basic HTTP client — fetches a URL and outputs the response. Not as powerful as curl for API work, but works in a pinch.
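
wget’s exit status makes -q script-friendly: zero on success, non-zero on failure. A minimal sketch (URL and path are placeholders):

# quiet download with an explicit failure branch
if wget -q -O /tmp/status.json https://example.com/api/status; then
    echo "fetched OK"
else
    echo "download failed" >&2
    exit 1
fi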


wget vs curl

You have both. Here’s when to use which:

  • Download a file: wget — simpler syntax, auto-names the file
  • Resume a download: wget -c — built-in and automatic
  • Download multiple files: wget -i urls.txt — purpose-built
  • Mirror a website: wget --mirror — nothing else compares
  • API calls: curl — better header/method support
  • POST requests / JSON: curl — designed for it
  • Debugging HTTP: curl -v — shows full request/response headers

They overlap, but wget is optimized for downloading and curl is optimized for talking to APIs. Use both.
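
The same download in both tools, for muscle-memory purposes. curl needs -O to keep the remote filename and -L to follow redirects; wget does both by default:

wget https://example.com/file.tar.gz       # saves file.tar.gz, follows redirects
curl -LO https://example.com/file.tar.gz   # needs -L and -O to match that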


The flags that actually matter

  • -O FILE: Save with a specific filename. -O - for stdout.
  • -P DIR: Save to a specific directory.
  • -c: Continue a partial download.
  • -b: Run in the background.
  • -q: Quiet — no output.
  • -i FILE: Download URLs from a list file.
  • --mirror: Mirror a website recursively.
  • --convert-links: Fix links for offline viewing.
  • --no-parent: Don’t crawl above the starting directory.
  • --limit-rate=RATE: Throttle download speed.
  • --tries=N: Retry N times on failure.
  • --user / --password: HTTP authentication.

“But I just use my browser—”

And that’s why your Downloads folder has 847 files in it.

“Chrome downloads files fine.” Chrome downloads one file at a time with a “Save As” dialog. Need to download twenty files? Twenty dialogs. Need to download files on a headless server? Chrome isn’t even installed. wget handles both.

“I use a download manager.” You installed a dedicated application to download files. An application with a GUI, a system tray icon, browser integration, and probably browser extensions. To save files. From the internet. wget is four characters and was installed before your download manager existed.

“JDownloader handles multiple downloads.” JDownloader is 150MB, runs on Java, and has a “Dark Mode” setting. For downloading files. It’s like buying a forklift to carry groceries.

“But my browser can resume downloads too.” Sometimes. When the server supports it. And when Chrome feels like it. wget -c resumes reliably because it stores the partial file and sends the right HTTP headers automatically. It’s not a feature request — it’s the default behavior.

“I need a download manager for captcha-protected sites.” Fair. wget can’t solve CAPTCHAs. You got me on that one.


wget cheat sheet

You made it. Or you skipped straight here. Either way, no judgment. Copy and paste these. Pin them. Tattoo them on your forearm. Whatever works.

  • Download a file: wget URL
  • Save with a different name: wget -O name.zip URL
  • Save to a specific directory: wget -P /dest/ URL
  • Resume interrupted download: wget -c URL
  • Download from a list: wget -i urls.txt
  • Background download: wget -b URL
  • Mirror a website: wget --mirror --convert-links --page-requisites --no-parent URL
  • Limit speed: wget --limit-rate=1m URL
  • Retry on failure: wget --tries=10 URL
  • With authentication: wget --user=admin --ask-password URL
  • Quiet mode (scripts): wget -q URL
  • Output to stdout: wget -q -O - URL

The one command: wget URL — that’s all you need. File downloaded. Name preserved. Done.

Back to the top, you overachiever.