Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user's constant presence, which can be a great hindrance when transferring a lot of data.
Wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as ``recursive downloading.'' While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing.
Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.
However, if you specify --force-html, the document will be
regarded as HTML. In that case you may have problems with
relative links, which you can solve either by adding "<base
href="url">" to the documents or by specifying
--base=url on the command line.
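For example, assuming a plain-text file of links (the file name and base URL here are only illustrative), the two options can be combined like this:

wget --force-html --base=http://www.example.com/docs/ -i links.txt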
When running Wget without -N, -nc, or -r, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, ``no-clobber'' is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.
When running Wget with -r, but without -N or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.
When running Wget with -N, with or without -r, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.
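For example, to let time-stamping decide whether a previously downloaded copy needs to be refreshed (the URL is only illustrative):

wget -N http://www.example.com/archive/data.tar.gz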
Note that when -nc is specified, files with the suffixes
.html or (yuck) .htm will be loaded from the local disk
and parsed as if they had been retrieved from the Web.
wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.
Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.
Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.
Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. If you really want the download to start from scratch, remove the file.
Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because ``continuing'' is not meaningful, no download occurs.
On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file.
However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an ``incomplete download'' candidate.
Another instance where you'll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a ``transfer interrupted'' string into the local file. In the future a ``rollback'' option may be added to deal with this case.
Note that -c only works with FTP servers and with HTTP
servers that support the "Range" header.
The ``bar'' indicator is used by default. It draws an ASCII progress bar (a.k.a. ``thermometer'' display) indicating the status of retrieval. If the output is not a TTY, the ``dot'' display will be used by default.
Use --progress=dot to switch to the ``dot'' display. It traces the retrieval by printing dots on the screen, each dot representing a fixed amount of downloaded data.
When using the dotted retrieval, you may also set the style by specifying the type as dot:style. Different styles assign different meaning to one dot. With the "default" style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The "binary" style has a more ``computer''-like orientation---8K dots, 16-dots clusters and 48 dots per line (which makes for 384K lines). The "mega" style is suitable for downloading very large files---each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M).
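For example, to use the ``mega'' style while fetching a large file (the URL is only illustrative):

wget --progress=dot:mega http://www.example.com/big-file.iso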
Note that you can set the default style using the "progress"
command in .wgetrc. That setting may be overridden from the
command line. The exception is that, when the output is not a TTY, the
``dot'' progress will be favored over ``bar''. To force the bar output,
use --progress=bar:force.
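For instance, a line such as the following in .wgetrc would make the ``mega'' dot style the default, assuming the same type:style syntax is accepted there as on the command line:

progress = dot:mega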
wget --spider --force-html -i bookmarks.html

This feature needs much more work for Wget to get close to the functionality of real web spiders.
Whenever Wget connects to or reads from a remote host, it checks for a timeout and aborts the operation if the time expires. This prevents anomalous occurrences such as hanging reads or infinite connects. The only timeout enabled by default is a 900-second timeout for reading. Setting timeout to 0 disables checking for timeouts.
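For example, to shorten the timeout to 60 seconds using the --timeout option described here (the URL is only illustrative):

wget --timeout=60 http://www.example.com/slow/report.pdf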
Unless you know what you are doing, it is best not to set any of the
timeout-related options.
Note that Wget implements the limiting by sleeping the appropriate
amount of time after a network read that took less time than specified
by the rate. Eventually this strategy causes the TCP transfer to slow
down to approximately the specified rate. However, it may take some
time for this balance to be achieved, so don't be surprised if limiting
the rate doesn't work well with very small files.
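For example, to keep a retrieval at roughly 20KB/s, assuming the usual k suffix for kilobytes (the URL is only illustrative):

wget --limit-rate=20k http://www.example.com/large-archive.tar.gz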
Specifying a large value for this option is useful if the network or the
destination host is down, so that Wget can wait long enough to
reasonably expect the network error to be fixed before the retry.
Note that this option is turned on by default in the global
wgetrc file.
A recent article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses.
The --random-wait option was inspired by this ill-advised
recommendation to block many unrelated users from a web site due to the
actions of one.
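For example, a polite recursive retrieval that combines a base wait with randomization (the URL is only illustrative):

wget --wait=2 --random-wait -r http://www.example.com/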
For more information about the use of proxies with Wget, see the Proxies section of this manual.
Note that quota will never affect downloading a single file. So if you specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all of the ls-lR.gz will be downloaded. The same goes even when several URLs are specified on the command-line. However, quota is respected when retrieving either recursively, or from an input file. Thus you may safely type wget -Q2m -i sites---download will be aborted when the quota is exceeded.
Setting quota to 0 or to inf unlimits the download quota.
However, in some cases it is not desirable to cache host names, even for the duration of a short-running application like Wget. For example, some HTTP servers are hosted on machines with dynamically allocated IP addresses that change from time to time. Their DNS entries are updated along with each change. When Wget's download from such a host gets interrupted by IP address change, Wget retries the download, but (due to DNS caching) it contacts the old address. With the DNS cache turned off, Wget will repeat the DNS lookup for every connect and will thus get the correct dynamic address every time---at the cost of additional DNS lookups where they're probably not needed.
If you don't understand the above description, you probably won't need
this option.
By default, Wget escapes the characters that are not valid as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, either because you are downloading to a non-native partition, or because you want to disable escaping of the control characters.
When mode is set to ``unix'', Wget escapes the character / and the control characters in the ranges 0--31 and 128--159. This is the default on Unix-like OS'es.
When mode is set to ``windows'', Wget escapes the characters \, |, /, :, ?, ", *, <, >, and the control characters in the ranges 0--31 and 128--159. In addition to this, Wget in Windows mode uses + instead of : to separate host and port in local file names, and uses @ instead of ? to separate the query portion of the file name from the rest. Therefore, a URL that would be saved as www.xemacs.org:4300/search.pl?input=blah in Unix mode would be saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode. This mode is the default on Windows.
If you append ,nocontrol to the mode, as in unix,nocontrol, escaping of the control characters is also switched off. You can use --restrict-file-names=nocontrol to turn off escaping of control characters without affecting the choice of the OS to use as file name restriction mode.
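For example, to force Unix-style escaping while leaving control characters alone (the URL is only illustrative):

wget --restrict-file-names=unix,nocontrol 'http://www.example.com/cgi-bin/search?q=wget'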
Take, for example, the directory at ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where --cut-dirs comes in handy; it makes Wget not ``see'' the specified number of remote directory components. Here are several examples of how the --cut-dirs option works.
No options         -> ftp.xemacs.org/pub/xemacs/
-nH                -> pub/xemacs/
-nH --cut-dirs=1   -> xemacs/
-nH --cut-dirs=2   -> .

--cut-dirs=1       -> ftp.xemacs.org/xemacs/
...

If you just want to get rid of the directory structure, this option is similar to a combination of -nd and -P. However, unlike -nd, --cut-dirs does not lose track of subdirectories---for instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed in xemacs/beta, as one would expect.
Note that filenames changed in this way will be re-downloaded every time
you re-mirror a site, because Wget can't tell that the local
X.html file corresponds to remote URL X (since
it doesn't yet know that the URL produces output of type
text/html or application/xhtml+xml). To prevent this
re-downloading, you must use -k and -K so that the original
version of the file will be saved as X.orig.
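A minimal sketch of such an invocation, assuming the suffix-adding behavior in question comes from the -E option (the URL is only illustrative):

wget -r -E -k -K http://www.example.com/manual/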
Another way to specify username and password is in the URL itself. Either method reveals your password to anyone who bothers to run "ps". To prevent the passwords from being seen, store them in .wgetrc or .netrc, and make sure to protect those files from other users with "chmod". If the passwords are really important, do not leave them lying in those files either---edit the files and delete them after Wget has started the download.
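For reference, the in-URL form (which carries the same exposure risk) looks like this (user, password, and host are placeholders):

wget ftp://user:password@ftp.example.com/pub/file.tar.gz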
For more information about security issues with Wget, see the Security Considerations section of this manual.
Caching is allowed by default.
You will typically use this option when mirroring sites that require that you be logged in to access some or all of their content. The login process typically works by the web server issuing an HTTP cookie upon receiving and verifying your credentials. The cookie is then resent by the browser when accessing that part of the site, and so proves your identity.
Mirroring such a site requires Wget to send the same cookies your browser sends when communicating with the site. This is achieved by --load-cookies---simply point Wget to the location of the cookies.txt file, and it will send the same cookies your browser would send in the same situation. Different browsers keep textual cookie files in different locations:
If you cannot use --load-cookies, there might still be an alternative. If your browser supports a ``cookie manager'', you can use it to view the cookies used when accessing the site you're mirroring. Write down the name and value of the cookie, and manually instruct Wget to send those cookies, bypassing the ``official'' cookie support:
wget --cookies=off --header "Cookie: <name>=<value>"
With this option, Wget will ignore the "Content-Length" header---as
if it never existed.
You may define more than one additional header by specifying ---header more than once.
wget --header='Accept-Charset: iso-8859-2' \
     --header='Accept-Language: hr' \
     http://fly.srk.fer.hr/

Specification of an empty string as the header value will clear all previous user-defined headers.
Security considerations similar to those with --http-passwd
pertain here as well.
The HTTP protocol allows the clients to identify themselves using a "User-Agent" header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as Wget/version, version being the current version number of Wget.
However, some sites have been known to impose the policy of tailoring
the output according to the "User-Agent"-supplied information.
While conceptually this is not such a bad idea, it has been abused by
servers denying information to clients other than "Mozilla" or
Microsoft "Internet Explorer". This option allows you to change
the "User-Agent" line issued by Wget. Use of this option is
discouraged, unless you really know what you are doing.
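If you do use it, the option takes the full identification string as its argument (the string and URL here are only illustrative):

wget --user-agent='Mozilla/5.0 (compatible)' http://www.example.com/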
Please be aware that Wget needs to know the size of the POST data in advance. Therefore the argument to "--post-file" must be a regular file; specifying a FIFO or something like /dev/stdin won't work. It's not quite clear how to work around this limitation inherent in HTTP/1.0. Although HTTP/1.1 introduces chunked transfer that doesn't require knowing the request length in advance, a client can't use chunked unless it knows it's talking to an HTTP/1.1 server. And it can't know that until it receives a response, which in turn requires the request to have been completed --- a chicken-and-egg problem.
Note: if Wget is redirected after the POST request is completed, it will not send the POST data to the redirected URL. This is because URLs that process POST often respond with a redirection to a regular page (although that's technically disallowed), which does not desire or accept POST. It is not yet clear that this behavior is optimal; if it doesn't work out, it will be changed.
This example shows how to log to a server using POST and then proceed to download the desired pages, presumably only accessible to authorized users:
# Log in to the server. This can be done only once.
wget --save-cookies cookies.txt \
     --post-data 'user=foo&password=bar' \
     http://server.com/auth.php

# Now grab the page or pages we care about.
wget --load-cookies cookies.txt \
     -p http://server.com/interesting/article.php
Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making .listing a symbolic link to /etc/passwd or something and asking "root" to run Wget in his or her directory. Depending on the options used, either Wget will refuse to write to .listing, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and replaced with the actual .listing file, or the listing will be written to a .listing.number file.
Even though this situation isn't a problem, "root" should
never run Wget in a non-trusted user's directory. A user could do
something as simple as linking index.html to /etc/passwd
and asking "root" to run Wget with -N or -r so the file
will be overwritten.
wget ftp://gnjilux.srk.fer.hr/*.msg

By default, globbing will be turned on if the URL contains a globbing character. This option may be used to turn globbing on or off permanently.
You may have to quote the URL to protect it from being expanded by
your shell. Globbing makes Wget look for a directory listing, which is
system-specific. This is why it currently works only with Unix FTP
servers (and the ones emulating Unix "ls" output).
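For example, quoting the URL from the example above so that the shell passes the * through to Wget unexpanded:

wget 'ftp://gnjilux.srk.fer.hr/*.msg'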
When --retr-symlinks is specified, however, symbolic links are traversed and the pointed-to files are retrieved. At this time, this option does not cause Wget to traverse symlinks to directories and recurse through them, but in the future it should be enhanced to do this.
Note that when retrieving a file (not a directory) because it was specified on the command-line, rather than because it was recursed to, this option has no effect. Symbolic links are always traversed in this case.
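For example, to have symbolic links followed and the linked-to files downloaded during a recursive FTP retrieval (the URL is only illustrative):

wget --retr-symlinks -r ftp://ftp.example.com/pub/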
wget -r -nd --delete-after http://whatever.com/~popular/page/

The -r option is to retrieve recursively, and -nd to not create directories.
Note that --delete-after deletes files on the local machine. It
does not issue the DELE command to remote FTP sites, for
instance. Also note that when --delete-after is specified,
--convert-links is ignored, so .orig files are simply not
created in the first place.
Each link will be changed in one of the two ways:
Example: if the downloaded file /foo/doc.html links to
/bar/img.gif, also downloaded, then the link in doc.html
will be modified to point to ../bar/img.gif. This kind of
transformation works reliably for arbitrary combinations of directories.
Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.
Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.
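For example, a recursive retrieval with link conversion enabled (the URL is only illustrative):

wget -r -k http://www.example.com/docs/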
Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with ``leaf documents'' that are missing their requisites.
For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number.
If one executes the command:
wget -r -l 2 http://<site>/1.html

then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:

wget -r -l 2 -p http://<site>/1.html

all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,

wget -r -l 1 -p http://<site>/1.html

will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:

wget -r -l 0 -p http://<site>/1.html

would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite recursion. To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or their) requisites, simply leave off -r and -l:

wget -p http://<site>/1.html

Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

wget -E -H -k -K -p http://<site>/<document>

To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".
According to specifications, HTML comments are expressed as SGML declarations. A declaration is special markup that begins with <! and ends with >, such as <!DOCTYPE ...>, that may contain comments between a pair of -- delimiters. HTML comments are ``empty declarations'', SGML declarations without any non-comment text. Therefore, <!--foo--> is a valid comment, and so is <!--one-- --two-->, but <!--1--2--> is not.
On the other hand, most HTML writers don't perceive comments as anything other than text delimited with <!-- and -->, which is not quite the same. For example, something like <!------------> works as a valid comment as long as the number of dashes is a multiple of four (!). If not, the comment technically lasts until the next --, which may be at the other end of the document. Because of this, many popular browsers completely ignore the specification and implement what users have come to expect: comments delimited with <!-- and -->.
Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but had the misfortune of containing non-compliant comments. Beginning with version 1.9, Wget has joined the ranks of clients that implement ``naive'' comments, terminating each comment at the first occurrence of -->.
If, for whatever reason, you want strict comment parsing, use this option to turn it on.
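The flag is not named above; in current Wget releases it is presumably --strict-comments, so such an invocation might look like (the URL is only illustrative):

wget --strict-comments -r http://www.example.com/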
In the past, the -G option was the best bet for downloading a single page and its requisites, using a command-line like:
wget -Ga,area -H -k -K -r http://<site>/<document>

However, the author of this option came across a page with tags like "<LINK REL="home" HREF="/">" and came to the realization that -G was not enough. One can't just tell Wget to ignore "<LINK>", because then stylesheets will not be downloaded. Now the best bet for downloading a single page and its requisites is the dedicated --page-requisites option.
Before actually submitting a bug report, please try to follow a few
simple guidelines.
Also, while I will probably be interested to know the contents of your
.wgetrc file, just dumping it into the debug message is probably
a bad idea. Instead, you should first try to see if the bug repeats
with .wgetrc moved out of the way. Only if it turns out that
.wgetrc settings affect the bug, mail me the relevant parts of
the file.
Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation; with the Invariant Sections being ``GNU General Public License'' and ``GNU Free Documentation License'', with no Front-Cover Texts, and with no Back-Cover Texts. A copy of the license is included in the section entitled ``GNU Free Documentation License''.