Today I wanted to download a few standards documents, about 15 PDF and ZIP files. But this stupid webserver would only send between 100 kB and 700 kB, then completely stop sending data. This was annoying me; I wanted to read these documents.
wget has a nice --continue option. With it I can hit CTRL+C and restart the download, repeating this until I have the whole file. But that is too cumbersome when downloading many and/or big files. I wanted a completely automated way to do that. And there is one:
wget has many more options: keep retrying forever and set the read timeout to 3 seconds, so a stalled connection counts as a failed attempt and triggers a retry:
wget --continue --tries=0 --read-timeout=3 URLS...
Tadaa! It downloads everything. Problem solved.
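For a bigger batch, the same flags combine with wget's --input-file option, so you don't have to paste all the URLs on the command line. A sketch, where urls.txt is a hypothetical file listing the documents:

```shell
# urls.txt holds one URL per line (hypothetical file name).
# --continue        resume partial files instead of restarting them
# --tries=0         retry each download forever
# --read-timeout=3  treat 3 seconds of silence as a failed attempt
wget --continue --tries=0 --read-timeout=3 --input-file=urls.txt
```

Any file that stalls gets retried from where it left off, and wget moves on through the list until everything is complete.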