Archive Web Pages for Offline Reference? Part 2


Recall --

Back in late January of this year, Infopackets Reader David J. asked me if I knew of a program capable of archiving web pages from web sites for offline reading. I responded by providing instructions on how to use Internet Explorer's built-in web page archiver (article here), but then asked Infopackets Readers if they knew of a better solution.

Infopackets Reader Michael O. writes:

" Dennis, it's funny you should ask about Webpage archiver programs... I was just thinking (a few days ago, in fact), 'I haven't seen many of them around, I wonder if people just don't have a use for them or if people are afraid to ask?'

[Well, anyway, the program I use] is called Getleft (sic) ... Some of the nice features include: it can download a site map before you start downloading the actual site, it will follow 'external' links, it will download files that are linked on the page (with some limitations), and it has filters to avoid downloading certain types of files. [Another neat thing about the program is that it] resumes downloading where it left off if it gets interrupted. It comes with support for 13 languages (at last count). One thing to note is that it does not understand Java, only pure HTML. "

http://personal1.iddeo.es/andresgarci/getleft/english/

Quite a number of users recommended a program called HTTrack. Infopackets Reader Thornton S. comments:

" I have used a program called WinHTTrack Website Copier for several years and been very pleased with it. It downloads everthing from a specified URL-- including, if you wish, any referenced files at other websites. Best of all, it's freeware!"

From the httrack.com site:

" [HTTrack] allows you to download a World Wide Web site from the Internet to a local directory, building recursively all directories, getting HTML, images, and other files from the server to your computer. HTTrack arranges the original site's relative link-structure. Simply open a page of the 'mirrored' website in your browser, and you can browse the site from link to link, as if you were viewing it online. HTTrack can also update an existing mirrored site, and resume interrupted downloads. HTTrack is fully configurable, and has an integrated help system. "

http://www.httrack.com/
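
For the curious, here's a rough idea of what these mirroring tools do under the hood: fetch a page, save it to disk, pull out the links it contains, and repeat for every link that stays on the same site. The short Python sketch below illustrates just that basic loop -- it is not HTTrack itself, the start URL and output folder are placeholders, and it skips the link rewriting, image handling, throttling, and resume features that the real tools provide:

# A minimal sketch of recursive site mirroring (not HTTrack itself).
# START_URL and OUTPUT_DIR are placeholders for illustration only.

import os
import urllib.parse
import urllib.request
from html.parser import HTMLParser

START_URL = "http://www.example.com/"   # placeholder start page
OUTPUT_DIR = "mirror"                   # local folder for the copy
MAX_PAGES = 50                          # safety limit for the sketch


class LinkCollector(HTMLParser):
    """Collects the href targets of <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def local_path(url):
    """Map a URL to a file path under OUTPUT_DIR."""
    parsed = urllib.parse.urlparse(url)
    path = parsed.path.lstrip("/") or "index.html"
    if path.endswith("/"):
        path += "index.html"
    return os.path.join(OUTPUT_DIR, parsed.netloc, path)


def mirror(start_url):
    site = urllib.parse.urlparse(start_url).netloc
    queue = [start_url]
    seen = set()

    while queue and len(seen) < MAX_PAGES:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)

        try:
            with urllib.request.urlopen(url) as response:
                html = response.read()
        except OSError:
            continue  # skip pages that fail to download

        # Save the raw page to a matching path on disk.
        target = local_path(url)
        os.makedirs(os.path.dirname(target), exist_ok=True)
        with open(target, "wb") as f:
            f.write(html)

        # Queue any links that stay on the same site.
        collector = LinkCollector()
        collector.feed(html.decode("utf-8", errors="replace"))
        for link in collector.links:
            absolute = urllib.parse.urljoin(url, link)
            if urllib.parse.urlparse(absolute).netloc == site:
                queue.append(absolute.split("#")[0])


if __name__ == "__main__":
    mirror(START_URL)

Even this stripped-down version shows why the dedicated tools are worth having: keeping a mirrored copy browsable offline means rewriting every link to point at the local files, which the sketch above doesn't attempt.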

And finally, die-hard Infopackets Reader 'Pippie' suggested a freeware program called WebReaper. From the webreaper.com site:

" WebReaper is web crawler or spider, which can work its way through a website, downloading pages, pictures and objects that it finds so that they can be viewed locally, without needing to be connected to the internet. Web sites can be saved locally as a fully-browsable website which can be viewed with any browser (such as Internet Explorer, Netscape, Opera, etc), or they can be saved into the Internet Explorer cache and viewed using IE's offline mode as if the you'd surfed the sites 'by hand'. "

http://www.webreaper.net/

Thanks to all who wrote in with their suggestions!
