Does anyone know of a good website copier that runs on Ubuntu and downloads and saves HTML files, so that I can open them without needing the original program?
If you just want to download the HTML source, then on the command line you can use:

Code:
wget -r [url]

Websites are built differently; depending on the site, they might thwart your efforts. For example:

Code:
wget -r google.com
Any HTML file can be opened by any web browser.
If you don't know the location of the data, how will your software? wget can follow links recursively, and you can specify the recursion depth, but if there are no links on the front page and no way to figure out where the content is, you won't get very far.
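For a site that does expose its pages through links, a fuller wget invocation for offline viewing might look like this. This is just a sketch; example.com is a placeholder URL, and you'd adjust the depth to suit the site:

```shell
# Mirror a site for offline viewing (example.com is a placeholder).
# --recursive          follow links on each downloaded page
# --level=3            limit recursion depth to 3 hops from the start page
# --page-requisites    also fetch the images, CSS, and JS each page needs
# --convert-links      rewrite links so the saved pages reference each other locally
# --adjust-extension   save pages with an .html extension so browsers open them directly
# --no-parent          never ascend above the starting directory
wget --recursive --level=3 --page-requisites --convert-links \
     --adjust-extension --no-parent https://example.com/
```

With --convert-links and --page-requisites, the saved copy should open in any browser without touching the original site.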