Good day. I would like to download 19 consecutive web pages using GNU wget on Windows.

The pages are:

HTML Code:
http://faculty.washington.edu/chudler/stm0.html
http://faculty.washington.edu/chudler/stm1.html
....
http://faculty.washington.edu/chudler/stm18.html
In fact, the 19 pages together form a quiz, and some of them automatically load the next page after a certain time. You can see this behaviour at
HTML Code:
http://faculty.washington.edu/chudler/stm1.html
I have tried this wget command:
Code:
wget -i abc.txt
where all 19 page links are stored in "abc.txt".
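For reference, the list in "abc.txt" can be generated with a short shell loop instead of being typed by hand (a minimal sketch; on Windows this assumes a Unix-style shell such as Git Bash or MSYS2):

```shell
# Write the 19 quiz URLs (stm0.html through stm18.html) to abc.txt.
for i in $(seq 0 18); do
  echo "http://faculty.washington.edu/chudler/stm${i}.html"
done > abc.txt
```

Running wget -i abc.txt afterwards downloads the whole batch, exactly as above.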

The downloaded HTML files work for
HTML Code:
http://faculty.washington.edu/chudler/stm0.html
http://faculty.washington.edu/chudler/stm1.html
After that,
HTML Code:
http://faculty.washington.edu/chudler/stm2.html
fails to load offline, because the saved copy of
HTML Code:
http://faculty.washington.edu/chudler/stm1.html
still reconnects to the next page on the live site automatically after a certain time.
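If the automatic advance is done with an HTML meta-refresh tag that points at the absolute online URL (an assumption; I have not confirmed the pages' actual markup), then rewriting those absolute URLs to bare local filenames in the saved files would let the offline copies chain to each other instead of reconnecting to the live site. A minimal sketch with GNU sed, using a demo file that stands in for one downloaded page:

```shell
# Assumption: each saved page advances via a meta-refresh tag such as
#   <meta http-equiv="refresh" content="30;url=http://faculty.washington.edu/chudler/stm2.html">
# Demo file standing in for a downloaded page (assumed markup):
printf '%s\n' '<meta http-equiv="refresh" content="30;url=http://faculty.washington.edu/chudler/stm2.html">' > stm1.html

# Rewrite every absolute stmN.html URL to the bare local filename,
# so the offline copy advances to the local file instead of the web:
sed -i 's|http://faculty.washington.edu/chudler/\(stm[0-9]*\.html\)|\1|g' stm1.html
```

To fix all 19 saved pages, the same sed line could be run in a loop over stm0.html through stm18.html.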

Can anyone provide some guidance? Any advice is appreciated. Thanks.