> As long as I'm babbling about automated file extraction... a utility
> I've always wanted was one that could iterate through a sequence of
> URLs. Say the docs for your favorite program are ONLY available as
> HTML from some web site. And to make matters worse, they're ONLY
> available as SPLIT HTML files (doc01.html, doc02.html...). SUCK!
>
> So you want something that, given a specially formatted URL, can just
> iterate through the sequence, sucking down as it goes. AFAIK, no such
> thing exists.

It won't handle authentication or the Referer header, but couldn't you
just use something like this?

    perl -e 'for $i (1..10){system sprintf("wget http://www.foo.com/docs/doc%02d.html", $i)}'

Fergal
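
P.S. If you'd rather stay inside Perl instead of shelling out to wget, a
rough sketch along the same lines could use LWP::Simple (assuming it's
installed; the host, filename pattern, and range below are just the
made-up examples from above):

    use strict;
    use warnings;
    use LWP::Simple;    # exports getstore() and is_success()

    # Fetch doc01.html .. doc10.html and save each under the same name locally.
    for my $i (1 .. 10) {
        my $file   = sprintf "doc%02d.html", $i;
        my $url    = "http://www.foo.com/docs/$file";
        my $status = getstore($url, $file);
        warn "couldn't fetch $url (status $status)\n"
            unless is_success($status);
    }

Like the one-liner, this still sends no authentication or Referer header.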