At 8:02 PM -0500 3/4/97, Brian L. Matthews wrote:
>Kent Cowgill writes:
>|foreach $FILE (@FILES)
>|{
>|    $count = 0;
>|    open(FILE,"$FILE");
>|    @LINES = <FILE>;
>|    close(FILE);
>|    [- irrelevant code snipped -]
>|}
>|... something seems to break somewhere.
>
>About the only thing that could go wrong with the above is if $FILE
>can't be opened, or if it's large. You can look for the first problem
>by checking open's return value (which you should always be doing,
>whether things appear to be working or not). You can avoid the second
>problem by processing the file a line at a time, or check for it by
>wrapping the read from the file in an eval and checking $@.

Actually, I found that the problem was with large files. The files I
thought were slipping through -T were .pdf files, which I didn't
realize until yesterday were actually TEXT files. Some of the .pdf
files in the directories were > 1.3MB, which I suppose was causing
MacPerl to run out of memory.

As far as iterating through each file line by line, I suppose I could
do that, but I'd be concerned about a decrease in speed. Can anyone
tell me if this concern is justified?

>This probably won't help much, but I have a similar sort of script
>that processes about 4,000 files, 2/3 of them binary, the rest text,
>and it works just fine under both MacPerl and Unix perl. The only
>consideration I give to operating system is to use ':' for the path
>separator on the Mac and '/' on Unix.

Yeah, the way I worked around using "$ls = `ls $file`;" made a few
other things in a few of the other routines break; as a result, I'm
going to have to do twice the work to convert paths into URLs. Heck,
though. It makes it twice as fun to learn :)

-Kent

Kent Cowgill               .---'''''---...         1 West State Street
Intersites, Inc.         'i n t e r s i t e s.       Geneva, IL 60134
 .-'-.-'-.-'-.-'-.-'-.      ''''---.....---'  .-'-.-'-.-'-.-'-.-'-.-'-.
kentc@intersites.com                          http://www.intersites.com/
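
P.S. For the record, here's roughly what the line-at-a-time version
Brian suggests would look like. It's only a sketch, not the real loop
from my script: $pattern and the counting stand in for whatever the
snipped code actually does with each file.

    # Read each file a line at a time instead of slurping it into
    # @LINES, and check open's return value as suggested.
    foreach $FILE (@FILES) {
        $count = 0;
        unless (open(FILE, $FILE)) {
            warn "can't open $FILE: $!\n";    # report and skip bad files
            next;
        }
        while ($line = <FILE>) {
            $count++ if $line =~ /$pattern/;  # placeholder per-line work
        }
        close(FILE);
    }

My understanding is that <FILE> reads are buffered, so going line by
line shouldn't be drastically slower than slurping, and it keeps memory
use flat no matter how big the .pdf files get; I'd still be happy to
hear from anyone who has actually measured it.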
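
P.P.S. The Mac-vs-Unix path handling Brian mentions is the part I'm
wrestling with. Something along these lines is what I have in mind; it's
a sketch rather than what's actually in the script, the $^O test and the
path_to_url name are just for illustration, and it skips the question of
mapping a volume name onto a document root:

    # Pick the native path separator by platform, then build the
    # URL form, which always uses '/'.
    $sep = ($^O eq 'MacOS') ? ':' : '/';

    sub path_to_url {
        my ($path) = @_;
        my @parts = split(/\Q$sep\E/, $path);
        return join('/', @parts);
    }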