Kent Cowgill writes:
|At 8:02 PM -0500 3/4/97, Brian L. Matthews wrote:
|>Kent Cowgill writes:
|>|foreach $FILE (@FILES)
|>|{
|>|    $count = 0;
|>|    open(FILE,"$FILE");
|>|    @LINES = <FILE>;
|>|    close(FILE);
|>|[- irrelevant code snipped -]
|>|}
|>|... something seems to break somewhere.
|>About the only thing that could go wrong with the above is if $FILE
|>can't be opened, or if it's large.
|Actually, I found that the problem was with large files. The files I
|thought were slipping through -T were .pdf files, which I didn't realize
|until yesterday were actually TEXT files. Some of the .pdf files in the
|directories were > 1.3MB, which I suppose was causing MacPerl to run out
|of memory.

Yep, that's pretty big. However, I should have suggested that you may be
able to increase MacPerl's memory allocation and get it to work. I've
cured some problems that way. It's certainly worth a try.

|As far as iterating through each file line by line, I suppose I could do
|that; but I'd be concerned about a decrease in speed. Can anyone tell me
|if this concern is justified?

No matter how slow it runs line by line, it should get done sooner than
never. :-) Anyway, yes, it will probably be slower going line by line,
although just how much depends upon what you're doing. I can only suggest
coding it up and trying it.

|Yeah, the way I worked around using "$ls = `ls $file`;" made a few other
|things in a few of the other routines break; as a result, I'm going to
|have to do twice the work to convert paths into URLs. Heck, though. It
|makes it twice as fun to learn :)

Yes, I often wish I could set some variable or use some package that
would make MacPerl convert Unix paths to Mac paths internally so I
wouldn't have to do it. It wouldn't need to handle every situation to be
useful for developing and testing scripts under MacPerl that will
eventually run on Unix. Maybe I'll have to try to hack something together
someday...

Brian
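
P.S. For what it's worth, here's roughly what I mean by going line by
line. It's an untested sketch that keeps your variable names; the
per-line work is whatever your snipped code did to the elements of
@LINES:

    foreach $FILE (@FILES)
    {
        $count = 0;
        open(FILE,"$FILE") || next;        # skip anything that won't open
        while (defined($line = <FILE>))
        {
            # ... do the per-line work here on $line instead of
            # on the elements of @LINES ...
            $count++;
        }
        close(FILE);
    }

Only one line is in memory at a time, so even the 1.3MB PDFs shouldn't
blow MacPerl's memory allocation.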
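
P.P.S. And in case it saves you some typing on the path conversion,
this is the sort of thing I had in mind, hacked together just now and
untested. $VOLUME is a made-up name for whatever volume you want to
stand in for the Unix root, and it ignores "..", symlinks, and the
other hard cases:

    sub unix2mac
    {
        local($path) = @_;
        local($abs) = ($path =~ s#^/##);   # strip and note a leading /
        $path =~ s#/#:#g;                  # Unix / separators become Mac :
        return $abs ? "$VOLUME:$path" : ":$path";
    }

For example, with $VOLUME set to "HD", unix2mac("/cgi-bin/test.pl")
gives "HD:cgi-bin:test.pl", and unix2mac("docs/index.html") gives the
relative Mac path ":docs:index.html".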