At 11:17 AM 6/26/99 -0700, Vicki Brown wrote:
># Which way would you pick? Why?

Why, it depends, of course :-)

># Problem
># You have an external command to run from Perl; the command returns a
># large chunk of newline-separated data which you want to process, line
># by line.
>#
># Would you:
>#
># 1) write the returned data to a file, open it, and loop through the lines

Unlikely, but if I were in an environment where fault tolerance and
checkpoint/restart, or having an audit trail in the event of a crash, was
important, I might do this.

># 2) do an open with a pipe when you call the external command and attach
># the data to the filehandle, then iterate over <HANDLE>
>
>   open(HANDLE, "command |")
>   ...
>   while (<HANDLE>) {

The best way in general, unless the command is going to output more data
than my input loop can handle in time and I have some concern about how
long I want the command process to stick around. This would not do for
real-time monitoring, for instance, if the command were returning data
faster than I could, or wanted to, process it.

># 3) stuff the returned data into a large string variable, split it, and
># foreach across the result
>
>   $result = `command`;
>   @lines = split(/\n/, $result);
>   foreach $line (@lines) {

What I usually do, when I know the command is going to output a known and
not-large amount of data, is just

    foreach (`command`) {

rather than create the extra variables. I make sure that the command
always ends in 2>&1.

># 4) none of the above (in which case what would you do?)

Send data in Morse code by going into tight loops in the child and
watching for jumps in the system clock in the parent :-)

Peter Scott
Pacific Systems Design Technologies
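
For reference, a minimal sketch of option 1 under the fault-tolerance
scenario Peter describes; the names "some_command" and "output.log" are
placeholders, not from the original post:

    # Capture the output to a file first; the file doubles as an
    # audit trail if processing crashes partway through.
    system("some_command > output.log 2>&1") == 0
        or die "some_command failed: $?";
    open(LOG, "output.log") or die "can't open output.log: $!";
    while (<LOG>) {
        chomp;
        # process one line
    }
    close(LOG);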
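
A fleshed-out sketch of the piped open in option 2, with error checking
added that isn't in the original; again "some_command" is a placeholder:

    # The pipe delivers output line by line, so memory use stays flat
    # no matter how much the command prints.
    open(HANDLE, "some_command 2>&1 |") or die "can't start pipe: $!";
    while (<HANDLE>) {
        chomp;
        # process one line as it arrives
    }
    close(HANDLE) or warn "some_command exited with status $?";

Note that open on a pipe only fails if the fork fails; a nonzero exit
from the command itself shows up in $? when the handle is closed.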
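
And a sketch of the foreach shorthand Peter prefers for option 3, with
"some_command" standing in for a real command:

    # Backticks in list context split the output on newlines, so there
    # is no need for the intermediate $result and the explicit split().
    foreach (`some_command 2>&1`) {
        chomp;
        # process one line
    }

The 2>&1 folds the command's stderr into the captured output, as Peter
mentions, so error messages don't escape to the terminal.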