Matthew Langford writes:

|But some of the fields I'm extracting are multiline fields, so I do have
|to read in, well, the whole file.

A technique I've used is to read a line, determine whether the next line is
really a continuation of the current one, and if so read it and append it to
the current line. Repeat. This lets you read a line at a time without
slurping the whole file (there's a sketch of this at the end of this
message). However, if you're going to end up with the whole file in memory
anyway, you might as well slurp it.

|Is there a CPAN module that tames this hairy monster?
|I'm pretty sure there wasn't one nearly a year ago.

Look at Text::ParseWords.pm, specifically quotewords(). It hacks up a string
on a delimiter regex you pass in, taking quoting into account. It croaks on
unmatched quotes, so you could wrap the call in eval and treat a failure as
the signal to append the next line and try again (second sketch below).

|But in the future, I would prefer graceful exits to reboots. In particular,
|since the regular expressions I was writing were legal Perl, I didn't like
|the icy hand of the debugger across the face.

While I agree, part of the problem is perl itself and its Unix roots. Memory
management is much simpler when each script is its own process: when the
script ends, the process goes away and the system reclaims all of its memory.
Also, while the stack and the heap can collide on Unix, you'd have to fill
your whole virtual address space (like 1 or 2 *gig*abytes) to do it.

Brian
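
Here is a minimal sketch of the line-at-a-time continuation technique
described above, assuming a hypothetical format where continuation lines
start with whitespace; the read_record() helper and that test are for
illustration only, so substitute whatever marks a continuation in your
own data.

#!/usr/bin/perl -w
use strict;

# Read one logical record at a time, appending continuation lines as we
# go, instead of slurping the whole file.  The "starts with whitespace"
# continuation test is only an assumption for this sketch.
sub read_record {
    my ($fh, $pending) = @_;
    my $record = defined $$pending ? $$pending : <$fh>;
    return undef unless defined $record;

    while (defined(my $next = <$fh>)) {
        if ($next =~ /^\s/) {    # continuation of the current record
            $record .= $next;
        } else {                 # start of the next record; save it
            $$pending = $next;
            return $record;
        }
    }
    $$pending = undef;           # end of file
    return $record;
}

my $pending;
while (defined(my $rec = read_record(\*STDIN, \$pending))) {
    print "RECORD: $rec";
}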
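
And a sketch of the quotewords() approach: keep appending lines to a
buffer until quotewords() can parse it, using a parse failure as the cue
that a quoted field is still open. Older versions of Text::ParseWords die
on an unmatched quote while newer ones appear to return an empty list, so
this checks for both; the comma delimiter is only an assumption for
illustration.

#!/usr/bin/perl -w
use strict;
use Text::ParseWords qw(quotewords);

# Accumulate input until quotewords() succeeds, i.e. until any quoted
# multiline field has been closed.  The comma delimiter is assumed here
# purely for illustration.
my $buffer = '';
while (defined(my $line = <STDIN>)) {
    $buffer .= $line;
    chomp(my $try = $buffer);
    unless ($try =~ /\S/) { $buffer = ''; next }   # skip blank input

    my @fields = eval { quotewords(',', 0, $try) };
    next if $@ || !@fields;      # unmatched quote: read another line

    print join(' | ', @fields), "\n";
    $buffer = '';
}
warn "unterminated record left in buffer\n" if $buffer =~ /\S/;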