I'm writing a suite of tools for custom log analysis. Some of the datafiles are Rather Large (2-20 MB), and MacPerl crashes when I attempt to process them. My primary development machine has only 64 MB of RAM, but I tried giving MacPerl 150 MB elsewhere (using the script below and a 2.2 MB datafile), and while it lasted longer than on my PowerBook, it still brought the house down. On the Mac I typically get dumped into the debugger when datafile sizes exceed about 500K, though I do see graceful out-of-memory errors in about 1 in 5 attempts. Under Linux, the script completes calmly in a few seconds no matter how big the file is.

As I'm looking to create a point-and-click solution to be run by people intimidated by command lines, it would be a very big plus to have it operational under MacPerl instead of on a dedicated Linux box. I would appreciate any suggestions on how to bring the memory considerations under control (assuming memory is where the problem lies).

Thanks very much.

-nat

This example attempts to sort a file based on a two-to-five digit number enclosed in brackets.

----------
Tue Jul 14 19:59:00 1998 [27132] PORT
Tue Jul 14 19:59:00 1998 [27132] RETR mrfile.hqx
Tue Jul 14 19:59:40 1998 [26717] NOOP
Tue Jul 14 19:59:50 1998 [26912] PORT
Tue Jul 14 19:59:50 1998 [26912] RETR mrsfile.hqx
Tue Jul 14 20:00:18 1998 [28874] PASV
---------------

#!perl -w

foreach $line (<>) {
    next if ($line =~ /^$/);
    $logID = $line;
    $logID =~ s/^[^\[]*\[(\d{1,5}).*/$1/;
    # $logID =~ s/[^\[]*(\d{1,5}).*/$1/;   # better
    # print "logid is \"$logID\"\n";
    push @master_lines, $line;
    push @ids, $logID;
}

@sorted = @master_lines[ sort byID 0 .. $#ids ];

print "sorted:\n\n @sorted\n";

sub byID {
    # $counter++;
    # print "called $counter\n";
    # print "called $ids[$a]\n";
    $ids[$a] <=> $ids[$b];
}
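
For comparison, a minimal sketch of the same bracket-number sort done with a while (<>) loop instead of foreach (<>), on the assumption that slurping every input line into a list before the loop even starts is where the memory goes. The \d{2,5} quantifier (matching the two-to-five digit ID described above), the inline sort block, and the variable names are illustrative choices, untested under MacPerl:

#!perl -w
use strict;

my (@master_lines, @ids);

# while (<>) reads one line at a time instead of building the whole
# input list up front; the sort itself still keeps every line in memory,
# but only one copy of it.
while (my $line = <>) {
    next if $line =~ /^$/;

    # pull out the first bracketed 2-5 digit number as the sort key
    my ($logID) = $line =~ /\[(\d{2,5})\]/;
    next unless defined $logID;

    push @master_lines, $line;
    push @ids, $logID;
}

# sort the indices by numeric ID, then slice the lines in that order
my @sorted = @master_lines[ sort { $ids[$a] <=> $ids[$b] } 0 .. $#ids ];

print "sorted:\n\n@sorted\n";

This only changes how the lines are read, not how many end up held for the sort; if the sorted copy itself is what exceeds MacPerl's partition, a two-pass or external-sort approach would be needed instead, but I can't tell from here whether that is the actual limit.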