I work for an ISP, and I'm working on a database parsing problem. The input is a tab-delimited database of userid, date of record, online time (secs), logons, bad discos, and router[s]. The script looks for duplicates using a hash (not too accurate, I know, but it works, and I figure it's much faster than using s// -- am I wrong?) and outputs the database with duplicates removed. The file I'm parsing is 4.7 MB, and with 10 MB of memory allocated, Perl runs out. Is there a leak here? Do hashes take up that much more space than tab-delimited text, or what?

#!perl
for $i (0 .. $#ARGV) {
    # open each file dropped on the droplet
    open( IN, $ARGV[$i] ) || die "couldn't open $ARGV[$i]";
    while (<IN>) {
        # @info = id, date, online (secs), logons, bad discos, router[s]
        chomp( @info = split /\t/ );
        $user = shift @info;
        $date = shift @info;
        # join takes a string separator, not a pattern: "\t", not /\t/
        $data{$user}{$date} = join "\t", @info;
    }
    close IN;
}

# rebuild the file from %data; duplicate user/date records have collapsed
foreach $user ( keys %data ) {
    foreach $date ( keys %{ $data{$user} } ) {
        $output .= "$user\t$date\t$data{$user}{$date}\n";
    }
}

open( OUT, ">nodup.tab" ) || die "couldn't open nodup.tab";
print OUT $output;
close OUT;

Thanks,
Strider
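
P.S. Would streaming the output help? Here's an untested sketch of what I mean -- it assumes it's OK to keep the *first* record seen for each user/date pair (my version above keeps the last), and it only holds a hash of user/date keys in memory instead of the whole file plus the $output string:

#!perl
my %seen;  # user\tdate pairs already written
open( OUT, ">nodup.tab" ) || die "couldn't open nodup.tab";
for my $file (@ARGV) {
    open( IN, $file ) || die "couldn't open $file";
    while (<IN>) {
        # only the first two fields are needed for the duplicate key
        my ($user, $date) = split /\t/, $_, 3;
        next if $seen{"$user\t$date"}++;  # skip user/date pairs already seen
        print OUT $_;                     # pass the record through unchanged
    }
    close IN;
}
close OUT;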