I have written a script to load a large associative array -- a 5 x ~18,000 table (each entry is less than 5 bytes, using one of the columns as the key). The problem is that although the table itself takes up only about 350K of disk space, to load it into memory I need to allocate over 6 MB of memory to MacPerl. (I have also written another script that uses s/// to perform the same search directly from disk, but it is very, very slow.) Of course, this is all Extremely Rapid Application Development -- I was originally trying to save time by writing it.

Is there any way to do fast (keyed) searches in memory, or to compact the array in memory? And is there any way to store the array to disk, so that I do not have to reprocess the file every time I load it into memory?

Thanks for your suggestions,

Roger Hart

The code (simple enough):

    sub loadxref {
        while (<CHTPYPY>) {
            chop;
            if (! /^$/ && /^[^\#]/) {     # if the line is non-empty and not a comment
                ($ch, $tpy, $py, $fourc, $cangjie) = split;
                $ch2tpy{$ch} = $tpy;      # assigns $tpy to key $ch
                $ch2py{$ch}  = $py;       # assigns $py to key $ch
                $ch2fc{$ch}  = $fourc;    # assigns $fourc to key $ch
                $ch2cj{$ch}  = $cangjie;  # assigns $cangjie to key $ch
            }
        }
        print "\nFinished loading the cross-reference table!!\n";
        close(CHTPYPY);
    }
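For the second question (keeping the table on disk), would something along these lines work? This is just an untested sketch -- whether dbmopen/SDBM behaves this way under MacPerl is an assumption on my part, and the file names chtpypy.txt and chtpypy_xref here are made up:

    sub buildxref {
        # build the cross-reference once into a DBM file on disk
        dbmopen(%xref, "chtpypy_xref", 0644) || die "dbmopen: $!";
        open(CHTPYPY, "chtpypy.txt") || die "open: $!";
        while (<CHTPYPY>) {
            chop;
            if (! /^$/ && /^[^\#]/) {     # skip blank lines and comments
                ($ch, $tpy, $py, $fourc, $cangjie) = split;
                # pack the four fields into one tab-separated value so a
                # single DBM entry holds the whole row for a character
                $xref{$ch} = join("\t", $tpy, $py, $fourc, $cangjie);
            }
        }
        close(CHTPYPY);
        dbmclose(%xref);
    }

    # later runs: reopen the DBM file and fetch keys on demand, no reparsing
    dbmopen(%xref, "chtpypy_xref", undef) || die "dbmopen: $!";
    ($tpy, $py, $fourc, $cangjie) = split(/\t/, $xref{$ch});   # $ch is the key to look up

The idea (if it works) is that only the keys actually looked up get pulled into memory, and packing the four fields into one value should keep the entries small.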