>According to Strider:

<snip>

>I looked at your script and I believe you are doing a bit
>of overkill.  Why not something like this:

<snip>

Mainly because of the memory problems.

>The second problem is the size of the database.  300mb
>won't all fit into memory, so you will have to determine a
>different way to sort the information.  My suggestion is to
>break the file up into X number of smaller files which you
>can pull into memory individually.  These files should be
>no larger than 1/3 of your available memory.  Thus, if you
>have MacPerl set to 8192 and you have 2mb of memory
>available to use, your files should not be any larger than
>about 500k.  Then the easiest thing to do (although it is a
>bit slow) is to do the following:

<snip>

Although I'd do that, I need the WHOLE thing sorted.
Unless I misunderstand you, the whole database would
eventually have to be in memory to do that.

>If this is a continuation of the single-entry problem you
>wrote about earlier, then you will probably want to
>continue using the hash entries.  However, 300mb will not
>fit into your computer's memory unless you have about 450mb
>of RAM.  This is due to overhead in the creation of
>strings, the hash entries, and the like.  So unless you
>have that much memory, you need to go to a disk-based
>methodology.

I don't believe it is, although I'm not sure what you're
referring to.  All I planned to do was sort the file using
a small amount of memory; the sorting (and the combining or
replacing of duplicates) seems well suited to that.  The
only remaining problem is the output array, which I will
work on once the sorting works.  I won't be working on
300mb files for a little while, so this will do for now,
if I can get it to work at all.
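
For what it's worth, here is my rough guess at what the
split-and-merge approach would look like in Perl.  This is
NOT the code that was snipped above, just a sketch; the
file names ("big.db", "big.sorted"), the 10,000-line chunk
size, and treating whole lines as the sort key are all
invented for illustration:

#!/usr/bin/perl
# A sketch of a split/sort/merge ("external sort"): cut the
# big file into sorted runs on disk, then merge the runs.
# File names and chunk size here are made up.
use strict;

my $in    = 'big.db';      # hypothetical input file
my $out   = 'big.sorted';  # hypothetical output file
my $chunk = 10_000;        # lines per run; tune to your memory

# Pass 1: read a chunk, sort it, write it out as a "run".
open my $IN, '<', $in or die "$in: $!";
my @runs;
until (eof $IN) {
    my @lines;
    while (@lines < $chunk and defined(my $l = <$IN>)) {
        push @lines, $l;
    }
    my $name = 'run' . @runs . '.tmp';
    open my $R, '>', $name or die "$name: $!";
    print $R sort @lines;
    close $R;
    push @runs, $name;
}
close $IN;

# Pass 2: merge the runs, holding one line per run in memory.
# Duplicate lines are written only once.
my (@fh, @head);
for my $r (@runs) {
    open my $f, '<', $r or die "$r: $!";
    push @fh, $f;
    push @head, scalar <$f>;
}
open my $OUT, '>', $out or die "$out: $!";
my $last;
while (1) {
    my $min;                    # run whose line sorts first
    for my $i (0 .. $#head) {
        next unless defined $head[$i];
        $min = $i if !defined($min) or $head[$i] lt $head[$min];
    }
    last unless defined $min;   # every run is exhausted
    print $OUT $head[$min]
        unless defined $last and $head[$min] eq $last;
    $last = $head[$min];
    $head[$min] = readline $fh[$min];
}
close $OUT;
unlink @runs;                   # remove the temporary runs

If I understand the merge step right, it only ever holds one
chunk (and then one line per run) in memory, so maybe the
whole database doesn't have to fit at once after all.

Thanks,
Strider

***** Want to unsubscribe from this list?
***** Send mail with body "unsubscribe" to mac-perl-request@iis.ee.ethz.ch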