
Re: [MacPerl] Memory leak?



At 7:21 PM -0500 12/8/97, Chris Nandor wrote:
>At 17.29 12/8/97, Strider wrote:
>>It looks for duplicates using a hash scheme (not too accurate, I know, but
>>it works, and I figure it's much faster than using s/// (am I wrong?)) and
>>outputs a database without duplicates. The file I'm parsing is 4.7 MB, and
>>with 10 MB of memory allocated, Perl runs out. Is there a leak here, do
>>hashes take up far more space than tab-delimited text, or what?
>
>Hashes take lots of internal Perl memory, yes.
>
>I am not sure what you are trying to do in "finding duplicates" ... there
>is probably another way.

Well, I'm trying to take records which are exactly the same as one another
and delete all but the first copy that it finds. Again, I suppose I could
do this with s///, but I thought that would probably be slower, since it
actually processes the data instead of just moving it around. Am I right?
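For what it's worth, the usual idiom for this is a %seen hash keyed on the whole record, which is exact (no false matches) and makes one pass over the data. This is only a sketch under the assumption that each record is one tab-delimited line; the dedup() name is mine, not from the original script:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Remove duplicate records, keeping the first occurrence of each.
# %seen maps every record encountered so far to a true value, so a
# record passes the grep only the first time it appears.
sub dedup {
    my @records = @_;
    my %seen;
    return grep { !$seen{$_}++ } @records;
}

# Example: the second "a\tb" is dropped, order otherwise preserved.
my @unique = dedup("a\tb", "c\td", "a\tb");
print "$_\n" for @unique;   # prints "a\tb" then "c\td"
```

Note the memory trade-off: %seen holds every unique record as a key, plus Perl's per-entry overhead, so on a 4.7 MB file it can easily outgrow a 10 MB partition. Keying on a fixed-size checksum of each record instead (accepting a small risk of collisions) would cap the key size, which may be what the "not too accurate" hash scheme above is already doing.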

- Strider



***** Want to unsubscribe from this list?
***** Send mail with body "unsubscribe" to mac-perl-request@iis.ee.ethz.ch