
Re: [MacPerl] Welcome to America!



I'll check it out tonight when I get home.  After thinking about it for
a couple of hours, I suspect the file might have gotten converted to
Unix file format.  If so, it's a gotcha from StuffIt.  I used StuffIt
to create the tar file and then the uuencoded file.  The original data
file was what I had checked before, but I didn't check the file type
after StuffIt was done with it.  So I will check it out tonight (as I
said) after I get home.
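
For what it's worth, a quick check like this (untested, and "data.txt"
stands in for the real file name) would show which line endings the
file ended up with after StuffIt:

    # Count carriage returns (Mac) versus linefeeds (Unix) in the
    # first 8K of the file.  \015 and \012 are the literal CR and LF
    # bytes, which avoids MacPerl's swapped meanings of \r and \n.
    open(FH, "data.txt") or die "can't open: $!";
    binmode(FH);
    my $chunk;
    read(FH, $chunk, 8192);
    close(FH);
    my $crs = ($chunk =~ tr/\015//);
    my $lfs = ($chunk =~ tr/\012//);
    print "CRs: $crs  LFs: $lfs\n";

If the LF count is high and the CR count is zero, something along the
way converted the file, and (if I remember right) MacPerl's default
line separator is the Mac CR, so <FH> would hand back the whole 20MB
file as one "line".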

Even so, why would MacPerl run out of memory?  It can't be that
inefficient memory-wise, can it?  A 20MB data file needing more than
60MB in MacPerl?

After all, each line is 80 characters long (or more likely somewhere
around 70).  20,000,000 / 70 = 285,714.29, so call it 285,715 lines.
Accounting for 20 bytes of overhead per line gives 285,715 * 20 =
5,714,300 additional bytes to store the information, for 25,714,300
bytes total.  Since strings are terminated with a zero in C, we could
also add another 285,715 bytes, or basically round the 5.7MB to 6MB.
So even if it did read in the entire file, I shouldn't have run out of
memory.
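
Here's that back-of-the-envelope estimate as a few lines of Perl, using
the same assumed 70-byte lines and 20 bytes of overhead per line from
above:

    my $file_size   = 20_000_000;                      # 20MB data file
    my $line_len    = 70;                              # assumed average line
    my $lines       = int($file_size / $line_len) + 1; # 285,715 lines
    my $overhead    = $lines * 20;                     # 5,714,300 bytes
    my $terminators = $lines;                          # one NUL per C string
    my $total       = $file_size + $overhead + $terminators;
    printf "%d lines, %d bytes (about %.0fMB)\n",
           $lines, $total, $total / 1_048_576;

That prints 26,000,015 bytes (about 25MB), nowhere near 60MB.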

(The 20 bytes of overhead break down into:

8 bytes for each long (integer)
8 bytes for each double (floating point)
4 bytes for each pointer (zero-terminated string))

The above probably isn't totally accurate, but it's the best I can do
with the information I currently have.
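
If the CPAN module Devel::Size can be built for MacPerl (a big if, and
purely a suggestion), one could measure the real per-string overhead
instead of guessing:

    use Devel::Size qw(size);
    my $str = "x" x 70;                  # a typical 70-character line
    my $overhead = size($str) - length($str);
    print "overhead on a 70-byte string: $overhead bytes\n";

size() reports everything Perl allocated for the scalar, so subtracting
the string length gives the bookkeeping cost per line.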

Now if MacPerl is allocating some fixed-length amount per string (i.e.,
maybe it allocates 100 bytes per string), that would account for the
large memory use.  But from what I've seen in the source code, it looks
as if Matthias tries to keep things in C as much as possible and only
converts to Pascal strings when needed.
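
Either way, if the file really did pick up Unix line endings, setting
the input record separator to the literal LF byte should keep MacPerl
reading it line by line instead of slurping all 20MB as one record
(again, "data.txt" stands in for the real name):

    $/ = "\012";                  # literal LF; dodges the swapped \n/\r
    open(FH, "data.txt") or die "can't open: $!";
    while (<FH>) {
        chomp;                    # strips the trailing \012 since $/ is set
        # ... process one line at a time ...
    }
    close(FH);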

Anyway, I'll look at the thing when I get home.  Later!  :-)
