At 9:05 AM -0800 3/28/00, John Murray wrote:

>How would one make that calculation?

Earlier:
############################
>Then, subject to a correction by My Betters (assuming a linear
>time/size relation, non-cached IO), it's a simple calculation based
>on the longest acceptable IO delay.

In general, I guess I wouldn't call it a 'calculation', because that makes it sound like there's a formula to follow. Here are some considerations:

                 Flat         <-->  DB
 1. Size:        small        <-->  large
 2. Access:      read only    <-->  modify
 3. Records:     append only  <-->  delete
 4. Structure:   simple       <-->  complex
 5. Users:       one          <-->  concurrent
 6. Standards:   local        <-->  SQL
 7. Develop:     quick        <-->  long term
 8. Functions:   basic        <-->  lots
 9. Viewable:    any sw       <-->  proprietary
10. Safety Net:  yikes        <-->  whew
11. Lifetime:    short        <-->  long

Each of these on its own wouldn't determine what's best. For the most part you can add your own clever Perl interface to a flatfile system to get any of the benefits of a 'real' RDBMS. If you found yourself doing that to cover just one issue, stick with flatfile; if to cover many, migrate.

I recently went back to add some features to a multi-user transaction data system whose data is held in a set of related flatfiles on an NT server for a 21-site intranet. At the time of development, no one had any idea what the volume of transactions would be. I discovered that one of the flat tables had grown to over 20 MB in four months, which seems huge for flatfile, yet there had been no perceived decline in service. This file was searched line by line, not slurped, of course; and it was only being appended to.

BTW, are you talking Mac? Part of what got me using MacPerl was its ability to manipulate data and files with incredible speed. In my work I have to process data that comes to me in many forms from other systems, including very large and complex data sets.
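The line-by-line search mentioned above can be sketched in Perl like this -- a minimal sketch, assuming a tab-delimited file; the field layout (site, timestamp, amount) and the function name are just illustrative, not the actual system's:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Scan a tab-delimited flatfile one record at a time instead of
# slurping it all into memory -- this is what keeps even a 20 MB
# append-only table workable: memory stays flat no matter how
# big the file grows.
# The (site, timestamp, amount) layout is a hypothetical example.
sub find_site_records {
    my ($file, $site) = @_;
    my @hits;
    open my $fh, '<', $file or die "Can't open $file: $!";
    while (my $line = <$fh>) {
        chomp $line;
        my ($rec_site, $when, $amount) = split /\t/, $line;
        push @hits, [ $when, $amount ] if $rec_site eq $site;
    }
    close $fh;
    return @hits;
}
```

Since the file is only ever appended to, adding a record is just an append-mode open and a print -- no locking subtleties beyond what a single `flock` would cover for concurrent writers.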
Generally, once the statistics are done, the labels printed, or whatever the end use is, I just archive the original and intervening transformations of the data, all flat. My only tools are MacPerl and BBEdit. I've never yet tried to use MacPerl with a Mac SQL RDBMS, and I gather from this list that there are few choices and some barriers.

I use an SE-30 with 20 MB RAM as a dedicated data-muncher while I do risky stuff on my main machine. By necessity the SE-30 has low-footprint software. It still surprises me how fast that cutie can rip through data with MacPerl.

Incidentally, the best way to find out what's optimal for your use is to try it. Consider reading O'Reilly's Programming the Perl DBI, by Descartes and Bunce. It has an excellent discussion of both approaches, allowing you to make a decision with understanding rather than with someone's rules of thumb, even mine ;-).

1;

--
- Bruce

__Bruce_Van_Allen___bva@cruzio.com__831_429_1688_V__
__PO_Box_839__Santa_Cruz_CA__95061__831_426_2474_W__

# ===== Want to unsubscribe from this list?
# ===== Send mail with body "unsubscribe" to macperl-request@macperl.org