Date: Tue, 19 Aug 2003 15:47:42 -0400
With the explosion of data collection into IMAGE datasets, I can see the day when the 128 GB limit for a large file will be exceeded, especially in the medical field, where data must now be retained for six years. Why can't IMAGE be modified to use 128 GB chunks instead of 4 GB chunks, with straight record-number access and dynamic expansion? There would be no need for the new large-file functionality, and for those of us staying on MPE, there would be plenty of breathing room for extra-large datasets. I am told these are not major changes to IMAGE, QUERY, DBUTIL, etc.
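To put the capacity question in rough numbers, here is a back-of-the-envelope sketch. The 4 GB figure matches the size of a jumbo-dataset chunk; the 256-byte record size, the function name, and the single-chunk comparison are illustrative assumptions, not anything from the post or from IMAGE itself.

```python
# Illustrative capacity arithmetic only -- not IMAGE internals.
GB = 2**30  # bytes per gigabyte (binary)

def max_records(chunk_bytes: int, num_chunks: int, record_bytes: int) -> int:
    """Upper bound on fixed-length records a chunked dataset can hold."""
    return (chunk_bytes * num_chunks) // record_bytes

# A hypothetical 256-byte record in one 4 GB chunk vs one 128 GB chunk:
print(max_records(4 * GB, 1, 256))    # 16_777_216 records
print(max_records(128 * GB, 1, 256))  # 536_870_912 records
```

The point of the arithmetic: raising the chunk size from 4 GB to 128 GB multiplies per-chunk record capacity by 32 without changing the record-number access model.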
Wyell Grunwald
* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *