HP3000-L Archives

August 2003, Week 3

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Wyell Grunwald <[log in to unmask]>
Reply-To: Wyell Grunwald <[log in to unmask]>
Date: Tue, 19 Aug 2003 15:47:42 -0400
Content-Type: text/plain
With the explosion of data collection into IMAGE datasets, I can see the day when even the 128 GB large-file limit will be exceeded, especially in the medical field, where data must now be retained for six years.  Why can't IMAGE be modified to use 128 GB chunks instead of 4 GB chunks, with straight record-number access and dynamic expansion?  There would be no need for the new large-file functionality, and for those of us staying on MPE, there would be plenty of breathing room for extra-large datasets.  I am told these are not major changes to IMAGE, QUERY, DBUTIL, etc.
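As a rough sketch of the arithmetic (assuming a hypothetical 256-byte record size and a simple chunked layout for illustration, not actual IMAGE internals), moving from 4 GB to 128 GB chunks multiplies each chunk's record capacity by 32, and straight record-number access still falls out of plain division:

#include <stdio.h>

#define GB (1024ULL * 1024ULL * 1024ULL)

int main(void)
{
    /* Assumed record size for illustration only. */
    unsigned long long rec_size    = 256;
    unsigned long long small_chunk = 4ULL   * GB;  /* current chunk size  */
    unsigned long long large_chunk = 128ULL * GB;  /* proposed chunk size */

    unsigned long long recs_small = small_chunk / rec_size;
    unsigned long long recs_large = large_chunk / rec_size;

    printf("records per 4 GB chunk:   %llu\n", recs_small);
    printf("records per 128 GB chunk: %llu\n", recs_large);

    /* Straight record-number access: chunk index and byte offset
       come from simple division, whatever the chunk size is. */
    unsigned long long recno = 123456789ULL;
    printf("record %llu -> chunk %llu, offset %llu\n",
           recno, recno / recs_large,
           (recno % recs_large) * rec_size);
    return 0;
}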

Wyell Grunwald

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
