Wyell asks:
> With the explosion of data collection into IMAGE datasets, I see the day
> when 128Gb for a large file will be exceeded, especially if you are in the
> medical field and now must start collecting data for 6 years. Why is it
> that IMAGE cannot be modified to have 128Gb chunks instead of 4Gb chunks
> with straight record# access and dynamic expansion? There would be no need
> for the new large file functionality, and for those of us staying on MPE,
> there would be plenty of breathing room for extra large datasets. These
> changes I am told are not major changes to IMAGE, QUERY, DBUTIL, etc.
In the not-too-distant future, when there won't be anyone left to make
modifications to anything IMAGE-related, you can always fall back on the
simple ingenuity that earlier legions of HP3000 users were known for and
"chunkify" your databases yourself. Simply create new databases in new
groups, all named and architected identically, where one group's database
contains the data from 1999 to 2004, the next from 2004 to 2008, and so on.
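The routing logic this scheme needs is tiny: given a record's date, pick the group whose database chunk covers it. A minimal sketch in Python, where the group names and year boundaries are purely illustrative (the post doesn't prescribe any):

```python
from datetime import date

# Hypothetical date-range-to-group table for the "chunkified" scheme.
# Each entry names a group holding an identically structured database
# covering the given span; names and boundaries are illustrative only.
CHUNKS = [
    (date(1999, 1, 1), date(2003, 12, 31), "HIST9904.DATA"),
    (date(2004, 1, 1), date(2007, 12, 31), "HIST0408.DATA"),
    (date(2008, 1, 1), date.max,           "CURRENT.DATA"),
]

def group_for(d: date) -> str:
    """Return the group whose database chunk covers date d."""
    for start, end, group in CHUNKS:
        if start <= d <= end:
            return group
    raise ValueError(f"no chunk covers {d}")
```

An application (or a report front end) would open the one database `group_for` selects, or loop over all chunks for a cross-period report; because every chunk is architected the same, the same DBGET logic works against each.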
Doing this is far better than archiving the data. It's been my impression
that archived data is as good as dead data: once it's off the machine, it's
really never restored. In this auto-chunkified state, however, all of the
data is still spinning and remains immediately accessible, even if it's
rarely accessed in practice.
Better yet, there's no practical limit to how much data you can store
under this mode of operation.
Wirt Atmar
* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *