HP3000-L Archives

February 2000, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Sletten Kenneth W KPWA <[log in to unmask]>
Date: Thu, 10 Feb 2000 21:18:58 -0800
James after Vikram:

> Adding more space to Image would only suffice in the short
> term. ......  I believe the saying then, before PA-RISC, was:
> what are you going to do with an Image DB with 4G records?
> Well, Jumbo sets give you more room to reach the 4G
> record limit. But until HP again changes the underlying
> structure, having more space will only make you cry when
> you hit the 4G limit. Of course you can have larger and larger
> records to bypass this limit, but I think your DB would begin to
> look a little funny.
>
> So until users start hitting the limits, HP will probably tinker with
> structure changes within the Labs, until they come up with
> something that answers the need and allows forward conversion
> without major problems.


With the current "EntryByName" scheme that TurboIMAGE uses,
you can put *up to* 80 GBytes of data in one DATASET (it was
up to only 40 GB until recently;  a relatively minor change to use
a sign bit allowed doubling the previous JUMBO scheme max to
80 GB)...

BUT:
The max number of individual RECORDS that can currently be
entered in one dataset depends on the Block Factor....

HOWEVER:
There is near-term hope (more than just hope, actually)...:
On the SIGIMAGE "Now / Soon Available in Image/SQL" list is
an item that will be coming "soon":  Change EntryByName to
EntryByNumber (a migration utility will be provided).  WRT
IMAGE internal limits, users will then be able to enter up to 2G
records in an IMAGE dataset regardless of record size....
Considering that this is a 250-fold increase over what people
have been living with for datasets with Block Factor = 1,
hopefully that will hold most users who bumped up against that
particular limit for at least a little while....   :-)
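To see where that "250-fold" figure could come from, here is a back-of-envelope sketch.  The exact bit layout is my assumption for illustration (I am guessing EntryByName splits a record address into a block number plus a within-block offset, with the offset field around 8 bits wide); the sketch is not a statement of actual TurboIMAGE internals:

```python
# Assumed layout, for illustration only -- NOT documented TurboIMAGE internals:
# EntryByName packs a record address as (block number, offset within block).
# With Block Factor = 1, nearly all offset values go unused.

ADDRESS_BITS = 31   # assume one bit of the 32-bit address is reserved
OFFSET_BITS = 8     # assumed width of the within-block offset field

def max_records_entry_by_name(block_factor):
    """Max addressable records under the assumed block/offset split."""
    max_blocks = 2 ** (ADDRESS_BITS - OFFSET_BITS)
    return max_blocks * block_factor

max_records_entry_by_number = 2 ** 31   # flat 2G record numbers

bf1 = max_records_entry_by_name(1)
print(bf1)                                  # ~8.4 million records with BF = 1
print(max_records_entry_by_number // bf1)   # ~256x, i.e. roughly "250-fold"
```

Under those assumed field widths the Block Factor = 1 ceiling works out to about 8.4 million records, and moving to flat 2G record numbers is about a 256x jump, which lines up with the rough 250-fold figure above.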

THEN:
*Also* coming "soon" to IMAGE will be MPE "Large Files".  IIRC
the initial incarnation of Large Files will support single MPE and
KSAM files (and therefore IMAGE B-Tree files) up to 128 GB.
Just what the "next increment" for MPE Large Files will be after
that and when it will happen is still (at least AFAIK) TBD....

BUT:
Eventually, with a combination of the EntryByNumber internal
limit expansion and MPE Large Files, a single TurboIMAGE
dataset will (at least theoretically) be able to hold 10 TERABytes.
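The 10 TB figure checks out arithmetically if you assume records in the few-KB range (the 5 KB record size below is my illustrative assumption, not a quoted IMAGE limit):

```python
# Rough capacity check; record size is an assumption for illustration.
records = 2 ** 31        # EntryByNumber: 2G record numbers per dataset
record_bytes = 5120      # assumed ~5 KB record size

total = records * record_bytes
print(total / 2 ** 40)   # -> 10.0 (terabytes)
```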

BUT....  CONSIDER:
Jerry Fochtman recently posted some interesting real-world
data on how long it took to load (what was it, 70 GB?) running
on a high-end machine...  I think he said about seven days (and
that was with only two search items in a Detail)....  At that rate,
loading 1 TB would take something like 100 days....  If people
are going to need to do that, I guess HP will have a good market
for new, higher-performance HP 3000's....  oops...:  HPe3000
(I refuse to put a space between "HP" and "e")...
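The extrapolation above is just straight-line arithmetic on the reported numbers (70 GB in about seven days, taking Jerry's figures at face value):

```python
# Straight-line extrapolation of the reported load rate.
gb_loaded = 70
days_taken = 7
rate = gb_loaded / days_taken   # GB per day

days_for_1tb = 1000 / rate      # using 1 TB = 1000 GB
print(days_for_1tb)             # -> 100.0 days
```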

NOW:
In his first paragraph above, James talks about 4G *records*, not
bytes of capacity in a dataset....  I have to ask:  Are there really
sites out there now (or projected any time soon) that think they
will need to store more than two billion individual RECORDS in a
single IMAGE dataset ??...  Seriously, if there are, the SIGIMAGE
Executive Committee (SIEC) would very much like to hear from
you....  Even better, attend the SIGIMAGE meeting next week on
Wed 16 Feb 00 at SIG3000 if you can....   ;-)

Ken Sletten
SIGIMAGE Chair
