HP3000-L Archives

February 1998, Week 2

HP3000-L@RAVEN.UTC.EDU

From: Jerry Fochtman <[log in to unmask]>
Date: Mon, 9 Feb 1998 07:47:09 -0600

At 07:30 AM 2/9/98 -0800, Backuj wrote:
>     Does anybody know the limitations of an IMAGE Detail Dataset with
>regards to the number of records and the amount of disk space allowed, under
>MPE/iX 5.0.

Without considering entry size, IMAGE's current record pointer limit is
255 entries per block, with a maximum of 2**23 - 1, or 8,388,607, blocks.
Therefore the maximum possible number of records for a dataset is
2,139,094,785.  However, keep in mind that the block size is limited to a
maximum of 2,560 half-words (5,120 bytes).  So divide your media entry size
(which includes the pointers as well!) into this to determine the maximum
blocking factor you can achieve, then multiply the result by 8,388,607 to
obtain the maximum possible entry count for the set.  NOTE: When
calculating this you'll also need to consider that each block has a bit-map
consisting of 1 bit per entry.  So for each 16 entries (or part of 16
thereof) you'll need one half-word of overhead in the block as well.
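
To make the arithmetic concrete, here is a minimal Python sketch of the
calculation just described (the 128-byte media entry size is a made-up
example value, and the script is only an illustration, not anything IMAGE
actually runs):

    BLOCK_BYTES   = 5120        # 2,560 half-words of usable block space
    MAX_BLOCKS    = 2**23 - 1   # 8,388,607 -- record-pointer block limit
    MAX_PER_BLOCK = 255         # record-pointer entry-number limit

    def max_blocking_factor(media_entry_bytes):
        # Largest blocking factor whose entries plus the per-block bit-map
        # (one half-word, 2 bytes, per 16 entries or part thereof) still
        # fit within one block.
        best = 0
        for b in range(1, MAX_PER_BLOCK + 1):
            bitmap_bytes = -(-b // 16) * 2      # ceil(b/16) half-words
            if b * media_entry_bytes + bitmap_bytes <= BLOCK_BYTES:
                best = b
        return best

    entry_bytes = 128                           # hypothetical media entry size
    bf = max_blocking_factor(entry_bytes)
    print("blocking factor:", bf)               # -> 39 for a 128-byte entry
    print("max entries   :", bf * MAX_BLOCKS)   # -> 327155673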

In terms of file space, with jumbo sets one can have 99 chunk files, each
up to a maximum of 4GB, for a total of 396GB.  However, I believe it was
once determined that for an 'ideal' detail set whose characteristics fit
within the 255 blocking-factor limit, the maximum space needed ended up
being somewhere around 44GB when the IMAGE internal pointer limit was
reached.
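
For the curious, those space figures work out roughly like this
(back-of-the-envelope only; the decimal-GB rounding is mine):

    GB = 10**9                       # decimal gigabytes, for rough figures

    # 99 jumbo chunk files of 4GB each:
    print(99 * 4)                    # -> 396 (GB)

    # If all 2**23 - 1 addressable blocks are used at the full 5,120-byte
    # block size, the block space alone comes to:
    print((2**23 - 1) * 5120 / GB)   # -> ~42.9, in the region of the ~44GB
                                     #    figure mentioned above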

(By the way, members of the SIGIMAGE Executive Committee have been discussing
IMAGE's current pointer limitations with HP ...)

MPE/iX 5.0 with Express 3 was the first release to provide support for
jumbo sets.  Without jumbo sets, all of the above is limited to a total
space of 4GB (minus approximately 64KB of file system overhead) for a data
set.  With a small blocking factor, this has a significant impact on the
maximum number of entries possible for the set.
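
As a rough illustration of how hard that 4GB ceiling bites (the blocking
factors below are hypothetical, and the ~64KB overhead is the
approximation quoted above):

    BLOCK_BYTES = 5120
    MAX_BLOCKS  = 2**23 - 1
    FILE_LIMIT  = 4 * 2**30 - 64 * 2**10   # ~4GB less ~64KB of overhead

    def max_entries_non_jumbo(blocking_factor):
        # Entry count is capped by whichever runs out first: file space
        # or the internal pointer limit on block numbers.
        usable_blocks = min(FILE_LIMIT // BLOCK_BYTES, MAX_BLOCKS)
        return usable_blocks * blocking_factor

    for bf in (5, 39, 255):
        print(bf, max_entries_non_jumbo(bf))
    # -> 5 4194240, 39 32715072, 255 213906240; even at 255 this is well
    #    below the ~2.1 billion pointer-limit maximum.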

/plug-alert

DBGeneral's option 1.4 will calculate the maximum attainable capacity of a
set given its current attributes, along with (if appropriate) calculating
a more optimal blocking factor and showing the maximum capacity using that
as well.  The 4GB file-system limitation is taken into consideration
when determining the maximum capacity of standard sets.  For master sets,
the maximum possible capacity calculations will also be tempered (if
needed) by the capacity limitations of a b-tree index if one is attached
to the dataset (a version 7.2 feature).

/end-plug



/jf
                              _\\///_
                             (' o-o ')
___________________________ooOo_( )_OOoo____________________________________

                        Monday, February 9th

___________________________________Oooo_____________________________________
                            oooO  (    )
                           (    )  )  /
                            \  (   (_/
                             \_)
