HP3000-L Archives

December 1996, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From:          Jerry Fochtman <[log in to unmask]>
Reply To:      Jerry Fochtman <[log in to unmask]>
Date:          Thu, 12 Dec 1996 07:50:05 -0600
Content-Type:  text/plain
Parts/Attachments: text/plain (55 lines)

At 09:51 PM 12/10/96 +0100, Goetz Neumann wrote:
>SIMPKINS, Terry wrote:
/SNIP
>> What is the current maximum number of datasets in a TurboImage database (I
>> ought to remember that one, but I don't)?
>
>I will leave that to the IMAGE experts here, although I think it is 256
>(though the naming convention would allow 360).

The limit for TurboIMAGE is 199 'standard' datasets plus the rootfile, or
200 files.  But this may not be the number of actual physical files in the
database....

If all sets are stand-alone masters which are set up for b-tree indexes, one
can have 399 files (2*199 + rootfile).  If all the sets are stand-alone
jumbo detail sets, one can have up to 99 'small' HFS 'chunk' files in
addition to the standard dataset file for each set, for a total of 19,901
files ((199*99 chunk files) + (199 std set files) + rootfile).

Finally, if the sets are stand-alone *small* jumbo masters that also have
b-tree indexes (this assumes that the volume of index information (key and
pointer) can be accommodated in a single KSAM file), the total number of
files in an unopened database would be 20,100 files ((199*99 chunk
files) + (199 std set files) + (199 index files) + rootfile).
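For concreteness, here is a small sketch of the arithmetic behind these
counts (plain Python, purely illustrative; the constant names are mine, and
the per-set figures are just the ones quoted above):

    # Arithmetic for the TurboIMAGE file-count cases described above.
    MAX_SETS   = 199    # maximum datasets per database
    ROOTFILE   = 1      # one rootfile per database
    MAX_CHUNKS = 99     # max 'small' HFS chunk files per jumbo set

    plain = MAX_SETS + ROOTFILE                          # 200
    btree = 2 * MAX_SETS + ROOTFILE                      # 399, b-tree masters
    jumbo = MAX_SETS * MAX_CHUNKS + MAX_SETS + ROOTFILE  # 19,901, jumbo details
    jumbo_btree = MAX_SETS * (MAX_CHUNKS + 1 + 1) + ROOTFILE
    # 20,100: chunk files + std set file + index file per set, plus rootfile

    print(plain, btree, jumbo, jumbo_btree)   # -> 200 399 19901 20100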

Now if the base is opened, add 1 for the control file structures built at
open time.  These counts also do not take into account any 3rd-party
product external files (e.g. OMNIDEX indexes) or other application-dependent
files tied to the database that may be opened as well.

While these are the database limits, one significant limit would prevent
anyone from actually opening an entire database with this many files: a
single process can have at most 1024 files open simultaneously.
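Put another way (again just back-of-the-envelope arithmetic in Python, using
the figures from this note):

    worst_case    = 20100 + 1  # maxed-out base plus the open-time control file
    PROCESS_LIMIT = 1024       # files one process may hold open at once
    print(worst_case > PROCESS_LIMIT)  # -> True: one process can't open it all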

Recently there have been some discussions on XM and its impact on the
file count within a particular directory node (see Winston's previous note).
I'm not sure whether that really comes into play here, as I seem to recall
the issue surrounding the MPE file count is maintaining sequential order in
the directory. The space needed to perform this ordering once the file count
got above 10,000 would at some point exceed the XM transaction buffer size,
causing a system abort.

Yet as I understand it, HFS files are not maintained in sorted order, so
directory management of these files does not necessarily involve the same
type of large directory transactions through XM that MPE otherwise uses.
Given the maxed-out cases outlined above, where with all-jumbo sets the
majority of the files live in the HFS, it 'may' be possible to construct
just such a database.  Accessing it, though, would be another issue.... ;-)

/jf
