HP3000-L Archives

August 2003, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Wyell Grunwald <[log in to unmask]>
Date: Fri, 8 Aug 2003 11:35:22 -0400
What about this approach:

If a dataset is going to cross the 4 GB barrier, scan to see whether there are other jumbo datasets out there.  If there are, convert it to a jumbo; otherwise make it an LFDS, if IMAGE is at the correct level.
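That decision rule can be sketched in a few lines. This is only an illustration of the proposed default, not Adager's actual implementation; the function and parameter names are hypothetical, and the version string comparison assumes IMAGE's usual "C.xx.yy" form:

```python
# Sketch of the proposed default for a dataset about to exceed 4 GB.
# All names here are hypothetical; this is not Adager code.
IMAGE_LFDS_MIN_VERSION = "C.10.05"  # earliest TurboIMAGE trusted for LFDS

def choose_format(dataset_size_gb, other_jumbos_present, image_version):
    """Pick a storage format for a dataset that may exceed 4 GB."""
    if dataset_size_gb <= 4:
        return "plain"      # still fits in a single ordinary 4 GB file
    if other_jumbos_present:
        return "jumbo"      # stay consistent with the rest of the database
    if image_version >= IMAGE_LFDS_MIN_VERSION:
        return "lfds"       # IMAGE is new enough for LargeFile datasets
    return "jumbo"          # fall back to the long-supported format
```

The string comparison works because IMAGE version strings of this form sort lexicographically.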

Which brings up a good question: is it possible to have both jumbos and LFDS in the same database?

Wyell

>>> "F. Alfredo Rego" <[log in to unmask]> Friday, August 08, 2003 11:21:46 AM >>>
At 7:57 AM -0600 8/8/2003, Rene Woc wrote:

>Jumbo datasets are multi-file datasets to support datasets with more
>than 4 gigabytes. They consist of a "control file" and two or more
>"chunks," each one with a size of up to 4 gigabytes. The control file
>has the original (single-file) dataset file name, each one of the
>chunks has the dataset file name with a file extension ".001",
>".002", etc.
>
>LargeFile datasets (also called LFDS in the MPE/iX 7.5 Communicator)
>are single-file datasets with a size of up to 128 gigabytes. LFDS
>allows you to use dynamic expansion on datasets with maximum
>capacities larger than 4 gigabytes.
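Rene's quoted definitions translate into file layouts roughly like this. The dataset name and helper functions are illustrative only; the factual parts (a control file plus ".001", ".002", ... chunks of up to 4 GB each for jumbos, one file of up to 128 GB for an LFDS) come from the text above:

```python
def jumbo_files(name, size_gb, chunk_gb=4):
    """Files backing a jumbo dataset: a control file with the original
    dataset file name, plus chunks named .001, .002, and so on."""
    chunks = -(-size_gb // chunk_gb)  # ceiling division: 4 GB per chunk
    return [name] + [f"{name}.{i:03d}" for i in range(1, chunks + 1)]

def lfds_files(name, size_gb, max_gb=128):
    """A LargeFile dataset is one single file, up to 128 GB."""
    if size_gb > max_gb:
        raise ValueError("exceeds the 128 GB LFDS limit")
    return [name]
```

For example, a 10 GB jumbo needs a control file and three chunks, while the same data as an LFDS is one file.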

For additional information, please see
http://www.adager.com/AdagerGuide3.html to get an overall
perspective on various database issues (including high-level
database concepts, jumbo datasets, dynamic capacity expansion,
LargeFile datasets, B-trees, and so on).


>Kevin Cooper's presentation on 7.5 performance discusses the
>potential performance implications with LFDS. The main concern is in
>how single-threaded checkpoints in the transaction manager affect
>your overall application performance.
>
>The Adager default is to use jumbo datasets when a dataset exceeds 4
>gigabytes. The reasons for this default are historical as well as
>technical.

Rene and I have had long discussions on the subject.  We might as
well get some help from our friends at hp3000-L.

Rene strongly believes that what he wrote above should be the
default.

I strongly respect Rene's view, from a customer-support perspective.
My feeling, though, is that the default (provided the user is at
least on TurboIMAGE version c.10.05 AND the database does NOT have
any jumbo datasets) should be to create a LargeFile dataset (LFDS)
whenever a dataset's disc space exceeds 4 GB (either as a new dataset
or when changing an existing dataset via a variety of possibilities,
such as increasing its capacity, adding new fields/paths to it,
increasing the size of some fields, and so on).
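Alfredo's proposed default hinges on two conditions holding at once. A minimal sketch, assuming a crude size estimate of entry length times capacity (real IMAGE adds per-block overhead, and the helper name is hypothetical):

```python
FOUR_GB = 4 * 1024**3  # the single-file barrier, in bytes

def default_to_lfds(entry_bytes, capacity, image_version, has_jumbos):
    """Proposed default: create an LFDS whenever the dataset needs more
    than 4 GB, provided TurboIMAGE is at least C.10.05 AND the database
    has no jumbo datasets.  Otherwise keep the jumbo default."""
    needs_large = entry_bytes * capacity > FOUR_GB  # rough size estimate
    return needs_large and image_version >= "C.10.05" and not has_jumbos
```

With 512-byte entries and a 10,000,000-entry capacity (about 4.8 GB), the default would pick an LFDS only on a jumbo-free database at C.10.05 or later.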

We have toyed with all kinds of possibilities (including setting
JCWs and/or variables, changing the Adager dialogue, and so on).

What are YOUR opinions and desires?  The "functionality" is the
easy part.  The "user interface" is extremely hard, and we are at
our wits' end trying to juggle all the possibilities and
trying to satisfy an enormous tangle of diverging requirements.

From my ivory-tower (Adager Labs) perspective, I favor MY opinion
regarding the default.  From Rene's real-life customer-support
perspective, dealing with legacy material (pre-IMAGE-c.10.05
and pre-MPE/iX 7.5), the view is slightly different.  As he writes:

>Once you enable a database for LFDS you cannot open it on
>a system with a version of TurboIMAGE previous to C.10.xx. C.10.xx is
>only available on 7.5. Support for jumbo datasets has been available
>for a long time, since IMAGE version C.06.xx (shipped with MPE/iX 5.0
>PowerPatch 3).

The "stealth" thing is that OPERATING SYSTEM SUPPORT for Large *Files*
has been there for a VERY long time (MPE/iX 6.SomethingOrAnother?).

TurboIMAGE support for LargeFile *datasets* is more recent, but there
were some early versions of TurboIMAGE that had some "issues".
Rather than dealing with those issues, I made the unilateral choice
of NOT supporting LargeFile datasets on any version of TurboIMAGE
older than c.10.05.  Sorry, but this is for everybody's benefit.


>To enable a database for LFDS with Adager you use the command "use
>largefiles", available in our 2003 version.

Rene's terse statement doesn't reveal the agonizing times we
spent trying to come up with a good (unambiguous, non-misleading)
name for the command.  At first, we had "convert jumbo" but we
soon realized that many databases did NOT have any jumbo datasets
and they should be allowed to use LargeFile datasets if the user
so desired ("convert jumbo" was a No-Op on such Jumbo-free databases
and Adager would not allow LargeFile datasets for them -- bummer).
And so on...  I will spare you the sequence of tries.


>Adager's 2003 release contains support for LFDS as well as a
>high-performance function to convert your jumbo datasets to LFDS.

Use your browser's SEARCH function to look for "LargeFile" in
http://www.adager.com/AdagerGuideVerbObject.html and follow
the links to get to the specific Adager functions that handle
LargeFile datasets (A.K.A. LFDS).  Adager accepts both "LFDS"
and "LargeFile" as synonyms for the same database object.


>Please contact Adager Support if you are interested in receiving an
>early shipment.


Why the "caveat" (and the following note from Wyell)?


At 9:25 AM -0400 8/8/2003, Wyell Grunwald wrote:
>Adager is not quite ready - very soon.  They are testing this release at present.  Performance should be about the same.  The biggest difference is dynamic expansion.  You cannot do dynamic expansion on jumbo chunks, but you can on large files.  This should save significant disc space, as you won't have to have your capacities set ultra high.
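Wyell's disc-space point can be made concrete. Jumbo chunks cannot use dynamic expansion, so capacity must be pre-set for the worst case; an LFDS can start near its current entry count and let dynamic expansion grow it. The numbers and entry size below are purely illustrative:

```python
ENTRY_BYTES = 512                # hypothetical media-record size

def disc_gb(capacity):
    """Disc space, in GB, consumed by a dataset of this capacity."""
    return capacity * ENTRY_BYTES / 1024**3

jumbo_capacity = 20_000_000      # pre-set ultra high: chunks can't expand
lfds_initial   = 9_000_000       # current entries plus headroom; DDX grows it

saved_gb = disc_gb(jumbo_capacity) - disc_gb(lfds_initial)
```

With these illustrative figures, the pre-set jumbo burns roughly 9.5 GB up front while the expandable LFDS starts around 4.3 GB, a saving of over 5 GB until the data actually arrives.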


It turns out that Adager has TWO parts of the LargeFile structure in place:

1) The CREATION of new LargeFile datasets from scratch.

2) The CONVERSION of existing Jumbo datasets to LargeFile datasets.


We are, however, dealing with having to craft a work-around for what
appears to be a deep file-system challenge regarding Adager's
"fast technology" way to EXPAND certain datasets "instantaneously".

My decision today, after having seen the traffic on hp3000-L, involves
the following stages:

a) Release what I have now (i.e., Adager's "slow technology") to expand
   those "certain datasets" into the Large-File realm with the
   time-tested brute-force approach that I started using in 1978 and
   with (1) & (2) above as complementary building blocks.

b) Release a "faster technology" (faster than the brute-force approach
   but not as fast as Adager's ideal "fast technology").

c) Continue working on solving the deep puzzle that has us stumped so
   that we can unleash the full power of Adager's fast technology for
   LargeFile datasets.


With my best regards and appreciation for the assistance I get from my
friends and colleagues,

  _______________
 |               |
 |               |
 |            r  |  Alfredo                     [log in to unmask] 
 |          e    |                           http://www.adager.com 
 |        g      |  F. Alfredo Rego                   
 |      a        |  Manager, R & D Labs           
 |    d          |  Adager Corporation
 |  A            |  Sun Valley, Idaho 83353-3000            U.S.A.
 |               |
 |_______________|

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
