Once the high-water mark (HWM) reaches capacity, DBPUT then falls back to
following the delete chain for an open slot. Once the delete chain is
exhausted, you have a full dataset....
From a performance perspective, HWMPUT is a fast way to batch in large
amounts of data, and it can be enabled dynamically for that purpose,
especially if the incoming data is already sorted consistent with how it
will be retrieved.
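To illustrate the behavior described above, here is a toy Python sketch of
the slot-selection logic (not actual MPE code; the class and method names
are invented for illustration, loosely mirroring DBPUT, DBDELETE, and a
mode-2 serial DBGET):

```python
# Toy model of TurboIMAGE detail-set slot selection (illustration only;
# names and structure are invented, not actual MPE intrinsics).
class DetailSet:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity   # record area
        self.hwm = 0                     # high-water mark: next never-used slot
        self.delete_chain = []           # freed slots, most recently freed first
        self.hwmput = True               # HWMPUT enabled (via DBUTIL SET)

    def dbput(self, record):
        # With HWMPUT on, prefer the slot at the high-water mark so entries
        # stay in chronological slot order; fall back to the delete chain
        # only once capacity is reached.  With HWMPUT off (the default),
        # reuse slots from the delete chain first.
        if self.hwmput and self.hwm < self.capacity:
            slot = self.hwm
            self.hwm += 1
        elif self.delete_chain:
            slot = self.delete_chain.pop(0)
        elif self.hwm < self.capacity:
            slot = self.hwm
            self.hwm += 1
        else:
            raise RuntimeError("dataset full")
        self.slots[slot] = record
        return slot

    def dbdelete(self, slot):
        self.slots[slot] = None
        self.delete_chain.insert(0, slot)  # freed slot goes on the chain

    def serial_read(self):
        # Like DBGET mode 2: walk slots in physical order, skipping holes.
        return [r for r in self.slots if r is not None]
```

With `hwmput = True`, a put after a delete still lands at the bottom of the
dataset, so a serial read stays in entry order; with `hwmput = False`, the
new record fills the hole and entry order is lost.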
At 12:00 PM 9/24/2004 -0700, Tracy Pierce wrote:
>I didn't even know there was such an option as HWMPUT!
>
>That could be great as long as you never let the HWM approach capacity. But
>it seems to leave a problem just waiting to happen if the authors rely on
>that assumption and the users, maybe years later, don't know about it.
>
>Tracy
>
> > -----Original Message-----
> > From: Wyell Grunwald [mailto:[log in to unmask]]
> > Sent: Friday, September 24, 2004 11:52 AM
> > To: [log in to unmask]; [log in to unmask]
> > Subject: Re: Turbo Image Help
> >
> >
> > Oh - good point - I forgot that the default action of IMAGE is to add
> > new records where old ones were deleted. We have HWMPUT (high water
> > mark put) turned on for all our databases - which means new entries
> > are added at the bottom of the dataset UNLESS you reach the maximum
> > capacity (which we always set really high). The way we have it set up,
> > a serial read is guaranteed to return the entries in chronological
> > order.
> >
> > >>> Tracy Pierce <[log in to unmask]> Friday, September 24, 2004
> > 2:45:18 PM >>>
> > this had better be a detail set, or you're simply out of luck.
> > further, if it's an existing database and you don't already have the
> > dummy item, ditto. then,
> >
> > Option 1: provided that you've never deleted a record, yes, the records
> > will appear in the order they were added. Your performance should be
> > approximately that of reading a plain flat sequential file with a tiny
> > amount of Image overhead. If you've ever deleted records, new records
> > will fill their holes, so option 1 is out. If no deletions and you want
> > super performance, use Suprtool, which bypasses not just Image but the
> > file system, so you get huge blocks of recs for each read.
> >
> > Option 2: assuming an always-existent path, each record on each chain
> > (each chain occurring separately for every possible value of the path's
> > item) on that path will be returned in entry sequence, deletions or
> > not. So you'll be able to see the relative sequence of entry for each
> > path-item value, but for different values, no dice.
> >
> > If this is for a NEW database, your dummy item trick will work fine,
> > but be sure to put it at the end of the record - sort items AND
> > EVERYTHING TO THEIR RIGHT are included in the 'sort'. You can use this
> > trick to keep a detail set in 'key' sequence, too - put the dummy field
> > at the front, and the key field next.
> >
> > Creating a new path on an existing base is not going to help.
> >
> >
> >
> > > -----Original Message-----
> > > From: Venkatraman Ramakrishnan [mailto:[log in to unmask]]
> > > Sent: Friday, September 24, 2004 10:09 AM
> > > To: [log in to unmask]
> > > Subject: Turbo Image Help
> > >
> > >
> > > Hello Everybody,
> > >
> > > We are writing HP COBOL programs to access TurboIMAGE databases on
> > > MPE.
> > >
> > > We want to retrieve the records in the sequence in which they were
> > > inserted into the dataset. We have thought of the three options
> > > below:
> > >
> > > Option 1: Serial read using DBGET Mode 2
> > >
> > > a) Serial read the dataset using DBGET Mode 2
> > > b) Process each record
> > >
> > > Questions:
> > > 1. Will the sequence be maintained in this case?
> > > 2. Will there be performance issues when the dataset grows huge,
> > > since DBGET mode 2 needs to pass through each record?
> > >
> > > Option 2: Chain read using DBGET Mode 5
> > >
> > > a) Chain read using a dummy field created which will select all the
> > > records in the dataset
> > > b) Sort the table using a Sequence based on the above dummy field
> > > c) Process each record
> > >
> > > Option 3: Message file (Option not preferred)
> > >
> > > a) Insert records into message file
> > > b) Read from message file and process the record
> > >
> > > Please let us know your thoughts on the feasibility of the first two
> > > options, which would be preferable, and why. We have decided not to
> > > use message files.
> > >
> > >
> > > Warm Regards
> > > Venkat
> > >
> > > * To join/leave the list, search archives, change list settings, *
> > > * etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
> > >
> >
> >
>