HP3000-L Archives

September 1997, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Gary Groves <[log in to unmask]>
Reply To: Gary Groves <[log in to unmask]>
Date: Fri, 5 Sep 1997 14:52:04 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (94 lines)
Isn't assuming an optimally packed dataset a dangerous assumption, or at
least an unrealistic one? (Just as assuming that one never repacks may
also be dangerous and/or unrealistic :-) ). Some shops, I know, have the
luxury of being able to go down for hours and do maintenance. That's
great. Some do not.

For our shop, repacking one dataset would require over 15 HOURS of down
time. We simply can't afford that. When 100% uptime is required
(we are 24x7), how do you repack the set and maintain availability?

ENHANCEMENT REQUEST

I would love for Alfredo to enhance Adager to repack a dataset and allow
it to be available for use! ;-).

END ENHANCEMENT REQUEST

I will 'never' be able to repack the set. So, over time, after removing
data no longer required, there will be no fundamental difference between
a forward chain read and a reverse chain read. Correct? Not correct?
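A toy block-I/O model makes the tradeoff concrete (a Python sketch; the 256-byte entry size, the forward-only one-block prefetch, and the example chain layouts are illustrative assumptions, not IMAGE internals - only the 4096-byte transfer size comes from the thread). On a packed chain, a forward read benefits from prefetch while a reverse read does not; once churn scatters the chain, both directions pay one fetch per entry:

```python
BLOCK_BYTES = 4096                       # 4 KB transfer size cited in the thread
ENTRY_BYTES = 256                        # hypothetical media-record size
PER_BLOCK = BLOCK_BYTES // ENTRY_BYTES   # 16 entries per block

def io_count(entry_addresses, prefetch=1):
    """Count block fetches needed to visit entries in the given order.
    A fetch caches the target block plus `prefetch` blocks *forward*;
    prefetch never reads backward in this model."""
    cached, ios = set(), 0
    for addr in entry_addresses:
        block = addr // PER_BLOCK
        if block not in cached:
            ios += 1
            cached.update(range(block, block + 1 + prefetch))
    return ios

# Optimally packed chain: 64 consecutive entries spanning 4 blocks.
packed = list(range(64))
print(io_count(packed))           # forward: 2 fetches (prefetch helps)
print(io_count(packed[::-1]))     # reverse: 4 fetches (prefetch wasted)

# The same chain after heavy delete/reuse churn: consecutive entries
# land about 62 blocks apart, so nothing contiguous is left to prefetch.
scattered = [i * 997 for i in range(64)]
print(io_count(scattered))        # forward: 64 fetches
print(io_count(scattered[::-1]))  # reverse: 64 fetches
```

Under this model the scattered case costs the same in either direction, which supports the "no fundamental difference" intuition for a never-repacked set - though real IMAGE buffering is more involved than this sketch.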

> -----Original Message-----
> From: Lee Gunter [SMTP:[log in to unmask]]
> Sent: Friday, September 05, 1997 12:52 PM
> To:   [log in to unmask]
> Subject:      Re[2]: [HP3000-L] Image Inefficient?
>
> Image will allow you to begin at end-of-chain and read the chain in
> reverse (DBGET mode 6, I think).  The inefficiency referred to is
> probably due to the way records are retrieved from disk.  Image
> retrieves blocks of data in increments of 4096 bytes (2048 half-words).
> Assuming the chain you're reading is optimally packed - i.e., all
> blocks contiguous - the prefetch characteristics cause the blocks to be
> read forward and will read 1-to-n contiguous entries of the chain into
> memory, reducing the number of I/O's required to retrieve all the
> records.  A reverse chained read doesn't cause the blocks to be read
> "backward"; therefore, it will usually incur more disk I/O's to
> retrieve all the chain entries.
>
> Typically, a reverse chained read is most useful for applications
> whose data are stored in a sequence and which need the most recently
> added entry or entries first.
>
> I hope this helps (and is essentially accurate    :-).
>
> Lee Gunter
> Regence Blue Cross Blue Shield of Oregon / Regence HMO Oregon
>
> mailto:[log in to unmask]
>
> voice...503-375-4498
> fax.....503-375-4401
> ==========================================================
> The opinions expressed, here, are mine and mine alone, and do not
> necessarily reflect those of my employer.
>
>
> ______________________________ Reply Separator
> _________________________________
> Subject: Re: [HP3000-L] Image Inefficient?
> Author:  "Michael A. Dobies" <[log in to unmask]> at ~INTERNET
> Date:    9/5/97 8:35 AM
>
>
> Don't you have to go to the end of the chain before you can go backwards?
>
> Michael Dobies
>
> -----Original Message-----
> From:   Gary Groves [SMTP:[log in to unmask]]
> Sent:   Friday, September 05, 1997 7:49 AM
> To:     [log in to unmask]
> Subject:        Image Inefficient?
>
> I need to create a new Image dataset. I want to retrieve the records
> in reverse date order (LIFO). I was going to do a backwards chain
> read. Someone told me that a reverse chain read was not efficient.
>
> For other design reasons, I've decided to use a composite Omnidex key,
> but I'm curious if the above is true. If so, Why?
>
> Gary Groves
>
> Kwestions, Kwalms, Kweries, Komments....Kall!
>
>
> http://www.superstar.com
> http://www.netlinksat.com
> http://www.uvsg.com
