HP3000-L Archives

October 2002, Week 5

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Fred White <[log in to unmask]>
Reply To: Fred White <[log in to unmask]>
Date: Tue, 29 Oct 2002 07:54:26 -0700
Content-Type: text/plain
Parts/Attachments: text/plain (79 lines)

On Monday, October 28, 2002, at 08:32 PM, Sletten Kenneth W KPWA wrote:

> Last Friday Steve Almes asked:
>
>> ... We currently have a home-grown application that
>> logs on at night and checks the capacity vs. entries
>> of our IMAGE databases and then does a resize on them.
>> However, sometimes our production databases still
>> blow up in the middle of a run with a "data set full"
>> error message.  I was looking in the IMAGE manual and
>> now I'm trying to set up automatic capacity management
>> on all my production databases.  ...
>
> Fred already addressed the "prime number" issue, so I'll
> add $0.02 elsewhere:
>
> We have been using dataset dynamic expansion (DDX) on
> our Detail datasets for years;  never had a problem
> (there were some real problems with DDX in certain cases
> in the early years, but if you are on a recent / current
> release of TurboIMAGE you should have no worries on that
> score).  Two other things to consider:
>
> (1)  Don't make your DDX increments too small.  While it
> depends in large part on the daily "record add rate",
> all else being doable, I tend to make the DDX increment
> somewhere around one percent of the CURRENT capacity,
> plus or minus a bit.  There is nothing magic about that
> formula;  it's just a rough convention that we adopted.
> I have heard stories about people setting DDX increments
> at less than 10 entries.  If you add a lot of entries
> with very small DDX increments, the disc fragmentation
> could get really ugly. Several of our large, "add-active"
> datasets have increments of 20,000 entries or more...
>
> (2)  Properly set up (and perhaps / probably with an
> occasional run with a defrag product), DDX on Details
> can run until you get close to your MAXCAP;  no problem.
> *HOWEVER*:  That's not the case (or at least likely will
> not be the case) with DDX for Masters:  Pressed for time
> right now, so I'll just say that DDX for Masters should,
> I believe, usually be considered just a temporary stop-gap
> (although temporary might last for a while) until you
> get a chance to "normally" expand and rehash the master
> with the database tool of your choice.  If you push too
> many master entries into the expansion area, you could
> run into serious performance problems.  As usual, YMMV;
> depending on your environment...
>
> Ken Sletten


Ken is "right on". INCREMENTS shouldn't be HUGE (adequate free space
may not exist at the time of a dynamic expansion) nor should they be
TINY (fragmentation of disk space with lower performance).
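
For what it's worth, here is a rough sketch (in Python; the function
name, the 1,000-entry floor and the 50,000-entry ceiling are my own
illustrative choices, not recommendations) of the kind of increment
heuristic Ken describes: roughly one percent of the current capacity,
kept away from both the TINY and the HUGE extremes:

    # Rough sketch of the rule of thumb above: DDX increment of about
    # 1% of current capacity, clamped so it is never tiny and never huge.
    # The floor/ceiling values are made-up illustrative limits.
    def suggest_ddx_increment(current_capacity,
                              fraction=0.01,
                              floor=1_000,
                              ceiling=50_000):
        increment = int(current_capacity * fraction)
        return max(floor, min(increment, ceiling))

    # A 2,000,000-entry detail gets a 20,000-entry increment, in line
    # with the "20,000 entries or more" figure Ken mentions.
    print(suggest_ddx_increment(2_000_000))   # -> 20000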

Don't try to save space with Masters. Make them larger than needed, but
not wildly oversized (serial reads of a Master slow down as its capacity
grows). The "wasted" space generally results in fewer synonyms, which in
turn improves non-serial performance (DBFINDs, DBPUTs and keyed DBGETs).
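
To make the synonym point concrete, here is a toy simulation (my own
simplification, not TurboIMAGE's actual hashing algorithm): each key
hashes to a primary address modulo the capacity, and a key whose primary
address is already occupied counts as a synonym. The same set of keys
produces noticeably fewer synonyms when the Master has extra headroom:

    import random

    # Toy model of master-set addressing: primary address is
    # hash(key) % capacity; a key landing on an occupied primary
    # address counts as a synonym. This only illustrates the
    # fill-factor effect, not real TurboIMAGE hashing.
    def count_synonyms(keys, capacity):
        occupied = set()
        synonyms = 0
        for key in keys:
            addr = hash(key) % capacity
            if addr in occupied:
                synonyms += 1
            else:
                occupied.add(addr)
        return synonyms

    random.seed(1)
    keys = random.sample(range(10_000_000), 50_000)

    for capacity in (60_000, 100_000):      # ~83% full vs ~50% full
        s = count_synonyms(keys, capacity)
        print(f"capacity {capacity}: {100 * s / len(keys):.0f}% synonyms")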

Do try to save space with Details. Make them as small as you can
afford. If they don't expand or if they expand only a few times, you've
saved disk space. If the initial capacity is set too large, you've
wasted disk space.
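
In the spirit of the nightly check Steve described, the decision logic
can stay very small. A minimal sketch follows (the dataset names, the
figures and the 85% threshold are all hypothetical; the real entry
counts and capacities would come from whatever capacity-reporting tool
you use):

    # Flag detail datasets that are getting close to their capacity so
    # they can be expanded before the next production run. The 85%
    # threshold is an arbitrary example, not a recommendation.
    def detail_needs_resize(entries, capacity, threshold=0.85):
        return entries >= capacity * threshold

    datasets = {                        # hypothetical figures
        "ORDER-LINES": (940_000, 1_000_000),
        "INVOICES":    (400_000, 1_000_000),
    }

    for name, (entries, capacity) in datasets.items():
        if detail_needs_resize(entries, capacity):
            print(f"{name}: {entries}/{capacity} -- schedule an expansion")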

Also, most databases have more Details than Masters and most of those
Details are much larger than most of the Masters. That's why your
efforts to conserve disk space should focus on Details.

Further rationale for these "simple" rules is covered in "Dynamic
Dataset Expansion", a "Technical Paper" of mine accessible via Adager's
Website.

FW
