HP3000-L Archives

October 2002, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Sletten Kenneth W KPWA <[log in to unmask]>
Reply-To: Sletten Kenneth W KPWA <[log in to unmask]>
Date: Mon, 28 Oct 2002 19:32:31 -0800
Content-Type: text/plain

Last Friday Steve Almes asked:

> ... We currently have a home-grown application that
> logs on at night and checks the capacity vs. entries
> of our IMAGE databases and then does a resize on them.
> However, sometimes our production databases still
> blow up in the middle of a run with a "data set full"
> error message.  I was looking in the IMAGE manual and
> now I'm trying to set up automatic capacity management
> on all my production databases.  ...

Fred already addressed the "prime number" issue, so I'll
add my $0.02 elsewhere:

We have been using dataset dynamic expansion (DDX) on
our Detail datasets for years and have never had a
problem (there were some real problems with DDX in
certain cases in the early years, but if you are on a
recent / current release of TurboIMAGE you should have
no worries on that score).  Two other things to consider:

(1)  Don't make your DDX increments too small.  While
the right size depends in large part on the daily
"record add rate", all else being equal I tend to make
the DDX increment somewhere around one percent of the
CURRENT capacity, plus or minus a bit.  There is nothing
magic about that formula;  it's just a rough convention
that we adopted.  I have heard stories about people
setting DDX increments at less than 10 entries.  If you
add a lot of entries with very small DDX increments, the
disc fragmentation could get really ugly.  Several of
our large, "add-active" datasets have increments of
20,000 entries or more...
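
FWIW, a rough back-of-the-envelope sketch of that
one-percent rule of thumb (plain Python;  the floor
value is just my own assumption, not anything out of
the TurboIMAGE manual):

    # Rule-of-thumb DDX increment:  about one percent of
    # the CURRENT capacity, with a floor so the increment
    # never gets tiny (tiny increments mean many small
    # expansions and ugly disc fragmentation).
    def ddx_increment(current_capacity, floor=1000):
        return max(current_capacity // 100, floor)

    print(ddx_increment(2000000))  # -> 20000 entries
    print(ddx_increment(50000))    # -> 1000 (floor kicks in)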

(2)  Properly set up (and perhaps / probably with an
occasional run with a defrag product), DDX on Details
can run until you get close to your MAXCAP;  no problem.
*HOWEVER*:  That's not the case (or at least likely will
not be the case) with DDX for Masters.  Pressed for time
right now, so I'll just say that DDX for Masters should,
I believe, usually be considered just a temporary
stop-gap (although "temporary" might last for a while)
until you get a chance to "normally" expand and rehash
the master with the database tool of your choice.  If
you push too many master entries into the expansion
area, you could run into serious performance problems.
As usual, YMMV, depending on your environment...
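
To put a number on "close to your MAXCAP", here is a
quick sketch of the headroom arithmetic (again plain
Python;  the dataset names and figures are made-up
examples, not from any real database):

    # Given current entries, MAXCAP, and the average daily
    # add rate, estimate days before a dataset hits MAXCAP.
    def days_to_maxcap(entries, maxcap, adds_per_day):
        if adds_per_day <= 0:
            return float("inf")
        return (maxcap - entries) / float(adds_per_day)

    datasets = [
        # (name, current entries, MAXCAP, avg adds per day)
        ("ORDER-DETAIL", 1850000, 2000000, 12000),
        ("HISTORY",       400000, 5000000,   800),
    ]
    for name, entries, maxcap, rate in datasets:
        print("%-14s %8.1f days to MAXCAP"
              % (name, days_to_maxcap(entries, maxcap, rate)))

A dataset that will hit MAXCAP in a week or two probably
wants a bigger MAXCAP (or a real capacity change) before
DDX quietly runs out of room.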

Ken Sletten

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
