HP3000-L Archives

October 2002, Week 5

HP3000-L@RAVEN.UTC.EDU

Subject:  RE: Automatic Resize considerations for IMAGE
From:     Steve Almes <[log in to unmask]>
Date:     Tue, 29 Oct 2002 09:23:40 -0800

Well, I had a real problem with this.  I enabled it on several of my
production databases and had database corruption on all of them.  I don't
know what I'm doing wrong.  I'm on MPE/iX 5.5 with the Y2K patches on a
957.  I should probably go to at least 6.0, shouldn't I?

Thanks,

Steve

 -----Original Message-----
From:   Sletten Kenneth W KPWA [mailto:[log in to unmask]]
Sent:   Monday, October 28, 2002 7:33 PM
To:     [log in to unmask]; [log in to unmask]
Subject:        RE: Automatic Resize considerations for IMAGE

Last Friday Steve Almes asked:

> ... We currently have a home-grown application that
> logs on at night and checks the capacity vs. entries
> of our IMAGE databases and then does a resize on them.
> However, sometimes our production databases still
> blow up in the middle of a run with a "data set full"
> error message.  I was looking in the IMAGE manual and
> now I'm trying to set up automatic capacity management
> on all my production databases.  ...
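
(As a side note:  the nightly check described above boils
down to logic like the sketch below.  This is only a
minimal illustration in Python;  the (name, entries,
capacity) numbers would really come from DBINFO or a
database tool's report, and the 90% threshold and 50%
growth factor are just placeholder choices.)

    # Plan resizes for any dataset that is nearly full.
    THRESHOLD = 0.90   # act when a set is 90% full
    GROWTH = 1.50      # grow capacity by 50%

    def plan_resizes(stats):
        """stats: iterable of (name, entries, capacity) tuples."""
        plans = []
        for name, entries, capacity in stats:
            if entries / capacity >= THRESHOLD:
                plans.append((name, capacity, int(capacity * GROWTH)))
        return plans

    # Example:  ORDERS at 95% full gets flagged, ITEMS does not.
    for name, old_cap, new_cap in plan_resizes(
            [("ORDERS", 95000, 100000), ("ITEMS", 40000, 100000)]):
        print(f"{name}: resize {old_cap} -> {new_cap}")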

Fred already addressed the "prime number" issue, so I'll
add $0.02 elsewhere:

We have been using dataset dynamic expansion (DDX) on
our Detail datasets for years;  never had a problem
(there were some real problems with DDX in certain cases
in the early years, but if you are on a recent / current
release of TurboIMAGE you should have no worries on that
score).  Two other things to consider:

(1)  Don't make your DDX increments too small.  While
in large part dependent on the daily "record add rate",
all else being equal I tend to make the DDX increment
somewhere around one percent of the CURRENT capacity,
plus or minus a bit;  see the first sketch after item
(2).  There is nothing magic about that formula;  it's
just a rough convention that we adopted.  I have heard
stories about people setting DDX increments at less
than 10 entries.  If you add a lot of entries with very
small DDX increments, the disc fragmentation could get
really ugly.  Several of our large, "add-active"
datasets have increments of 20,000 entries or more...

(2)  Properly set up (and perhaps / probably with an
occasional run with a defrag product), DDX on Details
can run until you get close to your MAXCAP;  no problem.
*HOWEVER*:  That's not the case (or at least likely will
not be the case) with DDX for Masters.  Pressed for time
right now, so I'll just say that DDX for Masters should,
I believe, usually be considered just a temporary
stop-gap (although "temporary" might last for a while),
until you get a chance to "normally" expand and rehash
the master with the database tool of your choice.  If
you push too many master entries into the expansion
area, you could run into serious performance problems
(the second sketch below shows one way to spot that).
As usual, YMMV, depending on your environment...
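
For item (1), the "about one percent of current capacity"
convention amounts to nothing more than this (a minimal
Python sketch;  the 500-entry floor is an assumption of
mine here, just to keep increments from getting silly
small):

    def ddx_increment(current_capacity, floor=500):
        # About 1% of current capacity, never below the floor.
        return max(int(current_capacity * 0.01), floor)

    # A detail at 2,000,000 capacity gets a 20,000 increment:
    print(ddx_increment(2000000))   # -> 20000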
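
And for item (2), one crude way to spot masters leaning
on their expansion area (again only a sketch;  "hash_cap"
stands for the capacity the master was last hashed
against, and the tuple layout is illustrative):

    def masters_needing_rehash(masters):
        """masters: iterable of (name, entries, hash_cap) tuples."""
        # Entries beyond the hashing capacity can only sit in
        # the expansion area as synonyms, so any excess is a
        # sign it is time to expand and rehash the master.
        return [(name, entries - hash_cap)
                for name, entries, hash_cap in masters
                if entries > hash_cap]

    # Example:  CUST-M has 5,000 entries past its hashed capacity.
    print(masters_needing_rehash([("CUST-M", 105000, 100000)]))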

Ken Sletten

