Stan wrote <<
The currently released method is the first phase of allowing
per-database (or possibly per-dataset) configuration.

We currently say:

   Search the primary area for up to N blocks and up to K percent of
   the primary area.  If we don't find a hole before hitting either
   of those limits, we'll use the expansion area.

   At present, N is 40, and K is 10%.

   These values seemed like a good compromise.  We're not searching
   the entire primary area, but we're giving secondaries some chance
   to be clustered near their primary entry.

Now...if we make them configurable, setting either to 0 would mean
"always use expansion area".

SS >>

Thanks Stan,

The implementation you describe is more reasonable, but it doesn't seem
to match the documentation. (I've never been guilty of this, not! :)
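
For the record, here is the rule as I read Stan's description.  This is
only a sketch in C, with invented names and a toy block map, not the
actual IMAGE internals:

    #include <stdio.h>

    #define PRIMARY_BLOCKS 400            /* toy primary area size */
    static int has_hole[PRIMARY_BLOCKS];  /* 1 = block has a free entry */

    /* Search at most max_blocks blocks and at most max_pct percent of
     * the primary area, whichever limit is smaller.  Returning -1
     * means "use the expansion area", so setting either limit to 0
     * always sends the new secondary to the expansion area. */
    static int find_primary_block(int primary, int max_blocks, int max_pct)
    {
        int pct_limit = PRIMARY_BLOCKS * max_pct / 100;
        int limit = max_blocks < pct_limit ? max_blocks : pct_limit;
        for (int i = 0; i < limit; i++) {
            int b = (primary + i) % PRIMARY_BLOCKS;
            if (has_hole[b])
                return b;     /* secondary stays near its primary */
        }
        return -1;            /* caller falls back to expansion area */
    }

    int main(void)
    {
        has_hole[57] = 1;     /* one free block for the demo */
        printf("N=40, K=10: %d\n", find_primary_block(50, 40, 10));
        printf("N=0,  K=10: %d\n", find_primary_block(50, 0, 10));
        return 0;
    }

With N=40 and K=10% on a 400-block area, both limits come out to 40
blocks, which matches Stan's compromise; setting either limit to 0
skips the search entirely.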

I guess the bottom line is that master capacities are changed more
often for performance reasons than for actual capacity reasons.  Most
applications/datasets (except those that take control of record
placement) try to keep roughly 30% of entries free.  MDX is still good
insurance in case entries are added for unexpected reasons.
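
To put numbers on that 30% figure, here is the sizing arithmetic as a
quick sketch (the figures are invented for illustration):

    #include <stdio.h>

    int main(void)
    {
        long entries = 700000;      /* expected entry count */
        double free_frac = 0.30;    /* target fraction of free entries */

        /* capacity such that 'entries' leaves free_frac of slots empty */
        long capacity = (long)(entries / (1.0 - free_frac) + 0.5);
        printf("capacity for %ld entries at %.0f%% free: %ld\n",
               entries, free_frac * 100, capacity);
        return 0;
    }

So a 700,000-entry master gets a capacity of 1,000,000, and those
300,000 empty slots are the zeros I'm talking about below.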

The startling fact is that for master-intensive application designs,
about one third of everything (disc, backups, and memory content) is
zeros!  A setting to "always use expansion area", and the associated
decrease in free-entry requirements, would certainly mean a significant
performance gain for serial reads of a master and for backup/recovery
times (if not using host compression).  I'm certainly looking forward
to 6.0 and doing some performance tests.
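
As a back-of-the-envelope estimate of that gain (the 5% cushion below
is my own assumption, not anything Stan promised):

    #include <stdio.h>

    int main(void)
    {
        long entries = 700000;
        double old_free = 0.30;    /* today's typical cushion */
        double new_free = 0.05;    /* assumed, with expansion-area fallback */

        /* slots a serial read must touch under each sizing */
        double old_slots = entries / (1.0 - old_free);
        double new_slots = entries / (1.0 - new_free);
        printf("serial-read I/O saved: about %.0f%%\n",
               (1.0 - new_slots / old_slots) * 100);
        return 0;
    }

That works out to roughly a quarter less I/O for every serial read and
backup, before any compression.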

Mike Hornsby