HP3000-L Archives

March 1999, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From:          Jerry Fochtman <[log in to unmask]>
Reply To:      Jerry Fochtman <[log in to unmask]>
Date:          Wed, 10 Mar 1999 11:59:07 -0600
Content-Type:  text/plain
Parts/Attachments: text/plain (73 lines)
At 11:19 AM 3/10/99 -0500, Nick Demos wrote:
>Curt Narvesen wrote:
>>
>> I have a client with performance problems on their database.  I have run
>> a HowMessy report and identified several datasets that could use repacking
>> based upon the elongation values.  Besides adding or deleting paths, is
>> there anything else that I can look at to help improve their database
>> efficiency?
>>
>Curt, look at the block sizes, which could be important depending
>on how the data base is accessed.

It's not so much the block size that impacts performance related to storage
and retrieval as it is the amount of space wasted (unused) in each block due
to the blocking factor.  One could have fairly small blocks with no wasted
space and actually get better performance than with larger blocks that carry
wasted space.

This is all related to the fact that IMAGE no longer does MR/NOBUF I/O to
read the datasets; rather, it uses long-mapped pointers.  With mapped access,
the file is 'paged' into memory when an address is touched through the
pointer, and that page of disc can contain one or more IMAGE blocks.
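
For anyone who has not worked with mapped files, here is a loose analogy in
Python using POSIX mmap (illustration only, not MPE/iX or IMAGE code; the
'dataset.dat' file and the 2,048-byte block size are placeholders):

    import mmap, os

    BLOCK_BYTES = 2048              # a hypothetical 1024-halfword IMAGE block

    with open("dataset.dat", "rb") as f:
        size = os.fstat(f.fileno()).st_size
        with mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ) as m:
            page_bytes = mmap.PAGESIZE          # host page size, often 4096 bytes
            blocks_per_page = page_bytes // BLOCK_BYTES
            first = m[0]            # touching one byte faults in the whole page
            print(page_bytes, blocks_per_page, first)

The point of the analogy is simply that the unit the memory manager moves is
the page, not the IMAGE block, so any padding inside those blocks gets
dragged into memory right along with the data.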

What is important to note is that when IMAGE builds the dataset, it
determines the size of the block and then rounds it up to the nearest
multiple of 128 halfwords.  As an example, if the blocking factor coupled
with the data record size and chain pointer space requires a block of 898
halfwords, IMAGE rounds the block size up to 1024 halfwords, leaving 126
halfwords of unused space in each block.  So if one has, say, 100,000
blocks in a dataset (which is not uncommon), one has wasted 25,200,000
bytes (98,438 sectors) of storage space, and in traversing the dataset a
process would basically be I/O paging through that much unused space.
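
If you want to reproduce that arithmetic, a quick Python sketch looks like
this (the 128-halfword rounding, 2-byte halfwords and 256-byte sectors come
straight from the example above; the helper names are mine):

    HALFWORD_BYTES = 2
    SECTOR_BYTES = 256
    ROUND_TO = 128                              # halfwords

    def rounded_block(needed_halfwords):
        """Round a block up to the next multiple of 128 halfwords."""
        return -(-needed_halfwords // ROUND_TO) * ROUND_TO

    needed = 898                                # halfwords the blocking factor requires
    actual = rounded_block(needed)              # 1024 halfwords
    waste_per_block = actual - needed           # 126 halfwords

    blocks = 100_000
    wasted_bytes = blocks * waste_per_block * HALFWORD_BYTES    # 25,200,000 bytes
    wasted_sectors = -(-wasted_bytes // SECTOR_BYTES)           # 98,438 sectors
    print(waste_per_block, wasted_bytes, wasted_sectors)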

But if the blocking factor were such that there was no waste in each block,
consider how much less storage would be used and how much I/O paging would
be avoided.  The data is also essentially 'denser' in terms of the amount
of actual data per page of disc, so processing would be more efficient: the
same amount of data could be handled with less I/O paging taking place.
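
As a rough illustration of that 'denser' point (my own numbers, assuming a
4 KB memory page and the 100,000 blocks from the example), compare how many
pages a full serial scan would touch with and without the padding:

    PAGE_BYTES = 4096                           # assumed memory page size
    HALFWORD_BYTES = 2
    BLOCKS = 100_000

    def pages_touched(block_halfwords):
        total_bytes = BLOCKS * block_halfwords * HALFWORD_BYTES
        return -(-total_bytes // PAGE_BYTES)    # ceiling division

    padded = pages_touched(1024)   # blocks rounded up to 1024 halfwords -> 50,000 pages
    packed = pages_touched(898)    # only the 898 halfwords of real data -> 43,848 pages
    print(padded, packed)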

Oh well, getting a bit technical.  Needless to say, I'd focus more on the
blocking factor and the wasted space per block than on the actual size of
the blocks, although having larger, efficient blocks does provide a slight
edge in that there would possibly be less space consumed by block-level
bitmaps.
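
If you want to hunt for a low-waste blocking factor, something along these
lines would do for a first pass (a sketch only: the per-block overhead and
the maximum block size here are placeholder assumptions, so plug in the real
media-record size and limits for your dataset, e.g. from a HowMessy listing):

    ROUND_TO = 128      # IMAGE rounds block sizes to a multiple of 128 halfwords
    MAX_BLOCK = 2048    # assumed ceiling on the block size, in halfwords

    def block_waste(entry_halfwords, blocking_factor, per_block_overhead):
        """Return (wasted halfwords, rounded block size) for one candidate."""
        needed = blocking_factor * entry_halfwords + per_block_overhead
        rounded = -(-needed // ROUND_TO) * ROUND_TO
        return rounded - needed, rounded

    def best_blocking_factor(entry_halfwords, per_block_overhead=4):
        best = None
        bf = 1
        while True:
            waste, rounded = block_waste(entry_halfwords, bf, per_block_overhead)
            if rounded > MAX_BLOCK:
                break
            # compare candidates by waste per entry, not waste per block
            candidate = (waste / bf, bf, waste, rounded)
            if best is None or candidate < best:
                best = candidate
            bf += 1
        return best

    # e.g. a 112-halfword media record: prints
    # (waste per entry, blocking factor, waste per block, rounded block size)
    print(best_blocking_factor(112))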

There are a host of other possible issues that may be impacting the
performance of the database; 'densifying' the storage is only one of them.
System memory, entry locality, whether we're dealing with a master or a
detail, sort keys, the application's access/locking strategy, and hardware
component contention (disc/channel) all play a part as well.



/jf
                              _\\///_
                             (' o-o ')
___________________________ooOo_( )_OOoo____________________________________

                       Wednesday, March 10th

          Today in 1876 - The International Centennial Exposition
                          opened in Philadelphia.

___________________________________Oooo_____________________________________
                            oooO  (    )
                           (    )  )  /
                            \  (   (_/
                             \_)
