HP3000-L Archives

March 2001, Week 3

HP3000-L@RAVEN.UTC.EDU

Subject:
From: LES BUREAUX DE CRÉDIT DU NORD INC <[log in to unmask]>
Reply-To: LES BUREAUX DE CRÉDIT DU NORD INC <[log in to unmask]>
Date: Fri, 16 Mar 2001 15:56:33 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (12 lines)
I thought that if a dataset's blocking factor is not optimized, disk space is wasted for every record in the dataset, because records either fall short of block boundaries or overlap them.  If they fall short of block boundaries, then empty space is being paged into memory, wasting valuable memory space, right?  Empty space in memory means less information is cached, and thus more I/Os go to disk.  It might not seem important, but with 1,000,000+ records read from the master dataset every day, it looks like inefficient access to the database when the blocking factor of the dataset is not optimized.
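
To put a rough number on the wasted space, here is a small back-of-the-envelope sketch in Python.  The block size, record size, and daily record count are made-up figures for illustration only, not measurements from our database:

# Hypothetical figures, for illustration only.
BLOCK_SIZE = 2048        # bytes per block (assumed)
RECORD_SIZE = 300        # bytes per record (assumed)

# If records may not span block boundaries, the blocking factor is the
# number of whole records that fit in a block; the remainder is wasted.
records_per_block = BLOCK_SIZE // RECORD_SIZE
wasted_per_block = BLOCK_SIZE - records_per_block * RECORD_SIZE

print(f"Blocking factor: {records_per_block}")
print(f"Wasted bytes per block: {wasted_per_block} "
      f"({100.0 * wasted_per_block / BLOCK_SIZE:.1f}% of each block paged in)")

# Over a daily scan of 1,000,000 records:
total_records = 1_000_000
blocks_read = -(-total_records // records_per_block)   # ceiling division
print(f"Blocks read: {blocks_read}, "
      f"wasted bytes paged into memory: {blocks_read * wasted_per_block:,}")

With those assumed sizes, roughly one eighth of every block read into memory is empty space, which is the cache dilution I am worried about.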


Another issue I want to bring up is this: "Master data sets should be no less than 35% full." ... "By changing capacities of heavily accessed master data sets so that they are not too empty, we are creating greater locality of data in that master."  It appears to be hard to please both worlds.  The quotations are taken from THE TURBOIMAGE TEXTBOOK by Michael C. Hornsby, Copyright 1989 (also available on their web site, www.beechglen.com).  To me, data locality rhymes with hit rate, and hit rate equals performance optimization.
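
To make the locality point concrete, here is a minimal sketch of the effect.  It does not use TurboIMAGE's actual hashing algorithm; it assumes a plain modulo hash on integer keys and a fixed blocking factor, just to show how the same set of entries spreads across more blocks as a master gets emptier:

# Minimal sketch, assuming a simple modulo hash and 6 records per block.
# TurboIMAGE's real hashing of character keys is more involved; this only
# illustrates how fill percentage affects locality.
RECORDS_PER_BLOCK = 6

def blocks_touched(keys, capacity):
    """Count distinct blocks holding the primary addresses of the given keys."""
    return len({(key % capacity) // RECORDS_PER_BLOCK for key in keys})

keys = range(0, 100_000, 7)          # 14,286 made-up integer key values

for capacity in (20_011, 40_009, 160_001):
    fill = 100.0 * len(keys) / capacity
    print(f"capacity {capacity:>7}: {fill:5.1f}% full, "
          f"{blocks_touched(keys, capacity):>6} blocks hold the primary entries")

The emptier the master, the more blocks the same entries occupy, so a serial or heavily repeated access pattern caches fewer useful entries per block read, which is the locality argument in the quotation above.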


Would anyone like to comment on that?


Jean Huot
Northern Credit Bureaus Inc.
