I thought that if a dataset's blocking factor was not optimized, disk space was wasted for each record, because records either fall short of block boundaries or overlap them. If they fall short of block boundaries, then empty space is being paged into memory, wasting valuable memory, right? Empty space in memory means less information is cached, and thus more I/Os go through the disk. It might not seem important, but with 1,000,000+ records read from the master dataset every day, it looks like inefficient access to the database when the blocking factor of the dataset is not optimized.
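The waste per block can be sketched with simple arithmetic — this is a minimal illustration, not TurboIMAGE code, and the block and record sizes below are made-up example values:

```python
def blocking_waste(block_size, record_size):
    """Return (records per block, unused bytes per block), assuming
    fixed-length records that never span a block boundary."""
    records_per_block = block_size // record_size   # the blocking factor
    wasted = block_size - records_per_block * record_size
    return records_per_block, wasted

# Hypothetical example: 4096-byte blocks holding 300-byte records.
bf, waste = blocking_waste(4096, 300)
print(bf, waste)   # 13 records fit; 196 bytes per block are dead space
```

Those 196 dead bytes ride along on every block read, so the same fraction of buffer-cache memory holds nothing useful.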
Another issue I want to bring up: "Master data sets should be no less than 35% full." .... "By changing capacities of heavily accessed master data sets so that they are not too empty, we are creating greater locality of data in that master." It seems hard to satisfy both worlds at once. The quotes are taken from THE TURBOIMAGE TEXTBOOK by Michael C. Hornsby, Copyright 1989 (also available on their web site, www.beechglen.com). To me, data locality goes hand in hand with hit rate, and hit rate = performance optimization.
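The quoted 35% rule amounts to a simple fill-percentage check; here is a rough sketch of that check (the helper names are mine, and the threshold is just the figure from the quote, not a universal constant):

```python
def fill_percent(entries, capacity):
    """Percentage of a master dataset's capacity currently in use."""
    return 100.0 * entries / capacity

def too_empty(entries, capacity, threshold=35.0):
    """True if the master falls below the quoted minimum fullness,
    suggesting its capacity could be reduced for better locality."""
    return fill_percent(entries, capacity) < threshold

# Hypothetical master: 200,000 entries in a 1,000,000-entry capacity.
print(too_empty(200_000, 1_000_000))   # 20% full -> True, candidate to shrink
```

A fuller master packs its entries into fewer blocks, which is the locality (and cache hit rate) argument — but shrinking capacity too far raises hashing collisions, which is the opposing pressure.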
Does anyone want to comment on that?
Jean Huot
Northern Credit Bureaus Inc.