We have noticed a problem with a master dataset in our department:
more than 20% of all entries are secondaries, i.e. entries for which the
hashing algorithm had to compute a secondary address because the primary
address was already occupied. Performance is very poor as a result, even
though the master dataset is only 50% full. Is there a way to optimize how
the dataset fills up? And how does the hashing algorithm actually work?
(I assume there is an article somewhere describing it.)
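
For what it's worth, here is the back-of-the-envelope model I have been
using to think about it, sketched in Python. It assumes an idealized,
perfectly uniform hash(key) % capacity placement; that is my assumption,
not necessarily how the real hashing algorithm works:

    import random

    # Idealized model of hashed placement: each key hashes to a primary
    # address; a key whose primary address is already occupied is stored
    # as a secondary entry. A sketch only, not the real algorithm.
    def secondary_rate(num_entries, capacity, seed=1):
        rng = random.Random(seed)
        occupied = set()
        secondaries = 0
        for _ in range(num_entries):
            addr = rng.randrange(capacity)  # stand-in for hash(key) % capacity
            if addr in occupied:
                secondaries += 1            # primary taken -> secondary entry
            else:
                occupied.add(addr)
        return secondaries / num_entries

    # 50,000 entries in a capacity of 100,000, i.e. a dataset 50% full:
    print(f"{secondary_rate(50_000, 100_000):.1%}")  # prints roughly 21%

Interestingly, under that idealized model a dataset that is 50% full already
produces about 21% secondaries, so if the model is anywhere near reality our
20% figure might just reflect the fill level rather than a bad key choice.
That is exactly why I would like to know what the real algorithm does.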
 
Regards, Rudi