Date: Fri, 31 Jul 1998 18:35:17 -0400
Content-Type: text/plain
As somebody has probably already pointed out, it depends on whether you
are talking about master or detail data sets. Master records get
spread out; detail records usually don't (only in certain very
special circumstances).
Nick D.
Dirickson Steve wrote:
>
> <<Agreed: disc is cheap, and time is money. But doesn't too much
> slack space hurt performance? Or did I misunderstand what I read?>>
>
> Depends on what you understood ;-)
>
> I guess there are cases where having "gross overcapacity" in a data set would
> hurt performance. Say you had a set with a million records, and deleted all
> but ten thousand of them, and the existing records were distributed over the
> entire file. I'd think that having to retrieve records from a space 100 times
> larger than needed would definitely reduce performance. Conversely, expanding
> an existing detail set by a factor of 100 should have no impact, since the
> data is still located in the same amount of space, and VSM would never need
> to spend time paging in the unused 99% of the file. Making a master set orders
> of magnitude too large would not be good, as it would end up with the data
> scattered all over the file by the hash algorithm. But then, master records
> tend to be, proportionally, multiples or orders of magnitude smaller than
> detail-set records, so a page will hold a lot of them, mitigating the problem
> to some extent.
>
> Speaking from a totally fact-free position, I'd WAG that the slope of the
> "performance drop-off curve" is *much* steeper on the "not enough space" side
> of the baby-bear point than that on the "too much space" side.
>
> Steve
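
The scattering effect Steve describes can be illustrated with a toy model. This is a minimal sketch, not actual TurboIMAGE internals: it assumes master records are placed by hashing the key modulo the set's capacity, while detail records are packed contiguously from the front of the file. The `RECORDS_PER_PAGE` value and the key format are made-up assumptions for illustration.

```python
# Toy model: why oversizing a hashed master set scatters records across
# many more pages, while oversizing a detail set costs nothing.
import hashlib

RECORDS_PER_PAGE = 32  # assumed records per page, for illustration only

def master_pages_touched(keys, capacity):
    """Distinct pages read to fetch every hashed master record."""
    pages = set()
    for key in keys:
        # Hash the key into one of `capacity` slots, then find its page.
        slot = int(hashlib.md5(key.encode()).hexdigest(), 16) % capacity
        pages.add(slot // RECORDS_PER_PAGE)
    return len(pages)

def detail_pages_touched(n_records, capacity):
    """Detail records sit packed at the front, regardless of capacity."""
    return -(-n_records // RECORDS_PER_PAGE)  # ceiling division

keys = [f"cust{i:05d}" for i in range(10_000)]
snug = master_pages_touched(keys, capacity=12_000)
huge = master_pages_touched(keys, capacity=1_200_000)
print(snug, huge)  # the oversized master touches far more pages
print(detail_pages_touched(10_000, 12_000),
      detail_pages_touched(10_000, 1_200_000))  # identical either way
```

With a snug capacity the 10,000 hashed records land on nearly every page of a small file; with a 100x capacity the same records are sprinkled thinly over a file 100 times larger, so far more pages must be read (and paged in) to touch them all. The detail-set count does not change, which matches Steve's claim that detail overcapacity is harmless.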