Date: Mon, 4 Aug 2003 16:47:08 -0700
Content-Type: text/plain
--- [log in to unmask] wrote:
> Also, for serial read performance, can anybody comment on the
> expected gain or difference between just deleting records vs.
> deleting, setting a lower capacity, and repacking? As an example,
> let's say we delete 25% of the records in a dataset with 20 million
> records. Fewer records clearly means less time, right? Even if you
> don't resize and repack?
Jason,
Not exactly. When a record in a detail dataset is deleted, it is "flagged" as
deleted, not physically removed, so the next program that does a serial read
still has to process the "missing" record. With one record missing there is
not much to worry about, but 5 million flagged records may produce a
noticeable difference.
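To make that concrete, here is a toy model (plain Python, not actual TurboIMAGE internals or its API) of why deleting alone doesn't shrink a serial scan of a detail dataset, while a physical repack does:

```python
# Toy model (illustration only, not TurboIMAGE code): in a detail
# dataset, a delete merely flags the record, so a serial read still
# visits every slot; only a physical repack removes the dead slots.

def serial_read(dataset):
    """Visit every slot; skip flagged-deleted records but still pay for them."""
    slots_touched = 0
    live = []
    for rec in dataset:
        slots_touched += 1          # every slot is read, deleted or not
        if not rec["deleted"]:
            live.append(rec)
    return live, slots_touched

def repack(dataset):
    """Physically drop deleted slots (roughly what a repack tool does)."""
    return [r for r in dataset if not r["deleted"]]

# 20 records with 25% flagged as deleted
data = [{"id": i, "deleted": i % 4 == 0} for i in range(20)]

_, touched = serial_read(data)
print(touched)                      # 20 -- deletes alone don't shrink the scan
_, touched = serial_read(repack(data))
print(touched)                      # 15 -- repacking reduces the slots read
```

The scan cost only drops after the repack, which is why deleting 25% of a 20-million-record detail dataset buys little serial-read time until the dataset is repacked.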
Now, when dealing with master or automatic datasets the answer is different,
because a serial read always has to scan up to the capacity. Adjusting the
capacity down will improve serial reads, but it may hurt random access
performance.
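The trade-off can be sketched with another toy model (again an illustration, not TurboIMAGE internals; the probe formula is the standard crude open-addressing estimate, used here only to show the direction of the effect): a serial read costs one slot visit per capacity slot, while random access gets slower as the fill factor rises when capacity is reduced.

```python
# Toy model: shrinking a hashed master's capacity shortens serial
# scans but raises the load factor, which increases hash collisions
# and hence the cost of random (keyed) access.

def serial_read_cost(capacity):
    # A serial read examines every slot up to capacity.
    return capacity

def approx_random_cost(entries, capacity):
    # Crude estimate for open addressing: expected probes ~ 1 / (1 - load).
    load = entries / capacity
    return 1 / (1 - load)

entries = 600
for capacity in (1000, 800):
    print(capacity,
          serial_read_cost(capacity),                      # falls: 1000 -> 800
          round(approx_random_cost(entries, capacity), 2)) # rises: 2.5 -> 4.0
```

With 600 entries, dropping capacity from 1000 to 800 cuts the serial scan by 20% but raises the load factor from 0.6 to 0.75, so the estimated probes per keyed lookup grow from 2.5 to 4.0. That is the balance the paragraph above describes.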
There is lots of information out there to read. A good place to start is
http://www.adager.com/TechnicalPapers.html
Happy learning.
-Craig
* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *