HP3000-L Archives

October 1999, Week 3

HP3000-L@RAVEN.UTC.EDU

Date: Tue, 19 Oct 1999 00:09:09 -0400
In one set of databases I know, serially reading the largest dataset first
always results in faster reads. If I chain from Part-Master or
Customer-Master, or even from Shipment-Header to Shipment-Line, I can
expect the job to take 2.5 to 3 hours. If instead I extract Shipment-Line
at high speed (no manipulation, read-select-write only), sort it by the
key I want, and then read the extract serially, chaining out to whatever
else I need, the same task finishes in about 30 minutes.
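
Here is a rough sketch of the two access patterns, in Python just to
illustrate; the helper functions are hypothetical stand-ins for the
chained and serial reads, not the actual IMAGE intrinsics. The point is
the order in which disk blocks get touched, not the API:

    # Pattern 1: chain from each master key straight into the detail set.
    # Detail entries for different masters are scattered across the dataset,
    # so every chain tends to land on a different disk region.
    def chained_pass(master_keys, get_detail_chain):
        results = []
        for key in master_keys:
            for entry in get_detail_chain(key):   # effectively random I/O
                results.append((key, entry))
        return results

    # Pattern 2: one serial sweep that keeps only the needed fields
    # (read-select-write only), then a sort on the short extracted records.
    def extract_sort_pass(read_detail_serially, wanted_fields, sort_key):
        extracted = [
            {f: rec[f] for f in wanted_fields}
            for rec in read_detail_serially()
        ]
        extracted.sort(key=lambda rec: rec[sort_key])
        return extracted

After the sort, any chained lookups you still need happen in key order,
so related blocks tend to stay in memory instead of being re-fetched once
per master record.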

This is especially acute when the largest dataset is several times the
size of physical memory. If it is, say, 10 times larger than available
RAM, then on average only about a tenth of it can be memory-resident at
any moment, so roughly 90% of chained reads will miss memory and force a
flush-and-fetch, purely from the large set thrashing. It gets worse as
other processes compete for memory.
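
A back-of-the-envelope way to see that figure, assuming the buffer space
holds about a tenth of the detail set and chained reads land on
effectively random blocks:

    dataset_blocks = 1000000   # hypothetical size of the detail set
    cached_blocks  =  100000   # what fits in available memory (~10%)

    # A random chained read hits memory only if its block happens to be
    # cached, so the miss rate is roughly 1 - (cached fraction).
    miss_probability = 1 - cached_blocks / dataset_blocks
    print(f"~{miss_probability:.0%} of chained reads go to disk")   # ~90%

Every one of those misses can also push another block out, which is
where the constant flushing comes from.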

Serially reading the large set is most efficient, especially when the
number of fields you pick out is small: the extracted records are short,
so the subsequent sort has far less data to move and becomes much more
efficient.
