Yes, Mode 6 is slower, but suppose the chain is sorted by invoice date
and the Invoice chain is full of historical data; in that case a
backward chain read will be helpful.
FP (Fetch Prior: backward chain read, Mode 6 DBGET)
For example, an FP routine reads by chain from the current invoice back
to older invoices in the historical Invoice dataset.
Once this backward chain read routine reaches the beginning invoice date
(or from-date - 1) in the chain, there is no point in looking further,
because everything beyond it is older than the records we want. The logic
can exit the chain read loop and move on to the next customer's chain,
found by a serial read of the Customer dataset.
One condition: the Customer dataset has fewer records than Invoice, and
most of the time the job deals only with the current year or month...
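The early-exit logic above can be sketched in Python, with a plain list standing in for the sorted invoice chain and a loop standing in for the repeated Mode 6 DBGET calls (the record layout and the `fetch_prior` helper are illustrative assumptions, not IMAGE's actual API):

```python
from datetime import date

# Illustrative invoice chain for one customer, sorted ascending by
# invoice date, so walking it backward yields newest records first.
chain = [
    {"invoice": "A100", "inv_date": date(1998, 1, 5)},
    {"invoice": "A101", "inv_date": date(1999, 6, 2)},
    {"invoice": "A102", "inv_date": date(1999, 10, 1)},
]

def fetch_prior(chain, from_date):
    """Backward chain read (Mode 6 style): newest record first,
    exiting as soon as a record older than from_date appears."""
    wanted = []
    for rec in reversed(chain):          # stand-in for repeated Mode 6 DBGETs
        if rec["inv_date"] < from_date:  # reached from-date - 1: everything
            break                        # further back is older, so exit
        wanted.append(rec)
    return wanted

# Only the 1999 invoices are touched; the 1998 record ends the loop.
recent = fetch_prior(chain, date(1999, 1, 1))
```

Because the chain is sorted, the first too-old record proves the rest of the chain is also too old, which is what makes the early exit safe.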
Cheers
Peter C.
SR. ERP/MRP Analyst
>>> Tom <[log in to unmask]> 10/19 3:27 PM >>>
Jim Phillips wrote:
>
> John Krussel <[log in to unmask]> writes:
>
> > Since Image places records serially (unless there is a delete chain), there
> > is probably a very small chance that the same customer placed two orders
> > one right after another. In that case your test will almost always fail and
> > you'll have to do the Find and the Get. If you read all the entries first,
> > sort them by Cust#, and then go through them again getting any additional
> > data, there is a greater likelihood that you will already have the record
> > you want in your buffer, and you will have to do less reading.
>
> Actually, since the invoices data set has the invoice number as its primary
> key, I would expect to find all invoice records for a given invoice in the
> same block (or as close as possible). Since there are (usually) multiple
> invoice records per invoice, I think the test would almost always be true
> and I wouldn't have to have Image (or MPE) check anything, since I am
> checking it in the program.
You would think this would be so. The dataset I'm talking about uses
Invoice Number as its primary key also. But it still takes 2.5 hours to
read the Invoice Header and chain to the Invoice Line, vs. 30 minutes the
other way around (632,000 line records). Unless you are using an integer
key, the Invoice Header keys are going to be randomly hashed irrespective
of insert order, so the first Invoice Line read will always land at a
random spot in the dataset, causing memory pressure if that set's size
exceeds physical memory.
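The loss of locality can be illustrated with a toy hash (the bucket count and MD5-based hash here are arbitrary stand-ins, not IMAGE's actual hashing algorithm):

```python
import hashlib

# Toy illustration of why hashed master keys lose insertion-order
# locality: consecutive invoice numbers map to scattered buckets.
CAPACITY = 1009  # assumed bucket count for the master set

def bucket(key: str) -> int:
    # Deterministic stand-in hash: invoice number -> bucket address.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % CAPACITY

# Invoices entered one right after another...
buckets = [bucket(f"INV{n:06d}") for n in range(1000, 1005)]
# ...land at scattered bucket addresses, so reading headers in key
# order means jumping to a random spot in the set for each one.
```

An integer key stored directly (rather than hashed) would preserve adjacency, which is the exception noted above.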
Also, about an earlier idea: a mode-5 forward chain read is always more
efficient than a mode-6 backward read. I believe an Image read fetches
about 90,000 bytes ahead for a serial read and 16,000 for a chained read.
So for mode-6 reads the pattern is: read 16,000 forward, back up 500 or
so, read 16,000 forward again, back up 500 or so, and so on.
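A rough tally shows why that pattern hurts. Under a simplified model (the 16,000-byte chained read-ahead and ~500-byte record size come from the figures above; the model itself is an assumption, not IMAGE's actual buffering):

```python
# Simplified model of forward vs. backward chained-read I/O volume.
PREFETCH = 16_000   # bytes fetched per chained read-ahead
REC_SIZE = 500      # approximate record size

def forward_bytes(n_records):
    # Mode-5 forward reads: one prefetch serves the next ~32 records.
    fetches = -(-n_records * REC_SIZE // PREFETCH)  # ceiling division
    return fetches * PREFETCH

def backward_bytes(n_records):
    # Mode-6 backward reads: back up one record, then prefetch 16,000
    # bytes forward again -- essentially one full prefetch per record.
    return n_records * PREFETCH

n = 1000
ratio = backward_bytes(n) / forward_bytes(n)
# Backward moves roughly 30x more data than forward in this model.
```

The exact ratio depends on record size, but the shape of the result matches the 2.5-hour vs. 30-minute experience described earlier in the thread.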