HP3000-L Archives

April 1997, Week 5

HP3000-L@RAVEN.UTC.EDU

Date: Wed, 30 Apr 1997 07:42:50 EST
The last time I saw a significant performance problem, some QTP code (if my
memory is working) was written to do a serial/chained read.  It did a serial
read on 20,000 records and, for each of those records, a single keyed read.

20,000 * 2 = 40,000 reads.
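The serial/chained pattern can be sketched roughly like this (a Python
illustration only, not actual QTP code -- the dataset names are made up, and a
dict stands in for the keyed access path into the second dataset):

```python
# Hypothetical sketch of a serial/chained read (not real QTP/IMAGE code).
# A dict plays the role of a keyed (indexed) access path.

def count_serial_chained_reads(first, second_by_key):
    """One serial read per record of `first`, plus one keyed read each."""
    reads = 0
    for rec in first:                    # serial read of the first dataset
        reads += 1
        _ = second_by_key.get(rec["k"])  # single keyed read into the second
        reads += 1
    return reads

first = [{"k": i} for i in range(20_000)]
second_by_key = {i: i for i in range(800_000)}
print(count_serial_chained_reads(first, second_by_key))  # 40000
```

The cost grows with the first dataset only: two reads per record, no matter
how large the second dataset is.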

When the new version of QTP was installed, it started doing a serial/serial
read.  The first dataset still had 20,000 records but the second dataset had
close to 800,000 records.

Simple math puts that at: 20,000 * 800,000 = 16,000,000,000 reads!
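For contrast, the serial/serial version scans the whole second dataset once
per record of the first -- a sketch under the same illustrative assumptions
(tiny stand-in sizes are used so the count is easy to check by hand):

```python
# Hypothetical sketch of a serial/serial read (not real QTP/IMAGE code):
# every record of `first` triggers a full serial scan of `second`.

def count_serial_serial_reads(first, second):
    reads = 0
    for rec in first:           # serial read of the first dataset
        reads += 1
        for other in second:    # full serial scan of the second dataset
            reads += 1
    return reads

# At 20,000 x 800,000 the count would be
# 20,000 + 20,000 * 800,000 = 16,000,020,000 reads --
# the roughly sixteen billion figure above.
first = [{"k": i} for i in range(3)]
second = [{"k": i} for i in range(5)]
print(count_serial_serial_reads(first, second))  # 3 + 3 * 5 = 18
```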

A simple change in the code put us back at 40,000 reads, but nobody had
realized that the performance problem was due to a coding loophole that got
plugged in the new version.

What I'm trying to get at:  your performance problems might not have *anything*
to do with your database.  Something may have changed in your code; or your
code might be exactly the same but, as in the case above, the software that
runs your code might have changed in functionality.  There are *lots* of
reasons for performance problems, and without the right tools, you might spend
a *long* time looking for them.

Kevin Newman
