HP3000-L Archives

September 1998, Week 3

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Gavin Scott <[log in to unmask]>
Reply To: Gavin Scott <[log in to unmask]>
Date: Wed, 16 Sep 1998 11:35:33 -0700
Content-Type: text/plain
I hear a lot of people say that they think they need a faster computer
because their CPU utilization is at >95% or something like that.  Some
fail to realize that 100% utilization is quite often a good thing.

People usually think of performance as a one-dimensional scale running
from slow to fast, and for tuning and optimization of a single process,
that's usually all there is.  You make your program require fewer
resources to execute and it becomes "faster".

But in measuring and optimizing system-wide performance on a multi-user
system there is another dimension to consider: "response time" versus
"throughput".

For the best response time, you want to try to keep the CPU 100% idle,
because that way it is always available in the shortest amount of time
to start working on your problem.

For the best throughput, you want to try to keep the CPU 100% busy, because
an idle CPU is a wasted resource that could be doing useful work and
increasing the total amount of work completed per unit of time.
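
To put rough numbers on the tradeoff, here is a minimal sketch (mine, not
measured on any 3000) that assumes the simplest textbook queueing model,
M/M/1, where average response time is service_time / (1 - utilization) and
the 100 ms service time per request is a made-up figure purely for
illustration:

  # Response time vs. throughput as CPU utilization rises, using the
  # M/M/1 approximation R = S / (1 - U).  The 100 ms service time is a
  # hypothetical number chosen only to illustrate the shape of the curve.
  service_time = 0.100  # seconds of CPU per request (assumed)

  print("Utilization  Throughput/s  Avg response (s)")
  for utilization in (0.10, 0.50, 0.80, 0.90, 0.95, 0.99):
      throughput = utilization / service_time        # completed requests per second
      response = service_time / (1.0 - utilization)  # average time in system
      print(f"{utilization:10.0%} {throughput:13.1f} {response:17.2f}")

Under those assumptions, at 50% busy a request takes about 0.2 s on average;
at 95% busy the same machine completes almost twice as many requests per
second, but each one now takes about 2 s.  That is the response-time price
of high throughput.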

This fundamental conflict is why performance consultants make so much
money.

G.
