HP3000-L Archives

November 1995, Week 2

HP3000-L@RAVEN.UTC.EDU

From: Doug Perry <[log in to unmask]>
Date: Wed, 8 Nov 1995 18:57:45 GMT
Scott_Gavin wrote:
 
> From reading about the Workload Manager...
 
> If you assign a MinCPU percentage to a workgroup, it will get that %
> of the CPU whether it needs it or not.  This means that if you have
> a workgroup with a 20% MinCPU, but there are no processes in that
> workgroup that need the CPU right now, then the CPU will be 20% idle
> even if there are 100 other processes that all desperately want
> CPU.
 
Actually, the MINCPU percentage is permissive.  That is, if a workgroup
cannot use the minimum percentage you assign to it, any other process
can use that CPU time.  Therefore, a MINCPU percentage will never
force the system to go idle unless absolutely no process is ready
to use the CPU.
 
It is the MAXCPU percentage cap which is restrictive.  If you assign a
maximum CPU percentage to a workgroup, that cap will be enforced even if
it means the system must sit idle because no other workgroup can use the
CPU at that time.
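To make the distinction concrete, here is a toy sketch in Python of a
single dispatch decision under those rules.  The names, fields, and
selection order are all invented for illustration -- this is not the
actual MPE/iX dispatcher code -- only the min/max semantics follow the
behavior described above: minimums are permissive, maximums are
restrictive.

```python
# Toy model of one dispatch decision.  Workgroup names, fields, and the
# selection order are invented; only the MINCPU/MAXCPU semantics follow
# the behavior described in this message.

def allocate_tick(workgroups):
    """Pick the workgroup that gets the next slice of CPU.

    workgroups maps a name to a dict with:
      'ready'    -- True if the workgroup has a runnable process
      'used_pct' -- CPU share it has received so far
      'min_pct'  -- MINCPU target, or None
      'max_pct'  -- MAXCPU cap, or None
    Returns the chosen name, or None if the CPU must go idle.
    """
    # MAXCPU is restrictive: a workgroup at its cap is skipped even if
    # that leaves the CPU idle.
    eligible = [
        (name, wg) for name, wg in workgroups.items()
        if wg['ready']
        and (wg['max_pct'] is None or wg['used_pct'] < wg['max_pct'])
    ]
    if not eligible:
        return None  # idle only when nothing runnable is under its cap

    # MINCPU is permissive: workgroups below their minimum are served
    # first, but if none of them has ready work, any other eligible
    # workgroup may use the time instead.
    below_min = [
        (name, wg) for name, wg in eligible
        if wg['min_pct'] is not None and wg['used_pct'] < wg['min_pct']
    ]
    name, _wg = (below_min or eligible)[0]
    return name
```

So a workgroup with a 20% MINCPU but no ready processes simply drops
out of the eligible set, and its share flows to whoever is ready; a
workgroup at its MAXCPU cap is skipped even when the only alternative
is idling.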
 
In response to Hans Fohrman, who wrote:
 
>> It is important that the sum of all MinCPU never goes over 80% because
>> the OS will then have problems and not get enough CPU.
 
Scott said:
 
> Because with 80% allocated to various workgroups MinCPU, you only have
> one fifth of the system available to MPE and other users :-)
 
Because of the fear of starving system processes, we actually changed
the algorithms in the dispatcher so that system processes will override
the minimum CPU percent specifications for workgroups.  Therefore, there
is no danger of assigning minimum CPU percents which have such a large
total that system processes will be starved.
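As a sketch of what that override means (again invented code, not the
shipped dispatcher): system work is simply dispatched ahead of any
workgroup policy, so the minimums never apply to it.

```python
# Invented illustration of the override: system processes are chosen
# before any workgroup is consulted, so workgroup MINCPU settings --
# even ones totalling close to 100% -- cannot starve the OS.

def pick_next(system_queue, workgroup_queues):
    """system_queue: ready system processes, highest priority first.
    workgroup_queues: workgroup name -> ready processes, already
    ordered by whatever MINCPU-aware policy is in effect."""
    if system_queue:            # system work wins unconditionally
        return system_queue[0]
    for _name, queue in workgroup_queues.items():
        if queue:
            return queue[0]
    return None                 # nothing ready at all: CPU idles
```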
 
Scott also wrote:
 
> Workload Manager is similar to mainframe utilities which do the same
> kind of stuff.  In those environments people are willing to pay 10%
> of the CPU's time in overhead to get this kind of control.
 
Fortunately, the extra overhead required to enforce CPU minimums and
maximums turned out to be far less than 10%.  In our tests the
extra Workload Manager overhead was more like 2 percent, even with
both minimums and maximums being enforced.  However, our tests used
workloads which were not as complex as those an actual production system
might run.
 
> If you have a large, complex environment, then Workload Manager looks
> like a great tool for getting more control over things.  Of course this
> comes at some cost in performance, and provides new interesting ways
> to shoot yourself in the foot.
 
I think the cost in performance is low.  However, I agree about the
"new interesting ways to shoot yourself in the foot" part.
 
Steve Cole wrote:
 
> In regards to using the Workload Manager on smaller systems I agree that
> it would work.  The problem with the Workload Manager on smaller systems
> is that the interrupt scheme is not as granular as it was on previous
> releases.  This means that the slower the system the less control you
> have.
 
While the interrupt scheme was unchanged, we did in fact change the
preemption algorithm for the Workload Manager.  That means that
preemptions are slightly less likely and that more processes run
until they block or until they are timesliced.  So Steve is
quite right that a process may not be able to preempt as quickly, and
that might seem to result in less control on a slower system.
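The "run until block or timeslice" behavior can be pictured with a toy
loop (invented, for illustration only):

```python
# Toy model of the post-change behavior: with preemption less likely, a
# running process keeps the CPU until it blocks or its timeslice ends.

class Proc:
    """Fake process that blocks after work_ms of CPU time."""
    def __init__(self, work_ms, step_ms=1):
        self.remaining = work_ms
        self.step_ms = step_ms

    def blocked(self):
        return self.remaining <= 0

    def step(self):                 # run for one small increment
        self.remaining -= self.step_ms
        return self.step_ms

def run_until_block_or_timeslice(proc, timeslice_ms):
    elapsed = 0
    while elapsed < timeslice_ms:
        if proc.blocked():          # gave up the CPU voluntarily
            return 'blocked'
        elapsed += proc.step()
    return 'timesliced'             # quantum expired; dispatcher re-decides
```

A process with 3 ms of work left blocks well inside a 10 ms quantum; a
CPU-bound one runs the full quantum and is timesliced.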
 
This change in preemption algorithms is really a part of the dispatcher
which is part of all 5.0 push systems; the change is not restricted
only to those systems which have the Workload Manager product.
 
Fortunately, toward the very end of the Workload Manager user test period,
we were able to improve the CPU percentage controls such that they worked
well on systems of all sizes.  Before the change, our controls were not
as accurate for workgroups which had few processes. After the change,
controls were accurate for all workgroups, whether they had one or hundreds
of processes.
 
So to me, what determines the need for the Workload Manager is not
the size or speed of the system; it is the presence or absence of
competing populations of processes which need to be controlled.
 
 
Doug Perry, MPE/iX Lab
