HP3000-L Archives

April 1997, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Steve Dirickson b894 WestWin <[log in to unmask]>
Reply To: Steve Dirickson b894 WestWin <[log in to unmask]>
Date: Sat, 5 Apr 1997 17:13:00 P
Content-Type: text/plain
Parts/Attachments: text/plain (59 lines)
<<hey, anyone, i need help. surely with all the tech people here, someone
can make a better guess than me. does anyone know how the HP3000 995-xxx
machines distribute the load between the multiple processors?
thanks for any thoughts you might have..... >>

The same as most other SMP machines: when a processor becomes available,
the system-selected "schedulable entity" is assigned to it.

OK, but what does that mean?

"System-selected" is whatever the system selects according to its
internal rules. First, the entity has to be ready to run. If something is
blocked for I/O or other activity, it is not eligible for scheduling. The
entities that are ready to run compete with each other for CPU time.
Typically, the selected entity is at the top of some priority list,
possibly modified by other criteria. For example, Windows NT has a
"priority boost" mechanism that tries to ensure that lower-priority
entities eventually get some CPU time. MPE does not have any such
"fairness" built into its scheduler: if there is always something ready
to run in the C queue (or higher), which is unlikely but possible, then
anything in the D queue whose priority never rises to match the entities
above it will never get any CPU time at all. That's why installations
frequently adjust the priority ranges with the :TUNE command so that the
queues overlap slightly; that way, ever-ready C-queue items eventually
decay to a priority below the top of the D queue, and whatever is waiting
at the top of the D queue gets some time.
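
To make that concrete, here is a toy sketch in C (not MPE's actual
dispatcher; the priority numbers and the 10-point decay step are made up
for illustration, with lower numbers meaning better priority, as on MPE)
of picking the best-priority ready entity and letting the running one
decay toward the bottom of its queue:

#include <stdio.h>

/* One schedulable entity in this toy model. */
struct entity {
    const char *name;
    int priority;   /* current priority; decays while it keeps running   */
    int limit;      /* worst priority it may decay to (its queue bottom) */
    int ready;      /* 1 = ready to run, 0 = blocked (not schedulable)   */
};

/* Pick the ready entity with the best (numerically lowest) priority. */
static struct entity *select_next(struct entity *e, int n)
{
    struct entity *best = NULL;
    for (int i = 0; i < n; i++)
        if (e[i].ready && (best == NULL || e[i].priority < best->priority))
            best = &e[i];
    return best;
}

int main(void)
{
    /* Made-up ranges: "C queue" 152-200, "D queue" 202-238, no overlap. */
    struct entity jobs[] = {
        { "c-queue hog", 152, 200, 1 },
        { "d-queue job", 202, 238, 1 },
    };

    for (int quantum = 0; quantum < 8; quantum++) {
        struct entity *run = select_next(jobs, 2);
        printf("quantum %d: running %s (priority %d)\n",
               quantum, run->name, run->priority);
        /* Decay the running entity toward the bottom of its queue. */
        if (run->priority + 10 <= run->limit)
            run->priority += 10;
    }
    /* The c-queue hog never decays past its queue bottom, so it always
     * beats the d-queue job: the starvation described above. A :TUNE-style
     * overlap (letting the C queue decay below 202) would fix that. */
    return 0;
}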

"Schedulable entity" is the kicker in the current discussion. Until
recently, the only schedulable entity on MPE was the process. No matter
how much work a program might do, or how many different things it might
want to do, a single process could only be running on a single CPU.
That's why many MPE "programs" actually work by spawning a number of
child processes and parceling out the work among the children (and maybe
the top-level process too). For example, some compilers (Transact/iX used
to do it) use one process to parse the input source and generate an
intermediate file, while another crunches that intermediate file to
generate the output. That way, as soon as the front end has produced
enough intermediate output for the code generator to work on, the code
generator can start. On a box with two or more CPUs, both processes can
be active at the same time.
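
For illustration only, here is a generic POSIX-style sketch of that
two-process pipeline (pipe-and-fork rather than the MPE process-handling
intrinsics the real compilers would use; the "records" are stand-ins for
real parser and code-generator work):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); return 1; }

    if (pid == 0) {                       /* child: the "code generator" */
        close(fd[1]);
        FILE *in = fdopen(fd[0], "r");
        char line[128];
        while (fgets(line, sizeof line, in) != NULL)
            printf("back end consumed: %s", line);
        fclose(in);
        _exit(0);
    }

    close(fd[0]);                         /* parent: the "front end" */
    FILE *out = fdopen(fd[1], "w");
    for (int i = 1; i <= 3; i++)
        fprintf(out, "intermediate record %d\n", i);
    fclose(out);                          /* EOF tells the back end to stop */
    waitpid(pid, NULL, 0);
    return 0;
}

The back end can start consuming records as soon as the front end writes
them, so on a multi-CPU box the two halves really can run at once.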

Now that kernel threads are available on MPE, the "schedulable entity" is
the thread, so you can take advantage of multiple CPUs without having to
split your code into multiple programs that run as separate processes.
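
Assuming a POSIX-style threads interface, a minimal sketch of the
threaded version might look like this: two threads each sum half of an
array, so one CPU-bound process can keep two processors busy without
being split into separate programs.

#include <pthread.h>
#include <stdio.h>

#define N 1000000

static long data[N];

/* The half of the array a thread is responsible for, plus its result. */
struct slice { int lo, hi; long sum; };

static void *add_slice(void *arg)
{
    struct slice *s = arg;
    for (int i = s->lo; i < s->hi; i++)
        s->sum += data[i];
    return NULL;
}

int main(void)
{
    for (int i = 0; i < N; i++)
        data[i] = 1;

    struct slice halves[2] = { { 0, N / 2, 0 }, { N / 2, N, 0 } };
    pthread_t tid[2];

    /* Each thread is a separate schedulable entity, so with two CPUs
     * both halves can be crunched at the same time. */
    for (int t = 0; t < 2; t++)
        pthread_create(&tid[t], NULL, add_slice, &halves[t]);
    for (int t = 0; t < 2; t++)
        pthread_join(tid[t], NULL);

    printf("total = %ld\n", halves[0].sum + halves[1].sum);
    return 0;
}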

Of course, splitting a process into multiple processes or threads will
only take advantage of multiple CPUs if the operation is actually
CPU-bound. If the operation is limited by something else, the split is a
waste of time. A somewhat extreme example would be splitting FCOPY into
two pieces, one to read from the source and one to write to the
destination. Since the CPU-bound part of that operation already outruns
the I/O-bound part by orders of magnitude, the split would be useless, at
least in terms of speeding up the copy operation.

Steve Dirickson         WestWin Consulting
(360) 598-6111  [log in to unmask]
