HP3000-L Archives

August 2001, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Wirt Atmar <[log in to unmask]>
Date: Tue, 28 Aug 2001 21:59:29 EDT

Stan writes:

> Re:
>  > CM programs still existing in the SYS account on 7.0! Would anyone have an
>  > explanation of why this is so? Our system monitoring tool (SOS/3000) shows
>  > lots ("lots" being one of those highly technical terms) of switching between
>
>  Why?  Because:
>
>    - some programs are OS-bound or file-system-bound, and moving them to
>      NM won't appreciably help them (i.e., there might be better places to
>      use the time/energy it takes to convert to NM)
>
>    - some programs are difficult to move to NM because of the language
>      they are written in (BASIC, FORTRAN/V), and the cost of porting to
>      NM might be more than the value derived from being in NM.
>
>    - some programs might use features available in CM that aren't
>      readily available in NM (note: there aren't many left)
>      E.g., variable-record KSAM files ... if you're file-system bound,
>      you *might* get better performance from a CM program working with
>      a KSAM file than from an NM program (we're *not* talking KSAM/iX here!),
>      because you don't need to do switch-to-CM (which is expensive).
>

Let me not only agree with Stan's points but expand on them a bit as well.
We've done extensive testing over the years comparing the performance gains
we'd get out of converting to full NM in comparison to OCTCOMP'ed CM, and if
the gains are truly there (and we're not sure that they are), they're only
going to be about 1 or 2 percent.

We've programmed up all of our products on the HP3000 in BASIC/V and SPL.
These were the two fastest, most well optimized languages on the Classic
series, and we chose them specifically for that reason 15 years ago. Because
of that level of optimization early on, they still continue to produce
excellent object code, which converts nowadays to highly efficient NM object
code, both in speed and size.

This is not to say that there aren't routines written in BASIC/V that could be
sped up a thousandfold by moving them to a more native mode compiler, but we
already do that in those cases where it's important enough. Nonetheless, the
speed of processing isn't the only engineering consideration. If a routine
contributes only one thousandth of the overall processing time, making it
infinitely fast increases overall performance by just 0.1%, generally a
performance increase that isn't worth either the cost of a complex conversion
or the risk of making the code less stable.
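
The arithmetic above is just Amdahl's law. A minimal sketch in Python (the
fractions and speedups here are illustrative, not our measured numbers):

```python
def amdahl_speedup(fraction, local_speedup):
    """Overall speedup when `fraction` of total runtime is
    accelerated by a factor of `local_speedup`."""
    return 1.0 / ((1.0 - fraction) + fraction / local_speedup)

# A routine that is 0.1% of total runtime, made infinitely fast:
print(amdahl_speedup(0.001, float("inf")))  # ~1.001, i.e. a 0.1% overall gain
```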

I generally tell our customers not to bother with purchasing a hardware
upgrade unless they're going to see something significantly more than 50%
gain. Anything less than that, and they're never really going to notice the
difference in a multiprocessing environment. You simply get that much "noise"
in performance moment-to-moment because of the variation in what other people
are doing on the machine. In fact, it generally takes a two- or threefold
improvement in processor throughput before people out on the floor sit up
and take notice and say, "Whoa!"

It's against this background that you have to measure a 1 or 2% performance
gain. I agree completely with Bill Gates when he says that product
performance gains are made almost wholly in the choice of algorithms and
hardware, not the computer language.

Code it right, code it intelligently, and you'll make much more difference
than if you did it sloppily. However, I don't want to say that choosing the
right language can't make a difference. In our initial tests 15 years ago,
we found BASIC to run twice as fast as COBOL, to produce much smaller object
code, and to be enormously easier to read, with one-hundredth the volume of
text to be typed in and immediate syntax checking. All of these are
attributes that increase productivity dramatically and make old assembly
coders' hearts sing.

You should never be lulled into the trap that newer is better. A good portion
of the time, it's not only not true, it's dead backwards. There was a very
strong motivation in the old days to do things carefully and optimally. In an
era of vast resources, there's more than a mild tendency to waste those
resources.

QCTerm is as good an example of this effect as anything. We programmed QCTerm
up as a finite state automaton, just as we do all of our products, including
QueryCalc (which also has a monocephalic Turing machine in it, just to see if
I could actually ever use this bit of theory after teaching it for many years
:-). In this FSA architecture, a supervisory, behavioral language sits on top
of a hierarchy of utility drivers. In QueryCalc on the HP3000, those roles
are played out by 200 BASIC/V hierarchically organized programs calling 200
small SPL utility/intrinsic drivers. In QCTerm, the same architecture has
Visual Basic calling a raft of APIs. Indeed, it's because of our extensive use
of APIs that QCTerm achieves its great efficiencies. If it weren't for this
architecture, the common wisdom that VB can't be used for commercial product
development would undoubtedly be true.
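
The supervisory-layer-over-drivers pattern can be sketched in a few lines.
This is a hypothetical illustration in Python (the states and handlers are
invented for the example, not QCTerm's actual code): the behavioral layer is
pure dispatch, and each small "driver" returns the next state.

```python
def idle(event):
    # Utility driver for the idle state (hypothetical).
    return "receiving" if event == "data" else "idle"

def receiving(event):
    # Utility driver for the receiving state (hypothetical).
    return "idle" if event == "eof" else "receiving"

HANDLERS = {"idle": idle, "receiving": receiving}

def run(events, state="idle"):
    # Supervisory behavioral layer: no logic of its own,
    # just dispatch to the driver for the current state.
    for ev in events:
        state = HANDLERS[state](ev)
    return state

print(run(["data", "byte", "eof"]))  # ends back in "idle"
```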

We recently converted QCTerm from 16-bit to 32-bit primarily for the purposes
of staying current. Otherwise there is nothing about a terminal that needs to
be 32-bit. In fact, it could just as easily have been programmed up as an
8-bit device; all of the real terminals certainly were.

Converting from 16-bit to 32-bit was not easy. Actually, it was a lot harder
than we ever imagined it to be. It took us five months of completely
non-productive work and cost us at least $50,000 to complete, and when it was
all done, QCTerm ran a little slower than it did as a 16-bit device, while
consuming about seven times as much filespace (2.3mb vs. the original 300kb).
Eventually, by analyzing where the new routines were slower, we were able to
get the 32-bit version slightly faster than the original, and then, by
compiling the VB code (an option that wasn't available to us for the 16-bit
code), we were able to achieve a 30% performance gain.

But, all told, that's not much gain for $50,000.

And that's the engineering decision that always has to be made. The real pain
of staying in OCTCOMP'ed CM (which is really executing native mode
instructions anyway) is not in performance, as you might initially imagine.
Rather, it is the constraints of a 16-bit stack, but that's a designer's
problem, not an end-user's. If the released code is designed well, you'll
never see that problem.

Nevertheless, there are still excellent reasons for sticking with CM on the
HP3000:

    o We still have a very few customers that run Classics. We could never
maintain code as complex and as massive as QueryCalc in two completely
distinct versions and have them both operate properly.

    o Even more importantly, BASIC and SPL emit extremely well optimized
code, an optimization that is even further improved by OCTCOMP. It's worth
taking another look at Mike Yawn's performance comparison chart that he
presented at HP World last year. It's at:
http://aics-research.com/basic/index.html

    o But, surprisingly, one of the most important reasons to us is Stan's
last reason: "CM programs are a lot smaller, and that's an advantage in some
cases." We are in the process now of converting all of our updates and
original site installations into an emailable process, rather than actually
sending a tape to anyone. We began doing this 6 years ago when we went out of
our way to "modemize" all of our customers, buying and configuring their
modems if need be. Email has only further increased the possibilities. In a very
real way, you can consider CM to be the world's most efficient & intelligent
compression routine, with OCTCOMP being the reverse decompressor. We can take
our NM routines, 80mb in size, and compress them to about 600k when
transmitted in zipped CM. That makes a dramatic difference in what you can
email and what you can't, and for that reason alone, I doubt that we would
now ever consider dropping CM.
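
The "compression ratio" implied above is easy to check with quick arithmetic
(using the figures from the paragraph: 80mb of NM routines vs. roughly 600k
of zipped CM):

```python
nm_size_kb = 80 * 1024   # 80 MB of NM routines, expressed in KB
zipped_cm_kb = 600       # ~600 KB when shipped as zipped CM

ratio = nm_size_kb / zipped_cm_kb
print(round(ratio))      # roughly a 137x reduction on the wire
```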

Wirt Atmar

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
