HP3000-L Archives

April 1996, Week 2

HP3000-L@RAVEN.UTC.EDU

Date: Thu, 11 Apr 1996 14:58:36 -0700
Item Subject: Compiler "advances"?
 
Glen writes:
>>"Future machines will depend even more on optimizing compilers for
>>performance -- and on the programmers who know how to use them."
 
>I was thinking that with the direction things are moving, the application
>designers have to worry less and less about the specifics of the target
>machine. As long as the chosen algorithm is relatively efficient, the
>compiler will generate decent code.
 
The quote above says a couple of things.  "Future machines will depend even
more on optimizing compilers" is certainly true: new architectures like
the Intel/HP chip push much of the responsibility for performance back
into the compilers.  "...and on the programmers who know how to use them"
implies that you need to learn how the optimizer works in order to
get the best performance.  I think this will be true for some class of
programs, but for most people I would agree with Glen that it is the
compiler's job to generate decent code, so I think whoever wrote this
quote was being unnecessarily melodramatic.
 
There's an old joke that says you can have it cheap, fast, or good, but
you can only pick two of these.  For many things, including any nontrivial
software project, this seems to hold true most of the time.  For most
applications, "good" is a requirement, which leaves you with a choice
between "good and fast" and "good and cheap".  In the past, you usually had
to go with "good and fast" (giving up cheap in the process) because the
hardware was slow enough that "fast" was often as critical as "good".
 
These days the hardware is getting *really* fast, and it is my contention
that the number of cases where you have to spend the extra money to get the
utmost performance should become quite limited.  On an architecture
like HP/Intel, this may be limited to kernel operating system services and
low-level things like the "compatibility mode" code that lets you run
old PA-RISC and Intel x86 code.  The rest of us can start taking advantage
of development tools that are less efficient from a runtime perspective but
much more efficient from a programmer-productivity (and resulting-quality)
point of view.  It started in the early '80s with 4GLs, and probably the
best example today is Visual Basic.  Now we are starting to see "new"
programming languages (Java, Python, etc.) and other tools (object-oriented
databases and so on) that hold out the promise of insulating application
programmers from the growing complexity (and version-to-version
uncertainty) of today's operating systems, graphical user interfaces,
networks, and so on.
 
So with a little luck, the future holds great hope for "good and cheap"
applications with "fast" (or at least "fast enough") coming along for free.
 
G.
