HP3000-L Archives

August 2000, Week 3

HP3000-L@RAVEN.UTC.EDU

Subject:
From:         Ken Graham <[log in to unmask]>
Reply-To:     Ken Graham <[log in to unmask]>
Date:         Mon, 21 Aug 2000 17:23:42 -0700
Content-Type: text/plain

Hi Stan,

Here is your clarification.

>> In 6.5 the addition of "Large Files" (>4GB) to the FOS shows clearly how
>> committed HP is to the MPE - that is to say: "They are not".  It (the
>> Large Files implementation) is a joke.

>I'd have to disagree.  I think that the lab made a major effort, and
>implemented a significant new feature in a pretty short timeframe.

Had the design been complete and sound from the start, as one would expect
from a company committed to the platform in question, the implementation
would have been totally different.  It would have been simpler to
implement, document, and debug.  As it is, they made an ad hoc addition to
the FOS, as their implementation design shows.  To obtain a valid 64-bit
pointer, you must HPFOPEN the file and request the 64-bit pointer at that
time.  Any other method, even the AIF intrinsics, returns a pointless
value.
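
For what it's worth, the one supported path looks roughly like the C sketch
below.  The item numbers are deliberate placeholders, not the real ones;
take the "formal designator" and large-mapped-access item numbers from the
6.5 HPFOPEN documentation, and treat the file name as hypothetical.

    #pragma intrinsic HPFOPEN          /* C/iX-style intrinsic declaration */

    #define ITEM_FORMAL_DESIG   2      /* placeholder item number */
    #define ITEM_LARGE_MAPPED  99      /* placeholder item number */

    void open_large(void)
    {
        long      filenum, status;
        char      fname[] = "%BIGFILE%";   /* hypothetical, %-delimited */
        long long bigptr;                  /* receives the 64-bit pointer */

        HPFOPEN(&filenum, &status,
                ITEM_FORMAL_DESIG, fname,
                ITEM_LARGE_MAPPED, &bigptr);
        /* bigptr can be trusted ONLY because it was requested here, at
           open time.  A 64-bit pointer fetched after the fact (e.g. via
           the AIFs) is the "pointless value" described above. */
    }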

>> It is so incomplete in its implementation as to be laughable.

>I'm lost here...it's probably more complete than, say, 64-bit files on
>Unix systems.

By this I mean that copying data between, and mixing, LARGE_MAPPED and
SHORT_MAPPED or LONG_MAPPED values is not supported.

By this I mean that the supplied intrinsic for 64-bit pointer manipulation
performs only a single operation: PTR = PTR + constant.  As any programmer
can tell you, that is not sufficient when the compiler overrides 64-bit
pointer comparison and strips the top 32 bits before evaluating the
expression.

The result is that HP went to a lot of effort to produce quantity, not
quality.
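
To make that concrete, here is the sort of suite HP never supplied, as a
standard C sketch: carry the 64-bit address as an unsigned 64-bit integer,
so no compiler can quietly strip the top 32 bits.  (In C/iX or Pascal/iX
you would wrap the same idea around the native 64-bit pointer
representation; the names here are mine, not HP's.)

    #include <stdint.h>

    typedef uint64_t lptr_t;                /* a 64-bit virtual address */

    /* full 64-bit add, subtract, and compare -- nothing stripped */
    static lptr_t  lptr_add(lptr_t p, int64_t n) { return p + (uint64_t)n; }
    static int64_t lptr_sub(lptr_t a, lptr_t b)  { return (int64_t)(a - b); }
    static int     lptr_cmp(lptr_t a, lptr_t b)
    {
        return (a < b) ? -1 : (a > b) ? 1 : 0;
    }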

Had the compilers simply gained an MPE pragma allowing TRUE 64-bit
manipulation, HP would not have had to add any intrinsics for pointer
manipulation at all.  A pragma is the appropriate mechanism because, for
64-bit pointers, the compilers emit code that overrides what would normally
have been emitted.

>It's not done, of course.  We still need to see all file types (including
>byte stream) implement > 4 GB.

>Keep in mind that MPE has *much* more work than a simple-minded operating
>system like Unix.  With Unix, all one has are "byte stream" files.
>There is *NO* structured file type.  MPE has a lot: byte stream,
>fixed record, variable record, message files, KSAM/iX, RIO, and Circular
>files.  (Ok, and directory files)

>CSY knew they didn't have the resources to make all of those types of files
>Large Files in the same release, so they chose the most frequently used
>ones for the first release ... and I think that's appropriate.

The complexity you allude to is, at a fundamental level, really not that
complex.  Having done it myself gives me some expertise on the subject.
You simply abstract the various kinds of functionality and build them on a
solid paradigm.  What you never do is make type-dependent changes to
fundamental behavior.  That will always produce bugs which are difficult to
reproduce, and the resulting behavior will be difficult to describe and
document.

>> It produces inconsistent results, depending on the machine you use,

>Like on Unix? :)

>Like on MPE V vs. MPE XL vs. MPE/iX (when we went from 2 GB to 4 GB files)?

>I guess I'm saying: there are two kinds of inconsistent results:

>   1) running on an older system (that doesn't support Large Files)
>      and requesting a Large File  (that's the Unix and MPE V/XL/iX
>      comparison)

>      ...those are to be expected, and the 6.5 vs. 6.0 ones are the same
>      as the MPE/iX (with 4 GB max) to MPE XL (with 2 GB max) ones.

>   2) any other kind you might mean.

>I'd be interested in knowing what you meant.

No Stan, I mean that two different machines running the same OS version,
with the same licenses, the same amount of memory, the same software
running, the same CPU model, and (relatively speaking) the same amount of
free disc space, will produce different results: one will work fine, and
the other will produce errors.  Sites developing 64-bit code therefore
cannot assume that in-house tests showing everything working fine give them
any confidence in their test results.  Given a user with a problem, there
is no guarantee that they will be able to reproduce it in house, even given
the same model, memory, etc, etc, etc.  This is not an opinion.  This is a
fact.  I called in with a problem that HP could not reproduce on any of the
machines at their disposal.  Here, it occurred on only one machine, and for
reasons that could not be ascertained.  HP dialed into our machine to
confirm what was being seen.  The lab was not concerned that, at the most
basic and fundamental level, their implementation knowingly produces such
inconsistent results.


>> It produces inconsistent results, depending on [...] what the current
>> load is

>huh?

Because of how they implemented Large Files, normal memory-mapped access
does not mark a Large File page as "dirty", so writes through a Large
Pointer obtained by any method other than HPFOPEN may not result in a dirty
page being written to disk.  Notice that the word is "may"; it is not
guaranteed.  You can write your own routines to move data between pointers,
including Large Pointers, and they may appear to work.  Move everything to
a different machine, with a slightly different load or number of CPUs, and
the results may differ, or they may not.  It is inconsistent.  That you
must get the pointer via HPFOPEN ___ONLY___ is not clearly documented.
That HP provides a 64-bit pointer via other methods allows a person of
normal intelligence to assume that those pointers are in fact usable, when
in fact they are not.

>>, etc, etc.  The languages were
>> not changed to allow you to do 64 bit pointer manipulation.

>I'm aware of only three languages that support 64-bit pointers:
>   Pascal/iX, C/iX, and SPLash!

>...and, you're right: none of them directly support 64-bit pointer
>manipulation.  (Their manipulations are limited to the bottom 32-bits of
>the 64-bit address...which works for long mapped files, but not Large
>Files.)

This is the context for the "pragma" suggestion above.

>But...HP provided intrinsics to do such access:

>   HPFADDTOPOINTER ... does proper 64-bit arithmetic for pointer
>                       add/subtract

What is missing is pointer comparison, or, to put it differently, pointer
subtraction.

>   HPFMOVEDATA     ... properly moves data to/from a Large File

This is true, but works only for Large Mapped files, not for SHORT or LONG.

>   HPFMOVEDATALTOR ... ditto, in a known order (allowing "clever"
>                       data overlap tricks)

>   HPFMOVEDATARTOL ... ditto, in a known order (allowing "clever"
>                       data overlap tricks)

These two should have been rolled into HPFMOVEDATA.  The software technique
for doing so has existed since K&R: compare the source and destination
addresses and pick the copy direction that is safe for the overlap, as in
the sketch below.  Since PASCAL is the language being used, maybe the
developers have not heard of K&R, and probably have no idea what I am even
talking about.
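
Here is that technique in standard C: one move routine that chooses its
copy direction from the relative positions of source and destination, so
separate left-to-right and right-to-left entry points are unnecessary.
(This is essentially what memmove has always done.)

    #include <stddef.h>

    void move_bytes(char *dst, const char *src, size_t len)
    {
        if (dst == src || len == 0)
            return;
        if (dst < src) {
            while (len--) *dst++ = *src++;   /* copy left to right */
        } else {
            dst += len;                      /* copy right to left, so an */
            src += len;                      /* overlapping source is not */
            while (len--) *--dst = *--src;   /* clobbered before it is read */
        }
    }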

>The other facilities that Unix lacks and sorely needs, like post()
>and prefetch(), also support Large Files.

>OTOH, MPE has had a history of what seems like at least 10 years of
>neglect from the Compiler Language group within HP.  ... but that's not
>new ... we've been complaining about it for years!

Hear Hear.  HP?  Hear? Hear?

>> The only
>> facility added to do so, is a single intrinsic call, to add a constant
>> to a pointer.

>4 intrinsics

There exists only one pointer-manipulation intrinsic, which adds a constant
to a pointer, as stated.  The other three move data; they do not manipulate
pointers.

>> than just adding a constant.  How absolutely irresponsible.  Now the only
>> way to add one to your 64 bit pointer is to call an intrinsic.  Intrinsic
>> calls are not cheap.  They incur a great deal of overhead.  The calls to

>On a 927, HPFADDTOPOINTER takes about 24 microseconds.

Assume that you are processing a file, you have a choice between using the
intrinsic or adding some value x to a 64-bit value yourself, and you are
going to do this over a million times.  Every single line of code that
wastes time is still a waste of time; it only matters when you start
wasting a lot of it, and in my case I was wasting A LOT of time using their
intrinsic.  Since I had to implement pointer comparison myself, I also
implemented the rest of the suite of functions that should have been
provided.  It took so long to do, too.  About all of ten minutes at most.

>If you "roll your own" HPFADDTOPOINTER, it takes about 2 microseconds
>(unoptimized, and 0.8 microseconds if compiled with the optimizer on).

>So, if you had a file with 100 million 80 byte records (8 GB), and did
>one HPFADDTOPOINTER call per record, that's about 2400 seconds of
>time ... but ...let's say you spent 0.5 seconds of CPU processing
>each record (not counting the HPFADDTOPOINTER call)... that's
>about 18 months of CPU time, out of which 2400 seconds is pretty small.

This analysis is good, and you are right.  In the simplest case, where one
is making just a few calls, the overhead is negligible and ignorable.  In
my case, it was the most significant part of a loop which had to iterate
over almost every byte in a LARGE file, making pointer manipulation and
comparison an issue.
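
For anyone who wants to see the ratio for themselves, here is a throwaway
harness in standard C.  The absolute numbers mean nothing (and a real
intrinsic call costs far more than a plain C function call); the point is
the gap between an out-of-line call per record and an inline add.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* stand-in for an out-of-line call such as HPFADDTOPOINTER */
    static uint64_t add_call(uint64_t p, int64_t n) { return p + (uint64_t)n; }

    /* volatile function pointer keeps the compiler from inlining it */
    static uint64_t (*volatile add_fn)(uint64_t, int64_t) = add_call;

    int main(void)
    {
        const long N = 100000000L;       /* one add per 80-byte record */
        volatile uint64_t p = 0;         /* volatile: keep both loops real */
        clock_t  t0, t1, t2;
        long     i;

        t0 = clock();
        for (i = 0; i < N; i++) p = add_fn(p, 80);   /* call per record */
        t1 = clock();
        for (i = 0; i < N; i++) p = p + 80;          /* inline add */
        t2 = clock();

        printf("call:   %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
        printf("inline: %.2f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
        return (int)(p & 1);
    }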

>> The calls to
>> get a 64 bit file pointer to a LARGE file that is already opened, cannot
>> be used, even though they are added with 6.5 and documented, even though
>> they appear to work, almost all the time, and on some machines, 100% of
the
>> time.

>Huh?

AIFFILELGET, item 4101: the AIF call that returns a 64-bit file pointer for
a file that is already open, which is exactly the pointer that cannot be
trusted.

>> The documentation for the new intrinsics they provided are truly at
>> the most fundamental level, incorrect(Parameters are not the types
>> described), and incomplete.

>I noticed one small difference between the *preliminary* documentation
>I received, and the actual final documentation.

><snip>

The problem is that one should pass a SHORT pointer for the LARGE_POINTER
parameter, instead of what is described in the manual.

>> and their refusal to do so is further evidence.  That management allowed a
>> design and implementation that consistently, knowingly, produces inconsistent
>> results, would in some cultures result in such "loss of face" as to have the
>> problem be "self remediating".

>I'd be interested in knowing more about this.  Are there bugs in
>Large Files?  (I know of one, which has a patch, which affected the
>reporting of the EOF in some cases.)

It is not so much a "bug" as it is that if you read the documentation as a
normal person might, and then implement something, it may work perfectly
for you.  It is only when you start hammering away at the LARGE_MAPPED
section of the code that you might - just maybe - discover a problem.  That
this is not a concern to HP is also one of the points I raised.  One of the
hallmarks of a good design is proactive error reporting.  If something is
to be disallowed, disallow it overtly.  If something is not going to be
supported, do not allow it in the first place.  These are simple design
decisions which stand in stark contrast to the implementation HP provided.

The other problem is what you have to go through to support LARGE files.
In the case of a single application doing a single task, it is a
no-brainer: if you have to call their intrinsic to add a constant to a
pointer a dozen or so times in a single IO-bound loop, no big deal.  For
someone trying to do something generic, applicable across all cases, the
tools HP has given us are inadequate to say the least, and they show the
level of commitment to the MPE which was my original point.  In order to
produce basic functionality, I have had to replace or supplant most of what
HP has supplied.  When the lab is told what is wrong, they respond, "no it
isn't".

"No it isn't" is not an argument.  In a Monty Python film it is funny.
(this is not a quote but an accurate characterization, of countless dialogs,
where descriptions of the problems, and solutions, were sent to the lab,
and their response, ignored what had been given.  I was not the only one
frustrated with the lab's response and seeming commitment to the party
line regardless of evidence to the contrary.  The person(s) referred to
here asked that they/their name be withheld.)

>> year plan is meaningful, I have lived through this before.  The HP260 was
>> another machine that HP wanted to get rid of.  They had a 5 and 10 year
>> plan, including hardware and software, that the head of development told
>> the HP260 community, as well as to the company I worked for specifically,
>> in person.  Then two months later, HP pulled the plug, and left us all
>> high and dry.

>I think that's a cautionary tale that we should all keep in mind.

Thanks Stan.

>No matter how dedicated Winston and CSY is, they are at the mercy of
>higher-level managers who have frequently killed golden geese for a quick
>bite of pate.

I could not have said it better myself.  :-)

Is it true that the P in MPE really stands for PATE?  ;-)
Or maybe HP stands for Haute Pate?  ;-)))


Ken Graham.
