Date: Fri, 25 Nov 2005 18:44:51 EST
Content-Type: text/plain
Jeff writes:
> EBCDIC character hex codes are not contiguous like ASCII, and follow more
> closely the Hollerith codes as above. Thus we have the EBCDIC characters
> C1-C9 being "A"-"I" and D1-D9 being "J"-"S" as in Glenn's example.
>
> The IBM didn't operate on zoned decimal natively; you had to convert to
> and from packed decimal. But this didn't quite yet dictate the sign nibble.
>
> PACK instruction packed the low-order nibbles of the zoned bytes into
> consecutive nibbles of the packed field, and stuck the high-order nibble of
> the last byte into the last nibble of the packed field.
Not only is your explanation excessively complex, I think that you're missing
the point: BCD encodings have nothing to do with EBCDIC or ASCII or even
Swahili. There are only sixteen four-bit encodings possible, and if you're
going to indicate a numeric sign, you're going to have to pick two (or three)
of the remaining six non-numeric values to do so.
While in my youth I overpunched with the best of them, overpunching isn't
possible with only sixteen codes; thus a distinct nibble has to be allocated
if you want to record the sign of the number. The fact that you can find some
correlation between the character sequences of BCD and EBCDIC isn't
surprising. The BCD part is right there in the name, but the same sort of
correlation exists in ASCII as well. People, even working completely
independently, would simply tend to put characters in the same order time
after time.
Wirt Atmar
* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *