>2. slide 14, "IA-64 Floating Point Architecture"
>
> highlights the "Freddie Mac" (my term) instruction:
> FMAC = Floating-point Multiply/ACcumulate = a * b + c
>
> The odd thing here (to me) is that floating point numbers
> are 82 bits.
>
>
Ah... there is a bit of a supercomputer here. Many scientific architectures
use slightly larger registers for calculations than the memory formats
support. The extra bits aid in rounding and in handling intermediate
overflow and underflow: IA-64's 82-bit register format carries a 64-bit
significand and a 17-bit exponent (wider than the x87 80-bit extended
format), so a long chain of FMACs loses less precision before the result
is rounded back down to a 32- or 64-bit memory format.
Gary L. Biggs, N5TTO
[log in to unmask]
Interex SIG Allbase Chair
"Abandon all hope, Ye who Inter(net) here" --
Dante, over the portal(router) to Hell