<<This shouldn't be an issue. There are no MPE intrinsics that take byte offsets (into files) as arguments, so there wouldn't be any need for new APIs unless the file system were extended to allow >2**32 records. I don't think anyone's proposing that at this stage.>>

I guess it depends on your viewpoint. For byte-stream files, which seem to be becoming more and more prevalent, as well as really big files with small record lengths, the following are some of the intrinsics that would have problems with files exceeding 4GB (the first sketch in the P.S. below shows the arithmetic):

   FFILEINFO    item 9 (Current logical record pointer)
   FLABELINFO   items 19 (Number of logical records in file) and
                28 (Total number of bytes allowed in file)
   FPOINT
   FSPACE       (probably not much of an issue; it is already limited
                to a 16-bit displacement, and probably isn't used
                much, if at all, for disk files anyway)

<<In addition, for any argument passed by value, the compiler can take care of the appropriate conversions. It's only for arguments passed by reference, e.g., FFILEINFO, FLABELINFO, that there's a compatibility problem, and that problem can be solved by using new item numbers to return 64-bit values. The same approach was used for "wide" values during the 16- to 32-bit transition.>>

I'm not sure what is meant by the compiler "taking care" of conversions - unless we're talking about a new compiler with 64-bit base types? I think I've heard that ANSI X3J16/ISO WG21 are talking about adding 64-bit types to C++, but I haven't heard anything about C. HP could just change C/iX to make 'long's 64 bits, but I'd be (pleasantly) surprised to see that happen on 32-bit hardware.

Of course, 64-bit 'long's would also preempt problems with library functions like fseek() and ftell(); if a 'long' were 64 bits, these functions would "just work" for terabyte-size files (see the last sketch below).

Steve
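
P.S. To make the byte-stream case concrete: FPOINT takes the target logical record number as a 32-bit value, and in a byte-stream file one record is one byte, so any byte past the 32-bit limit simply has no representable record number. A quick stand-alone C sketch of the arithmetic (signedness details aside; the 6GB file is just an example):

    #include <stdio.h>

    int main(void)
    {
        /* Largest record number a 32-bit (unsigned) value can hold. */
        unsigned int max_record = 0xFFFFFFFFu;
        /* A hypothetical 6GB byte-stream file: one byte per record. */
        unsigned long long file_size = 6ULL << 30;

        printf("addressable bytes: %u\n", max_record);
        printf("bytes in the file: %llu\n", file_size);
        printf("unreachable bytes: %llu\n", file_size - max_record - 1);
        return 0;
    }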
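
The new-item-number idea from the quoted paragraph would look something like the sketch below. FFILEINFO itself is real, but "item 90" and the stub are pure invention on my part - just to show how a by-reference item lets the old and new widths coexist, the same way the "wide" items did:

    #include <stdio.h>

    /* Simulated current file position, past the 4GB mark. */
    static unsigned long long current_pos = 5368709120ULL;   /* 5GB */

    /* Stand-in for FFILEINFO: the caller passes a buffer by reference,
       and the item number determines how wide the stored result is.
       Item 9 is the existing 32-bit record pointer; item 90 is a
       HYPOTHETICAL new item returning the same pointer in 64 bits. */
    static void ffileinfo_stub(int filenum, int item, void *value)
    {
        (void)filenum;
        if (item == 9)
            *(unsigned int *)value = (unsigned int)current_pos; /* truncates */
        else if (item == 90)
            *(unsigned long long *)value = current_pos;         /* full width */
    }

    int main(void)
    {
        unsigned int old_item;
        unsigned long long new_item;

        ffileinfo_stub(1, 9, &old_item);
        ffileinfo_stub(1, 90, &new_item);
        printf("item 9  (32 bits): %u\n", old_item);
        printf("item 90 (64 bits): %llu\n", new_item);
        return 0;
    }

Old callers keep passing a 4-byte buffer with item 9 and are none the wiser; new callers ask for item 90 with an 8-byte buffer. Nothing about the intrinsic's calling sequence has to change.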
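
And on the fseek()/ftell() point: their offsets are 'long's, so the reachable file size follows directly from the width of 'long' - no new functions needed if 'long' grows. This prints the limit on whatever machine you run it on:

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        /* ftell() returns a long and fseek() takes one, so LONG_MAX is
           the largest byte offset the stdio seek interface can express. */
        printf("'long' is %d bits here\n", (int)(sizeof(long) * CHAR_BIT));
        printf("largest seekable offset: %ld\n", LONG_MAX);
        /* 32-bit long: 2147483647, i.e. stuck below 2GB.
           64-bit long: 9223372036854775807, and terabyte-size files
           "just work". */
        return 0;
    }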