Gary Groves writes:

> I need to create a new Image dataset. I want to retrieve the records in
> reverse date order (LIFO). I was going to do a backwards chain read.
> Someone told me that a reverse chain read was not efficient.
>
> For other design reasons, I've decided to use a composite Omnidex key,
> but I'm curious if the above is true. If so, Why?

There is absolutely nothing about the way that an IMAGE chain is
constructed that makes it any more or less efficient to track in its
forward direction as compared to its reverse direction.

The master dataset holds only three pieces of information for each
detail dataset it references:

  o  the length of the chain (the number of records in the chain)
  o  the first record number (the chain head) of the chain
  o  the last record number (the chain tail) of the chain

In contrast, the records in the detail dataset (those that are chained
together by a particular item value) hold only two pieces of information
for each search item defined for the detail dataset:

  o  the record number of the next record with an identical item value
  o  the record number of the previous record with an identical item value

The next and previous record numbers of the detail records that are
defined to be the chain head and tail are marked with a zero to indicate
that there is nowhere left to go (we've reached the end of our rope).

These few pointers create a doubly-linked list -- one that is as fast to
traverse in the forward direction as it is in the reverse. Indeed,
forward and reverse are merely terms that you impose on the process.
Successively following a series of pointers is just that. The machine
doesn't really care.

When you do an initial DBFIND to set up the chained read, what you're
really doing is retrieving the little bit of pointer information outlined
above from the master dataset. Once you begin a forward or reverse
chained DBGET, you set your initial pointer to the chain head or the
chain tail record, respectively.
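The symmetry is easy to see in a short sketch of the bookkeeping
described above (Python here; the structure and function names are
illustrative only, not actual IMAGE structures or intrinsics):

```python
# A minimal model of an IMAGE chain: the master entry carries the chain
# length, head, and tail; each detail record carries next/previous record
# numbers for the search item, with 0 meaning "end of chain".
from dataclasses import dataclass

@dataclass
class MasterEntry:
    count: int   # number of records in the chain
    head: int    # record number of the chain head
    tail: int    # record number of the chain tail

@dataclass
class DetailRecord:
    next: int    # next record with an identical item value (0 = none)
    prev: int    # previous record with an identical item value (0 = none)
    data: str

# Detail dataset keyed by record number; record number 0 is never used,
# so 0 can safely mean "no record".
detail = {
    1: DetailRecord(next=4, prev=0, data="first"),
    4: DetailRecord(next=7, prev=1, data="second"),
    7: DetailRecord(next=0, prev=4, data="third"),
}
master = MasterEntry(count=3, head=1, tail=7)

def read_chain(master, detail, backward=False):
    """Walk the chain one pointer per step, in either direction.

    Forward starts at the head and follows next; backward starts at the
    tail and follows prev. The per-step work is identical either way.
    """
    recno = master.tail if backward else master.head
    out = []
    while recno != 0:
        rec = detail[recno]
        out.append(rec.data)
        recno = rec.prev if backward else rec.next
    return out
```

Whichever direction you pick, each step is one pointer dereference; the
backward walk simply reads the same list from the other end.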
Each successive DBGET will advance the previous and next record
pointers, based on the direction in which you are traversing the chain,
reading that pointer information from the new record that the DBGET just
landed on. (Thus, you can also reverse your search direction at any
time, in midstream, and go back where you just were, if you have any
legitimate reason to do so.)

The inefficiencies that might arise from a chained search (in either
direction) are primarily associated with a lack of "locality": your
pointers are pointing all over the physical dataset file. This requires
the file system manager to work a little harder for you -- and may
require a great number of additional I/Os in order for you to complete a
full search of the chain -- and it is the primary reason that people
like Adager recommend that you periodically perform a "DETPACK," a
detail dataset repacking, to place all records that share a search item
value adjacent to one another.

Clearly, though, a detail dataset can only be "packed" on the basis of
one search item's values, just as you can only pack a paper Rolodex file
by people's last and first names (and let the states fall where they
may) -- or by the city and state (and let the last names fall where they
may). The search item that you consider most important for high-speed
retrieval should be designated as the "primary" search path. During
loads and unloads (if anyone does that anymore), all of the records that
have identical values in the primary search item will be placed adjacent
to one another, which is functionally equivalent to Adager's DETPACK.

Wirt Atmar
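The locality point can be made concrete with a toy model (Python; the
block size and record numbers below are invented purely for
illustration, and bear no relation to any real dataset's layout):

```python
# Why scattered chains cost more I/O: records are fetched from disc a
# block at a time, so the number of *distinct* blocks a chain touches is
# a rough proxy for the I/Os needed to walk it end to end.
RECORDS_PER_BLOCK = 16  # made-up blocking factor

def blocks_touched(chain_record_numbers):
    """Count the distinct blocks that must be fetched to walk the chain."""
    return len({recno // RECORDS_PER_BLOCK for recno in chain_record_numbers})

# The same eight-record chain, before and after a DETPACK-style repacking.
scattered = [3, 250, 91, 477, 130, 388, 22, 301]      # spread over the file
packed = [100, 101, 102, 103, 104, 105, 106, 107]     # adjacent records

blocks_touched(scattered)  # 8 blocks -- one fetch per record, worst case
blocks_touched(packed)     # 1 block -- the whole chain in a single fetch
```

This is also why repacking only helps the path you pack by: adjacency on
one search item's values generally scatters the chains of the others.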