HP3000-L Archives

July 2000, Week 3

HP3000-L@RAVEN.UTC.EDU

From: Wirt Atmar <[log in to unmask]>
Date: Mon, 17 Jul 2000 15:31:05 EDT

Ted writes:

> > The reason that your first bit of code is longer than the second is that
> > you're basically duplicating the same test that IMAGE itself performs when
> > it performs the delete. While that may be psychologically safer, it could
> > be described as a bit more inefficient as well.
>
> Seems like it would be considerably less efficient. When I do the delete, I
> would expect IMAGE to go to the current record and check its pointers to see
> if any records were on the chain, whereas a DBFIND implies, conceptually at
> least, a rehashing of the value.

That's true. It is considerably less efficient. I used the phrase "basically
duplicating" only to get across the essence of the test, not its true
mechanism. And as Tom pointed out, you'd have to do a DBFIND for every linked
detail dataset as well.
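
To make the cost concrete, here is a minimal sketch (in Python) of what that
"test first" approach amounts to: one probe per linked detail path. The
chain_count helper and the dictionary behind it are hypothetical stand-ins
for the real DBFIND calls, not IMAGE's actual API:

# Simulated chains: detail dataset -> key value -> entries on the chain.
detail_chains = {
    "ORDERS":   {"CUST-001": 3, "CUST-002": 0},
    "INVOICES": {"CUST-001": 0, "CUST-002": 0},
}

def chain_count(detail_set, key_value):
    """Hypothetical stand-in for a DBFIND plus a look at the chain count."""
    return detail_chains[detail_set].get(key_value, 0)

def looks_deletable(key_value, linked_detail_sets):
    """Duplicates, at the cost of one probe per path, the test that
    DBDELETE performs internally against the master entry."""
    return all(chain_count(ds, key_value) == 0 for ds in linked_detail_sets)

print(looks_deletable("CUST-001", ["ORDERS", "INVOICES"]))  # False
print(looks_deletable("CUST-002", ["ORDERS", "INVOICES"]))  # True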

A master dataset's architecture is not particularly complicated. There are
two entries in the set for every detail dataset that's linked to the master:
one entry containing the record number of the chain's head for that
particular key value, and a second entry containing the record number of the
chain's tail.

If there are no entries for that particular key value in a particular detail
dataset, these record numbers are both zeroes. If the board is completely
clear (all zeroes for all linked detail datasets), the master record is
deletable. Otherwise, it's not.
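
As a rough model (illustrative only, not IMAGE's actual record layout), that
per-path head/tail table and the deletability test look something like this:

# One master entry: for every linked detail dataset, the record numbers
# of the chain's head and tail for this key value. Zero/zero = empty chain.
master_entry = {
    "key": "CUST-002",
    "chains": {                 # detail set -> (head, tail) record numbers
        "ORDERS":   (0, 0),     # no detail entries for this key value
        "INVOICES": (0, 0),
    },
}

def board_is_clear(entry):
    """The master entry is deletable only when every head/tail pair is zero."""
    return all(head == 0 and tail == 0
               for head, tail in entry["chains"].values())

print(board_is_clear(master_entry))  # True -- the DBDELETE would succeed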

As you can imagine, reading this simple table for the key item's value is an
extremely quick -- and safe -- way to determine whether that particular value
has any detail records associated with it.



> Would you consider it safe to loop through the master set twice, ignoring
> on the first pass any primaries with secondaries in their chains, and then
> returning to remove, if necessary, any "encumbered" primaries? That way,
> secondary migration would be of no import.

While I am quite liberal when it comes to social issues, and conservative in
regard to fiscal issues, and a hard-nosed conservative when it comes to
Constitutional issues, I am a paranoid delusional when it comes to database
safety issues.

I would still advocate reading the key values one at a time from a flat file
that has recently been filled with the master's key values. Doing this is an
extremely conservative, extremely safe way of using IMAGE. More than that, it
can be accomplished with little interruption of the database while it's in
use.

Each value would be read from the flat file, one at a time; the master
dataset would be locked, a DBDELETE attempted, and the dataset unlocked. If
the DBDELETE succeeded, all the better. If not, you move on to the next
value.
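
A minimal sketch of that loop, assuming hypothetical db_lock/db_delete/
db_unlock wrappers standing in for the DBLOCK, DBDELETE, and DBUNLOCK
intrinsics (io.StringIO simulates the flat file so the sketch is
self-contained):

import io

def db_lock(key):      # would DBLOCK around the master entry
    pass

def db_unlock(key):    # would DBUNLOCK
    pass

def db_delete(key):
    """Attempt the DBDELETE; True on success, False when IMAGE refuses
    because detail entries still hang off this key value."""
    return key != "CUST-001"            # simulated outcome

flat_file = io.StringIO("CUST-001\nCUST-002\nCUST-003\n")
for line in flat_file:
    key = line.strip()
    if not key:
        continue
    db_lock(key)
    try:
        if db_delete(key):
            print(key, "deleted")       # all the better
        else:
            print(key, "still in use")  # move on to the next value
    finally:
        db_unlock(key)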

The only possible way that you could get in trouble doing this is if another
transaction, just by great chance, had deleted the last detail record for a
particular key value and released its locks just an instant before you went
to perform the DBDELETE on that key value -- and the first process then went
to add another detail record based on that key value. At that point, the
first process would fail for lack of referential integrity: the master entry
its new detail record depends on would no longer exist.

But even that wildly unlikely condition (which would require bad programming
technique on the part of the first process's programmer) could be avoided by
the truly paranoid delusional tack of locking the entire database for your
scan & delete procedure.
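
For completeness, that fully paranoid variant would wrap the whole pass in a
single database-wide lock (DBLOCK mode 1, in IMAGE's terms, locks the entire
database; db_lock_base/db_unlock_base are hypothetical wrappers, as above):

def db_lock_base():     # would DBLOCK the entire database (mode 1)
    pass

def db_unlock_base():   # would DBUNLOCK it
    pass

db_lock_base()
try:
    for key in ("CUST-001", "CUST-002", "CUST-003"):
        pass            # attempt the DBDELETE for each key, as above
finally:
    db_unlock_base()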

However, I'm not that much of a paranoid delusional :-).

Wirt
