HP3000-L Archives

April 1995, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Denys Beauchemin <[log in to unmask]>
Date: Thu, 27 Apr 1995 23:17:01 -0400
[log in to unmask]  writes:
<<
The solutions mentioned so far do indeed solve your functional problem of
getting rid of all the master entries. However, they do not address the
associated performance problem you cited, since you will experience
migrating secondaries each time you delete a master entry which is a
primary and has at least one secondary.
 
So, to both complete the task properly AND have it done EFFICIENTLY, one
way to accomplish this is:
 
loop1:
        read next master entry; at end, go to part2.
        is it a secondary? (words 5-6, relative to 1, of the status array = 0)
        if yes, delete it.
        go to loop1.
 
at this point, you will have only master entries left.
 
part2:
        dbclose mode 3 (rewind dataset)
loop2:
        read next master entry; at end, JOB WELL DONE.
        delete.
        go to loop2.
 
 
Please forgive my lousy pseudo-code.
 
>>
 
Gilles,
 
I respectfully disagree that your solution is efficient.  The other solutions
provided for this problem all involved re-reading the master entry location
after the DBDELETE.  If a migrating secondary situation is involved, the
secondary chain will be deleted in its entirety at that location.  Secondary
chains are usually not very long unless you are dealing with an integer key
type and you have sampled the data to get a capacity instead of calculating
the exact capacity, which would yield no secondaries.
 
However, with the re-read technique you go through the dataset only once,
catching everything.  You advocate going through the dataset twice, which I
find inefficient.
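[Editor's illustration: the re-read technique can be sketched with a toy
in-memory model of a hashed master dataset. All names below (MasterSet,
delete_at, purge) are illustrative inventions, not TurboIMAGE intrinsics;
the model only mimics the migrating-secondary behavior described above.]

```python
class MasterSet:
    """Toy model of a hashed master dataset with migrating secondaries."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = [None] * capacity   # record number -> key, or None
        self.chain = {}                  # primary record -> [secondary records]
        self.primary_of = {}             # secondary record -> its primary record

    def put(self, key):
        addr = key % self.capacity       # stand-in for the hashing algorithm
        if self.slots[addr] is None:
            self.slots[addr] = key       # key becomes the primary at addr
        else:
            free = self.slots.index(None)   # secondary lands in a free slot
            self.slots[free] = key
            self.chain.setdefault(addr, []).append(free)
            self.primary_of[free] = addr

    def delete_at(self, rec):
        """Delete the entry at `rec`; if it headed a synonym chain, the
        first secondary migrates into `rec` (the situation Denys describes)."""
        if rec in self.primary_of:       # deleting a plain secondary
            pri = self.primary_of.pop(rec)
            self.chain[pri].remove(rec)
            if not self.chain[pri]:
                del self.chain[pri]
            self.slots[rec] = None
        elif rec in self.chain:          # primary with secondaries: migrate
            sec = self.chain[rec].pop(0)
            if not self.chain[rec]:
                del self.chain[rec]
            del self.primary_of[sec]
            self.slots[rec] = self.slots[sec]   # secondary moves into rec
            self.slots[sec] = None
        else:                            # primary with no chain
            self.slots[rec] = None


def purge(ds):
    """Single pass with re-read: after each delete, re-read the SAME record;
    any migrated secondary surfaces there and is deleted before moving on."""
    deletes = 0
    for rec in range(ds.capacity):
        while ds.slots[rec] is not None:    # re-read until the slot is empty
            ds.delete_at(rec)
            deletes += 1
    return deletes


ds = MasterSet(7)
for k in (0, 7, 14, 3):    # 0, 7 and 14 all hash to record 0: one synonym chain
    ds.put(k)
print(purge(ds))                         # 4 entries deleted in one pass
print(all(s is None for s in ds.slots))  # True: the dataset is empty
```

Deleting the primary at record 0 migrates key 7 into that slot, and deleting
it migrates key 14; the inner `while` (the re-read) drains the whole chain at
one location, so a single forward pass empties the set.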
 
Kind regards,
 
Denys. . .
