HP3000-L Archives

April 2004, Week 1

HP3000-L@RAVEN.UTC.EDU

From: John Clogg <[log in to unmask]>
Date: Mon, 5 Apr 2004 12:40:01 -0700
A repack can be done with Adager, and several types of repacks are available.  The least complicated and fastest repack is a "serial" repack, which simply squeezes out the deleted entries, without altering the sequence of any of the entries.  
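Conceptually, a serial repack is just a stable compaction pass: live entries keep their relative order and only the deleted slots disappear. A minimal sketch in Python (this is an illustration of the idea, not Adager code; `None` stands in for a deleted entry):

```python
# Conceptual sketch of a serial repack: squeeze out deleted entries
# without altering the sequence of the remaining ones.
def serial_repack(entries):
    """entries: list of records, with None marking a deleted entry."""
    return [e for e in entries if e is not None]

dataset = ["rec1", None, "rec2", None, None, "rec3"]
print(serial_repack(dataset))  # ['rec1', 'rec2', 'rec3']
```

Because nothing is reordered, the pass only has to copy live entries forward, which is why it is the fastest option.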

You can also perform a "Chained" repack, which reorganizes the entries in the dataset so that all entries that are on a given chain are located contiguously in the dataset.  If your application does a lot of chained retrieval, this option can improve performance considerably.  It will typically take much longer than a serial repack, however.  If you choose to do a chained repack, and your dataset has multiple paths, you must carefully choose which path to use for the repack, based on frequency of chained reads and chain length.  Do not assume the primary chain is the best choice.
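The idea behind a chained repack can be sketched the same way: group entries so that everything on one chain (one search-item value) sits contiguously, while preserving each chain's internal order. Again, this is only an illustration of the concept, not Adager code; the `key` function stands in for the search item of the chosen path:

```python
# Conceptual sketch of a chained repack: place all entries on a given
# chain contiguously, preserving each chain's entry order.
def chained_repack(entries, key):
    """entries: list of records; key: function returning the chain
    (search-item) value for a record."""
    chains = {}  # dicts preserve insertion order in Python 3.7+
    for e in entries:
        chains.setdefault(key(e), []).append(e)  # within-chain order kept
    return [e for chain in chains.values() for e in chain]

# Orders keyed by customer: chains "A" and "B" become contiguous runs.
orders = [("A", 1), ("B", 2), ("A", 3), ("B", 4)]
print(chained_repack(orders, key=lambda e: e[0]))
# [('A', 1), ('A', 3), ('B', 2), ('B', 4)]
```

Contiguous chains mean a chained read touches far fewer disc pages, which is where the performance gain comes from; the cost is that the repack itself must shuffle every entry into place.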

A "Sorted" repack can be useful in very limited circumstances.  Note that chained repacks do not alter the chronological sequence of entries on a chain.  Descriptions of all these options can be found in "The Adager Guide" at http://www.adager.com/TechnicalPapersPDF/AdagerGuide.Book.pdf.

-----Original Message-----
From: Baker, MikeAMG [mailto:[log in to unmask]]
Sent: April 05, 2004 9:16 AM
To: [log in to unmask]
Subject: repacking large image datasets


Hello.  We have a large IMAGE dataset with a capacity of 16.2 million
records.  We just deleted more than 8 million records from the set and have
been told we need to repack the dataset.  What options are available to us?
Forgive my lack of knowledge of IMAGE datasets.  We are just looking for the
quickest way to repack the dataset and get rid of the delete chains.  We are
not sure whether to use Adager to erase the set and then reload it, or to
use Suprtool; we are not sure which approach would be fastest.  This is part
of a voice response system, so downtime is not something we have a lot of.
I'm probably opening another can of worms here, but I'm not sure what is
best at this point.  The specific dataset in question is part of a database
that has been moved off the VA7100 and onto a volume set on three mirrored
pairs of 18.2 GB drives.

Thanks.

Mike Baker

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *

