HP3000-L Archives

January 2000, Week 2

HP3000-L@RAVEN.UTC.EDU

Date: Thu, 13 Jan 2000 19:00:33 -0500
1. Create a database containing one manual master dataset with a single
270-byte key field and a capacity of about 1.2 million (preferably a
prime number).

2. Load the ASCII file into this database. Each record will be
pseudo-randomly hashed to a specific location.

3. Read the dataset serially back out into a flat ASCII file.

4. Voilà! Randomized file.

5. Using slightly different dataset capacities will produce completely
different randomizations, as the sketch after this list illustrates.
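For anyone who wants to see the mechanics, here is a minimal Python
sketch of the same idea, not the actual TurboIMAGE code: each record is
hashed to a slot in a prime-capacity table, and a serial pass over the
table yields the shuffled order. The hash_shuffle name, the MD5 slot
derivation, and the linear probing used to resolve collisions are my own
illustrative choices (TurboIMAGE chains synonyms rather than probing,
but the shuffling effect is comparable).

    import hashlib

    def hash_shuffle(records, capacity=1_299_709):
        """Shuffle by hashing each record into a fixed-capacity table,
        then reading the table serially. The capacity should comfortably
        exceed the record count and ideally be prime (1,299,709 is)."""
        table = [None] * capacity
        for rec in records:
            # Derive a pseudo-random slot from the record's bytes.
            digest = hashlib.md5(rec.encode()).digest()
            slot = int.from_bytes(digest[:8], "big") % capacity
            # On collision, step to the next free slot (linear probing).
            while table[slot] is not None:
                slot = (slot + 1) % capacity
            table[slot] = rec
        # The serial read: non-empty slots in table order = shuffled file.
        return [rec for rec in table if rec is not None]

Reading the input and trying two capacities (both primes; the file name
is hypothetical) shows step 5 in action:

    with open("BIGFILE.txt") as f:        # hypothetical input file
        lines = f.read().splitlines()
    # A different prime capacity gives a completely different order.
    order_a = hash_shuffle(lines, capacity=1_299_709)
    order_b = hash_shuffle(lines, capacity=1_000_003)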

Porter, Allen H wrote:
>
> I have a very large ASCII file (270 bytes per record with approx. 1,000,000
> records) that needs to be randomized.  The current program we are using
> takes a couple of hours to run.  Does anyone have a faster method?  I was
> hoping TurboSort could randomize but I can't find anything in the
> documentation.
> Allen Porter
> Programmer II
> Maritz Marketing Research INC
>
> 636.827.6833
> [log in to unmask]
