HP3000-L Archives

November 2003, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Ken Hirsch <[log in to unmask]>
Date: Fri, 14 Nov 2003 11:42:42 -0500
You could delete everything in the dataset and reload it from a file that
has the duplicates removed.  Of course, you should only do this when you
have exclusive access to the database.  It might take a while, but it
would work.

i.e.

  (pass 1: write one copy of each record to FILE1)
GET A
SORT ...
DUPLICATE NONE RECORD
OUTPUT FILE1,LINK
XEQ

  (pass 2: back up the dataset to FILE2, then empty it)
GET A
OUTPUT FILE2,LINK
DELETE
XEQ

  (pass 3: reload the dataset from the de-duplicated FILE1)
INPUT FILE1
PUT A
XEQ
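[For readers without Suprtool handy, the same three passes can be sketched
in Python.  This is a hedged illustration, not Suprtool itself: the list
`rows` stands in for dataset A, and sorting plus dropping adjacent equal
records mirrors SORT followed by DUPLICATE NONE RECORD.]

```python
def dedup_and_reload(dataset):
    """Sketch of the three Suprtool passes on a list of records."""
    # Pass 1: SORT ... ; DUPLICATE NONE RECORD ; OUTPUT FILE1
    # After sorting, duplicate records are adjacent, so keeping only
    # records that differ from their predecessor leaves one copy each.
    file1 = []
    for rec in sorted(dataset):
        if not file1 or file1[-1] != rec:
            file1.append(rec)
    # Pass 2: GET A ; OUTPUT FILE2 ; DELETE -- back up, then empty
    file2 = list(dataset)
    dataset.clear()
    # Pass 3: INPUT FILE1 ; PUT A -- reload the de-duplicated records
    dataset.extend(file1)
    return file2  # the backup copy, in case anything goes wrong

rows = ["A1", "A1", "B7"]
backup = dedup_and_reload(rows)
# rows is now ["A1", "B7"]; backup holds the original contents
```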


----- Original Message -----
From: "Venkataraman Ramakrishnan" <[log in to unmask]>
To: <[log in to unmask]>
Sent: Friday, November 14, 2003 7:21 AM
Subject: [HP3000-L] Suprtool Help


Hello Everybody,

I am trying to remove non-unique duplicates in a dataset using SUPRTOOL.


I am following the approach given in the SUPRTOOL manual, which goes
something like this:

1. Get all the non-unique records into, say, file-1 using DUP ONLY RECORD.
2. Use DUP NONE RECORD on file-1 to get a file-2 which has only one copy
   of each duplicated record.
3. Load file-2 into a table.

But here I am facing a problem.  My record length is 536 bytes, and when
I try to load the table, it throws up the following error:

    Error:  TABLE only works with items <= 256 bytes long
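[Editorially, the three manual steps amount to building a lookup table of
the duplicated keys.  A hedged Python sketch of that logic, with made-up
536-byte records and a plain set standing in for Suprtool's TABLE (a set
has no 256-byte item limit):]

```python
from collections import Counter

def find_duplicate_keys(records, key_len=536):
    """Return the set of keys that occur more than once."""
    # Steps 1 and 2 combined: counting occurrences finds the keys of
    # the non-unique records (DUP ONLY RECORD), and Counter keys are
    # already unique (DUP NONE RECORD on file-1).
    counts = Counter(rec[:key_len] for rec in records)
    # Step 3: load the duplicated keys into a "table" -- here a set,
    # which, unlike Suprtool's TABLE, accepts items longer than 256 bytes.
    return {k for k, n in counts.items() if n > 1}

recs = [b"A" * 536, b"A" * 536, b"B" * 536]
table = find_duplicate_keys(recs)
# records whose key is in `table` are the non-unique ones
```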

Any workarounds or suggestions ??

Venkat

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *

