HP3000-L Archives

November 1999, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject: Re: Performance question
From: "Paul H. Christidis" <[log in to unmask]>
Date: Tue, 2 Nov 1999 17:20:00 -0800
Content-Type: TEXT/PLAIN (81 lines)

Well, this was an eye opener!

About a year ago we were restructuring the contents of a couple of jumbo
datasets.  We had two competing methodologies (one used QTP, the other a
custom program).  The custom program won during the testing phase and ended up
being used for the actual conversion.  The throughput observed during testing,
however, was not duplicated during the actual conversion.  It was a close
call; the conversion was completed on time during our Christmas shutdown
period, but it always bothered me that the update rate seen during the testing
phase was not maintained during the actual conversion.

Now it makes sense.  My testing was performed against a 'scaled down' version
of the database (and thus had no jumbo datasets).  Although 'AUTODEFER' was
enabled during the conversion, the most recent messages in this thread
indicate that the XM was never actually disabled for the jumbo datasets, hence
the lower throughput.

Paul

____________________Reply Separator____________________
Subject:    Re: Performance question
Author: [log in to unmask]
Date:       11/2/99 3:40 PM

Stan,

Our testing confirms what you say below.  We ran the same job stream against
a non-jumbo dataset, and all the references to XM in the stack traces
disappeared.  Our throughput also increased.

I have an open call with HP and will see if there is a fix or workaround.
As soon as I know something I will pass it on.

Carl

> -----Original Message-----
> From: [log in to unmask] [mailto:[log in to unmask]]
> Sent: Tuesday, November 02, 1999 5:36 PM
> To: [log in to unmask]
> Cc: [log in to unmask]
> Subject: Re: [HP3000-L] Performance question
>
>
> Re:
>
> > Ok, I recreated the scenario and now have the following stack
> > trace.  Can anyone tell me what is going on?
> >
> > Carl
> > NM  7) SP=41858368 RP=a.00484d48 xm_w_unlock_and_copyai_var+$2b0
> > NM  8) SP=418582a8 RP=a.005cd1cc disc_sm_finish_write+$98
> > NM  9) SP=41858228 RP=a.005cd100 ?disc_sm_finish_write+$8
> >          export stub: 29c.002f5484 putdetail_340+$3048
> > NM  a) SP=41858068 RP=29c.002f7480 nmdbput+$17c4
>
> Looks like IMAGE hasn't detached the dataset from the XM.
>
> There's a known problem in MPE that I reported several years ago:
> the internal routine xm_detachufid_without_purge (used by IMAGE
> when AUTODEFER is enabled) cannot successfully detach an HFS file
> from the XM.   (It was a double bug: it failed to detach,
> and it reported success!)
>
> I suggested to HP that they implement something like:
> xm_detachufid_and_filename_without_purge, which IMAGE could use
> instead of the original routine.  (The original routine would be
> expensive to fix; passing in the filename would make it easier to
> solve the problem.)
>
> I've suggested an approach to Steve Cole, who's helping Carl in
> this area.
>
> --
> Stan Sieler
> [log in to unmask]
> http://www.allegro.com/sieler/
> P.S.: please forgive typos/brevity, I'm typing left-handed for a while.
>
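
For readers who want to see why the "double bug" Stan describes is so easy to
miss, below is a minimal C sketch of the failure mode.  MPE/iX internals are
not public, so everything here is hypothetical: the routine names mirror the
ones mentioned in the thread, but the bodies only model the *reported*
behavior (an HFS/jumbo detach that silently fails yet returns success) and
the proposed variant that also takes the filename.

#include <stdio.h>
#include <stdbool.h>

typedef struct {
    const char *name;     /* HFS name; known to the caller, not to the XM */
    bool is_hfs;          /* jumbo datasets live in HFS files */
    bool attached_to_xm;  /* while true, every write is XM-logged */
} dataset_file;

/* Models the original routine.  The UFID-based lookup it depends on is
 * assumed to fail for HFS files, and the failure is swallowed. */
static bool xm_detachufid_without_purge(dataset_file *f)
{
    if (!f->is_hfs)
        f->attached_to_xm = false;  /* non-HFS files detach normally */

    /* Bug 1: an HFS file stays attached to the XM.
     * Bug 2: the caller is told the detach worked anyway. */
    return true;
}

/* Models the proposed variant: given the filename as well, the routine
 * can locate and detach the HFS file too. */
static bool xm_detachufid_and_filename_without_purge(dataset_file *f)
{
    f->attached_to_xm = false;
    return true;
}

int main(void)
{
    dataset_file jumbo = { "/ACCT/GROUP/DSET01", true, true };

    bool ok = xm_detachufid_without_purge(&jumbo);
    printf("old routine: status=%s, still XM-attached=%s\n",
           ok ? "ok" : "fail",
           jumbo.attached_to_xm ? "yes" : "no");  /* prints ok + yes */

    jumbo.attached_to_xm = true;                  /* reset for the retry */
    ok = xm_detachufid_and_filename_without_purge(&jumbo);
    printf("new routine: status=%s, still XM-attached=%s\n",
           ok ? "ok" : "fail",
           jumbo.attached_to_xm ? "yes" : "no");  /* prints ok + no */
    return 0;
}

In this model the caller trusts the returned status, so IMAGE believes
AUTODEFER is in effect while every DBPUT against the jumbo dataset still pays
for XM logging, which is consistent with the lower throughput Carl and Paul
reported.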
