HP3000-L Archives

November 1999, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:           Re: Performance question
From:              James Clark <[log in to unmask]>
Reply To:          James Clark <[log in to unmask]>
Date:              Tue, 2 Nov 1999 18:39:23 -0500
Content-Type:      text/plain
Parts/Attachments: text/plain (119 lines)
It seems to me that your code called DBPUT, which went through the steps to
accomplish your put. After finishing its work it told XM it was done and
asked it to please place the data on the disc. XM then tried to get a
resource that was currently locked by another process, so it told the
dispatcher "I am blocked on this resource, let me know when it is available,"
and waited. The only thing I can think of that XM would need to lock is real
estate in memory or on the disc, and for it to have to wait, that same area
must be in use by some other process. I don't know offhand the limits of
volume sets or directory structures, but if you have a lot of area under the
control of just one directory, that control point could see high contention.

Do your jobs have a high CPU timeslice for their execution? If so, you may
want to cut that back to a smaller number. I haven't kept up with the current
dispatcher algorithm, but the old setup was CS at priorities 152-200 with a
timeslice of 0-300, and DS (the job queue) at 190-250 with a timeslice of
1000-1000. The way I remember it, a CS process would get anywhere from 0 to
300 clock ticks to accomplish some task, and if a wait occurred the process
would immediately be suspended to let someone else run. A DS process would
get 1000 units when it finally got the CPU and would run until they were used
up (I believe some waits would interrupt this). Now, if the process below was
running at a high priority, it may not give up the CPU until its timeslice is
up. If that is the case, you would not have a high I/O bottleneck but high
wait states, with each process waiting for a resource to become free. You
need to try to get more data into each disc write, so you get more I/O done
per wait.
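One way to do that, if the application can stand it, is to wrap a batch of
puts in a TurboIMAGE dynamic transaction (DBXBEGIN/DBXEND) so that XM treats
the whole batch as one user transaction and posts it to disc once, rather
than once per DBPUT. A rough sketch in C follows; it is my illustration, not
code from anyone's application, and the variable names, the batch size, and
the exact intrinsic calling conventions are assumptions you would want to
check against the TurboIMAGE manual for your release.

    /* Sketch: group several puts into one dynamic transaction so XM posts
     * the batch once.  Error checking of status[0] omitted for brevity.   */
    #pragma intrinsic DBLOCK
    #pragma intrinsic DBUNLOCK
    #pragma intrinsic DBXBEGIN
    #pragma intrinsic DBXEND
    #pragma intrinsic DBPUT

    #define RECLEN     128      /* illustrative record length in bytes     */
    #define BATCH_SIZE 50       /* tune against how long the lock is held  */

    void put_batch(char *base, char *dset, char *list,
                   char recs[][RECLEN], int nrecs)
    {
       short status[10];
       int   i;

       DBLOCK(base, dset, 3, status);       /* unconditional set-level lock */
       DBXBEGIN(base, " ", 1, status, 0);   /* open one dynamic transaction */

       for (i = 0; i < nrecs && i < BATCH_SIZE; i++)
          DBPUT(base, dset, 1, status, list, recs[i]);

       DBXEND(base, " ", 1, status, 0);     /* one XM post for the batch    */
       DBUNLOCK(base, dset, 1, status);
    }

The locks have to cover everything the transaction touches before DBXBEGIN
and stay held until after DBXEND, so the batch size is a trade-off between
fewer XM posts and how long other processes sit behind the lock.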
I am not sure if all of this made sense, or if all of it is still correct on
the current OS release. Let me know.

James

-----Original Message-----
From: HP-3000 Systems Discussion [mailto:[log in to unmask]] On
Behalf Of Carl McNamee
Sent: Tuesday, November 02, 1999 5:02 PM
To: [log in to unmask]
Subject: Re: Performance question


Ok, I recreated the scenario and now have the following stack trace.  Can
anyone tell me what is going on?

Carl

Procedure Trace for Pin  253 is:

       PC=a.0015f70c enable_int+$2c
NM* 0) SP=418587b0 RP=a.002a4b98
notify_dispatcher.block_current_process+$324
NM  1) SP=418587b0 RP=a.002a703c notify_dispatcher+$264
NM  2) SP=41858730 RP=a.0018e3e4 sem_block.wait_for_resource+$1c4
NM  3) SP=41858630 RP=a.0018e5fc sem_block+$178
NM  4) SP=41858570 RP=a.001669a4 joint_lock_path+$34
NM  5) SP=41858470 RP=a.004a4098 xm_deallocaterecord+$30c
NM  6) SP=41858428 RP=a.004a5c34 xm_end_user_trans+$1c4
NM  7) SP=41858368 RP=a.00484d48 xm_w_unlock_and_copyai_var+$2b0
NM  8) SP=418582a8 RP=a.005cd1cc disc_sm_finish_write+$98
NM  9) SP=41858228 RP=a.005cd100 ?disc_sm_finish_write+$8
         export stub: 29c.002f5484 putdetail_340+$3048
NM  a) SP=41858068 RP=29c.002f7480 nmdbput+$17c4
NM  b) SP=41858008 RP=29c.002680e0 nbput+$15c
NM  c) SP=418547d0 RP=29c.00267f58 ?nbput+$8
         export stub: 29c.010947f0 dbput+$3fc
NM  d) SP=418546d0 RP=29c.01094368 ?dbput+$8
         export stub: 188.00088ad8
NM  e) SP=41853450 RP=188.00088a40
         export stub: 2bc8.0007fbac
NM  f) SP=418533d0 RP=2bc8.0009a448
NM 10) SP=418531b0 RP=2bc8.000a9434
NM 11) SP=41852f88 RP=2bc8.000b4ee0
NM 12) SP=41852d68 RP=2bc8.00081128
NM 13) SP=41852b48 RP=2bc8.0009c258
NM 14) SP=41852928 RP=2bc8.0009c7bc
NM 15) SP=41852708 RP=2bc8.00100d4c
NM 16) SP=418524e8 RP=2bc8.00101114
NM 17) SP=418522c8 RP=2bc8.00000000
     (end of NM stack)

> -----Original Message-----
> From: [log in to unmask] [mailto:[log in to unmask]]
> Sent: Tuesday, November 02, 1999 1:44 PM
> To: [log in to unmask]
> Cc: [log in to unmask]
> Subject: Re: [HP3000-L] Performance question
>
>
> Re:
>
> > Glance and Perfview both show impedes.  Perfview goes as far as to indicate
> > that the jobs are spending most of their time, ~ 60%, waiting for a
> > semaphore.  Perfview also indicates that the jobs spend ~ 5% of their time
> > in a Pri Wait and the Stop Reason is CtrlBlk.  Disk wait is > 1% and
> > Memory wait is > 1%.
>
> <plug, sorry>
> If you have SHOT, from Lund, version 2.22 or later, you can say:
>
>    ADM + WAITSEMPIN
>
> that will cause SHOT's Delta/ALL displays to report the PIN that's holding
> the semaphore that a process is waiting for.
> </plug>
>
> > Anything else that would be pertinent?
>
> Stack traces of the blocked processes, to help determine what they're
> waiting for (e.g., file, database, or ?)
>
> --
> Stan Sieler
> [log in to unmask]
> P.s.: please forgive typos/brevity, I'm typing left-handed for awhile.
> http://www.allegro.com/sieler/
>
