HP3000-L Archives

May 2000, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Tony Summers <[log in to unmask]>
Date: Tue, 23 May 2000 09:45:34 +0100
I've seen this ONCE on our box.  We had to reboot, since our problem was with
HPSYSJQ, which you can't delete.

I'm assuming some system table is getting corrupted - I noticed that once corrupted,
the value remained stable (but wrong) until the reboot. 

Here's our implementation of job queues.

We have put all the permanent job streams into a background job queue and set the
system job queue (HPSYSJQ) limit to 1.   There are a couple of other job queues which
we use to fast-track (or slow-track) individual, non-critical jobs.

CURRENT LIMITS ARE 20 FOR JOBS

JOBQ      LIMIT     EXEC  TOTAL

HPSYSJQ   1         0     0 
BACKJQ    13        10    10 
SLOWJQ    1         0     0 
FASTJQ    1         0     0                            
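For anyone wanting to reproduce this setup, here's a sketch of the CI commands (job queues arrived in MPE/iX 6.0; the queue names and limits are from our table, but do check the exact parameter syntax against your release's Commands Reference):

```
:NEWJOBQ BACKJQ;LIMIT=13
:NEWJOBQ SLOWJQ;LIMIT=1
:NEWJOBQ FASTJQ;LIMIT=1
:LIMIT 20
:STREAM MYJOB;JOBQ=BACKJQ
:LISTJOBQ
```

NEWJOBQ creates each queue with its own limit, LIMIT sets the global job limit, STREAM's JOBQ= parameter puts a job into a named queue, and LISTJOBQ displays the JOBQ/LIMIT/EXEC/TOTAL table shown above.  (MYJOB is just a placeholder job name; lowering the HPSYSJQ limit itself is done separately, since you can't create or purge that queue.)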

The beauty of this approach is that we retain single threading for our overnight
job streams, and the failure of any background job does not affect that single
threading.

When we had our problem with the job queue counts, I simply set the limit on the
affected job queue to a maximum value, but ensured single threading of all jobs
by setting the global job limit to be just ONE greater than the number of
permanent jobs.
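Concretely (numbers assumed for illustration): with ten permanent jobs occupying the box, the global limit that still forces single threading would be

```
:LIMIT 11
```

ten slots for the permanent jobs plus one, so only one other job can ever be executing at a time, regardless of what the corrupted per-queue counts claim.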
