HP3000-L Archives

October 1997, Week 2

HP3000-L@RAVEN.UTC.EDU

Date: Fri, 10 Oct 1997 19:59:33 PST8
Alfredo asked for ideas, so I'm contributing a description of what we have in
place.


Many years ago we would take turns going through entire stacks of $STDLISTs
each morning looking for errors, a task that consumed more than an hour every
day.  After performing it for a couple of days I thought that there *must* be
an easier way.  I proceeded to design a process, using 'spook', 'fcopy', two
programs that I wrote, and an editor 'use file', that would 'scan' the
$STDLISTs for a specific account and produce a single report highlighting any
jobs that contained errors.  (The whole project took one week to develop.)

Editor and the 'use file' allowed programmers to 'install' and customize the
process.  Installation consisted of generating a 'custom' stream file, and the
'customization' included execution frequency, retention period, inclusion of
an entire account's $STDLISTs or only those created by specific users, report
types to generate, operator notification, etc.
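To make that list of options concrete, here is a minimal sketch of what such per-installation defaults and overrides could look like.  The parameter names are invented for illustration; the real process used an editor 'use file', and its actual keywords are not reproduced here.

```python
# Invented parameter names, standing in for the real use-file customization.
DEFAULT_OPTIONS = {
    "frequency": "DAILY",              # how often the scan job runs
    "retention_days": 7,               # how long combined archives are kept
    "users": "@",                      # all users in the account, or a list
    "report_types": ["ERRORS", "SUMMARY"],
    "notify_operator": True,
}

def make_options(**overrides):
    """Merge one site's overrides with the shared defaults."""
    opts = dict(DEFAULT_OPTIONS)
    opts.update(overrides)
    return opts
```

Each installation would then carry only the settings it changed, with everything else falling back to the defaults.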

Spook was used to generate a listing of the qualifying spoolfiles.

One of the custom programs parses the Spook output, generating additional
Spook 'copy' commands to combine the qualifying spoolfiles while also
checking that enough space exists to hold each of them.  It also creates a
command file for deleting the processed $STDLISTs.
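A modern sketch of that first program's logic might look like the following.  The listing layout, field positions, and the 'purge' command are invented stand-ins, since the real program worked from SPOOK's own output format:

```python
# Hypothetical sketch: each listing line is assumed to hold a spoolfile id,
# a job name, and a size in sectors.  The real SPOOK listing differs.
def build_copy_commands(listing_lines, free_sectors):
    """Emit 'copy' commands for spoolfiles that fit, plus a delete list."""
    copies, purges, used = [], [], 0
    for line in listing_lines:
        fields = line.split()
        if len(fields) < 3:
            continue                       # skip headers / blank lines
        spool_id, job_name, sectors = fields[0], fields[1], int(fields[2])
        if used + sectors > free_sectors:  # not enough room for this one
            continue
        used += sectors
        copies.append(f"copy {spool_id}")  # command to append to the archive
        purges.append(f"purge {spool_id}") # hypothetical delete command
    return copies, purges
```

The space check mirrors the original's: a spoolfile is only combined when the archive still has room for it.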

The second custom program then 'scans' this combined file looking for errors,
warnings, file information displays, abort messages, jobs without an 'EOJ',
sections designated as 'must show', etc.  For each $STDLIST it lists, at a
minimum, the job card, when it was streamed and by whom, the CPU and elapsed
times, and the record number within the archive where the entire $STDLIST can
be found.  For jobs with errors it lists the context of the error, with an
'error tag' at the right margin.
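The scanner's logic, reduced to its essentials, could be sketched like this.  The '!JOB' and 'END OF JOB' markers are simplified stand-ins for the actual $STDLIST banner lines, and the patterns are placeholders for the real list of conditions:

```python
ERROR_PATTERNS = ("ERROR", "WARN", "ABORT")   # simplified stand-ins

def scan_archive(lines, width=79):
    """Walk the combined archive, one report per job found in it."""
    reports, current = [], None
    for recno, line in enumerate(lines):
        if line.startswith("!JOB"):
            if current is not None and not current["eoj"]:
                current["errors"].append("*** no EOJ ***")
                reports.append(current)
            # remember the record number so the full listing can be located
            current = {"jobcard": line, "record": recno,
                       "eoj": False, "errors": []}
        elif current is not None:
            if "END OF JOB" in line:
                current["eoj"] = True
                reports.append(current)
                current = None
            elif any(p in line.upper() for p in ERROR_PATTERNS):
                tag = "<<ERROR"               # error tag at the right margin
                body = line[: width - len(tag)].ljust(width - len(tag))
                current["errors"].append(body + tag)
    if current is not None:                   # archive ended mid-job
        current["errors"].append("*** no EOJ ***")
        reports.append(current)
    return reports
```

A report line thus carries both the error context and, via the record number, a pointer back into the archive where the whole $STDLIST lives.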

Needless to say, we had a very happy camp after the process was made
available.  While over the years the process has been enhanced to handle the
native mode spooler and other MPE/iX features, the basic structure is still
the same, and in fact it still works on MPE/V machines.  Most recently I
added the capability to mail the report containing jobs that erred to the
responsible programmers using 'JTMAIL' (thanks Jim).

What I'm trying to show is that there are ways, without marrying some
third-party vendor or spending a lot of money, to perform those mundane tasks
that still must be performed.  If we were to place our Image capacity
management on autopilot, we would designate that section of the job as 'must
show' and would be notified whenever a capacity change was performed.  Or we
would key on whatever bells and whistles Alfredo sets off when a dataset goes
Jumbo, and again be notified.

Some months ago I made available to the list a command file that uses ADAGER
to automatically monitor dataset capacities and launch a job to change them.
Our version has an additional *required* parameter: the programmer's payroll
number, to which the $STDLIST of the job performing the capacity change will
be mailed.

At our site our main database is too large to put on autopilot.  We need to
schedule capacity changes, so we have developed another process that runs
multiple times each week and mails back to the responsible party a list of
all datasets whose capacities exceed a specified threshold.
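The threshold test itself is simple enough to sketch.  The dataset tuples below are hypothetical, standing in for figures that would actually come from ADAGER's capacity report:

```python
def over_threshold(datasets, threshold_pct=85.0):
    """Return (name, fill%) for datasets past threshold% of capacity.

    Each dataset is a (name, entry_count, capacity) tuple; the numbers
    here are illustrative, not from any real database.
    """
    flagged = []
    for name, entries, capacity in datasets:
        pct = 100.0 * entries / capacity
        if pct > threshold_pct:
            flagged.append((name, round(pct, 1)))
    return flagged
```

Mailing that flagged list to the responsible party each run gives the scheduling lead time the autopilot approach can't.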

Regards

Paul H. Christidis
