HP3000-L Archives

February 1996, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Larry Byler <[log in to unmask]>
Reply-To: Larry Byler <[log in to unmask]>
Date: Sat, 3 Feb 1996 06:22:58 GMT
Content-Type: text/plain
Parts/Attachments: text/plain (62 lines)

Jeff Flowers ([log in to unmask]) wrote:
: I think that Larry (Spoolers-R-Us) Byler explained the issues with having
: a very large number of linked output spoolfiles in a post here a while
: ago; basically you'll take a performance hit on some spooler commands and
: on START RECOVERY if the number of spoolfiles is extremely large.
 
The hit is on some *forms* of the SPOOLF command.  The numbers vary, but should
not affect most normal users (who can only manage their own spool files).
Users who can ALTER or DELETE all ~10000 (or, with the patch, ~50000) will
take a monster hit.  I don't have my numbers handy, but I believe it took tens
of minutes to delete 9000 spool files on a minimum machine (930, 24 Mb, 7935
disc).  Faster machines and more memory (less directory page faulting) might
improve this.  The good news is that the only process saddled with this
delay is the hapless soul doing the delete.  Once the spool files are marked
DELPND, the spool file directory (SPFDIR) is available to other processes.
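
As a rough back-of-envelope (the 20-minute figure below is an assumption
for illustration only, since the timing above is quoted from memory), the
per-file cost works out to something like:

    # Assumed: ~20 minutes to delete 9000 spool files (illustrative only)
    files = 9000
    seconds = 20 * 60
    print("~%.0f ms per spool file" % (seconds * 1000.0 / files))  # ~133 ms

Even a modest per-entry directory cost adds up to minutes once a wildcard
SPOOLF touches thousands of files.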
 
There is no performance problem on START [NO]RECOVERY.  The SPFDIR is rebuilt
by a separate process running in parallel with Progen.  This process can
continue to run as the system comes fully up and is available to users.  The
only aberration is that spool files whose corresponding SPFDIR entry has not
yet been built will be unavailable to spooler processes and to the LISTSPF
and SPOOLF commands.  Also, the wildcard form of the SPOOLF command
(SPOOLF O@...) is disabled while the SPFDIR is being rebuilt.  But the system
and the SPFDIR are available to users while this takes place.  Still, the
more spool files that must be recovered at system start, the longer it will
take before they are all fully available.
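
For anyone who wants the shape of this in code, here is a minimal sketch of
the pattern (plain Python; the names, states, and locking are assumptions
for illustration, not the actual MPE/iX internals).  Entries become usable
one at a time as the rebuild proceeds, while the wildcard path stays
disabled until the whole directory is back:

    import threading, time

    class SpfDir:
        """Toy model of the parallel SPFDIR rebuild described above."""
        def __init__(self):
            self.entries = {}
            self.rebuilding = True
            self.lock = threading.Lock()

        def rebuild(self, spoolfile_ids):
            # Runs alongside system startup; each entry becomes
            # visible to lookups as soon as it is added.
            for sid in spoolfile_ids:
                with self.lock:
                    self.entries[sid] = "READY"
                time.sleep(0.001)          # stand-in for per-file I/O
            self.rebuilding = False

        def lookup(self, sid):
            # LISTSPF/SPOOLF on one file: works once its entry exists.
            with self.lock:
                return self.entries.get(sid)

        def wildcard(self):
            # SPOOLF O@...: refused until the rebuild completes.
            if self.rebuilding:
                raise RuntimeError("wildcard disabled during SPFDIR rebuild")
            with self.lock:
                return list(self.entries)

    d = SpfDir()
    threading.Thread(target=d.rebuild, args=(range(10000),),
                     daemon=True).start()

The point is simply that individual lookups degrade gracefully during the
rebuild, while a wildcard operation needs a complete directory to be safe.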
 
: These considerations are probably what prompted the choice of 10000 (more or
: less) as the limit for linked output spoolfiles.  Considering that the prior
: limit (before NMS) was about 600-800, it must have seemed like enough at the
: time.
 
Actually, only the second sentence is correct -- we felt that over an order of
magnitude improvement would be enough.  We didn't have a clue about
performance or other problems.  After all, if you don't count commands such as
PURGEGROUP and PURGEACCT, this was the first time anyone had to contend
with modifying and deleting such a large number of files at a time.  We
had no idea that it would be so costly in terms of performance.  Also, in
retrospect, we were fairly shortsighted in putting all the spool files in
one group (OUT) of one account (HPSPOOL).  It really skews the directory
namespace and b-tree distribution.  It was only after we got some mileage
with the 10000 spool files that we realized what a lucky choice of limit
we had made.
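
To make the skew concrete, a toy illustration (plain Python; the O-number
file names are a hypothetical stand-in, not the real naming scheme): a
namespace keyed account/group/file puts every spool file under a single
subtree, so one directory node ends up holding essentially all the keys.

    # Toy directory: account -> group -> files (illustrative only).
    directory = {"HPSPOOL": {"OUT": set()}, "SYS": {"PUB": {"CI", "SL"}}}

    for n in range(10000):
        directory["HPSPOOL"]["OUT"].add("O%07d" % n)  # hypothetical names

    for acct, groups in directory.items():
        for grp, files in groups.items():
            print("%s.%s: %d entries" % (grp, acct, len(files)))
    # OUT.HPSPOOL: 10000 entries, while the rest of the namespace stays tiny.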
 
And we continue to marvel at how many customers use SPSAVEd spool files as
on-line archives, thereby exposing themselves to the limit-du-jour much
faster than if they archived another way.  When we designed the SPSAVE
facility, we figured it would be a manual operation -- except for true
archive documents (such as manuals and other reference material), we
thought users would purge their own SPSAVEd files once they had assured
themselves that their hardcopy output was complete and correct.  And
they would not SPSAVE all spool files, only "important" ones.  Instead,
SPSAVE has been co-opted by the system management crowd, which saves
*all* spool files for some number of days (anywhere from four to seven).
With a 10000 spool file limit, this only allows the generation of
between 1400 and 2500 new spool files per day.
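
The arithmetic behind those figures, straight from the numbers above:

    limit = 10000
    for days in (4, 7):
        print("%d-day retention allows about %d new spool files/day"
              % (days, limit // days))
    # 4 days -> 2500/day; 7 days -> 1428/day (roughly 1400)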
 
I hope this insight is useful to someone.
 
-Larry "MPE/iX Spoolers 'R' Us" Byler-
