HP3000-L Archives

July 1999, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Christian Lheureux <[log in to unmask]>
Date: Tue, 27 Jul 1999 11:17:53 +0200
Some clarifications about disk space usage in MPE/iX.

> VOLUTIL will not allow you to configure permanent space on LDEV 1 at
> 100%. This is a convention dating back forever (at least to the early
> days of MPE/XL)

True. It dates back to the genesis of MPE/XL. It did not exist on MPE
V/E, because virtual memory was configured once and for all back then.

> to assure that at least 25% of LDEV 1 is available for bootable
> transient space.

Not 100 % true. As I said in my previous post, it IS to make sure LDEV 1
can always boot (excellent point, Lee), but the reserve is 360,000 sectors
(rounded), which translated into 23 % of total disk space on the good ole'
7935 (404 megabytes), and translates into 5 % on the 2 GByte drive I have
on LDEV 1 here on my demo box.

> As disk sizes grew, and because LDEV1 is now supported up
> to 4 GB in size, you can probably make the permanent space allocation
> somewhat higher than 75% on larger drives, but I don't see this often
> recommended.

Correct. You can go up until you reach the non-configurable 360 ksec
limit, but bear in mind that 360 ksec, while OK for startup, is hardly
sufficient when the system runs a normal user load, except under very
light activity. If you forecast, say, 10 megs per session (an estimate),
360 ksec translates into about 9 users.
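Both figures, the reserve-as-percentage numbers earlier and the 9-user
estimate here, follow from the same arithmetic. A quick sketch, assuming
the classic 256-byte MPE disk sector (the sector size is my assumption,
not stated above):

```python
# Back-of-the-envelope check of the figures above.
# Assumes the classic 256-byte MPE disk sector.
SECTOR_BYTES = 256
RESERVE_SECTORS = 360_000                                  # rounded LDEV 1 reserve
reserve_mb = RESERVE_SECTORS * SECTOR_BYTES / 1_000_000    # ~92 MB

# Reserve as a fraction of the whole disk:
for disk_mb in (404, 2_000):                 # HP 7935 vs. a 2 GB LDEV 1
    print(f"{disk_mb} MB disk: {100 * reserve_mb / disk_mb:.0f} % reserved")

# How far 360 ksectors stretch at ~10 MB of perm space per session:
print(f"about {int(reserve_mb / 10)} sessions")
```

Running this prints 23 % for the 404 MB drive, 5 % for the 2 GB drive,
and about 9 sessions, matching the estimates above.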

>You may certainly increase the allocation percentages on your
> new device to 100/100, if you wish, and you can do it 'on the fly'.

Correct for disks other than LDEV 1.

> The original push release of 5.0 (IIRC) introduced a change (bug?) to the
> algorithms used to keep the disks balanced, if fewer than five (5)
> volumes were in use in the MPEXL_SYSTEM_VOLUME_SET.  This caused more
> permanent file extents to reside on LDEV1 within its defined allocation
> limits than would normally have occurred.  A later patch fixed this
> problem, as I remember.

OK, let me try and clarify this point a bit. Sorry, but it's going to be
somewhat longish.

First, I do not remember there being a 5-volume limit for this feature.

What happened prior to 5.0 is that MPE maintained a list of available
(mounted) volumes per volume set, sorted by available space, i.e. with the
volume with the most free space on top of the list and the volume with the
least available space at the bottom. Each time a request for allocation was
issued against the volume set, this sorted list made sure that the volume
with the most available disk space was searched first for an available
extent, or "hole". The layout of this list was the same for system and user
volume sets. This was originally designed (dates back to the very first
MPE/XL release, I think) to ensure that disk space occupation was
comparable for all volumes belonging to the same volume set.
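The pre-5.0 behavior described above boils down to keeping the volume
list sorted by free space and searching it top-down. A minimal sketch;
the function names and sector counts are illustrative, not actual MPE
internals:

```python
# Sketch of the pre-5.0 allocation order: the volumes of a set are kept
# sorted by available space, most-free first, and each allocation request
# searches that list from the top. Illustrative only, not MPE internals.

def allocation_order(volumes):
    """Return (ldev, free_sectors) pairs, most free space first."""
    return sorted(volumes.items(), key=lambda v: v[1], reverse=True)

def allocate(volumes, sectors):
    """Grant the extent from the first volume that can hold it."""
    for ldev, free in allocation_order(volumes):
        if free >= sectors:
            volumes[ldev] -= sectors
            return ldev
    return None  # no volume in the set can hold the extent

vols = {1: 500_000, 2: 900_000, 3: 700_000}
print(allocate(vols, 100_000))  # → 2, the volume with the most free space
```

Because the emptiest volume is always tried first, space usage tends to
even out across the volumes of the set, which is the balancing effect
described above.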

What was changed in 5.0 was the layout of that list for the system volume
set, MPEXL_SYSTEM_VOLUME_SET. Ldev 1 was always maintained at the BOTTOM of
the list. So ldev 1 was always queried last when a request for allocation
was issued against the system domain. This made sure that ldev 1 kept the
lowest disk space usage of the system volume set, thus implicitly ensuring
that sufficient free space was available to boot the system under any
circumstances. Of course, this could not always be 100 % guaranteed: for
instance, if you fill ALL non-ldev 1 system volumes up to 100 %, then disk
space allocation requests are going to be issued against ldev 1 anyway,
filling up that volume too, till limits (77 %, or 95 %, or whatever has
been configured) are hit. But it was a good measure to make sure ldev 1
kept some free space most of the time.
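The 5.0 change to the system volume set's list can be sketched as a
variant of that ordering in which ldev 1 is pinned to the bottom
regardless of its free space. Again, names are illustrative, not actual
MPE internals:

```python
# Sketch of the 5.0 ordering for MPEXL_SYSTEM_VOLUME_SET: LDEV 1 is
# pinned to the bottom of the list, so it is searched last even when it
# has the most free space. Illustrative only, not MPE internals.

def allocation_order_5_0(volumes):
    """Return ldevs most-free first, with ldev 1 forced to the end."""
    others = sorted((ldev for ldev in volumes if ldev != 1),
                    key=lambda ldev: volumes[ldev], reverse=True)
    return others + ([1] if 1 in volumes else [])

vols = {1: 900_000, 2: 500_000, 3: 700_000}
print(allocation_order_5_0(vols))  # → [3, 2, 1]: ldev 1 is searched last
```

Even though ldev 1 has the most free space in this example, it only
receives extents once every other system volume has been tried, which is
what keeps its usage the lowest in the set.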

Another consideration is that significant free space is also needed, in
addition to the 360 ksectors mentioned above, when you boot from tape.
What happens when an HP3000 is booted from tape is that system files are
copied from tape onto temporary files (which are basically perm space);
then, when all files have been safely copied from tape to disk, the
original system files on disk are purged and the copies are renamed to
their original names. This two-step process ensures that, in the event a
tape boot gets interrupted, the system can still be brought up. But it
requires significant perm space to host the temporary copies of system
files. I remember an article in the 5.0 Communicator that described a
60,000 sector requirement, but I think it was for FOS only. When I did
system updates, I always made sure I had 200,000 sectors available.

Then, there is fragmentation. Some of this space has to be contiguous
(system libraries are big files and their extents cannot be further
fragmented during restore from the SLT). So what I recommended was to
have 200,000 CONTIGUOUS sectors, even if that looks like an extremely
conservative option. I never had any disk space problems during updates
whenever I got a chance to apply my own recommendation. But I agree that
systems can be updated and brought up with significantly less available
disk space on ldev 1. This is just another reason why 5.0 implements a
scheme that puts less pressure on ldev 1 disk space usage.
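The two-step tape-boot update above is an instance of the classic
safe-replace pattern: copy under a temporary name first, and rename over
the original only once the copy is complete. A generic sketch of the
pattern (file names are illustrative; this is not the actual MPE
mechanism):

```python
# Generic safe-replace pattern, as used conceptually by the tape boot:
# a full copy is made under a temporary name, and only then renamed over
# the original. An interruption during the copy leaves the original
# file untouched, so the system can still be brought up from it.
import os
import shutil

def safe_replace(source, target):
    tmp = target + ".new"
    shutil.copyfile(source, tmp)   # step 1: full copy under a temp name
    os.replace(tmp, target)        # step 2: rename over the original
```

The cost of this safety is exactly what the paragraph above describes:
until the rename, both the original and the copy exist on disk, so you
need free space for the temporary copies of every file being replaced.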

I do not remember this "feature" ever being patched. What I remember is
seeing a few SRs to make this behavior configurable, and issuing a few
similar "me-too" SRs on behalf of my customers. I also remember writing a
quite detailed paper about this feature at about the time C.50.00 was
released, but once again, I wrote it in French. I can probably get a copy
of that paper from my former colleagues at the French Support Center.

> User volumesets do not support transient space, so any allocation above
> 0% for this domain, while set and reported by VOLUTIL, is ignored.

True enough. Since user volume sets are supposed to be mounted and
unmounted, it does not make sense to allow transient space on these
volumes. Imagine the system overhead if you had to copy hundreds of
megabytes (and update all the memory management and secondary storage
related system tables!) each time you wanted to "close" (i.e. logically
unmount) a user volume set!

Hope this clarifies ldev 1 disk space management a bit. Perhaps someone
still with HP (I'm no longer a support engineer) can fill in the gray
areas or correct anything.

Christian Lheureux
Head of Systems and Networks Department
APPIC R.H.
An HPConnect Systems Integrator
An HP3000 Expert
