HP3000-L Archives

April 1999, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From:
Jeff Woods <[log in to unmask]>
Reply To:
Jeff Woods <[log in to unmask]>
Date:
Mon, 12 Apr 1999 13:59:22 -0500
At 4/12/99 01:45 PM -0400, Juan Rojas wrote:
>Neil, Denys:

Well, I'm neither of them, but I'll chime in anyway.

>I am very interested in your discussions on blocking factors.
>If I have a non-Image file with the following characteristics:
>
>FILENAME  CODE  ------------LOGICAL RECORD-----------  ----SPACE----  --DAYS--
>                  SIZE  TYP        EOF      LIMIT R/B  SECTORS #X MX  ACC MOD
>
>PGHCLMPX         2500B  FA         139     105000   1     1360  4 32    3  28
>
>Since the file's blocking factor is one:
>Does this mean that I have one record per disc page of 4096 bytes?
>Am I wasting 1596 bytes per page (4096 - 2500) or, is record number two
>stored in two pages?

MPE/iX (unlike MPE/V) doesn't put any padding between blocks (other than forcing
record sizes to even byte lengths, which in turn means all blocks are an even
number of bytes long too).  So each of your 2500-byte blocks is contiguous with
the next, with no wasted space, and thus most of your blocks span 4K page
boundaries.

Since [2500 = 25*100 = 25*25*4 = 5^4 * 2^2] and [4096 = 2^12], the two sizes
share only a factor of 2^2 = 4, so only every 2^(12-2) = 2^10 = 1024th block
begins on a page boundary.
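To illustrate the arithmetic, here's a quick sketch (the record and page sizes are taken from the listing above; this is just number-crunching, not anything MPE-specific):

```python
from math import gcd

REC, PAGE = 2500, 4096  # record size and disc page size, in bytes

# With no inter-record padding, record k starts at byte offset k*REC.
# It begins on a page boundary exactly when k*REC is a multiple of PAGE,
# i.e. once every lcm(REC, PAGE) / REC records.
period = (REC * PAGE // gcd(REC, PAGE)) // REC
print(period)  # -> 1024

# Count how many records in one full period cross a page boundary.
spanning = sum(1 for k in range(period) if (k * REC) % PAGE + REC > PAGE)
print(spanning, "of", period)  # -> 624 of 1024
```

So about 61% of the records straddle a page boundary, which bears out the "most of your blocks span 4K page boundaries" point above.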

Also, because of the same arithmetic, to force each block to begin on a page
boundary would require a blocking factor of 1024, which exceeds the maximum
blocking factor MPE allows, 255.  However, if you *could* change the blocking
factor to 1024, the records would still lie in exactly the same place.  The
second record would still begin 2500 bytes from the beginning of the page and
span to the next page.  What this means is that the blocking factor for flat
files is essentially irrelevant, except perhaps for a few relatively exotic
circumstances.  Therefore, there's no advantage to changing just the blocking
factor.

On the other hand, there might be some benefit in changing the record size up
or down to something that fits evenly into 4096-byte pages, to conceivably gain
some performance advantage, probably at the cost of disk space.  The
performance difference in most if not all applications is going to be
negligible, assuming the machine isn't suffering from memory pressure, while
the cost in disk space is clear.  And if memory pressure is already high, then
increasing the size of a file which needs to be in that memory will compound
the problem.  In such a case, adding memory, if possible, would be a much
better solution than trying to eke out slivers of improvement by tweaking file
record and block sizes.
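For scale, here's a rough sketch of what the disk-space cost would be on this particular file if each record were padded out to a full page (using the record size and LIMIT from the listing above; purely arithmetic, not measured):

```python
LIMIT = 105000        # record limit from the LISTF output above
REC, PAGE = 2500, 4096

packed = LIMIT * REC          # bytes used with records stored contiguously
one_per_page = LIMIT * PAGE   # bytes used if each record got its own 4K page
extra = one_per_page - packed
print(extra, "extra bytes")   # -> 167580000 extra bytes
```

That's roughly 160 MB more, about 64% additional disk space, to page-align a file whose access pattern may never notice the difference.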
--
Jeff Woods
[log in to unmask]  [PGP key available here via finger]
