HP3000-L Archives

November 2004, Week 4

HP3000-L@RAVEN.UTC.EDU

From: Roy Brown <[log in to unmask]>
Date: Tue, 23 Nov 2004 10:48:32 +0000
In message <[log in to unmask]>, Walter Murray
<[log in to unmask]> writes
> [My apologies if this is a duplicate for some readers.  I had trouble last
>week with some of my postings not making the jump through the gateway to
>3000-L.]
>
>When I learned IMAGE yea many years ago, I got the notion that the right way
>to call DBGET and related procedures was with an explicit list parameter
>specifying the particular items of interest.  If I was concerned with the
>overhead of processing such a list, I could establish a "current list",
>typically by doing a directed read to record 0 (which I knew would return
>condition 12) and using "*;" in subsequent calls.  The theory was that, if
>there were structural changes to the database, such as new items added to
>the dataset, it would not be necessary to change and recompile any programs
>that did not use the new items.

I guess you only wrote single, atomic, stand-alone programs, or very
close-knit families, then?

If you consider a program which calls many subprograms, then the current
list gets established by the first one that operates on a given dataset
(hence the dummy read to ensure an explicit list is established), and
the rest can just use '*'.

But only if they agree on what '*' is; so your whole suite has to be in
sync on this. In any sufficiently general application where you extend
any dataset, the chances are that at least one of your programs will
need to use one or more of the extended fields.

In this case, rather than keeping all sorts of partial lists of the
dataset contents, dating from whenever in the past each piece of
development was done, it makes a great deal of sense to keep one single
up-to-date definition of the complete dataset in a copy library, as you
say, and have every program use that as its buffer area for that
dataset.

>In practice, however, it seems as though everybody just uses "@;" all the
>time.

'All the time' certainly isn't efficient. But if you use '@' for that
first dummy read instead of an explicit list, then after that you can
use '*' everywhere, for the continued efficiency which you correctly
state comes from this.

And since every program will be using the new copy library definition
[1], they will all, automatically and safely, agree on what '*' is.
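
In COBOL terms, the technique looks roughly like this (the base, set,
and buffer names are invented for illustration, and DBOPEN is assumed
to have been called already; the parameter order is the standard
TurboIMAGE DBGET calling sequence):

```cobol
       WORKING-STORAGE SECTION.
      * Names below are invented for illustration.
       01  DB-BASE            PIC X(20) VALUE "  ORDERS".
       01  DB-SET             PIC X(20) VALUE "ORDER-LINES;".
       01  LIST-ALL           PIC X(2)  VALUE "@;".
       01  LIST-SAME          PIC X(2)  VALUE "*;".
       01  DB-MODE            PIC S9(4) COMP.
       01  DB-RECNUM          PIC S9(9) COMP.
       01  DB-STATUS-AREA.
           05  DB-COND-WORD   PIC S9(4) COMP.
           05  FILLER         PIC X(18).
      * Buffer layout COPYed from the one central copy library member,
      * so every program in the suite agrees on the full-record layout.
       01  ORDER-LINES-REC    PIC X(256).

       PROCEDURE DIVISION.
      * Dummy directed read to record 0 with "@;": establishes the
      * current list as "every item in the set".  Condition word 12
      * is expected and ignored - the read is only for its side effect.
           MOVE 4 TO DB-MODE
           MOVE 0 TO DB-RECNUM
           CALL "DBGET" USING DB-BASE, DB-SET, DB-MODE,
                              DB-STATUS-AREA, LIST-ALL,
                              ORDER-LINES-REC, DB-RECNUM

      * Every subsequent read reuses that list via "*;", avoiding the
      * per-call cost of re-processing an item list.
           MOVE 2 TO DB-MODE
           CALL "DBGET" USING DB-BASE, DB-SET, DB-MODE,
                              DB-STATUS-AREA, LIST-SAME,
                              ORDER-LINES-REC, DB-RECNUM
```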

>The buffer layout gets put into a COPY library.  If items are added
>or modified, you have to track down every program that uses that dataset
>and, at a minimum, recompile it.  If you miss one, mysterious things happen,
>as when the program's buffer becomes too short for the newly enlarged
>dataset layout.

[1] We don't 'track down every program'; we just compile the whole app.
Today's HP3000s compile whole suites in minutes, with maybe one flick of
a drive light.

>Am I correct in my belief, based on admittedly limited observation, that
>practically everybody always uses an "@;" list?
@ first, * to follow is best.

> If so, is there any good reason for this?
For all @s, no. For an initial @, yes. See above.

>  Is "@;" faster than "*;"?  If so, why?
No, it's slower, as you might expect. 1 @ good, many @s bad.

>  And is it enough faster to justify the risk and inconvenience of
>having to recompile many programs whenever a minor structural database
>change is made?  Or am I missing something more significant?

I want to stand this right on its head.

Firstly 'what risk'? 'what inconvenience'?

The inconvenience of having to construct a job to compile all your
programs, maybe? I keep one that dynamically reads my source group, and
submits each source program it finds, in turn, to my 'COBCOMP' command
file. Then I parse the Stdlist with QEdit (or at a pinch, QUAD), looking
for ERROR(s).
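
On a POSIX system the same idea can be sketched like this (everything
here is a stand-in for illustration: 'cobcomp' is a stub playing the
part of the COBCOMP command file, and the source group is simulated
with two dummy files; on a real HP3000 this would be an MPE job, with
QEdit scanning the stdlist):

```shell
# Simulate the source group with two dummy COBOL sources.
mkdir -p source
printf 'IDENTIFICATION DIVISION.\n' > source/prog1.cbl
printf 'IDENTIFICATION DIVISION.\n' > source/prog2.cbl

# Stand-in stub for the COBCOMP command file: a real wrapper would
# invoke the COBOL compiler and emit its listing here.
cobcomp() {
    echo "COMPILING $1 ... 0 ERRORS"
}

# "Dynamically read the source group" and compile each source in turn,
# accumulating one combined listing.
: > compile.log
for src in source/*.cbl; do
    cobcomp "$src" >> compile.log 2>&1
done

# Scan the combined listing for trouble, as the author does with QEdit.
grep -v ' 0 ERRORS' compile.log || echo "clean compile"
```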

Secondly, let's follow two programmers, @-man and his rival X-man
(short for Explicit Listing Man), when a minor database change is
requested:

@-man:

"Yeah, OK, on the Dev system, I'll change one Copy Library entry (maybe
more if I have a mapping of the dataset for a flat file, an internal
sort, or what-have -you), and recompile the system.

Next, I'll grep the source group for the sources that reference those
copy library entries, and make sure that MOVE CORR is still doing its
job for us.
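
(The MOVE CORR safety net works because COBOL's MOVE CORRESPONDING
copies only the subordinate fields whose names match in both groups;
record and field names below are invented for illustration:)

```cobol
      * Dataset buffer, from the central copy library, after extension.
       01  ORDER-LINES-REC.
           05  OL-ORDER-NO    PIC X(8).
           05  OL-ITEM-NO     PIC X(8).
           05  OL-QTY         PIC S9(7)     COMP-3.
           05  OL-DISCOUNT    PIC S9(3)V99  COMP-3.  *> newly added item
      * Older flat-file extract record, not yet extended.
       01  EXTRACT-REC.
           05  OL-ORDER-NO    PIC X(8).
           05  OL-ITEM-NO     PIC X(8).
           05  OL-QTY         PIC S9(7)     COMP-3.

      * Copies only the matching fields; the new OL-DISCOUNT is simply
      * ignored, so this statement survives the dataset extension with
      * no source change - just a recompile.
           MOVE CORR ORDER-LINES-REC TO EXTRACT-REC
```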

Then I'll look at those programs that will actually utilise these new
fields, and add any logic needed.

Finally, I'll recompile the whole Dev system again.

Good... now I can test... Hmmm, that's odd... oh silly me, I got one of
the new fields in the wrong place in the dataset definition... OK, no
sweat, I'll just fix the one central copy library entry and put the
suite compile job on again..."

X-man:

"Uh, well, there's maybe four or five descriptions of that dataset out
in the code there. So let's see which programs need changing to
specifically use these new fields, and let's see which of the four or
five descriptions each one uses. And let's hand-change each individual
description in each affected one to what we need. Oh, hey, now we've got
five or six differing descriptions. Oh, well...

And what's that you say? Some of these changes are going to require that
some of these programs now use existing fields in those datasets that
they didn't need to use before, and so don't reference? Well, OK, we'll
add those in as well...

.....and resync that flat file definition for the output of some of that
dataset... and change those internal sorts in the two or three programs
that use them.

All this while I'm adding the logic needed... hey, this is in a
subprogram; better make sure the calling program which establishes my
list is establishing what I think I'm getting...

OK, here's my time saving at last, I only have to compile eight out of
the thirty-four programs in the suite - oops, nine with that calling
program... but I have to put these on by hand, as I haven't got a script
(Or maybe I have, but I have to tweak it by hand to reference just the
nine affected programs).

Good... now I can test... Hmmm, that's odd... oh silly me, I got one of
the new fields in the wrong place in the dataset definition in one of
the programs... better check the other eight individually..."

Which superhero would you want writing your code? :-)

--
Roy Brown        'Have nothing in your houses that you do not know to be
Kelmscott Ltd     useful, or believe to be beautiful'  William Morris

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
