HP3000-L Archives

October 1997, Week 2

HP3000-L@RAVEN.UTC.EDU

Subject:
From: Jerry Fochtman <[log in to unmask]>
Reply-To: Jerry Fochtman <[log in to unmask]>
Date: Wed, 8 Oct 1997 12:00:50 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (58 lines)
At 08:28 AM 10/8/97 -0700, Lee Gunter wrote:
>Charlie is right: I would agree that some form of notification is
>desirable; however, this speaks more to the pitfalls of automation.
>It's too easy to turn over mundane tasks to an automated process and
>think you (the collective 'you', that is    ;-) never have to worry
>about them again, so $STDLISTs and backup listings go unread.
>Unfortunately, that hope seldom comes to pass, and you're bitten.
>
>I think it's important, especially because of cases like this, to monitor
>database capacities at least weekly to gauge trends and prepare for the
>unexpected.  Additionally, most backup software of which I'm aware will
>store HFS files if the fileset being stored is specified with [log in to unmask]  It's
>when we specify particular files or filesets that this kind of trouble
>occurs.
>
>Adager, Bradmark, HP, Orbit, Unison, et al:   any comments re: Charlie's
>request?

<plug-alert - product information>

During a capacity change, DBGENERAL requires the user to explicitly request
that the dataset be converted to jumbo.  This is similar to DBSCHEMA
processing, in that to build a jumbo base one must include $CONTROL JUMBO
in the schema file.
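For illustration, a minimal DBSCHEMA source with the jumbo control record
might look something like this -- a sketch only, with hypothetical database,
item, and set names, not a tested schema:

```
$CONTROL JUMBO          << allow datasets to be built as jumbo >>
BEGIN DATA BASE DEMO;

ITEMS:
   ORDER-NO,   X8;

SETS:
   NAME:      ORDERS, DETAIL;
   ENTRY:     ORDER-NO;
   CAPACITY:  5000000;

END.
```

The point is simply that, as with DBGENERAL's explicit conversion request,
jumbo behavior is opt-in at schema-definition time rather than something
DBSCHEMA infers from the capacity alone.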

DBG does not require the target size of the dataset to be > 4GB before
allowing one to convert it to a jumbo set.  Also, the user can build the
chunk files for a jumbo set with varying sizes.  We provided this
capability so that users with small development systems can actually test
applications, reporting tools, and procedures against jumbo sets without
requiring large amounts of disc storage space.  However, one should not
blindly do this in production without a full understanding of the potential
impact on other operations and software, namely backup/restore.

</end-plug>

By the way, the issue of forgetting to check that backup processes handle
HFS files will probably rear its ugly head more frequently once b-tree
indexes come into use, since more sites, in particular smaller ones, are
likely to enable b-trees than will ever need jumbo sets.... :-)



/jf
                              _\\///_
                             (' o-o ')
___________________________ooOo_( )_OOoo____________________________________


         This day in 1871 - The Chicago Fire began and burned for
                            about 30 hours.

___________________________________Oooo_____________________________________
                            oooO  (    )
                           (    )  )  /
                            \  (   (_/
                             \_)
