HP3000-L Archives

October 2000, Week 1

HP3000-L@RAVEN.UTC.EDU

Subject:
From: "Stigers, Greg [And]" <[log in to unmask]>
Reply To: Stigers, Greg [And]
Date: Thu, 5 Oct 2000 12:58:35 -0400
Content-Type: text/plain
Parts/Attachments: text/plain (73 lines)
There is something about this reasoning that nags at me, and I cannot put my
finger on it. But perhaps better minds can respond.

Do your applications use DBX calls? Why or why not?

Do the various benchmarks require that there be some kind of transaction
rollback, or not? TPC-C seems to require it through its ACID (Atomicity,
Consistency, Isolation, and Durability) properties.
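
A minimal sketch of the atomicity and rollback the TPC-C ACID properties call for, using Python's built-in sqlite3 module as a stand-in for any transactional DBMS (the table and values are invented for illustration). A failure partway through a transaction rolls the database back to its pre-transaction state:

```python
# Atomicity demo: a simulated crash mid-transaction, followed by rollback.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('a', 100), ('b', 100)")
conn.commit()

try:
    conn.execute("UPDATE account SET balance = balance - 50 WHERE name = 'a'")
    raise RuntimeError("client crashed mid-transaction")  # simulated failure
    conn.execute("UPDATE account SET balance = balance + 50 WHERE name = 'b'")
    conn.commit()
except RuntimeError:
    conn.rollback()  # the half-done debit disappears

balances = dict(conn.execute("SELECT name, balance FROM account"))
print(balances)  # {'a': 100, 'b': 100} -- unchanged
```

Without the rollback, account 'a' would have lost 50 with no matching credit to 'b', which is exactly the half-done state the benchmarks' durability and atomicity tests are meant to rule out.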

Or, is this no more germane than requiring exactly a certain kind of
security in the database or application?

What about other RDBMSs? Are there reasons why they might need this, and we
do not? I am not aware of a similar structure for xSAM (KSAM, ISAM, VSAM,
none of which are RDBMSs). IIRC, Oracle has, for instance, four (!) log
files, and DBAs are taught that good backup / recovery includes dumping the
data out of Oracle as yet another way of recovering data. At what point are
we wearing belts and suspenders, and might this be an Oracle-only issue?
(Remember when PC hard drives were extremely fragile, I believe in the 10 MB
days?). Cortlandt wrote:
> Much of the competition in the
> database world uses a similar level of protection.
from which I infer that at least one competitor in the database world does
not, right?

I remember asking someone (who I thought was being overly cautious in their
approach to a problem) whether they ever read a record back into memory from
disk, to compare it with what they had just written out, to ensure that what
was written was what was intended. This individual could remember a time
when, and a system on which, this was done.
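
For what it's worth, that write-then-verify habit is easy to sketch. This is a minimal illustration in Python (the record contents and file are invented for the example): write a record, force it through the OS cache, read it back, and compare byte-for-byte.

```python
# Write/read-back verification: confirm what landed on disk is what we sent.
import os
import tempfile

record = b"customer=1042;balance=250"

fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(record)
        f.flush()
        os.fsync(f.fileno())  # push the write through the OS cache to disk
    with open(path, "rb") as f:
        readback = f.read()
    assert readback == record, "read-back does not match what was written"
finally:
    os.remove(path)
```

On modern hardware the extra read roughly doubles the I/O cost of every write, which is why the practice faded once drives and controllers became trustworthy.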

Now, I can understand an argument that reasons that this is good practice
these days, that clients can crash mid-transaction, that programmers expect
this, and so on. OTOH, if you are web-based and stateless, the client is
never really mid-transaction per se, is it?
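
For readers who have not used the DBX intrinsics, here is a toy sketch of the bracketing pattern they provide. This is illustrative Python only, not the actual TurboIMAGE intrinsics or their signatures; the LogicalTransaction class and the in-memory "database" are invented for the example. Everything between the begin and end marks is one logical transaction, and a failure anywhere inside it undoes the whole group:

```python
# Toy sketch of logical-transaction bracketing (DBXBEGIN / DBXEND style).
class LogicalTransaction:
    def __init__(self, db):
        self.db = db
        self.snapshot = None

    def __enter__(self):          # plays the role of the begin call
        self.snapshot = dict(self.db)
        return self.db

    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:  # failure: undo the whole group
            self.db.clear()
            self.db.update(self.snapshot)
            return True           # suppress the error after rolling back
        return False              # success: the group stands

db = {"a": 100, "b": 100}
with LogicalTransaction(db) as d:
    d["a"] -= 50
    raise RuntimeError("interrupted mid-transaction")
    d["b"] += 50  # never reached

print(db)  # {'a': 100, 'b': 100} -- the partial debit was undone
```

The real intrinsics of course do this against the transaction manager's log rather than an in-memory snapshot, but the shape of the guarantee is the same.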

Greg Stigers
http://www.cgiusa.com

-----Original Message-----
From: Cortlandt Wilson [mailto:[log in to unmask]]
Sent: Thursday, October 05, 2000 11:02 AM
To: [log in to unmask]
Subject: Fair comparisons of database performance

To pick up on an old thread . . . I think this illustrates why it
doesn't make sense to compare data access with minimal recovery and
integrity protection with more protected data access. For data
access in an IMAGE database context, the best *information* protection
offered comes from using the dynamic rollback capability (DBXBEGIN /
DBXEND on logical transactions). Much of the competition in the
database world uses a similar level of protection.

To reduce this line of thinking to the extreme, if you want really
fast database performance, you should ask HP to bypass the XM and use
the much faster "plain" record writing technique instead. Let's start
a campaign!

The Moral of the Story. Any database performance comparison that
doesn't specify the levels of recovery and integrity protection in use
by both database management systems should be considered highly
suspect. And that principle applies to comparisons between the HP
e3000 and other platforms. IMO anything else is unprofessional
(disseminating bad information), which is why I become concerned when I
hear it done.

Cortlandt Wilson
Cortlandt Software
Mountain View, CA
(650) 966-8555
http://www.cortsoft.com    (MANMAN Resources Guide)
