HP3000-L Archives

June 2003, Week 4

HP3000-L@RAVEN.UTC.EDU

Subject:            Re: [HP3000-L] HP-PB 100Base-TX
From:               Craig Lalley <[log in to unmask]>
Reply To:           Craig Lalley <[log in to unmask]>
Date:               Mon, 23 Jun 2003 12:12:06 -0700
Content-Type:       text/plain
Parts/Attachments:  text/plain (170 lines)
OK, I forget: how do you check to see if the drivers are on the tape or
loaded on any system?

NMMGR?
On the tape, LISTDIR?

-Craig


--- "Johnson, Tracy" <[log in to unmask]> wrote:
> This sounds like, even though I have my 7.0 upgrade from
> HP, I should check to make sure the 100BT software we paid
> for previously is on the tape for my 959/200?
>
> BT
>
>
> Tracy Johnson
> MSI Schaevitz Sensors
>
> > -----Original Message-----
> > From: Rick Jones [mailto:[log in to unmask]]
> > Sent: Monday, June 23, 2003 2:33 PM
> > To: [log in to unmask]
> > Subject: Re: [HP3000-L] HP-PB 100Base-TX
> >
> >
> >  Jeff Kell <[log in to unmask]> wrote:
> > > I've also heard rumors about a difference in DMA transfer ability
> > > between at least the base 10BT card and the 100BT variant.  (Heard
> > > the 100BT can't.)
> >
> > It's not like the HP-PB 100BT card was a PIO device :) However, it was
> > (iirc) a two-step process.  On outbound, step one was to get the data
> > DMA'd into the mothercard.  Step two was to get the daughtercard to
> > DMA (?) from the mother card.  Reverse it for inbound.  Each step
> > (again iirc) generated an interrupt to the host.
> >
> > rick jones
> >
> > Something from long long ago, on another OS... note that the netperf
> > URL should now be http://www.netperf.org/
> >
> > Date: Fri, 13 Jun 1997 17:30:12 -0700
> > From: Rick Jones <[log in to unmask]>
> >
> > Folks,
> >
> > As part of HP's effort to address <customer>'s concerns with HP-PB
> > 100BT performance, I have set up a pair of E Series systems to
> > investigate performance differences between the HP-PB 100BT card and
> > an HP-PB FDDI card crippled with a 1500 byte MTU.
> >
> > The system(s) used by <customer> for their tests were apparently
> > E25's, though it is not entirely clear whether those are also the
> > systems in "production."  E25's were bottom-of-the-line entry-level
> > systems some years ago, and finding that type of system in the Lab
> > proved quite difficult.  I was able to obtain an E35 and an E55.
> > Into each system I placed a 100BT and an FDDI interface.  The FDDI
> > interfaces were connected via a Cisco concentrator; the 100BT
> > interfaces were connected to a Cisco 104T hub and were operating in
> > half-duplex mode.
> >
> > The OS was 10.20, and the drivers were the "DART 34" versions from an
> > HP-internal depot maintained by the driver project.
> >
> > <customer>'s original tests used FTP.  For these tests I used netperf
> > (http://www.cup.hp.com/netperf/NetperfPage.html) to avoid any issues
> > with disc I/O. 56KB TCP windows were used, matching the default window
> > size used by FTP in 10.20.
> >
> > Here are the results of three separate netperf TCP_STREAM tests, all
> > with the E35 sending and the E55 receiving.  The first is with FDDI
> > unconstrained by the 1500 byte MTU:
> >
> > # ./netperf -H 192.68.2.8 -l 30 -c $LOC_RATE -- -S 56K -s 56K -m 4K
> > TCP STREAM TEST to 192.68.2.8
> > Recv   Send    Send                          Utilization       Service Demand
> > Socket Socket  Message  Elapsed              Send     Recv      Send    Recv
> > Size   Size    Size     Time     Throughput  local    remote    local   remote
> > bytes  bytes   bytes    secs.    10^6bits/s  % I      % U       us/KB   us/KB
> >
> >  57344  57344   4096    30.01        55.82   63.95    -1.00     93.846  -1.000
> >
> > You can see that with the full MTU, it took about 93 microseconds of
> > CPU time to transfer a KB (K == 1024) of bulk data across the FDDI
> > interface.
> >
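As a cross-check on that figure, the reported service demand can be
reproduced from the throughput and local CPU utilization above.  The
sketch below assumes netperf's service demand is simply CPU time
consumed divided by data transferred:

    # Back-of-the-envelope check of the reported service demand (us/KB),
    # assuming it equals CPU time consumed per KB of data transferred.
    throughput_mbit = 55.82              # reported throughput, 10^6 bits/s
    cpu_util = 63.95 / 100.0             # reported local CPU utilization

    kb_per_sec = throughput_mbit * 1e6 / 8 / 1024   # KB (1024 bytes) per second
    cpu_us_per_sec = cpu_util * 1e6                 # CPU microseconds per second

    print(round(cpu_us_per_sec / kb_per_sec, 3))    # ~93.85, close to the 93.846 above
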
> > The next is with FDDI crippled with a 1500 byte MTU:
> >
> > # lanadmin -M 1500 4
> > Old MTU Size                        = 4352
> > New MTU Size                        = 1500
> > # ./netperf -H 192.68.2.8 -l 30 -c $LOC_RATE -- -S 56K -s 56K -m 4K
> > TCP STREAM TEST to 192.68.2.8
> > Recv   Send    Send                          Utilization       Service Demand
> > Socket Socket  Message  Elapsed              Send     Recv      Send    Recv
> > Size   Size    Size     Time     Throughput  local    remote    local   remote
> > bytes  bytes   bytes    secs.    10^6bits/s  % I      % U       us/KB   us/KB
> >
> >  57344  57344   4096    30.01        41.80   94.57    -1.00     185.330  -1.000
> >
> > We can see that the quantity of CPU time required to transmit a KB of
> > data across the link has increased considerably - roughly 2X.  This is
> > consistent with the nearly 3X increase in the number of packets
> > required for a given quantity of data (4096/1460 = 2.8).  That means
> > nearly three times as many interrupts as before, and three times as
> > many trips up and down the protocol stack.  Service demand did not
> > increase 3X because not all of the CPU "cost" is per-packet; the rest
> > (e.g. the data copy) is per-byte.  The only reason our throughput was
> > not cut in half was that the original test still had roughly 36% idle
> > CPU - this second test with the smaller MTU soaked that up and more.
> >
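To make that packet-count arithmetic concrete, here is a small sketch
using only the MSS values and measured service demands quoted above;
the per-packet/per-byte split is described rather than assumed
numerically.

    # Packet-count ratio when the MTU drops from 4352 to 1500:
    # a 4096 byte send fits one frame at the larger MTU, but needs
    # roughly 4096/1460 = 2.8 segments at a 1460 byte MSS.
    print(round(4096 / 1460, 1))          # ~2.8, i.e. nearly 3X the packets/interrupts

    # Measured growth in service demand (us/KB) between the two FDDI tests:
    print(round(185.330 / 93.846, 2))     # ~1.97, the "roughly 2X" noted above

    # Growth is ~2X rather than ~3X because only the per-packet part of the
    # cost scales with packet count; the per-byte part (e.g. data copies)
    # is unchanged for a given quantity of data.
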
> > Third, the same test over the HP-PB 100BT interface:
> >
> > # ./netperf -H 192.168.1.8 -l 30 -c $LOC_RATE -- -S 56K -s 56K -m 4K
> > TCP STREAM TEST to 192.168.1.8
> > Recv   Send    Send                          Utilization       Service Demand
> > Socket Socket  Message  Elapsed              Send     Recv      Send    Recv
> > Size   Size    Size     Time     Throughput  local    remote    local   remote
> > bytes  bytes   bytes    secs.    10^6bits/s  % I      % U       us/KB   us/KB
> >
> >  57344  57344   4096    30.00        29.83   95.41    -1.00     261.998  -1.000
> >
> > In this test, the service demand was nearly 262 microseconds per KB,
> > an increase of approximately 40%, which is consistent with the
> > throughput delta between the two interfaces.
> >
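A quick check of that consistency claim: both of these tests ran the
sender at roughly 95% CPU, so throughput should scale about inversely
with service demand.  A small sketch using only the numbers reported
above:

    # With the sender CPU-bound in both tests, throughput should be roughly
    # inversely proportional to service demand.
    sd_fddi_1500, sd_100bt = 185.330, 261.998    # us/KB, from the two tests above
    tp_fddi_1500, tp_100bt = 41.80, 29.83        # 10^6 bits/s

    print(round(sd_100bt / sd_fddi_1500, 2))     # ~1.41: ~40% more CPU per KB
    print(round(tp_fddi_1500 / tp_100bt, 2))     # ~1.40: ~40% less throughput
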
> > So, even though both interfaces are NIO, and both were configured with
> > a 1500 byte MTU, the increased service demand of the HP-PB 100BT
> > driver leads to lower throughput.  In the case of the E35, the CPU
> > bottleneck for the 100BT driver is hit before the bandwidth bottleneck
> > of the HP-PB bus converter.
> >
> >
> > --
> > firebug n, the idiot who tosses a lit cigarette out his car window
> > these opinions are mine, all mine; HP might not want them anyway... :)
> > feel free to post, OR email to raj in cup.hp.com  but NOT BOTH...
> >



* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
