HP3000-L Archives

January 2009, Week 5

HP3000-L@RAVEN.UTC.EDU

Subject: Re: [HP3000-L] : Network Issue
From: donna hofmeister <[log in to unmask]>
Date: Thu, 29 Jan 2009 09:07:47 -0800
Content-Type: text/plain
Please check to see if patch NSTHDK0A is installed:
http://www11.itrc.hp.com/service/patch/patchDetail.do?patchid=NSTHDK0A&sel={mpeix:c.75.00,}&BC=main|search|
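
If you'd rather check from the CI than the ITRC page, installed patches
are listed in HPSWINFO.PUB.SYS (file name from memory, and the FCOPY
SUBSET syntax below is approximate, so treat this as a sketch):

:PRINT HPSWINFO.PUB.SYS
     (then search the listing for NSTHDK0A)

or, to filter directly:

:FCOPY FROM=HPSWINFO.PUB.SYS;TO=;SUBSET="NSTHDK0A"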

  - d

---
Donna Hofmeister
Allegro Consultants, Inc.
408-252-2330


> -----Original Message-----
> From: HP-3000 Systems Discussion [mailto:[log in to unmask]] On
> Behalf Of Stanfield, Randy (Carrollton, TX)
> Sent: Thursday, January 29, 2009 8:06 AM
> To: [log in to unmask]
> Subject: Re: [HP3000-L] : Network Issue
> 
> I had already included NETTOOL output in the original. Below is a copy
> of the LINKCONTROL output; the operating system is MPE/iX 7.5
> PowerPatch 3. I'm also including the error that shows up in NETTOOL
> NMDUMP.
> 
> **********************************************************************
> * WED, JAN 28, 2009, 11:42:51.8 AM                       NETXPORT(3) *
> *--------------------------------------------------------------------*
> *    Event           : BUFFER MANAGER                                *
> *    Entity          : UDP                                           *
> *    Internal Event  : Buffer manager error                          *
> *    Log Class       : Operator intervention/attention required      *
> *    Port ID/PIN/KSO : $FFFFF65E                                     *
> *    Location        : 13              Parameter       : $FFE900C9   *
> * Info Section (hex):                                                *
> *    0000:  008D   0006   0007   000D   FFFF   F65E   0002   FFE9    *
> *    0008:  00C9   0001                                              *
> *--------------------------------------------------------------------*
> * Port Message Frame:                                                *
> *    Function Code   : IPC, DATAGRAM SEND REQUEST                    *
> *    Reply Port ID   : $00000000                                     *
> *    Subqueue Number : 7               Message Length  : 26          *
> *    Reply Subqueue #: 7               Flow ID         : $0000       *
> * Data Section (hex):                                                *
> *    0000:  0007   001A   0000   0000   0007   050B   0000   0002    *
> *    0008:  0000   020D   0001   DE28   B384   0087   AACC   0E50    *
> *    0010:  0000   0024   0030   0030   0004   0050   0000   050D    *
> *    0018:  4184   EF28                                              *
> **********************************************************************
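> 
> (For anyone who wants to pull the same record on their own system:
> this came out of the NM log files. From memory, those are the
> NMLGnnnn.PUB.SYS files, formatted with NMDUMP.PUB.SYS; the menu
> choices below are from recollection, so verify on your box.)
> 
> :RUN NMDUMP.PUB.SYS
>    (pick the current NMLGnnnn.PUB.SYS log file, then select the
>     NETXPORT subsystem to get records like the one above)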
> 
> linkcontrol @,a
> 
> Linkname: DTSLINK   Linktype: PCI 100BT        Linkstate: CONNECTED
> 
> Physical Path:              1/12/0/0
> Current Station Address:    00-30-6E-06-01-E4
> Default Station Address:    00-30-6E-06-01-E4
> Current Multicast Addresses:
>  09-00-09-00-00-01  09-00-09-00-00-03  09-00-09-00-00-04
>  09-00-09-00-00-06
> 
> Transmit bytes              277420582    Receive bytes           96254205
> Transmits                     1623708    Receives unicast         1070001
> Transmits no error            1623708    Receives broadcast         25269
> Transmits dropped                   0    Receives multicast          1606
> Transmits deferred                  0    Receives no error        1096549
> Transmits 1 retry                   0    Recv CRC error                 0
> Transmits >1 retry                  0    Recv Maxsize error             0
> Trans 16 collisions                 0    Recv dropped: addr           327
> Trans late collision                0    Recv dropped: buffer           0
> Trans underruns                     0    Recv dropped: descr            0
> Carrier losses                      0    Recv dropped: other            0
> Trans jabber timeout                0    Recv watchdg timeout           0
> Link disconnects                    0    Recv collisions                0
> Link speed                        100    Recv overruns                  0
> Link duplex                      Full    Link auto sensed              No
> Link mode            100Base-TX Addon    Secs since clear           35212
> 
> From: Craig Lalley [mailto:[log in to unmask]]
> Sent: Thursday, January 29, 2009 10:01 AM
> To: [log in to unmask]; Stanfield, Randy (Carrollton, TX)
> Subject: Re: : Network Issue
> 
> 
> 
> Randy,
> 
> What version of MPE are you on?
> 
> Are you using a 100 Mb full-duplex or a 10 Mb half-duplex card?
> 
> If you are experiencing a network hang, that would be a good time to do
> 
> NETTOOL.NET.SYS --> RESOURCE --> DISPLAY
> 
> and
> 
> LINKCONTROL @,A
> 
> It would also be helpful to know how the TCP/IP parameters are set in
> NMMGR.
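> 
> If you haven't driven NETTOOL before, the sequence is roughly this (a
> sketch from memory; prompts vary a bit by release, though the display
> Randy captured below shows the "RESOURCE>>>" prompt):
> 
> :RUN NETTOOL.NET.SYS
> NETTOOL> RESOURCE
> RESOURCE>>>DISPLAY
> RESOURCE>>>EXIT
> NETTOOL> QUIT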
> 
> Regards,
> 
> -Craig
> 
> 
> --- On Thu, 1/29/09, Stanfield, Randy (Carrollton, TX)
> <[log in to unmask]> wrote:
> 
> From: Stanfield, Randy (Carrollton, TX) <[log in to unmask]>
> Subject: : Network Issue
> To: [log in to unmask]
> Date: Thursday, January 29, 2009, 7:45 AM
> 
> Network help
> 
> We've started using some QSDK software to allow the HP3000 to act,
> basically, as a web server. We've been using it pretty successfully, I
> might add, but the volume is starting to increase and we're noticing
> some bottlenecks that look to be network related. Does anyone have a
> few minutes to look at our issue? I'm not sure what, if anything, to
> change in the size of any of the buffer pools for the network. We had
> our OUTBOUND Buf Pool on LOOP max out yesterday, which caused the web
> services to hang. We increased that size to 512 about two weeks ago
> and that helped, but we didn't want to keep changing it without
> knowing what it affects.
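> 
> For reference, the place we've been changing that setting is NMMGR.
> The path is from memory and the field label is approximate, so verify
> it against the NMMGR help screens before trusting this sketch:
> 
> :RUN NMMGR.PUB.SYS              (config file NMCONFIG.PUB.SYS)
>    path: NETXPORT.NI.LOOP       (or NETXPORT.NI.NET1 for the LAN NI)
>    then the outbound-buffer count on that screen; the change takes
>    effect after a NETCONTROL STOP and NETCONTROL START.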
> 
> [5]RESOURCE>>>dis
> 
> THU, JAN 29, 2009,  7:40:58 AM
> 
>  Item  Subsystem   Name   G/N  Description      Used   High    Max
> ____________________________________________________________________
> 
>   1    NS XPORT  CP_POOL_ (G)  Control Buf Pool    0      1    300 :)
>   2    NS XPORT  1536___D (G)  Inbound Buf Pool   42     60    512 :)
>   3    NS XPORT  NET1____ (N)  Outbnd  Buf Pool    0     50    768 :)
>   4    NS XPORT  LOOP____ (N)  Outbnd  Buf Pool    0     75    512 :)
>   5    NS XPORT  UDP      (G)  GProt  Msg  Pool    0    N/A    512 :)
>   6    NS XPORT  PXP      (G)  GProt  Msg  Pool    0    N/A    660 :)
>   7    NS XPORT  PROBE    (G)  CM Prot Msg Pool    0    N/A    678 :)
>   8    NS XPORT  IP_NI    (G)  IP-NI  Msg  Pool    0    N/A   2048 :)
>   9    NS XPORT  IP_NI    (G)  IP-NI  Msg  Pool    0    N/A   2048 :)
>  10    NS XPORT           (G)  Node  Name Cache    0      1    360 :)
>  11    NS XPORT           (G)  TCP Control pool    0     73   2048 :)
>  12    TELNET    PTOD     (G)  Write   Buf Pool    0      1   2048 :)
>  13    TELNET    PTID     (G)  Outbnd  Buf Pool    0      1   2048 :)
>  14    TELNET    PTID     (G)  Inbound Buf Pool    1      2   2048 :)
>  15    TELNET    PTOD     (G)  Negot   Buf Pool    0      1   2048 :)
> 
> Randy Stanfield
> Unisource - Dallas
> 972-982-3621
> 
> From: Fatula, Steve (Norcross, DAV)
> 
> So, we have a program WBGETPR that primarily calculates cost and price
> on the HP3000. It is called via network calls as an XML web service: a
> client PC or machine of some sort calls the HP3000, sends an XML
> string in a POST, and gets the reply back on the same socket. We have
> a LOT of these types of services on the HP, and all of them seem to
> work well. Except, this one has some odd behavior we'd like to
> understand better. The program differs from the other HP services we
> have in that it returns a far larger amount of data. This service not
> only prices one item, it prices a list of items in a single call, and
> there is a lot of data returned for each item. So, when we get a list
> of, say, 300 items to price and cost, the reply data can be around
> 400K. This has been horrendously slow.
> 
> So, we put some timers in the code. Where the process bogs down is in
> the network WRITES from the HP to the client. The client can be an
> HP3000, a PC running proprietary software, or a PC running the OPENSTA
> open-source test tool; it made no difference. The data sent to the
> client is divided up into chunks of at most 30,000 bytes. The HP call
> used is IPCSEND. The timers around calls to IPCSEND showed elapsed run
> times of hundredths of a second or less for the first two IPCSEND
> calls in a reply, then would spike to 4, 8, or even 15 seconds for a
> single IPCSEND call. Drastic change! This made the process very slow,
> needless to say. We never saw the first two chunks of 30,000 bytes go
> slow; they were always almost instant. The third one (and some of the
> later ones) was always slow, and the time varied widely. This is on a
> very fast development machine. We monitored some of the network tables
> and didn't find any that maxed out, so we're not sure where to look to
> find out why this might occur.
> 
> As an experiment, we changed our maximum size in the IPCSEND calls to
> 10,000 bytes. This resulted in no times exceeding even 1 second; I
> believe the largest we saw was 0.6, and that was with a bunch of them
> running at the same time. It was way faster than using 30,000-byte
> IPCSEND calls.
> 
> The question is: why? What could be causing this behavior, and is
> there maybe a more optimal number we can use other than 10,000? We'd
> like to understand what the issue may be. It would seem 30,000 would
> be more efficient since there are fewer calls, but that is not what
> we see.
> 
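> To make the experiment easy to repeat off the HP, here is a minimal,
> portable sketch of the measurement (not our production code): it
> times each fixed-size piece written to an already-connected TCP
> socket, using the BSD sockets send() call where our program uses the
> MPE-specific IPCSEND intrinsic. CHUNK is the knob to play with:
> 
> #include <stdio.h>
> #include <sys/types.h>
> #include <sys/time.h>
> #include <sys/socket.h>
> 
> #define CHUNK 10000   /* try 30000 to reproduce the stalls we saw */
> 
> /* Write len bytes to sock in CHUNK-sized pieces, printing the elapsed
>    wall-clock time for each piece. Returns 0 on success, -1 on error. */
> int timed_chunked_send(int sock, const char *buf, size_t len)
> {
>     size_t off = 0;
>     int piece = 0;
> 
>     while (off < len) {
>         size_t n = len - off;
>         if (n > CHUNK) n = CHUNK;
> 
>         struct timeval t0, t1;
>         gettimeofday(&t0, NULL);
> 
>         /* send() may accept fewer bytes than requested, so loop
>            until the whole piece is handed to the transport. */
>         size_t sent = 0;
>         while (sent < n) {
>             ssize_t rc = send(sock, buf + off + sent, n - sent, 0);
>             if (rc < 0) { perror("send"); return -1; }
>             sent += (size_t)rc;
>         }
> 
>         gettimeofday(&t1, NULL);
>         printf("piece %d: %lu bytes in %.3f s\n", ++piece,
>                (unsigned long)n,
>                (t1.tv_sec - t0.tv_sec) +
>                (t1.tv_usec - t0.tv_usec) / 1e6);
>         off += n;
>     }
>     return 0;
> }
> 
> Feeding it a buffer of around 400K with CHUNK at 30000 should show
> whether the per-piece spike follows the chunk size the way our
> IPCSEND timings did.
> 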
> Randy, you can add any details about the hardware, and let me know if
> you think I am missing anything here.
> 
> Steve Fatula
> 5 Diamond IT Consulting
> 214-592-4230 Voice Ext 151
> 206-984-9737 Fax
> 

* To join/leave the list, search archives, change list settings, *
* etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
