Hi Randy,
Evidently we also use QSDK heavily...
Here is the display on our system:
Item Subsystem Name G/N Description Used High Max
____________________________________________________________________
1 NS XPORT CP_POOL_ (G) Control Buf Pool 0 1 300 :)
2 NS XPORT 1536___D (G) Inbound Buf Pool 49 81 1365 :)
3 NS XPORT LAN1____ (N) Outbnd Buf Pool 0 50 4096 :)
4 NS XPORT LOOP____ (N) Outbnd Buf Pool 0 0 128 :)
5 NS XPORT UDP (G) GProt Msg Pool 0 N/A 512 :)
6 NS XPORT PXP (G) GProt Msg Pool 0 N/A 660 :)
7 NS XPORT PROBE (G) CM Prot Msg Pool 0 N/A 677 :)
8 NS XPORT IP_NI (G) IP-NI Msg Pool 0 N/A 2048 :)
9 NS XPORT IP_NI (G) IP-NI Msg Pool 0 N/A 2048 :)
10 NS XPORT (G) Node Name Cache 0 2 360 :)
11 NS XPORT (G) TCP Control pool 0 50 2048 :)
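When comparing displays like this one, a quick way to spot a pool that is running short of headroom is to check the High column against Max for each row. Here is a small sketch (assuming the column layout shown above; the last sample row is made up to show what a pool at its ceiling would look like, similar to the LOOP pool that maxed out):

```python
# Flag any pool whose high-water mark is near its Max.
# The first three rows come from the display above; the fourth is a
# hypothetical example of a pool that has hit its ceiling.
display = """\
1 NS XPORT CP_POOL_ (G) Control Buf Pool 0 1 300 :)
2 NS XPORT 1536___D (G) Inbound Buf Pool 49 81 1365 :)
3 NS XPORT LAN1____ (N) Outbnd Buf Pool 0 50 4096 :)
4 NS XPORT LOOP____ (N) Outbnd Buf Pool 120 128 128 :)
"""

flagged = []
for line in display.splitlines():
    fields = line.split()
    # trailing columns are: Used  High  Max  :)
    high, mx = fields[-3], fields[-2]
    if high == "N/A":
        continue                        # message pools report no high-water mark
    if int(high) / int(mx) >= 0.75:     # within 25% of the ceiling
        flagged.append(fields[3])       # pool name column
        print(f"pool {fields[3]}: High {high} of Max {mx}")
```

The 75% threshold is arbitrary; the point is that a High near Max means the pool has already come close to exhaustion at least once since the counters were reset.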
duane
> -----Original Message-----
> From: [log in to unmask]
> [mailto:[log in to unmask]] On Behalf Of Stanfield,
> Randy (Carrollton, TX)
> Sent: Thursday, January 29, 2009 5:46 AM
> To: HP-3000 Systems Discussion
> Subject: Network Issue
>
>
> Network help
>
>
>
> We've started using some QSDK software to allow the HP3000 to
> become a web server, basically. We've been using it pretty
> successfully, I might add, but the volume is starting to
> increase and we're noticing some bottlenecks that look to be
> network related. Does anyone have a few minutes to look at our
> issue? I'm not sure what, if anything, to change in the size
> of any of the buffer pools for the network. We had our Outbnd
> Buf Pool on LOOP max out yesterday and cause the web services
> to hang. We increased that size to 512 about two weeks ago and
> that helped, but we didn't want to keep changing it without
> knowing what it affects.
>
>
>
> [5]RESOURCE>>>dis
>
> THU, JAN 29, 2009, 7:40:58 AM
>
> Item Subsystem Name G/N Description Used High Max
> ____________________________________________________________________
> 1 NS XPORT CP_POOL_ (G) Control Buf Pool 0 1 300 :)
> 2 NS XPORT 1536___D (G) Inbound Buf Pool 42 60 512 :)
> 3 NS XPORT NET1____ (N) Outbnd Buf Pool 0 50 768 :)
> 4 NS XPORT LOOP____ (N) Outbnd Buf Pool 0 75 512 :)
> 5 NS XPORT UDP (G) GProt Msg Pool 0 N/A 512 :)
> 6 NS XPORT PXP (G) GProt Msg Pool 0 N/A 660 :)
> 7 NS XPORT PROBE (G) CM Prot Msg Pool 0 N/A 678 :)
> 8 NS XPORT IP_NI (G) IP-NI Msg Pool 0 N/A 2048 :)
> 9 NS XPORT IP_NI (G) IP-NI Msg Pool 0 N/A 2048 :)
> 10 NS XPORT (G) Node Name Cache 0 1 360 :)
> 11 NS XPORT (G) TCP Control pool 0 73 2048 :)
> 12 TELNET PTOD (G) Write Buf Pool 0 1 2048 :)
> 13 TELNET PTID (G) Outbnd Buf Pool 0 1 2048 :)
> 14 TELNET PTID (G) Inbound Buf Pool 1 2 2048 :)
> 15 TELNET PTOD (G) Negot Buf Pool 0 1 2048 :)
>
>
> Randy Stanfield
>
> Unisource - Dallas
>
> 972-982-3621
>
> From: Fatula, Steve (Norcross, DAV)
>
> So, we have a program WBGETPR that primarily calculates cost
> and price on the HP3000. It is called via network calls as an
> XML web service: a client PC or machine of some sort calls the
> HP3000, sends an XML string in a POST, and gets the reply back
> on the same socket. We have a LOT of these types of services
> on the HP, and all of them seem to work well. Except this one
> has some odd behavior we'd like to understand better. The
> program is different from the other HP services we have in
> that it returns a far larger amount of data. This service not
> only prices one item, it prices a list of items in a single
> call. There is a lot of data returned for each item. So, when
> we get a list of, say, 300 items to price and cost, the reply
> data could be around 400K. This has been horrendously slow.
>
>
>
> So, we put some timers in the code. Where the process bogs
> down is in the network WRITES from the HP to the client. The
> client can be an HP3000, a PC running proprietary software, or
> a PC running the OPENSTA open-source test tool; it made no
> difference. The data sent to the client is divided into chunks
> of at most 30,000 bytes. The HP call used is IPCSEND. The
> timers around calls to IPCSEND showed elapsed run times of a
> hundredth of a second or less for the first two IPCSEND calls
> in a reply, then would spike to 4, 8, or even 15 seconds for a
> single IPCSEND call. Drastic change! This would cause the
> process to be very slow, needless to say. We never saw the
> first two chunks of 30,000 bytes slow; they were always almost
> instant. Always, the third one (and some of the later ones)
> was slow, and the time varied widely. This is on a very fast
> development machine. We monitored some of the network tables
> and didn't find any that maxed out. So, we're not sure where
> to look to find out why this might occur.
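[The stall pattern described here, where the first two sends return instantly and a later one blocks for seconds, is what a sender sees when its writes outrun the transport's buffering: the early calls merely copy data into the send buffer, and a later call must wait for the peer to drain it. A generic BSD-sockets analogy of that effect, not NetIPC code; the buffer size and reader delay are made-up numbers for illustration:]

```python
import socket
import threading
import time

# Generic sockets analogy (not NetIPC/IPCSEND): the first writes return as
# soon as the data is copied into the kernel send buffer; once that buffer
# is full, a write blocks until the slow peer drains it.
a, b = socket.socketpair()
a.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 32_000)  # shrink the send buffer

def slow_reader():
    time.sleep(0.5)            # the peer is slow to start reading
    while b.recv(65536):
        pass                   # then it drains everything until EOF

threading.Thread(target=slow_reader, daemon=True).start()

chunk = b"x" * 30_000          # same chunk size as the IPCSEND calls
times = []
for _ in range(4):
    start = time.monotonic()
    a.sendall(chunk)           # analogous to one IPCSEND call
    times.append(time.monotonic() - start)
a.close()

# Typically the first send(s) are near-instant, and a later one blocks
# for roughly as long as the reader was idle.
print([round(t, 3) for t in times])
```

[Whether NetIPC's internal buffering works the same way is the open question here; the sketch only shows that the symptom, fast early sends followed by one multi-second send, is the classic signature of a full send buffer.]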
>
>
>
> As an experiment, we changed our maximum size to 10,000 bytes
> in the IPCSEND calls. This resulted in no times exceeding even
> 1 second; I believe the largest we saw was .6, and that was
> with a bunch of them running at the same time. It was way
> faster than using 30,000-byte IPCSEND calls.
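[One guess at why 10,000 behaves so differently from 30,000 (an assumption, not a confirmed NetIPC limit): a write smaller than the free space in the transport's send buffer completes immediately, while a larger one must wait for earlier data to be acknowledged. On a generic sockets stack you can at least see what buffer size is in play:]

```python
import socket

# Query the kernel's default send-buffer size (generic sockets, not NetIPC).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sndbuf = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
s.close()
print(f"default send buffer: {sndbuf} bytes")

# If each chunk fits in the buffer's free space, the send call returns
# immediately; a chunk larger than what is free has to wait for the peer
# to acknowledge older data before it can be queued.
for chunk in (10_000, 30_000):
    verdict = "fits in" if chunk <= sndbuf else "exceeds"
    print(f"{chunk:>6}-byte chunk {verdict} an empty {sndbuf}-byte buffer")
```

[If something similar applies on the 3000, a chunk size just under the effective send-window or buffer size would explain why 10,000 stays fast while 30,000 periodically stalls.]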
>
>
>
> The question is, why? What could be causing this behavior, and
> is there maybe a more optimal number we can use other than
> 10,000? We'd like to understand what the issue may be; it
> would seem 30,000 would be more efficient since there are
> fewer calls, but that is not what we see.
>
>
>
> Randy, you can add any details about the hardware, and, let
> me know if you think I am missing anything here.
>
> Steve Fatula
>
> 5 Diamond IT Consulting
>
> 214-592-4230 Voice Ext 151
>
> 206-984-9737 Fax
>
>
>
>
> * To join/leave the list, search archives, change list settings, *
> * etc., please visit http://raven.utc.edu/archives/hp3000-l.html *
>