
kern.ipc.nmbclusters - FreeBSD


  1. #1

    kern.ipc.nmbclusters


    OK... Today, for the first time ever, I saw this in my logs:

    /kernel: All mbuf clusters exhausted

    so I have to raise kern.ipc.nmbclusters.

    What would be a decent nmbclusters value to specify in the loader for a
    gig of RAM and 2 gigs of swap?

    And how many mbufs are there per cluster?
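    From what I can tell, it's a boot-time tunable, so it would go in
    /boot/loader.conf, something like this (the value here is purely
    illustrative, which is exactly what I'm asking about):

        # /boot/loader.conf -- illustrative value; size to your workload
        kern.ipc.nmbclusters="32768"

    If I have it right, each cluster is 2 KB (MCLBYTES), so 32768 clusters
    would reserve roughly 64 MB of kernel address space for network buffers,
    and a standard mbuf holds a little data inline and points to at most one
    cluster, so there isn't a fixed "mbufs per cluster" ratio.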

    Also, why is this client stuck in the netstat output, and why is Send-Q
    so large?


    # netstat -p tcp
    Proto Recv-Q Send-Q  Local Address  Foreign Address        (state)
    ......
    tcp4       0  22065  server.http    c68.112.166.214..3318  FIN_WAIT_1
    tcp4       0  31076  server.http    c68.112.166.214..3317  FIN_WAIT_1
    tcp4       0  32285  server.http    c68.112.166.214..3316  FIN_WAIT_1
    tcp4       0  32285  server.http    c68.112.166.214..3315  FIN_WAIT_1
    tcp4       0  32905  server.http    c68.112.166.214..3314  FIN_WAIT_1
    tcp4       0  31445  server.http    c68.112.166.214..3313  FIN_WAIT_1
    tcp4       0  33580  server.http    c68.112.166.214..3312  FIN_WAIT_1
    tcp4       0  31696  server.http    c68.112.166.214..3311  FIN_WAIT_1
    ......................


    thanks...
    --




    kalin Guest

  2. #2

    Re: kern.ipc.nmbclusters

    On 03/15/05 18:02:22, kalin mintchev wrote:

    Did you check top to see if you even use swap? I never use swap with
    512 MB on my desktop. Read man tuning, around byte 32372. And try
    netstat -m.
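    Concretely, something like (swapinfo works as well as top's swap line):

        swapinfo -k     # swap devices and usage, in kilobytes
        netstat -m      # mbuf and mbuf-cluster statistics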

    Jason Guest

  3. #3

    Re: kern.ipc.nmbclusters

     

    Yeah, a very small amount:
    Swap: 2032M Total, 624K Used, 2031M Free
     

    I did, a few times. Don't remember which byte it was, though...
     

    I did. Here:
    This was from when I sent this message originally:
    # netstat -m
    6138/6832/26624 mbufs in use (current/peak/max):
    6137 mbufs allocated to data
    1 mbufs allocated to fragment reassembly queue headers
    6092/6656/6656 mbuf clusters in use (current/peak/max)
    15020 Kbytes allocated to network (75% of mb_map in use)
    11125 requests for memory denied
    1 requests for memory delayed
    0 calls to protocol drain routines

    ===================================

    This is now:

    # netstat -m
    349/6832/26624 mbufs in use (current/peak/max):
    348 mbufs allocated to data
    1 mbufs allocated to fragment reassembly queue headers
    346/6656/6656 mbuf clusters in use (current/peak/max)
    15020 Kbytes allocated to network (75% of mb_map in use)
    11125 requests for memory denied
    1 requests for memory delayed
    0 calls to protocol drain routines
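    As a sanity check on those figures (assuming 2048 bytes per cluster and
    256 bytes per mbuf header, which I believe are the 4.x sizes):

        6656 clusters x 2048 bytes = 13631488 bytes
        6832 mbufs    x  256 bytes =  1748992 bytes
                             total = 15380480 bytes = 15020 KB

    which matches the "15020 Kbytes allocated to network" line exactly. The
    pool apparently keeps its peak size once grown, which would explain why
    both snapshots report the same 15020 KB / 75% even after the clusters
    were freed. The mbuf max (26624) is also exactly four times the cluster
    max (6656), which looks like the stock ratio.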


    Huge difference. So I think the ~260 lines of netstat -p tcp output like:

    tcp4 0 33580 server.http c68.112.166.214..3307 FIN_WAIT_1

    have to do with all those 6000 clusters, but I'm not sure how. A DoS,
    maybe?! They are all from the same client IP, and all of them have a
    much higher Send-Q than Recv-Q. What does the state FIN_WAIT_1 mean?
    Waiting to finish? If so, why didn't it do that for hours and hours? My
    web server keeps connections alive for 10 seconds, and there isn't much
    else that uses TCP on that machine. The web server was inaccessible for
    about 5-10 minutes, so my first thought was a DoS... "11125 requests
    for memory denied" made it look like a DoS...

    Maybe somebody can explain the relation, if any. It'll be appreciated...
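    (For counting these, a one-liner along these lines works - field
    positions per the netstat output above:)

        # count TCP connections per remote endpoint and state
        netstat -n -p tcp | awk 'NR > 2 { print $5, $6 }' | sort | uniq -c | sort -rn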


    thanks....

     


    --


    kalin Guest

  4. #4

    Re: kern.ipc.nmbclusters


    Where else can I ask about this?
    I tried bsdforums, but still total silence there too...
     


    --


    kalin Guest

  5. #5

    Re: kern.ipc.nmbclusters

    On Mar 16, 2005, at 3:01 PM, kalin mintchev wrote:

    You were exceeding the amount of socket buffer memory available there.

    FIN_WAIT_1 means that your side of the TCP conversation has closed: it
    still wants to flush the queue of unsent data, send its FIN, and have
    that acknowledged before the connection can finish closing. It's not
    clear why this isn't working, and there is a timer which gets started
    which ought to close the connection after 10 minutes or so if no data
    can be sent.

    Perhaps the other side is playing games? If you do a tcpdump against
    that client, are you seeing responses with a 0 window size?
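    Something along these lines would show it (fxp0 is a placeholder for
    your interface, and I'm assuming the client's address is the
    68.112.166.214 behind that reverse name):

        # watch for zero-window ACKs from the suspect client
        tcpdump -n -i fxp0 host 68.112.166.214 and tcp port 80

    Look for "win 0" on segments coming from the client: a peer that keeps
    advertising a zero window but never drains it will pin the unsent data
    in your Send-Q more or less indefinitely.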

    --
    -Chuck

    Charles Guest

  6. #6

    Re: kern.ipc.nmbclusters

    Thanks, Charles...

    I'm aware of that. The question is why?

    > FIN_WAIT_1 means that your side of the TCP conversation has closed: it
    > still wants to flush the queue of unsent data, send its FIN, and have
    > that acknowledged before the connection can finish closing. It's not
    > clear why this isn't working, and there is a timer which gets started
    > which ought to close the connection after 10 minutes or so if no data
    > can be sent.

    Well, that was what I was suggesting in my post, but the server is set
    to cut inactive connections after 10 seconds, not minutes. Is there any
    other timer I'm missing here?
     

    That happened yesterday - 1/2 hr ago. Right now it is fine... quiet....
    I thought DoS. It hasn't happened before. Right now it's using only
    250-300 clusters, which is the normal...

    thanks...

     


    --


    kalin Guest

  7. #7

    Re: kern.ipc.nmbclusters

    On Mar 16, 2005, at 4:44 PM, kalin mintchev wrote:

    > I'm aware of that. The question is why?

    The literal answer is that this pool of open connections with lots of
    unsent data is clogging things up. Why those connections are not going
    away is the real question to figure out....
     
    > Well, that was what I was suggesting in my post, but the server is
    > set to cut inactive connections after 10 seconds, not minutes. Is
    > there any other timer I'm missing here?

    You are probably referring to the KeepAlive directive in the Apache
    config file, but there are other timers present in the TCP stack
    itself: specifically, the one described in RFC 793 around section 3.5,
    involving a 2 * MSL wait:

    "3.5. Closing a Connection

    CLOSE is an operation meaning "I have no more data to send." The
    notion of closing a full-duplex connection is subject to ambiguous
    interpretation, of course, since it may not be obvious how to treat
    the receiving side of the connection. We have chosen to treat CLOSE
    in a simplex fashion. The user who CLOSEs may continue to RECEIVE
    until he is told that the other side has CLOSED also. Thus, a program
    could initiate several SENDs followed by a CLOSE, and then continue to
    RECEIVE until signaled that a RECEIVE failed because the other side
    has CLOSED. We assume that the TCP will signal a user, even if no
    RECEIVEs are outstanding, that the other side has closed, so the user
    can terminate his side gracefully. A TCP will reliably deliver all
    buffers SENT before the connection was CLOSED so a user who expects no
    data in return need only wait to hear the connection was CLOSED
    successfully to know that all his data was received at the destination
    TCP. Users must keep reading connections they close for sending until
    the TCP says no more data."
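    For the record, the TCP-level knobs can be inspected with sysctl; exact
    names and units vary a bit between FreeBSD releases, so treat these as
    examples rather than a definitive list:

        sysctl net.inet.tcp.msl               # max segment lifetime (the 2*MSL wait)
        sysctl net.inet.tcp.keepidle          # idle time before keepalive probing
        sysctl net.inet.tcp.always_keepalive  # probe even if the app didn't ask

    As far as I know, none of these will reap a FIN_WAIT_1 connection whose
    peer keeps acknowledging probes with a zero window, which would be
    consistent with what you saw.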

    --
    -Chuck

    Charles Guest
