
delay on select() - UNIX Programming


  1. #1

    delay on select()

    I run a distributed application on a LAN. When a
    client send()s a message to the server, it takes
    about 100 milliseconds before select() reports that
    message. Is that normal? What could be the reason
    for it?

    Shuqing



    Shuqing Guest

  2. #2

    Re: delay on select()


    "Shuqing Wu" <ca> wrote in message
    news:rF1Ib.3593$bellglobal.com...

     

    > Is it normal?

    Yes.


    The client isn't being smart about the amount of data it passes to the
    stack at a time. When the stack sees the first send, it has no idea there's
    going to be a second, so it sends the data immediately. When it sees the
    second send, it has no idea there isn't going to be a third, so it delays
    the data by about 200 milliseconds or so.

    The solution is for programs that send data over TCP to properly manage
    their transfer of data from the application to the network stack. For
    example, if you have to send three lines of data at the same time, send them
    in one call to 'send' or 'write' rather than in three. Here, 'at the same
    time' means 'without having to wait for anything in-between them'.
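
    As a rough sketch of that advice (a hypothetical helper, not from the
    original post; it assumes a connected, blocking TCP socket and uses
    writev() to hand all the pieces to the stack in one call -- a short
    write is still possible and is left to the caller):

        #include <string.h>      /* strlen */
        #include <sys/types.h>   /* ssize_t */
        #include <sys/uio.h>     /* writev, struct iovec */

        /* Pass three lines to the stack in ONE call, so Nagle never
         * sees a small segment with more data still on the way. */
        ssize_t send_three_lines(int sock, const char *l1, const char *l2,
                                 const char *l3)
        {
            struct iovec iov[3];
            iov[0].iov_base = (void *)l1;  iov[0].iov_len = strlen(l1);
            iov[1].iov_base = (void *)l2;  iov[1].iov_len = strlen(l2);
            iov[2].iov_base = (void *)l3;  iov[2].iov_len = strlen(l3);
            return writev(sock, iov, 3);   /* one syscall, coalesced data */
        }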

    DS




    David Guest

  3. #3

    Re: delay on select()

    On 2003-12-30, David Schwartz <com> wrote: 
    >
    > Yes.

    >
    >
    > The client isn't being smart about the amount of data it passes to the
    > stack at a time. When the stack sees the first send, it has no idea there's
    > going to be a second, so it sends the data immediately. When it sees the
    > second send, it has no idea there isn't going to be a third, so it delays
    > the data by about 200 milliseconds or so.
    >
    > The solution is for programs that send data over TCP to properly manage
    > their transfer of data from the application to the network stack. For
    > example, if you have to send three lines of data at the same time, send them
    > in one call to 'send' or 'write' rather than in three. Here, 'at the same
    > time' means 'without having to wait for anything in-between them'.

    This would work, but it makes life really miserable. It is not always
    possible to join things into one packet. Really, you'd have to
    reimplement buffering that the system already implements. Better to use
    socket options in this case (though I'm not sure about portability
    here). Anyway, on Linux, man 7 tcp lists TCP_NODELAY, which
    allows immediate sending of the data in the TCP buffers; the reverse of it
    is TCP_CORK, which lets you assemble a packet in the system buffers and then
    send it as one.
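
    For reference, a minimal sketch of how those options are set ('sock' is
    assumed to be a connected TCP socket; the TCP_CORK half is
    Linux-specific, and error checking is omitted for brevity):

        #include <netinet/in.h>   /* IPPROTO_TCP */
        #include <netinet/tcp.h>  /* TCP_NODELAY, TCP_CORK (Linux) */
        #include <sys/socket.h>   /* setsockopt */

        void demo_tcp_options(int sock)
        {
            int on = 1, off = 0;

            /* Turn Nagle off: small segments leave immediately. */
            setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on));

            /* Linux-only: cork the socket, queue the pieces, then uncork
             * so everything goes out in as few segments as possible. */
            setsockopt(sock, IPPROTO_TCP, TCP_CORK, &on, sizeof(on));
            /* ... several small send() calls here ... */
            setsockopt(sock, IPPROTO_TCP, TCP_CORK, &off, sizeof(off));
        }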

    Andrei
    Andrei Guest

  4. #4

    Re: delay on select()

    > > I run a distributed application on a LAN.
    >
    > Yes.
    >
    > The client isn't being smart about the amount of data it passes to the
    > stack at a time. When the stack sees the first send, it has no idea
    > there's going to be a second, so it sends the data immediately.

    I paste part of the code here. It does try to send the message at once,
    unless the message is too long. The problem is that long messages are
    not avoidable. Is there anything I can do here? I tried setsockopt(),
    but it seems to make no difference at all.
    while (len > 0) {
        int sent = send(bsock->sock, ptr, len, 0);

        if (sent < 0) {
            switch (errno) {
    #ifdef EAGAIN
            case EAGAIN:
                break;              /* socket buffer full: wait below, retry */
    #endif
    #if defined(EWOULDBLOCK) && (!defined(EAGAIN) || (EWOULDBLOCK != EAGAIN))
            case EWOULDBLOCK:
                break;              /* same: wait below, retry */
    #endif
            case EPIPE:
    #ifdef ECONNRESET
            case ECONNRESET:
    #endif
                sprintf(bsock->errorMessage,
                        "pqFlush() -- backend closed the channel unexpectedly.\n"
                        "\tThis probably means the backend terminated abnormally"
                        " before or while processing the request.\n");
                bsock->status = CONNECTION_ERROR;   /* No more connection */
                close(bsock->sock);
                bsock->sock = -1;
                return EOF;
            default:
                sprintf(bsock->errorMessage,
                        "pqFlush() -- couldn't send data: errno=%d\n%s\n",
                        errno, strerror(errno));
                return EOF;
            }
        }
        else {
            ptr += sent;
            len -= sent;
        }

        if (len > 0) {
            /* Wait until the socket is writable again. */
            if (sockWait(FALSE, TRUE, bsock))
                return EOF;
        }
    }


    Shuqing


    Shuqing Guest

  5. #5

    Re: delay on select()


    "Andrei Voropaev" <ru> wrote in message
    news:bsrc0a$jb02$news.uni-berlin.de...

    > This would work, but it makes life really miserable.

    If you don't, how could the system possibly know when to send the data?

    > It is not always possible to join things into one packet.

    Of course, you don't do it when it's not possible. The problem is that
    the system has no idea when it's possible and when it's not, so it assumes
    the application does so whenever it's possible.

    > Really, you'd have to reimplement buffering that the system already
    > implements.

    The system does not implement this type of buffering because it *can't*.
    When it sees data arrive on a TCP connection, it has no way of knowing
    whether or not more data is following immediately.

    > On Linux, man 7 tcp lists TCP_NODELAY [...]; the reverse of it is
    > TCP_CORK.

    Instead of setting 'TCP_CORK', just write the data to an intermediate
    buffer instead of the socket. Instead of unsetting 'TCP_CORK', flush the
    buffer to the socket. It will do almost exactly the same thing and have the
    advantage of being fully portable.
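
    A minimal sketch of that intermediate-buffer idea (the names, the 8Kb
    size, and the error convention are illustrative assumptions; 'sock' is
    assumed to be a connected, blocking TCP socket):

        #include <string.h>       /* memcpy */
        #include <sys/socket.h>   /* send */
        #include <sys/types.h>    /* ssize_t */

        struct outbuf { char data[8192]; size_t used; };

        /* "Uncork": push everything buffered so far in one burst. */
        static int outbuf_flush(int sock, struct outbuf *b)
        {
            size_t off = 0;
            while (off < b->used) {
                ssize_t n = send(sock, b->data + off, b->used - off, 0);
                if (n < 0)
                    return -1;            /* caller inspects errno */
                off += (size_t)n;
            }
            b->used = 0;
            return 0;
        }

        /* "Cork": accumulate small pieces instead of send()ing each one. */
        static int outbuf_put(int sock, struct outbuf *b, const void *p,
                              size_t len)
        {
            if (b->used + len > sizeof(b->data) && outbuf_flush(sock, b) < 0)
                return -1;
            if (len >= sizeof(b->data)) {    /* too big to buffer: send directly */
                while (len > 0) {
                    ssize_t n = send(sock, p, len, 0);
                    if (n < 0)
                        return -1;
                    p = (const char *)p + n;
                    len -= (size_t)n;
                }
                return 0;
            }
            memcpy(b->data + b->used, p, len);
            b->used += len;
            return 0;
        }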

    Setting TCP_NODELAY won't help you. Setting this flag is promising the
    stack that you will do the coalescing so that it doesn't have to. This is
    only acceptable if you do the coalescing. Someone has to coalesce, otherwise
    you send too many small packets.

    DS




    David Guest

  6. #6

    Re: delay on select()


    "Shuqing Wu" <ca> wrote in message
    news:EaiIb.5179$bellglobal.com...

    > It does try to send the message at once, unless the message is too
    > long. The problem is that long messages are not avoidable.

    You have no problem in this case. The problems we're discussing only
    occur with small sends. You could, theoretically, encounter a problem with a
    small bit on the end of a big message if the subsequent message follows
    immediately, is small, and no data is received between the two messages. But
    this is a rare case that you probably won't need to worry about -- unless
    your protocol does this a lot.

    If you don't particularly care about latency, you have no issue and need
    do nothing. If each message requires a reply from the other side before the
    next message needs to be sent, you have no issue and need do nothing.

    DS




    David Guest

  7. #7

    Re: delay on select()

    > > I paste part of the code here. It does try to send the message at
    > > once, unless the message is too long.
    >
    > You have no problem in this case. The problems we're discussing only
    > occur with small sends.

    Hi David:

    Thank you very much for your help.

    I am developing a distributed database system. The response time is
    critical. I am doing performance tests right now. One transaction costs
    100 milliseconds, and about 97% of that is spent between send()ing a
    message (which is what we were talking about) and the receiver's
    select(). I know there is a TCP_CORK option on Linux. I will try it on
    Linux.
    However, I also want a solution for Solaris. Is there any way?

    Shuqing


    Shuqing Guest

  8. #8

    Re: delay on select()

    On 2003-12-31, David Schwartz <com> wrote: 
    [...] 
    >
    >
    > Instead of setting 'TCP_CORK', just write the data to an intermediate
    > buffer instead of the socket. Instead of unsetting 'TCP_CORK', flush the
    > buffer to the socket. It will do almost exactly the same thing and have the
    > advantage of being fully portable.

    Well, I don't want to argue about matters of taste. Personally I don't
    like having to maintain an intermediate buffer: grow it, shrink it,
    release it. Then you have to copy the same stuff twice, first to the
    intermediate buffer, then into the system buffers. Then the Nagle
    algorithm is still in play, and if your data is still too short it
    won't go out immediately. Using TCP_CORK I give up portability (which I
    don't need for most stuff that I write anyway), but I gain a little bit
    of speed. Again, this is a matter of personal preference.

    > Setting TCP_NODELAY won't help you.

    Won't help me? In the SMPP protocol each PDU is no more than 300 bytes
    long, and the next one won't come until the response to the first one
    has arrived. So small packets are inevitable in this case. That is why
    I have to turn Nagle off using TCP_NODELAY. And it does help me. :)

    I believe everything has its value, but in appropriate places :)

    Andrei
    Andrei Guest

  9. #9

    Re: delay on select()

    On 2003-12-30, Shuqing Wu <ca> wrote:
    > I paste part of the code here. It does try to send the message at once,
    > unless the message is too long. The problem is that long messages are
    > not avoidable. Is there anything I can do here?

    If your data is large then, as David pointed out, Nagle on your side
    plays no role.

    Use tcpdump or something similar to find out the timing of the TCP
    packets. Then you can really say where the delay is. It could be the
    scheduler in your kernel, it could be Nagle on the server side, it
    could be a slow response from the server...

    Andrei
    Andrei Guest

  10. #10

    Re: delay on select()

    > > I paste part of the code here. It does try to send the message at
    > > once, unless the message is too long.
    >
    > You have no problem in this case. The problems we're discussing only
    > occur with small sends.

    I got it to work. I am using setsockopt(bsock->sock, IPPROTO_TCP, TCP_NODELAY,
    &on, sizeof(on));
    The response time is much better, about 10 milliseconds, though it is still
    a bit more than normal (perhaps it is normal). (I cannot find TCP_CORK in
    Linux. If there is a way to do it, I'd like to learn it as well.) Thank you
    all for your help.

    Shuqing


    Shuqing Guest

  11. #11

    Re: delay on select()

    "Shuqing Wu" <ca> writes:

    [snip]

    > I got it to work. I am using setsockopt(bsock->sock, IPPROTO_TCP,
    > TCP_NODELAY, &on, sizeof(on));

    Does it also work on Solaris? You were mentioning that you will need it on
    Solaris as well.

    Bye, Dragan

    --
    Dragan Cvetkovic,

    To be or not to be is true. G. Boole No it isn't. L. E. J. Brouwer

    !!! Sender/From address is bogus. Use reply-to one !!!
    Dragan Guest

  12. #12

    Re: delay on select()


    "Dragan Cvetkovic" <net> wrote in message
    news:net...

    > Does it also work on Solaris? You were mentioning that you will need it
    > on Solaris as well.

    It is working on Solaris right now. I have not tried it on Linux yet
    (since I ran into another problem: execution blocks when it invokes
    malloc(). If anyone here can help, I'd really appreciate it).
    Sorry about the confusion. I should have said I cannot find TCP_CORK on
    Solaris.

    Shuqing


    Shuqing Guest

  13. #13

    Re: delay on select()

    Dragan Cvetkovic <net> writes:

    > Does it also work on Solaris?


    TCP_NODELAY does work on Solaris (it needs to work on anything which
    supports X11).

    Casper
    --
    Expressed in this posting are my opinions. They are in no way related
    to opinions held by my employer, Sun Microsystems.
    Statements on Sun products included here are not gospel and may
    be fiction rather than truth.
    Casper Guest

  14. #14

    Re: delay on select()


    "Shuqing Wu" <ca> wrote in message
    news:bWrIb.7020$bellglobal.com...

    > The response time is critical.

    Okay, then you have to do sensible buffering.

    > One transaction costs 100 milliseconds, and about 97% of that is spent
    > between send()ing a message and the receiver's select().

    Do you send the entire request in a single call to 'send' or 'write'?
    About how many bytes is it?

    > I know there is a TCP_CORK option on Linux. I will try it on Linux.

    This will probably make the problem go away on Linux, just make sure to
    clear the TCP_CORK option before you start waiting for a reply.

    > However, I also want a solution for Solaris. Is there any way?

    Send the entire request in one call to 'send'. If not all of it gets
    sent, loop back on 'send' immediately. If the request is more than 8Kb or
    so, it's acceptable to send it in chunks of at least 4Kb. Don't worry about
    the last chunk being small, since it's right before you're going to wait for
    the other side anyway. (The delay would be your side waiting for already
    sent data to be acknowledged before sending more -- that acknowledgement
    will piggy-back on the reply, so the next request won't be affected.)
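
    A minimal sketch of that send loop (a hypothetical helper, assuming a
    blocking socket; the 4Kb/8Kb chunking rule of thumb above is not
    enforced here):

        #include <sys/socket.h>   /* send */
        #include <sys/types.h>    /* ssize_t */

        /* Push the whole request, looping back on send() immediately after
         * a short write, so no artificial pause separates the parts. */
        static int send_all(int sock, const char *buf, size_t len)
        {
            while (len > 0) {
                ssize_t n = send(sock, buf, len, 0);
                if (n < 0)
                    return -1;    /* caller inspects errno */
                buf += n;
                len -= (size_t)n;
            }
            return 0;
        }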



    DS




    David Guest

  15. #15

    Re: delay on select()


    "Andrei Voropaev" <ru> wrote in message
    news:bsu5uf$1ef85$news.uni-berlin.de...

    > Personally I don't like having to maintain an intermediate buffer:
    > grow it, shrink it, release it.

    Then use one static 8Kb buffer.

    > Then you have to copy the same stuff twice, first to the intermediate
    > buffer, then into the system buffers.

    We're talking about 100msec LAN delays here. The cost to copy a 16Kb
    request is thoroughly swamped by network speeds.

    You only copy the data if it's small. If it's large, you just send it.

    > Then the Nagle algorithm is still in play, and if your data is still
    > too short it won't go out immediately.

    No, that's not true. I'm suspecting you don't understand how Nagle
    works.

    > Using TCP_CORK I give up portability (which I don't need for most
    > stuff that I write anyway), but I gain a little bit of speed. Again,
    > this is a matter of personal preference.

    What you're saying here is correct; however, your preference is of no
    value to others because it's based upon technical misinformation. If you
    think Nagle will ever delay a send with the buffering strategy I'm
    suggesting, then you don't understand how Nagle works.

    > Won't help me? In the SMPP protocol each PDU is no more than 300 bytes
    > long, and the next one won't come until the response to the first one
    > has arrived. That is why I have to turn Nagle off using TCP_NODELAY.

    Again, you have just proven that you don't understand how Nagle works.

    > I believe everything has its value, but in appropriate places :)

    That's true, but until you understand under what circumstances Nagle
    delays a send, your judgements will continue to be wrong.

    DS



    David Guest

  16. #16

    Re: delay on select()

    On Tue, 30 Dec 2003 23:05:31 -0500, Shuqing Wu wrote:

    > Hi David:
    >
    > Thank you very much for your help.
    >
    > I am developing a distributed database system. The response time is
    > critical. I am doing performance tests right now. One transaction costs
    > 100 milliseconds, and about 97% of that is spent between send()ing a
    > message (which is what we were talking about) and the receiver's
    > select(). I know there is a TCP_CORK option on Linux. I will try it on
    > Linux. However, I also want a solution for Solaris. Is there any way?

    Keep in mind that Nagle's algorithm is designed to improve
    throughput. Just because one transaction takes 100ms does not mean
    you will only be able to process 10 transactions per second. When the
    socket buffers are flooded under load there will be no delay, and it
    will be possible to process thousands of transactions per second, other
    factors aside.

    Mike
    Michael Guest

  17. #17

    Re: delay on select()

    On Wed, 31 Dec 2003 04:46:55 -0500, Andrei Voropaev wrote:

    >> Instead of setting 'TCP_CORK', just write the data to an
    >> intermediate buffer instead of the socket. Instead of unsetting
    >> 'TCP_CORK', flush the buffer to the socket. It will do almost exactly
    >> the same thing and have the advantage of being fully portable.
    >
    > Well, I don't want to argue about matters of taste. Personally I don't
    > like having to maintain an intermediate buffer: grow it, shrink it,
    > release it.

    If you prefer to write to sockets directly that is a matter of taste,
    but it is not more efficient. Regardless of protocol you are using an
    "intermediate buffer" whether you realize it or not (a character string
    is an "intermediate buffer"). And unless you're just sendfile-ing an
    mmap-ed file, it's very likely that extra system calls will result in
    the system being *less* efficient. Most protocols require some level of
    decoding/encoding, in which case you might as well create one appropriately
    sized buffer with which to encode and decode all messages and reuse
    it for the life of your program. If the program will be multiplexing
    IO heavily then use a pool of buffers, as in the sketch below. This will
    be much, much faster than dribbling little bits of data to and from
    sockets. Ideally you want to read and write as much data as possible in
    one call, even if it means reading multiple requests/responses and
    parameterizing your decoding/encoding routines to accept an arbitrary
    pointer or offset.
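
    A minimal sketch of such a buffer pool (the sizes and the free-list
    scheme are illustrative assumptions, not anyone's production code):

        #include <stddef.h>

        #define POOL_BUFS 16
        #define BUF_SIZE  8192

        /* Fixed pool: all memory lives for the life of the program,
         * so there is no malloc()/free() churn in the I/O path. */
        struct pool {
            char bufs[POOL_BUFS][BUF_SIZE];
            int  free_list[POOL_BUFS];
            int  nfree;
        };

        static void pool_init(struct pool *p)
        {
            p->nfree = POOL_BUFS;
            for (int i = 0; i < POOL_BUFS; i++)
                p->free_list[i] = i;
        }

        /* Returns a free 8Kb buffer, or NULL if the pool is exhausted. */
        static char *pool_get(struct pool *p)
        {
            return p->nfree > 0 ? p->bufs[p->free_list[--p->nfree]] : NULL;
        }

        static void pool_put(struct pool *p, char *buf)
        {
            int idx = (int)((buf - &p->bufs[0][0]) / BUF_SIZE);
            if (p->nfree < POOL_BUFS)
                p->free_list[p->nfree++] = idx;
        }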
     
    > Won't help me? In the SMPP protocol each PDU is no more than 300 bytes
    > long, and the next one won't come until the response to the first one
    > has arrived. So small packets are inevitable in this case. That is why
    > I have to turn Nagle off using TCP_NODELAY. And it does help me. :)

    I think you guys are talking about different issues. The purpose of
    Nagle's algorithm is not to coalesce fragments of a PDU. It's to coalesce
    multiple whole PDUs. TCP_CORK is used to coalesce fragments of a PDU
    (albeit somewhat of a hack).

    Mike
    Michael Guest

  18. #18

    Re: delay on select()

    On Wed, 31 Dec 2003 12:48:03 -0500, Shuqing Wu wrote:

    > I ran into another problem: execution blocks when it invokes malloc().

    You really should not do any memory allocation within the inner loop of
    your networking code. The standard library malloc and free are protected
    by locks, besides being somewhat inefficient object management routines
    to start with. Allocate your memory in advance and reuse it as much
    as possible.

    Mike
    Michael Guest

  19. #19

    Re: delay on select()

    Michael B Allen <com> writes:

    > > execution blocks when it invokes malloc()


    If execution is blocked inside malloc(), that sounds like someone is
    calling malloc() from a signal handler.

    Casper
    --
    Expressed in this posting are my opinions. They are in no way related
    to opinions held by my employer, Sun Microsystems.
    Statements on Sun products included here are not gospel and may
    be fiction rather than truth.
    Casper Guest

  20. #20

    Re: delay on select()

    > > It is working on Solaris right now. I have not tried it on Linux yet
    >
    > You really should not do any memory allocation within the inner loop of
    > your networking code. The standard library malloc and free are protected
    > by locks, besides being somewhat inefficient object management routines
    > to start with. Allocate your memory in advance and reuse it as much
    > as possible.
    It happens when it tries to allocate memory for a new transaction
    context. By the way, it works fine on Solaris, but not on Linux.
    Are you suggesting that Linux does not allow the application to
    allocate memory dynamically? Or maybe some network applications?
    In my case, I am working on a project based on PostgreSQL. Is there
    anything specific I should look at?

    Shuqing



    Shuqing Guest
