Handling deadlock situation with massive inserts - IBM DB2

  1. #1

    Handling deadlock situation with massive inserts

    I have an application working on Oracle (8, 9) that needs porting to
    DB2 V7.2.4 on AIX for another customer. There are 40 million records
    (+ detail tables) that we are working with.

    The application consolidates data into one table, i.e. there are
    statements like "insert into <the one table "A"> (...) select ... from
    ... with UR". For performance reasons multiple processes run at the
    same time writing to the same table "A", but with guaranteed different
    ids. There is also no index on table "A", since maintaining it would
    cost more than the occasional table scan. These processes insert from
    10,000 to 2,000,000 records into table "A". The processes are written
    in Java; they call the insert statement via JDBC with autocommit off.
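
    As a rough sketch of that setup (table, column, and connection details
    here are assumptions for illustration, not the actual application
    code), each process essentially does something like this:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ConsolidateRun {
        public static void main(String[] args) throws Exception {
            // hypothetical database name and credentials
            Connection con = DriverManager.getConnection("jdbc:db2:MYDB", "user", "pwd");
            con.setAutoCommit(false);   // one big unit of work per process
            String sql = "INSERT INTO A (id, payload) "
                       + "SELECT id, payload FROM staging WHERE process_id = ? WITH UR";
            PreparedStatement ps = con.prepareStatement(sql);
            ps.setInt(1, Integer.parseInt(args[0]));  // each process works on its own ids
            ps.executeUpdate();                       // may insert up to ~2,000,000 rows
            con.commit();                             // single commit at the very end
            ps.close();
            con.close();
        }
    }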

    Ok, this was the introduction. From time to time the database
    processes run into an error:

    COM.ibm.db2.jdbc.DB2Exception: [IBM][CLI Driver][DB2/6000] SQL0911N
    The current transaction has been rolled back because of a deadlock or
    timeout. Reason code "2". SQLSTATE=40001

    Reason code "2" is a deadlock. Does anyone have an idea how to get
    around this? Oracle handles this without problems, since there is
    no obvious need to lock records on table "A". BTW: the database is
    in row level locking mode (unless escalation happens).

    Thanks

    Volker
    Volker Guest

  2. #2

    Re: Handling deadlock situation with massive inserts

    Volker:

    DB2 locks at row level by default. So in your case the only solution
    is to increase your INSERT performance. Try buffered inserts by binding with

    db2 BIND db2uimpm.bnd INSERT BUF

    There are some disadvantages to using this option. Please check the
    IBM documentation.

    Shashi Mannepalli
    shashi b mannepalli Guest

  3. #3

    Re: Handling deadlock situation with massive inserts

    Paul Reddin wrote:
    > One thought...
    >
    > Are you getting lock escalations, i.e. is a lack of locklist space
    > causing the deadlock?
    Paul: yes, there are escalations, since the lock list eventually
    becomes exhausted. It cannot be enlarged.

    I guess the solution must be somewhere in the range of "switching off
    locking", which does not seem to be possible, or "some kind of
    autocommit every n records, with n >> 1".

    (BTW: with 64-bit UDB it could be enlarged, but with a lot of other,
    undesired financial consequences.)
    Volker Guest

  4. #4

    Re: Handling deadlock situation with massive inserts

    Hard to give any advice without knowing the application, but have you
    looked at alternatives like:
    - fetching rows in the application and doing inserts one by one, with
      commits every n rows
    - using triggers to copy the data
    - using summary tables
    - using where clauses in the insert-select statements to reduce the
      number of rows in one transaction (a sketch of this last approach
      follows below)
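
    A hedged sketch of that last alternative (all table, column, and range
    names are assumptions): split the one huge insert-select into id
    ranges and commit after each range, so that no single unit of work has
    to hold millions of row locks.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class ChunkedConsolidate {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection("jdbc:db2:MYDB", "user", "pwd");
            con.setAutoCommit(false);
            String sql = "INSERT INTO A (id, payload) "
                       + "SELECT id, payload FROM staging "
                       + "WHERE process_id = ? AND id BETWEEN ? AND ? WITH UR";
            PreparedStatement ps = con.prepareStatement(sql);
            int processId = Integer.parseInt(args[0]);
            long lowId  = Long.parseLong(args[1]);
            long highId = Long.parseLong(args[2]);
            long chunk  = 50000;   // ids per transaction, to be tuned
            for (long from = lowId; from <= highId; from += chunk) {
                long to = Math.min(from + chunk - 1, highId);
                ps.setInt(1, processId);
                ps.setLong(2, from);
                ps.setLong(3, to);
                ps.executeUpdate();
                con.commit();      // release the row locks before the next range
            }
            ps.close();
            con.close();
        }
    }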

    --
    Anton Versteeg
    IBM Netherlands



    Anton Versteeg Guest

  5. #5

    Re: Handling deadlock situation with massive inserts

    Is NOT LOGGED INITIALLY an option for you?

    --
    Knut Stolze
    Information Integration
    IBM Germany / University of Jena
    Knut Stolze Guest

  6. #6

    Re: Handling deadlock situation with massive inserts

    As far as I can see this is not a solution. The option is cancelled
    when the first commit occurs (if I understand it correctly). On the
    other hand this option is about loGGing; does it also prevent loCKing?
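
    For reference, a minimal sketch of how the option would be activated
    per unit of work, assuming the table was created with the NOT LOGGED
    INITIALLY attribute (all names here are hypothetical); as discussed
    above, it only suppresses logging, not locking.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class NotLoggedInitiallyDemo {
        public static void main(String[] args) throws Exception {
            // assumes table A was created with the NOT LOGGED INITIALLY attribute
            Connection con = DriverManager.getConnection("jdbc:db2:MYDB", "user", "pwd");
            con.setAutoCommit(false);
            Statement st = con.createStatement();
            // activate for this unit of work only; row locks are still taken
            st.executeUpdate("ALTER TABLE A ACTIVATE NOT LOGGED INITIALLY");
            st.executeUpdate("INSERT INTO A (id, payload) SELECT id, payload FROM staging WITH UR");
            con.commit();   // the NOT LOGGED INITIALLY attribute ends with this commit
            st.close();
            con.close();
        }
    }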

    Volker Guest

  7. #7

    Re: Handling deadlock situation with massive inserts

    Thanks,

    I am afraid I will have to go with "fetch & commit every n rows". This
    is not what I had expected from the database, but it seems to be DB2's
    way of doing things.
    So I am starting performance comparisons now ...
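
    A rough sketch of that approach, under the same assumed names as
    above: read the source rows with UR, insert them in JDBC batches, and
    commit every n rows so the lock list never fills up. (The read cursor
    has to survive the intermediate commits, i.e. WITH HOLD behaviour;
    alternatively the select could run on a separate connection.)

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class FetchAndInsert {
        public static void main(String[] args) throws Exception {
            Connection con = DriverManager.getConnection("jdbc:db2:MYDB", "user", "pwd");
            con.setAutoCommit(false);
            int n = 10000;   // commit interval, to be tuned in the performance tests

            Statement sel = con.createStatement();
            ResultSet rs = sel.executeQuery(
                "SELECT id, payload FROM staging WHERE process_id = 42 WITH UR");
            PreparedStatement ins = con.prepareStatement(
                "INSERT INTO A (id, payload) VALUES (?, ?)");

            int count = 0;
            while (rs.next()) {
                ins.setLong(1, rs.getLong(1));
                ins.setString(2, rs.getString(2));
                ins.addBatch();
                if (++count % n == 0) {
                    ins.executeBatch();
                    con.commit();   // releases the row locks acquired so far
                }
            }
            ins.executeBatch();     // flush the remaining rows
            con.commit();

            rs.close();
            sel.close();
            ins.close();
            con.close();
        }
    }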


    Volker Guest
