Ralph Holz wrote:
No, by the time the transaction manager detects a deadlock situation, it
has already occurred. Just delaying one transaction will not affect the
deadlock situation. One transaction or the other must be backed out.
That's how every RDB works. Don't know how adding two more columns to
your test would affect it, but this has been a problem waiting to
happen. You're lucky it occurred before you went live.
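Since the database picks one transaction as the victim and backs it out, the application has to be prepared to retry it. A minimal sketch in Python (illustrative only: the `run_with_retry` helper is my own, and `sqlite3.OperationalError` stands in for whatever deadlock error your RDBMS driver actually raises):

```python
import sqlite3

def run_with_retry(txn, max_retries=3):
    """Run a transaction function, retrying if it gets backed out.

    txn: a callable that performs one complete transaction and either
    commits or raises. sqlite3.OperationalError stands in here for
    the deadlock/rollback error your driver reports.
    """
    for attempt in range(max_retries):
        try:
            return txn()
        except sqlite3.OperationalError:
            if attempt == max_retries - 1:
                raise  # give up after max_retries attempts

# Example: a transaction that is chosen as the deadlock victim once,
# then succeeds when retried.
calls = {"n": 0}

def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise sqlite3.OperationalError("deadlock victim, backed out")
    return "committed"

result = run_with_retry(flaky_txn)
```

The point is that a backed-out transaction is a normal event the application must handle, not a fatal error.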
That will work for now, but will increase overhead on the server and, as
your server gets more loaded, this will get much worse.
You need to fix your applications. I know it's tough to do just before
you go live. But would you rather rewrite after you've gone live?
COMMIT or ROLLBACK as soon as possible. That is, as soon as your
transaction is complete. Don't do multiple transactions before you
COMMIT or ROLLBACK.
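In Python's sqlite3, for example, that pattern looks like this (a sketch; the table and column names are made up). One logical unit of work, then an immediate COMMIT, or a ROLLBACK of the whole unit on failure:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
con.execute("INSERT INTO accounts (id, balance) VALUES (1, 100), (2, 50)")
con.commit()

# One logical unit of work: transfer 10 from account 1 to account 2.
try:
    con.execute("UPDATE accounts SET balance = balance - 10 WHERE id = 1")
    con.execute("UPDATE accounts SET balance = balance + 10 WHERE id = 2")
    con.commit()      # commit as soon as the transaction is complete
except sqlite3.Error:
    con.rollback()    # back out the whole unit of work on any failure

balances = dict(con.execute("SELECT id, balance FROM accounts ORDER BY id"))
```

The longer you hold a transaction open, the longer its locks are held and the bigger the window for deadlocks.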
When accessing multiple tables in the same transaction (but different
SQL statements), always access them in the same order. For instance, if
you need to update tablea and tableb, always update tablea first, then
tableb (or vice versa). Don't do tablea first in one program and tableb
first in another.
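The same rule applies to any kind of lock. A minimal Python sketch using plain threading locks to stand in for the locks on tablea and tableb (the names are illustrative): because both workers acquire the locks in the same order, neither can end up holding one lock while waiting for the other.

```python
import threading

lock_tablea = threading.Lock()
lock_tableb = threading.Lock()
results = []

def worker(name):
    # Both workers take the locks in the SAME order: tablea, then tableb.
    # If one worker took tableb first instead, each could end up holding
    # one lock while waiting forever for the other: a deadlock.
    with lock_tablea:
        with lock_tableb:
            results.append(name)

t1 = threading.Thread(target=worker, args=("worker1",))
t2 = threading.Thread(target=worker, args=("worker2",))
t1.start(); t2.start()
t1.join(); t2.join()
```

With a consistent ordering, both workers finish; the order they finish in doesn't matter.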
Do NOT use LOCK TABLE unless you're going to be updating the entire
table. I know someone in MySQL suggests using table level locks, but
quite frankly I doubt they have ever worked on even a mildly loaded
production system. I've been working with RDBs since the mid 80's, starting
with DB2. I've been involved with some systems which pushed 100K
access/s. (yes - per second, on mainframes). Anyone in any of these
systems who used a table lock to bypass a deadlock situation would be
without a job, even on much more lightly loaded systems. It really kills
concurrency.
Remove the "x" from my email address
JDS Computer Training Corp.