You typically get a "log file full" error message when a transaction has a
long commit scope. This causes the transaction to attempt to span more
log files than the sum of LOGPRIMARY + LOGSECOND.
You should look at 1) how often that transaction commits, and 2) if you
can't change that, then at LOGFILSIZ, LOGPRIMARY, and LOGSECOND.
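To see where those parameters currently stand, and to change them, you can
use the DB2 command line processor. A sketch, assuming a database named
SAMPLE (substitute your own database name; the values shown are
illustrative, not recommendations):

```shell
# Show the current log-related database configuration parameters
db2 get db cfg for SAMPLE | grep -i "LOG"

# Raise the log file size and counts (size these from your own
# workload's logging volume, not from this example)
db2 update db cfg for SAMPLE using LOGFILSIZ 4096
db2 update db cfg for SAMPLE using LOGPRIMARY 20 LOGSECOND 40
```

Changes to LOGFILSIZ and LOGPRIMARY generally take effect only after all
applications disconnect and the database is reactivated.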
With circular logging, all LOGPRIMARY log files are allocated when the db is
activated. When they are all used and the transaction has not committed,
logging can't wrap around, so DB2 starts to open the first secondary.
If the transaction then commits, that secondary is filled and circular
logging resumes using all the primaries and the secondary.
If the transaction does not commit, a second secondary opens. And so on....
If the transaction commit scope is too long, all the secondaries defined
in the db cfg will be allocated. As the db attempts to write in the
last one, it finds there are no more secondaries to allocate: you get
"log file full" and the db goes into crash restart.
Most circular-logging environments define LOGPRIMARY to satisfy normal
logging operations between two activations of the db, and LOGSECOND to
cover big-job requirements.
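When sizing them, it helps to remember that LOGFILSIZ is counted in 4 KB
pages, so the total active log space a single long transaction can use is
LOGFILSIZ * 4 KB * (LOGPRIMARY + LOGSECOND). A quick back-of-the-envelope
calculation (the parameter values are made up for illustration):

```shell
# Active log space available under circular logging:
#   LOGFILSIZ is in 4 KB pages, so
#   space = LOGFILSIZ * 4 KB * (LOGPRIMARY + LOGSECOND)
LOGFILSIZ=1000     # 4 KB pages per log file
LOGPRIMARY=10      # primaries, preallocated at activation
LOGSECOND=20       # secondaries, allocated on demand
TOTAL_KB=$(( LOGFILSIZ * 4 * (LOGPRIMARY + LOGSECOND) ))
echo "Total active log space: ${TOTAL_KB} KB ($(( TOTAL_KB / 1024 )) MB)"
# prints: Total active log space: 120000 KB (117 MB)
```

A single unit of work must fit inside that total; if it doesn't, you hit
"log file full" no matter how the space is split between primaries and
secondaries.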
Also, in V8 you can use infinite logging to get around this. From the
Administration Guide:
#####The maximum amount of log space that can be defined has increased
from 32 GB to 256 GB. This provides support for more concurrent
transactions and transactions that do more work.
Infinite active logging is also new in Version 8. It allows an active
unit of work to span the primary logs and archive logs, effectively
allowing a transaction to use an infinite number of log files. Without
infinite active log enabled, the log records for a unit of work must fit
in the primary log space. Infinite active log is enabled by setting
logsecond to -1. Infinite active log can be used to support environments
with large jobs that require more log space than you would normally
allocate to the primary logs.
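Enabling it is just a db cfg change. A sketch, again assuming a database
named SAMPLE; note that, as I understand it, infinite active logging in
V8 requires archival logging (a user exit) to be enabled, so it is not a
direct drop-in for a pure circular-logging setup:

```shell
# Infinite active logging: set LOGSECOND to -1
# (V8 expects archival logging, e.g. USEREXIT ON, for this to be accepted)
db2 update db cfg for SAMPLE using USEREXIT ON
db2 update db cfg for SAMPLE using LOGSECOND -1
```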
The block on log disk full function that was introduced in Version 7 is
now set using the database configuration parameter blk_log_dsk_ful in
Version 8. Block on log disk full allows you to specify that DB2 should
not fail applications when a disk-full condition occurs on the
active log path. When you enable this option, DB2 will retry every five
minutes, allowing you to resolve the full-disk situation and allowing the
applications to complete. ####
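That one is also just a db cfg switch, under the same SAMPLE database
assumption as above:

```shell
# Block applications (DB2 retries every 5 minutes) instead of failing
# them when the disk holding the active log path fills up
db2 update db cfg for SAMPLE using BLK_LOG_DSK_FUL YES
```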
Amit wrote:
> Hi, We have been running into some stability issues with DB2 database. Every
> 2-3 days we start getting transaction log full error messages.
> We use DB2 8.1 for our internal testing and we do create a _lot_ of
> tables/procedures every day (and drop them) and insert/update etc.
> We mostly use autocommit mode and its not an option to change all of those
> to transactions.
> We have tried to tweak many logfile related parameters but nothing has given
> considerable improvement. I can find more details if they will be helpful.
> I am wondering if someone has come across similar issues and has a
> Is there a way to automatically purge the logs or reduce the amount of
> logging. We do use circular logging and we don't really need any logs as
> this is development environment.
> Thanks in advance
> Amit Jindal, Development Engineer
> Powerful, yet easy. Superior C++ database development.
> [Disclaimer: All views expressed here are my personal
> views and they can be incorrect at times]