Michael wrote:

> Hi!
> I have a query that selects from several tables and inserts into a
> table X. There are two ways of writing this query: (A) fast but risky,
> (B) slow but certain.
> (A) Writing the whole thing as one bulk INSERT ... SELECT statement is
> the fastest way, but we commit only once at the end. If something goes
> wrong, nothing is stored in the new table, and on top of that we have
> to wait for the database to roll back its work.
> (B) Performing the same INSERT in a cursor loop and COMMITting after
> each INSERT is slower but more certain, because if something stops the
> query, it is possible to restart from where we stopped.
> What I am looking for is a solution in between that balances speed and
> risk, for example committing after every chunk of records. The reason
> is not space but time: while this query runs the database has to be
> taken offline, and the downtime must be minimal. With more frequent
> commits it is much simpler to forecast how long the INSERT will take,
> and we get better feedback on what is happening. Any ideas?
> I am using Oracle 8i and the INSERT varies from 100000 to 500000

If you can use the INSERT with the APPEND hint, then the rollback overhead is virtually nil - but have a read up on the implications of using this -
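As a sketch of that first option (the table and column names here are made up for illustration), a direct-path load looks like:

```sql
-- With the APPEND hint Oracle writes new blocks above the table's
-- high-water mark (a direct-path insert), generating minimal undo,
-- which is why a rollback costs almost nothing.
INSERT /*+ APPEND */ INTO x (id, payload)
SELECT s.id, s.payload
FROM   source_table s;     -- hypothetical source query
COMMIT;
```

The main implications to read up on: the insert takes an exclusive lock on the target table, the loaded data cannot be queried in the same session until you commit, and space above the high-water mark is used rather than any free space in existing blocks.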
otherwise, take a look at bulk binding / bulk collection with the LIMIT clause in the PL/SQL manual.
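A sketch of that middle-ground approach on 8i, committing once per chunk (again, all table and column names are illustrative, and the chunk size is an arbitrary choice to tune):

```sql
DECLARE
  CURSOR src_cur IS
    SELECT id, payload FROM source_table;   -- hypothetical source query
  TYPE id_tab  IS TABLE OF source_table.id%TYPE      INDEX BY BINARY_INTEGER;
  TYPE pay_tab IS TABLE OF source_table.payload%TYPE INDEX BY BINARY_INTEGER;
  v_ids  id_tab;
  v_pays pay_tab;
BEGIN
  OPEN src_cur;
  LOOP
    -- Fetch at most 10000 rows per round trip into the collections.
    FETCH src_cur BULK COLLECT INTO v_ids, v_pays LIMIT 10000;
    EXIT WHEN v_ids.COUNT = 0;
    -- Bulk-bind the whole chunk into one INSERT execution.
    FORALL i IN 1 .. v_ids.COUNT
      INSERT INTO x (id, payload) VALUES (v_ids(i), v_pays(i));
    COMMIT;   -- one commit per chunk: bounded rollback, visible progress
  END LOOP;
  CLOSE src_cur;
END;
/
```

Each committed chunk is a restart point, so after a failure you only lose the chunk in flight, and the elapsed time per chunk gives you the progress feedback you asked about.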
"Some days you're the pigeon, some days you're the statue"