Professional Web Applications Themes

huge deletion - Microsoft SQL / MS SQL Server


  1. #1

    Default huge deletion

    We have a VLDB, which has 10 tables that need to be purged on a daily
    basis (1 million records per table). This process HAS TO BE partly
    overlapped with regular working hours and it should be implemented as a
    daily job...
    How can we do this and at the same time avoid locking as much as possible?
    Users use these tables for selects and inserts. Another problem is that if
    we use the ROWLOCK hint for the DELETE process, SQL Server doesn't release
    locks quickly enough, so sometimes we get a maximum-locks error message
    and the job fails...
    Thanks,
    P


    Eddie Guest

  2. #2

    Default Re: huge deletion

    >> We have vldb, which has 10 tables that need to be purged on daily
    bases (1 million records [sic] per table). This process HAS TO BE partly
    overlapped with regular working hours and it should be implemented as a
    daily job... How can we do this and at the same time avoid locking as
    much as possible? <<

    I hate to tell you this, but you are not a VLDB yet; when you are doing
    several million rows per hour or have to worry about petabytes of
    storage on more than one physical unit, then you are getting to be a
    VLDB. Look up the term in the literature. Also, rows are not records.

    Your best bet is a database with optimistic concurrency control, which
    could mean switching to InterBase or Red Brick in particular. The
    generational concurrency control in InterBase lets you see a snapshot of
    the data at several points in time, so while changing data, you can be
    querying the immediately prior version simultaneously. The Red Brick
    model is a star schema with the fact table organized by time; new stuff
    is attached to one end of the physical file as old stuff is archived off
    the other end simultaneously.

    --CELKO--
    ===========================
    Please post DDL, so that people do not have to guess what the keys,
    constraints, Declarative Referential Integrity, datatypes, etc. in your
    schema are.

    *** Sent via Developersdex http://www.developersdex.com ***
    Don't just participate in USENET...get rewarded for it!
    Joe Guest

  3. #3

    Default Re: huge deletion

    Is TRUNCATE TABLE an option for you?

    --
    Keith

    "Eddie" <com> wrote in message news:#E#phx.gbl... 
    Keith Guest

  4. #4

    Default Re: huge deletion

    There are a number of things you don't mention about how the application
    works, so here are a few suggestions:

    Can you reorganize the tables so that they are (local) partitioned views
    partitioned on the key that requires daily maintenance? Partitioning is the
    generally recognized "best practice" for dealing with this exact scenario.
    You could then just use TRUNCATE TABLE to quickly remove all the rows in the
    underlying partition.
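
    For readers unfamiliar with SQL Server 2000 local partitioned views, here
    is a minimal sketch of the idea. The table, view, and column names
    (DailyData_20030101, DailyData, EventDate) are illustrative, not from the
    original poster's schema:

```sql
-- Hypothetical daily member tables; the CHECK constraint on the
-- partitioning column is what lets the optimizer route queries
-- (and the view's updatability rules route rows) to the right member.
CREATE TABLE DailyData_20030101 (
    RowID     INT      NOT NULL,
    EventDate DATETIME NOT NULL
        CHECK (EventDate >= '20030101' AND EventDate < '20030102'),
    Payload   VARCHAR(255) NULL,
    PRIMARY KEY (RowID, EventDate)
);

CREATE TABLE DailyData_20030102 (
    RowID     INT      NOT NULL,
    EventDate DATETIME NOT NULL
        CHECK (EventDate >= '20030102' AND EventDate < '20030103'),
    Payload   VARCHAR(255) NULL,
    PRIMARY KEY (RowID, EventDate)
);
GO

CREATE VIEW DailyData AS
    SELECT RowID, EventDate, Payload FROM DailyData_20030101
    UNION ALL
    SELECT RowID, EventDate, Payload FROM DailyData_20030102;
GO

-- The nightly purge is then a fast, minimally logged operation on the
-- expired member table instead of a row-by-row DELETE of a million rows:
TRUNCATE TABLE DailyData_20030101;
-- ...followed by ALTER VIEW to swap member tables for the new day.
```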

    Do you have to delete all the rows in a single transaction, or can you break
    it up into a series of small deletes so that there is minimal lock
    contention?

    Can you modify some/all of the queries to use DIRTY READ, so that they don't
    care about the locking that is going on during delete processing? Obviously
    you can only do this for queries that return acceptable results under DIRTY
    READ.
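
    As a concrete illustration of the dirty-read suggestion (the table and
    column names here are hypothetical):

```sql
-- Reading with the NOLOCK table hint, equivalent to READ UNCOMMITTED
-- for that table reference, so the query is neither blocked by nor
-- blocking the purge job.
SELECT COUNT(*)
FROM DailyData WITH (NOLOCK)
WHERE EventDate >= '20030101';

-- Or for every query in the session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
```

    Again, this is only appropriate for queries that can tolerate seeing
    uncommitted or partially deleted rows.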

    --
    Hal Berenson, SQL Server MVP
    True Mountain Group LLC


    "Eddie" <com> wrote in message
    news:%23E%phx.gbl... 


    Hal Guest

  5. #5

    Default Re: huge deletion

    No, because rows are removed based on time/date criteria. Except maybe if
    we create a partitioned view and then truncate the underlying tables...
    Process history (how many rows are deleted, when, ...) is stored in a
    separate table...
    Thanks,
    "Keith Kratochvil" <com> wrote in
    message news:uXLm%phx.gbl...
    Is TRUNCATE TABLE an option for you?

    --
    Keith

    "Eddie" <com> wrote in message
    news:#E#phx.gbl... 


    Eddie Guest

  6. #6

    Default Re: huge deletion

    >> And if Jim or Ann see this, I apologize for calling InterBase
    obsolete :-) <<

    Well, we know who is not getting a Christmas present from Jim and Ann
    this year. InterBase is still good for taking a constant data feed while
    doing queries against it. Commodity traders are still using it in
    Chicago to do near real-time analysis.

    Hey, if you want to do things with a moving time window and like getting
    IBM quality support (as opposed to MS quality support <g>), Red Brick is
    worth a look.

    One of the goodies that came with V6.2 is a Query Rewriter that handles
    queries involving COUNT DISTINCT, MIN, and MAX aggregations, provided
    that the columns containing these aggregations are present in the
    corresponding pre-computed views. Queries can also be rewritten against
    views that do not have these aggregations but are grouped on the input
    columns of the aggregations, and you can put complex expressions in the
    GROUP BY clause to be rewritten.

    --CELKO--
    Joe Guest

  7. #7

    Default Re: huge deletion

    A partitioned view is definitely worth trying...

    Thanks
    "Hal Berenson" <com> wrote in message
    news:phx.gbl... 


    Eddie Guest

  8. #8

    Default Re: huge deletion

    "Hal Berenson" <com> wrote in
    news:phx.gbl:
     

    My best luck with large deletes is to break them up into small chunks.
    It's more code, but depending on your hardware and the size of the
    delete, it will run faster.

    Your locking issues will be minimal. There are also advantages with log
    maintenance this way.
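
    A minimal sketch of the chunked-delete pattern on SQL Server 2000, using
    SET ROWCOUNT to cap each batch (the table name, batch size, and purge
    criterion are illustrative):

```sql
-- Delete in batches of 5000 rows so locks are acquired and released
-- per batch, lock escalation is far less likely, and the transaction
-- log can be backed up between batches.
SET ROWCOUNT 5000;

DECLARE @rows INT;
SET @rows = 1;

WHILE @rows > 0
BEGIN
    DELETE FROM DailyData
    WHERE EventDate < '20030101';   -- the daily purge criterion

    SET @rows = @@ROWCOUNT;
    -- Optionally: WAITFOR DELAY '00:00:01' to let user work through.
END

SET ROWCOUNT 0;   -- always reset, or later statements stay capped
```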

    Mark Guest
