
Slow DB - MySQL


  1. #1

    Slow DB

    Hi there

    I have a news publishing site. Newsitems (of course) are ordered by
    date.
    Each record has:
    id INT(11), publish_date DATETIME and a url VARCHAR(50) (unique)
    (and more fields, but not of importance now)
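    Roughly, as a sketch (the table name here is just made up for the
    example, and the other fields are left out):

    CREATE TABLE news (
    id INT(11) NOT NULL AUTO_INCREMENT,
    publish_date DATETIME NOT NULL,
    url VARCHAR(50) NOT NULL,
    -- ... more fields ...
    PRIMARY KEY (id),
    UNIQUE KEY (url)
    );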

    I loaded up the database with fake random data, so the dates were also
    inserted randomly (anywhere between 2000 and 2010).
    1.000.000+ records gave me extremely fast results when fetching by
    URL, but now, with only 100.000 records left, it takes 7+ seconds to
    get 5 records ordered by date.

    How can I optimize such a system? I tried EXPLAIN, but that didn't
    make sense to me ... :(

    Frizzle.

    frizzle Guest

  2. #2

    Re: Slow DB

    frizzle wrote:
    > Hi there
    >
    > I have a news publishing site. Newsitems (of course) are ordered by
    > date.
    <snip>
    > How can I optimize such a system? I tried EXPLAIN, but that didn't
    > make sense to me ... :(
    >
    > Frizzle.
    >
    EXPLAIN is going to be your best resource. You should read up on it in
    the MySQL manual.

    However, don't expect the same results with random data that you get
    with your real data. Data distribution will affect response times and
    EXPLAIN output.

    --
    ==================
    Remove the "x" from my email address
    Jerry Stuckle
    JDS Computer Training Corp.
    [email]jstucklexattglobal.net[/email]
    ==================
    Jerry Stuckle Guest

  3. #3

    Re: Slow DB

    Is there an index on the datetime field?

    The unique constraint on the url uses an index. Even if you did not
    explicitly put an index on it, MySQL has created one itself. For the
    date, MySQL probably has to use a filesort, which is slow.

    Use the EXPLAIN command. If it says "Using index", then it is probably
    very fast already. If it says "Using filesort", add the right index.

    You normally add only the indexes you need. So if you would never
    search on a date field (for a log that is only there for emergencies,
    for example), you would not put an index on it. Not having an index on
    a field makes inserting rows somewhat faster, but makes searches far
    slower.
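    For example (untested, and guessing at the table/column names from
    your first post):

    EXPLAIN SELECT id, url, publish_date
    FROM news
    ORDER BY publish_date DESC
    LIMIT 5;

    -- if the Extra column says "Using filesort", add the index:
    ALTER TABLE news ADD INDEX idx_publish_date (publish_date);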

    Jerry Stuckle wrote:
    <snip>
    > However, don't expect the same results with random data that you get
    > with your real data. Data distribution will affect response times and
    > EXPLAIN output.
    >
    Dikkie Dik Guest

  4. #4

    Re: Slow DB


    Dikkie Dik wrote:
    > Is there an index on the datetime field?
    <snip>
    > Use the EXPLAIN command. If it says "Using index", then it is probably
    > very fast already. If it says "Using filesort", add the right index.
    Dear Dikkie Dik (well, what do you know),

    It did return something with filesort, and I don't have any clue where
    to find my solution. The EXPLAIN didn't make me a lot wiser either ...
    :(

    Frizzle.

    frizzle Guest

  5. #5

    Re: Slow DB

    Does this help?
    [url]http://dev.mysql.com/doc/refman/4.1/en/mysql-indexes.html[/url]
    > It did return something with filesort, and I don't have any clue where
    > to find my solution. The EXPLAIN didn't make me a lot wiser either ...
    > :(

    Kopje,
    Dikkie
    Dikkie Dik Guest

  6. #6

    Re: Slow DB

    Dikkie Dik wrote:
    > Does this help?
    > [url]http://dev.mysql.com/doc/refman/4.1/en/mysql-indexes.html[/url]
    >
    > > It did return something with filesort, and I don't have any clue where
    > > to find my solution. The EXPLAIN didn't make me a lot wiser either ...
    > > :(
    >
    >
    > Kopje,
    > Dikkie
    I read the document, and I couldn't figure out anything that I'm doing
    wrong :(
    (Could it matter that I set the index *after* inserting the rows?)

    The exact query is the following:

    SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    COUNT(f.`id`) AS 'fulltext',
    COUNT(c.`id`) AS 'comments'
    FROM `ne_topics` n
    LEFT JOIN `ne_fulltext` f
    ON f.`id` = n.`id`
    LEFT JOIN `ne_comments` c
    ON c.`topic_id` = n.`id`
    GROUP BY n.`id`
    ORDER BY n.`published` DESC
    LIMIT 5


    EXPLAIN returns:

    table  type   possible_keys  key   key_len  ref   rows    Extra
    n      ALL    NULL           NULL  NULL     NULL  100000  Using temporary; Using filesort
    f      index  id             id    4        NULL  1       Using index
    c      ALL    topic_id       NULL  NULL     NULL  1

    frizzle Guest

  7. #7

    Re: Slow DB

    <snip>
    > The exact query is the following:
    >
    > SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    > COUNT(f.`id`) AS 'fulltext',
    > COUNT(c.`id`) AS 'comments'
    > FROM `ne_topics` n
    > LEFT JOIN `ne_fulltext` f
    > ON f.`id` = n.`id`
    > LEFT JOIN `ne_comments` c
    > ON c.`topic_id` = n.`id`
    > GROUP BY n.`id`
    > ORDER BY n.`published` DESC
    > LIMIT 5
    >
    >
    > EXPLAIN returns:
    >
    > table  type   possible_keys  key   key_len  ref   rows    Extra
    > n      ALL    NULL           NULL  NULL     NULL  100000  Using temporary; Using filesort
    > f      index  id             id    4        NULL  1       Using index
    > c      ALL    topic_id       NULL  NULL     NULL  1
    Let me explain "explain":
    The first row says that table "n" (ne_topics) has to be searched
    entirely because there is no index that can be used
    (possible_keys=NULL). So it has to read the entire table in memory
    somehow, sort it, and return the 5 rows that come out on top of that
    action. If there was an index on ne_topics.published, the top 5 could
    just be taken from that index, the corresponding rows looked up and the
    joins could be made. As you can see, the possible_keys for the last two
    rows are not NULL, so the joins can be made quickly.

    Look up the CREATE INDEX command to create an index on
    ne_topics.published and see if the performance gets better.
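    For example (untested; the index name is up to you):

    CREATE INDEX idx_published ON ne_topics (published);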

    Kopje,
    Dikkie.
    Dikkie Dik Guest

  8. #8

    Re: Slow DB


    Dikkie Dik wrote:
    <snip>
    > Look up the CREATE INDEX command to create an index on
    > ne_topics.published and see if the performance gets better.
    >
    > Kopje,
    > Dikkie.
    Good heavens,

    It appeared the slow DB wasn't caused by the actual ne_topics table,
    but by the joins in the query.
    Which I don't get, because the two other tables had a maximum of 1
    record in them.
    I've read up on "JOIN" etc., but I wouldn't know *why* it makes it so
    incredibly slow.
    If I leave the joins out, the result appears (almost) instantly ...

    Frizzle.

    frizzle Guest

  9. #9

    Re: Slow DB

    Hi!
    >>>SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    >>> COUNT(f.`id`) AS 'fulltext',
    >>> COUNT(c.`id`) AS 'comments'
    >>>FROM `ne_topics` n
    >>>LEFT JOIN `ne_fulltext` f
    >>>ON f.`id` = n.`id`
    >>>LEFT JOIN `ne_comments` c
    >>>ON c.`topic_id` = n.`id`
    >>>GROUP BY n.`id`
    >>>ORDER BY n.`published` DESC
    >>>LIMIT 5
    >>>
    >>>EXPLAIN returns:
    >>>
    >>> table  type   possible_keys  key   key_len  ref   rows    Extra
    >>> n      ALL    NULL           NULL  NULL     NULL  100000  Using temporary; Using filesort
    >>> f      index  id             id    4        NULL  1       Using index
    >>> c      ALL    topic_id       NULL  NULL     NULL  1
    [...]
    > It appeared the slow DB wasn't caused by the actual ne_topics table,
    > but by the joins in the query.
    > Which I don't get, because the two other tables had a maximum of 1
    > record in them.
    You are doing a left join with ne_topics being left. A LEFT JOIN means
    you want *all* rows from the table on the left. Then the database
    matches ne_fulltext to those rows where ne_topics.id=ne_fulltext.id.
    For those rows that do not have a match, the right side is set to null.
    Because you do not specify any WHERE condition that would reduce the
    number of rows returned from ne_topics, this will always result in all
    ne_topics entries being returned, no matter which indices you create.
    Don't be fooled by the LIMIT 5 clause. This only applies *after*
    everything has been done. It limits only the amount of data that is
    transferred to the client; it does not limit the number of rows that
    have to be evaluated.

    I suggest you try a different approach (the following is untested):

    SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    COUNT(f.`id`) AS 'fulltext',
    COUNT(c.`id`) AS 'comments'
    FROM ne_topics n
    INNER JOIN ne_fulltext f on n.id=f.id
    INNER JOIN ne_comments c on c.topic_id=n.id
    WHERE n.published > {current-time minus some sensible range}
    GROUP BY n.id
    ORDER BY n.published DESC
    LIMIT 5

    Depending on how many matches are found, this should already reduce
    the number of rows. As you do not, however, say exactly what result
    you need for your site, I can only guess.
    Because you limit to the first 5 rows after sorting by the publishing
    date you should definitely consider using a condition on that date. I
    suggested that in the statement above. You have to decide based on the
    frequency of publications. Maybe an hour is enough, maybe a day.
    Something that would usually result in at least 5 rows.
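    For example (just a guess at a sensible range; adjust the interval to
    your publication frequency):

    WHERE n.published > NOW() - INTERVAL 1 DAY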

    To make sure you get exactly 5 all the time you could use something like
    SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    COUNT(f.`id`) AS 'fulltext',
    COUNT(c.`id`) AS 'comments'
    FROM ne_topics n
    INNER JOIN ne_fulltext f on n.id=f.id
    INNER JOIN ne_comments c on c.topic_id=n.id
    WHERE n.id in (select id from ne_topics order by published desc limit 5)
    ORDER BY n.published DESC
    LIMIT 5

    I have not tried this one either, but as long as you have an index on
    ne_topics.published the subselect should return the 5 most recent ids
    very quickly and then only do the join for those.

    I leave the details to you, including the syntax errors and reading the
    chapter about indices and joins more thoroughly :)

    Daniel
    Daniel Schneller Guest

  10. #10

    Re: Slow DB


    Daniel Schneller wrote:
    > Hi!
    <snip>
    > I leave the details to you, including the syntax errors and reading the
    > chapter about indices and joins more thoroughly :)
    >
    > Daniel
    Well, thanks Daniel for your reply.
    I get the idea. I have no time now, but I understand it would be best
    to first get the IDs, and then get the info & joins according to them,
    so I would only have to perform (in this case) 5 JOINs.
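    I.e. something like this, if I understand it right (untested; the ids
    in the IN list are just placeholders for what the first query
    returns):

    SELECT id FROM ne_topics ORDER BY published DESC LIMIT 5;

    -- then, with those five ids filled in:
    SELECT n.`id`, n.`published`, n.`title`, n.`url`, n.`text`,
    COUNT(f.`id`) AS 'fulltext',
    COUNT(c.`id`) AS 'comments'
    FROM `ne_topics` n
    LEFT JOIN `ne_fulltext` f ON f.`id` = n.`id`
    LEFT JOIN `ne_comments` c ON c.`topic_id` = n.`id`
    WHERE n.`id` IN (1, 2, 3, 4, 5)
    GROUP BY n.`id`
    ORDER BY n.`published` DESC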

    My condition, btw (if I understand what you mean there), was:
    WHERE n.`published` <= NOW()

    I will try this tomorrow.
    Thanks a lot.

    Frizzle.

    frizzle Guest

  11. #11

    Re: Slow DB

    frizzle wrote:
    > Daniel Schneller wrote:
    <snip>
    > I get the idea. I have no time now, but I understand it would be best
    > to first get the IDs, and then get the info & joins according to them,
    > so I would only have to perform (in this case) 5 JOINs.
    <snip>
    > Frizzle.
    Even without any PHP parsing / HTML, with only 1 JOIN (which should
    eventually become three, because it should also get the user's name
    ...), it takes me more than half a second to load, and too often a
    whole second (or more). Would it be wise to keep the dates in a
    separate table, or maybe just the body texts of the messages in
    another table?

    Because this really takes too long ... :( :(

    Frizzle.

    frizzle Guest
