Discussion:
Very Slow delete times on larger databases, please help!
Allan, Mark
2005-10-18 08:35:26 UTC
Have you been able to investigate this yet? Any ideas or recommendations? I sent the analyzer output to drh-***@public.gmane.org as it was too large to post here.


-----Original Message-----
From: drh-***@public.gmane.org [mailto:drh-***@public.gmane.org]
Sent: 12 October 2005 14:58
To: sqlite-users-CzDROfG0BjIdnm+***@public.gmane.org
Subject: Re: [sqlite] Very Slow delete times on larger databases, please help!
Post by Allan, Mark
We are experiencing incredibly slow delete times when deleting a large number of rows.
We are using SQLite on an embedded platform with an ARM7 processor, 2MB RAM and 32MB NOR flash memory, 25MB of which is allocated as storage for our database.
What operating system?

Can you send the output of sqlite3_analyzer run against your
database file prior to doing the delete?

Have you tried upgrading to a later version of SQLite?

--
D. Richard Hipp <drh-***@public.gmane.org>




d***@public.gmane.org
2005-10-18 18:05:53 UTC
Post by Allan, Mark
Have you been able to investigate this yet?
I have investigated and I found nothing wrong. I am unable
to reproduce the problem.
--
D. Richard Hipp <***@hwaci.com>
Allan, Mark
2005-10-19 08:56:58 UTC
We are able to compile our application for both target and host. When compiled for host, the application runs on Win32 and will create/read/write a database file on the host PC.

The deletions on Win32 do not take the 11 minutes I specified; this is only a problem on our target. The same operation on the PC takes only 3-4 seconds. Please note that since my first email we have increased the SQLite page size from 1024 bytes to 8192 bytes. This has decreased the time to process the same delete operation from 11 minutes to 3.75 minutes on our target hardware. Both host and target versions of the software use the same configuration: a page size of 8192 bytes and a cache of 75 pages (600KB).
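Expressed as SQLite pragmas, that configuration would look roughly like this (a sketch; the values come from the figures above, and note that on SQLite versions of this era the page size only applies to a newly created database file):

    PRAGMA page_size = 8192;  -- must be set before the file is created;
                              -- later SQLite versions can apply it to an
                              -- existing file via VACUUM
    PRAGMA cache_size = 75;   -- 75 pages x 8192 bytes = 600KB of cache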

The only real differences are 1) the hardware, 2) the file system. We would expect a difference in performance, as the PC is much faster than our target hardware and the write speed to NOR flash is comparatively slow, but not so drastic a drop-off as we see: 3-4 seconds -> 3-4 minutes.

It may be useful for us to know what SQLite is doing during deletion so that we can try and optimise our code and/or the configuration of SQLite, our filesystem code or the hardware to get this figure down. Can anyone give me a reasonably detailed description of what happens during a delete? The documentation on the website has not helped us diagnose where our problem lies.
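(For anyone following along: the VDBE program SQLite runs for a statement can be dumped by prefixing it with EXPLAIN. The WHERE clause below is illustrative only; the real predicate is not shown in the thread:)

    EXPLAIN DELETE FROM EXAMINATIONS WHERE STATUS_FLAG = 1;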

Best Regards

Mark



John Stanton
2005-10-19 14:04:03 UTC
This may or may not help, depending upon your schema. We find in general that index maintenance is the major overhead of insertions and deletions, and there is a point where it is cheaper to drop the indices you are not using and rebuild them afterwards than to incur the high overhead of repeated B-tree rebalancing. The drop and rebuild has the added benefit of ending up with an optimally organized index, free from fragmentation.

The cost of insertions and deletions in an index is not linear with
index size.

I haven't looked at the SQLite B-tree algorithms, so I can only suggest this as an experiment. If SQLite uses some form of B-tree optimization, the overhead of insertions and deletions is greater and the drop and rebuild is more likely to be an improvement.
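In outline, the experiment looks like this (a sketch only; the index, table and column names are placeholders, not taken from your schema):

    BEGIN;
    DROP INDEX some_rarely_used_index;   -- hypothetical name
    -- ... perform the bulk DELETE here ...
    CREATE INDEX some_rarely_used_index ON some_table(some_column);
    COMMIT;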
JS
Christian Smith
2005-10-19 16:47:48 UTC
Post by Allan, Mark
[...] The only real differences are 1) the hardware, 2) the file system. We would expect a difference in performance, as the PC is much faster than our target hardware and the write speed to NOR flash is comparatively slow, but not so drastic a drop-off as we see: 3-4 seconds -> 3-4 minutes.
From the VDBE output you originally posted, you are doing a fair amount of
work for each deleted row:
- Index search to find the next row from EXAMINATIONS to delete
- Removing the row from 3 indexes on EXAMINATIONS
- (trigger) Remove related row in SPIRO_TEST from 2 indexes on SPIRO_TEST
- (trigger) Remove related row in SPIRO_TEST
- Remove the row from EXAMINATIONS
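(The schema itself is not shown in the thread, but that work list implies a cleanup trigger along these lines; the EXAM_ID column is an assumption:)

    -- hypothetical reconstruction of the trigger implied above
    CREATE TRIGGER delete_exam_tests AFTER DELETE ON EXAMINATIONS
    BEGIN
      -- each SPIRO_TEST row removed here is also taken out of its 2 indexes
      DELETE FROM SPIRO_TEST WHERE EXAM_ID = OLD.EXAM_ID;
    END;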

Check your cache size. If the above work is causing the 75 page entry cache to thrash, you're likely to hit worst case performance, as the thrashing pages may be accessed in a cyclical fashion. I'm not sure how likely it is that your page cache is too small, though. How big is a row of data, typically?

Also, a 60x slowdown is not unexpected. The PC version, while probably having the same SQLite page cache size in the SQLite app itself, will most likely be reading and writing to the OS's cache at memory-to-memory copy speed most of the time, with synchronous writes done only when needed. The embedded platform you're using probably writes straight to FLASH, which is necessarily a synchronous operation if your OS doesn't have a cache between your app and the FLASH FS. While flash writes are low latency, they are also low bandwidth, and won't be within an order of magnitude of the performance of a desktop PC write to the OS filesystem cache.
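(If the embedded OS does honour sync requests, one experiment is to relax SQLite's sync behaviour; a sketch, and note it trades durability on power loss for speed:)

    PRAGMA synchronous = NORMAL;  -- or OFF to skip syncing entirely: faster
                                  -- writes, but risks corruption on power loss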

Finally, you give no indication of the actual CPU speed of the embedded platform. It's quite reasonable to assume a development PC could be an order of magnitude faster in sheer integer throughput. I'm amazed how slow my 50MHz microSPARC based SPARCclassic is. Such a platform would not be much, if at all, slower than a modern embedded platform, and has the benefit of gobs of RAM, but still runs the same code at least two orders of magnitude slower than my Athlon XP 1700 based desktop. You have to keep your performance expectations realistic. You are, after all, running a complete, ACID-transactional, SQL relational database.
Post by Allan, Mark
[...] Can anyone give me a reasonably detailed description of what is happening during a delete?
Others have indicated that dropping indexes might help when deleting or
inserting records. However, have you tried simply not having indexes at
all? Would that cause unacceptable slowdown? Perhaps, for the demo query
from the original post, just keep the DATE index on EXAMINATIONS, and use
full table scans for queries based on EXAM_TYPE and STATUS_FLAG. Truth is,
given the small number of EXAM_TYPE and STATUS_FLAG values (I presume),
you're as well just doing table scans when looking for specific exam types
and statuses. Indexes only really help when you have a large variation in
values with few collisions. Doing this will leave a single index update in
addition to the actual row removals, which should improve performance.
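Concretely, something like the following (the index names here are assumptions; only the column names appear in the thread):

    DROP INDEX exam_type_index;    -- hypothetical name
    DROP INDEX status_flag_index;  -- hypothetical name
    -- EXAM_TYPE/STATUS_FLAG lookups become full table scans, which is
    -- cheap for low-cardinality columns; keep only the DATE index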
Christian
Allan, Mark
2005-10-21 15:49:11 UTC
Thanks to both Christian Smith and John Stanton for your posts.
Post by Christian Smith
Check your cache size. If the above work is causing the 75 page entry cache to thrash, you're likely to hit worst case performance. How big is a row of data, typically?
Our cache size is 75 pages of 8192 bytes = 600KB.

The maximum size of an EXAMINATIONS record is about 500 bytes, 450 bytes of which is a varchar field. In the test example the average size of an EXAMINATIONS record is 60 bytes, as not much text is saved.

The maximum size of a SPIRO_TEST record is about 5KB; this will vary from test to test, but for the test example the size of each SPIRO_TEST record is fixed at approx 1KB.

Based on this I don't think that we should be thrashing the cache. I am however unsure how SQLite works here.
Post by Christian Smith
Finally, you give no indication of the actual CPU speed of the embedded platform. [...] You have to keep your performance expectations realistic. You are, after all, running a complete, ACID-transactional, SQL relational database.
The maximum CPU speed of our ARM7 chip is 71MHz.
Post by Christian Smith
Others have indicated that dropping indexes might help when deleting or inserting records. However, have you tried simply not having indexes at all? Perhaps, for the demo query from the original post, just keep the DATE index on EXAMINATIONS, and use full table scans for queries based on EXAM_TYPE and STATUS_FLAG.
I have tried permanently dropping the indexes on EXAM_TYPE and STATUS_FLAG, and this gives some improvement in time. Indeed, it does seem that the STATUS_FLAG index is worthless, and in the initial version of the software we will have only 1 EXAM_TYPE (although this will increase with each module we release over the next few months).

I have also tried the suggested method of dropping the EXAM_PATIENT_ID_INDEX index on the EXAMINATIONS table before the delete and rebuilding it on completion. I cannot drop the remaining indexes, as they are used during the delete operation and removing them slows the whole operation down.
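(In outline, that drop-and-rebuild sequence looks like this; the PATIENT_ID column and the WHERE clause are illustrative, as neither appears in the thread:)

    BEGIN;
    DROP INDEX EXAM_PATIENT_ID_INDEX;
    DELETE FROM EXAMINATIONS WHERE STATUS_FLAG = 1;  -- illustrative predicate
    CREATE INDEX EXAM_PATIENT_ID_INDEX ON EXAMINATIONS(PATIENT_ID);
    COMMIT;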

The latest changes have reduced the time to delete the same number of records from 3:45 minutes to 2:53 minutes. Still a long time to wait, but any time saving is welcome, especially as the test is for a 50% full scenario, so at 99% full we can expect it to take around 6 minutes.

Thanks again for your help.

If there are any other ideas on how we can optimise this further then please let me know.


Mark


R S
2005-10-26 17:43:55 UTC
In my case the delete happens reasonably quickly, but why does vacuuming take so incredibly long?
Brett Wilson
2005-10-26 18:22:08 UTC
Vacuuming is just slow. I don't think there is much you can do, except avoid it unless you really need it, and don't turn on autovacuum.
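(A sketch of checking and disabling it; note that auto_vacuum only takes effect if set before the first table is created, or, on later versions, after a full VACUUM:)

    PRAGMA auto_vacuum;      -- 1 = on, 0 = off
    PRAGMA auto_vacuum = 0;  -- avoid page reclamation work on every commit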

Brett
Allan, Mark
2005-10-27 08:37:27 UTC
Yes, we found this out too. We never vacuum the file; it is acceptable for it to just get larger, not smaller. We generate an indication of the current database capacity not from the file size but from the total number of pages in the database file minus the number of free pages.

So these times are not affected by vacuum.
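(A sketch of that calculation; PRAGMA page_count and PRAGMA freelist_count come from SQLite releases later than those discussed here, where the same counters have to be read from the file header or from sqlite3_analyzer output:)

    PRAGMA page_count;       -- total pages in the database file
    PRAGMA freelist_count;   -- unused pages on the free list
    -- space in use = (page_count - freelist_count) * page_size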

Thanks

Mark