Discussion:
"Database is locked" error in PHP via PDO despite setting timeout
Mark Robson
2006-03-19 19:15:10 UTC
Hi all,

I'm using SQLite 3 from PHP via PDO.

My application was working fine as long as there was only one process
accessing the database, then I ran two instances at once. Now one of the
processes is getting

"Error message: SQLSTATE[HY000]: General error: 5 database is locked"

when trying to execute a statement which modifies the database.

I understand that this is liable to happen, unless a timeout is set via
sqlite3_busy_timeout().

I have set this timeout using $db->setAttribute(PDO::ATTR_TIMEOUT, 5.0),
but it has made absolutely no difference to the behaviour.

It's certainly not waiting 5 seconds before giving me this error, and it's
happening with exactly the same frequency as before. It's as if the timeout
is being ignored.

I've stepped through PHP in the debugger (gdb) and it's definitely calling
sqlite3_busy_timeout with the appropriate parameters (5000 ms).
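For what it's worth, the one situation where SQLite deliberately bypasses the busy handler (per the sqlite3_busy_handler documentation) is a deferred transaction trying to upgrade its read lock to a write lock while another connection holds the write lock: waiting could never succeed, so "database is locked" comes back regardless of the timeout. The effect can be reproduced outside PHP; here is a sketch using Python's stdlib sqlite3 module (not your code, just an illustration of the same engine behaviour):

```python
import os
import sqlite3
import tempfile

# Two connections to the same database file, both with a generous timeout.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, timeout=5.0, isolation_level=None)
b = sqlite3.connect(path, timeout=5.0, isolation_level=None)
a.execute("CREATE TABLE t (x INTEGER)")

a.execute("BEGIN")                              # deferred: no lock taken yet
a.execute("SELECT COUNT(*) FROM t").fetchone()  # a now holds a SHARED (read) lock
b.execute("BEGIN IMMEDIATE")                    # b takes the RESERVED (write) lock
b.execute("INSERT INTO t VALUES (1)")

caught = None
try:
    # a tries to upgrade SHARED -> RESERVED while b holds RESERVED.
    # b cannot finish until a releases its read lock, so waiting here
    # would deadlock, and the busy timeout is not honoured.
    a.execute("INSERT INTO t VALUES (2)")
except sqlite3.OperationalError as e:
    caught = str(e)
print(caught)  # database is locked
```

If that matches your pattern (read first, write later inside one transaction), starting the transaction with BEGIN IMMEDIATE instead of a plain BEGIN declares the write intent up front, and the busy timeout then applies.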

What else can I do to prevent this?

If the answer is "nothing", I'm going straight over to MySQL :)

Mark
d***@public.gmane.org
2006-03-20 11:47:14 UTC
Post by Mark Robson
If the answer is "nothing", I'm going straight over to MySQL :)
The advantages of SQLite are that there are no administrative
hassles - there is nothing to set up or configure and the
database is contained in a single disk file that you can copy
to a flash drive or something. Client/server database engines
like MySQL normally default to READ COMMITTED isolation, which
means you never have database locking problems, but at the expense
of considerable setup and configuration complexity.

It sounds to me like you are more interested in READ COMMITTED
isolation and do not mind the added complexity, in which case
you should be using a client/server database, such as MySQL.

BTW: Lots of people have multiple processes writing to the same
SQLite database without problems - the SQLite website is a good
example. I do not know what you are doing wrong to get the
locking problems you are experiencing.
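One pattern that reliably lets multiple writers share a database is to acquire the write lock at the start of the transaction (BEGIN IMMEDIATE), so the busy timeout can queue the writers instead of failing them. A sketch with Python's sqlite3 module (illustrative only; table and names are made up):

```python
import os
import sqlite3
import tempfile
import threading

path = os.path.join(tempfile.mkdtemp(), "writers.db")
init = sqlite3.connect(path, isolation_level=None)
init.execute("CREATE TABLE log (worker INTEGER, n INTEGER)")
init.close()

def worker(wid, rounds=25):
    # One connection per writer, each with a busy timeout.
    db = sqlite3.connect(path, timeout=10.0, isolation_level=None)
    for n in range(rounds):
        db.execute("BEGIN IMMEDIATE")  # take the write lock up front
        db.execute("INSERT INTO log VALUES (?, ?)", (wid, n))
        db.execute("COMMIT")
    db.close()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

total = sqlite3.connect(path).execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(total)  # 100: all four writers serialised cleanly
```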

--
D. Richard Hipp <***@hwaci.com>
Roger
2006-03-20 11:51:06 UTC
What I normally do in this scenario is just a simple
httpd service restart.

That normally does the trick. I am also building an application
with PHP/SQLite.
Post by d***@public.gmane.org
Post by Mark Robson
If the answer is "nothing", I'm going straight over to MySQL :)
The advantages of SQLite are that there are no administrative
hassles - there is nothing to set up or configure and the
database is contained in a single disk file that you can copy
to a flash drive or something. Client/server database engines
like MySQL normally default to READ COMMITTED isolation, which
means you never have database locking problems, but at the expense
of considerable setup and configuration complexity.
It sounds to me like you are more interested in READ COMMITTED
isolation and do not mind the added complexity, in which case
you should be using a client/server database, such as MySQL.
BTW: Lots of people have multiple processes writing to the same
SQLite database without problems - the SQLite website is a good
example. I do not know what you are doing wrong to get the
locking problems you are experiencing.
--
Mark Robson
2006-03-20 19:28:10 UTC
Post by d***@public.gmane.org
BTW: Lots of people have multiple processes writing to the same
SQLite database without problems - the SQLite website is a good
example. I do not know what you are doing wrong to get the
locking problems you are experiencing.
I don't know how they manage it (unless of course, many of their writes fail
and the transactions roll back, and they don't notice or care).
Post by Roger
I am developing a web based application in PHP/Sqlite and i am forever
getting that error. What i normally do is a simple
service httpd restart.
This is no good. I'm creating a daemon-based server application, which is
carrying out autonomous tasks. It does not currently run under httpd, and I
have no plans to make it do so.

I have several processes which are carrying out a fair amount of work inside a
transaction - doing several writes, then doing some other time-consuming
operations, then, provided everything goes OK, committing the transaction.

This means that there are some relatively long-lived transactions (several
seconds, anyway) in progress.

However, with proper locking this should NOT cause a problem - it should
simply serialise the transactional operations (or so I thought).
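Serialisation can also be enforced at the application level with a retry loop around the whole transaction. This is a hedged sketch in Python's sqlite3 module rather than PHP, and it assumes the transaction body can safely be re-run from the top after a rollback:

```python
import random
import sqlite3
import time

def run_in_transaction(db, work, retries=10):
    """Run work(db) inside an immediate transaction, retrying on
    'database is locked' with a small randomised back-off."""
    for attempt in range(retries):
        try:
            db.execute("BEGIN IMMEDIATE")  # declare write intent up front
            work(db)
            db.execute("COMMIT")
            return True
        except sqlite3.OperationalError as e:
            if "locked" not in str(e):
                raise
            if db.in_transaction:          # BEGIN may itself have failed
                db.execute("ROLLBACK")
            time.sleep(random.uniform(0.01, 0.05) * (attempt + 1))
    return False

db = sqlite3.connect(":memory:", isolation_level=None)
db.execute("CREATE TABLE t (x INTEGER)")
ok = run_in_transaction(db, lambda d: d.execute("INSERT INTO t VALUES (1)"))
print(ok)  # True
```

The back-off with jitter matters: two contending writers retrying on a fixed schedule can keep colliding indefinitely.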

As it is, I've actually tried to port this to MySQL (using MySQL 5 and InnoDB),
but I'm getting some problems there too - I think I'll have to review my use
of transactions etc.

Regards
Mark
John Stanton
2006-03-21 08:01:39 UTC
If you want to use a lightweight DB like SQLite and you are setting up
your own daemon and server, you can put the DB synchronisation in the
daemon, wrapping the SQLite calls so that access is single-streamed. In a
similar situation we have installations which manage many hundreds of
simultaneous users.

If you don't want to do that, use a DBMS like PostgreSQL, which manages
it all for you by running a DB server rather than linking the DB engine
into the application.
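A minimal sketch of that single-streaming idea, again in Python's sqlite3 module for illustration (class and table names are made up): all writes are funnelled through one thread that owns the only write connection, so the engine never sees two concurrent writers.

```python
import os
import queue
import sqlite3
import tempfile
import threading

class SingleWriter:
    """Owns the only write connection; callers enqueue their statements."""

    def __init__(self, path):
        self.jobs = queue.Queue()
        self.thread = threading.Thread(target=self._loop, args=(path,), daemon=True)
        self.thread.start()

    def _loop(self, path):
        db = sqlite3.connect(path, isolation_level=None)
        while True:
            job = self.jobs.get()
            if job is None:              # shutdown sentinel
                db.close()
                return
            sql, params, done = job
            db.execute("BEGIN IMMEDIATE")
            db.execute(sql, params)
            db.execute("COMMIT")
            done.set()                   # tell the caller the write landed

    def write(self, sql, params=()):
        done = threading.Event()
        self.jobs.put((sql, params, done))
        done.wait()

    def close(self):
        self.jobs.put(None)
        self.thread.join()

path = os.path.join(tempfile.mkdtemp(), "daemon.db")
sqlite3.connect(path, isolation_level=None).execute("CREATE TABLE jobs (id INTEGER)")
w = SingleWriter(path)
for i in range(10):
    w.write("INSERT INTO jobs VALUES (?)", (i,))
w.close()
total = sqlite3.connect(path).execute("SELECT COUNT(*) FROM jobs").fetchone()[0]
print(total)  # 10
```

Readers can keep their own connections; only the writes need to go through the queue.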
Post by Mark Robson
Post by d***@public.gmane.org
BTW: Lots of people have multiple processes writing to the same
SQLite database without problems - the SQLite website is a good
example. I do not know what you are doing wrong to get the
locking problems you are experiencing.
I don't know how they manage it (unless of course, many of their writes fail
and the txns roll back, and they don't notice or care).
Post by Roger
I am developing a web based application in PHP/Sqlite and i am forever
getting that error. What i normally do is a simple
service httpd restart.
This is no good. I'm creating a daemon-based server application, which is
carrying out autonomous tasks. It does not currently run under httpd, and I
have no plans to make it do so.
I have several processes which are carrying out a fair amount of work inside a
transaction - doing several writes, then doing some other time-consuming
operations, then providing everything goes OK, committing these transactions.
This means that there are some relatively long-lived transactions (several
seconds, anyway) in progress.
However, with proper locking this should NOT cause a problem - it should
simply serialise the transactional operations (or so I thought).
As it is, I've actually tried to port this to MySQL (using Mysql5 and InnoDB),
but I'm getting some problems there too - I think I'll have to review my use
of transactions etc.
Regards
Mark