Discussion:
[sqlite] Concurrency, MVCC
Andrew Piskorski
2004-04-14 05:16:54 UTC
How feasible would it be to add support for higher concurrency to
SQLite, especially via MVCC? (If it turns out to be feasible and
desirable, I'd be willing to work on it in my free time.)

What I would really like in SQLite would be:
- Good concurrency, preferably similar to or better than that of nsv/tsv
(see below).
- MVCC (multi-version concurrency control) model preferred, as in
PostgreSQL and Oracle.

How feasible is that? What designs have been or should be considered
for it? How much work does this seem to be? Does it become any
easier if it's considered for in-memory databases only, rather than
both in-memory and on-disk?

After some searching around I found these previous proposals or
discussions of increasing concurrency in SQLite:

http://www.sqlite.org/concurrency.html

http://www.sqlite.org/cvstrac/wiki?p=BlueSky
http://www.sqlite.org/cvstrac/attach_get/100/shadow.pdf
http://citeseer.ist.psu.edu/131901.html

http://www.mail-archive.com/cgi-bin/htsearch?config=sqlite-users_sqlite_org&restrict=&exclude=&words=concurrency

Many of the suggestions in Dr. Hipp's 2003-11-22 concurrency draft
sound very useful, especially blocking (rather than polling) locks.
(In fact, I was surprised to learn that SQLite does NOT block on locks
now.) But, that draft never mentions MVCC at all. Why not? Since
MVCC both gives better concurrency and is friendlier to use than
pessimistic locking models, I was surprised not to at least see it
mentioned. Is an MVCC implementation thought to be too complicated?
Or?

Doug Currie's <doug.currie-FrUbXkNCsVf2fBVCVOL8/***@public.gmane.org> "Shadow Paging" design sounds
promising. Unfortunately, I have not been able to download the
referenced papers at all (where can I get them?), but as far as I can
tell, it seems to be describing a system with the usual
Oracle/PostgreSQL MVCC semantics, EXCEPT of course that Currie
proposes that each Write transaction must take a lock on the database
as a whole.

But, other than the locking granularity, how is Currie's Shadow
Paging design the same as or different from PostgreSQL's MVCC
implementation, both in terms of user-visible transaction semantics,
and the underlying implementation?

I believe PostgreSQL basically marks each row with a transaction id,
and keeps track of whether each transaction id is in progress,
committed, or aborted. Here are a few links about that:

http://developer.postgresql.org/pdf/transactions.pdf
http://openacs.org/forums/message-view?message_id=176198

Since Currie's design has only one db-wide write lock, it is
semantically equivalent to PostgreSQL's "serializable" isolation
level, correct? How could this be extended to support table locking
and PostgreSQL's default "read committed" isolation level? Would the
smallest locking granularity possible in Currie's design be one page
of rows, however many rows that happens to be?

The one process, many threads aspect of Currie's design sounds just
fine to me. The one write lock for the whole database, on the other
hand, could be quite limiting. How much more difficult would it be to
add table locks to the design? It would also be a nice bonus if the
design at least contemplates how to add row locks (or locks for pages
of rows) in the future, but my guess is that table locks would be good
enough in practice.

Currie's design also seems to defer writing any data to disk until the
transaction commits, which seems odd to me. I did not follow many of
the details of that design so I'm probably missing something here, but
since most write transactions commit rather than abort, in any sort of
MVCC model wouldn't it be better to write data to disk earlier rather
than later? I'm pretty sure that's what both Oracle and PostgreSQL
do.


My particular interest in SQLite:
------------------------------------

Now that I've asked lots of questions above, I'll describe some of the
real-world use cases that got me thinking about this, in case it helps
clarify how and why I'm interested in a high-concurrency SQLite:

A while back I wrote a high-performance multi-threaded application
that basically just accepted data requests, used various ugly low
level proprietary C APIs to fetch data from remote servers, and then
organized the data and fed it back to the client application as simple
rows of CSV-style text.

Using all those low-level C APIs meant tracking lots of somewhat
complicated housekeeping data. If my application ever crashed all
housekeeping data instantly became worthless anyway, so I definitely
didn't want to store it persistently in Oracle or PostgreSQL; for both
simplicity and performance, I wanted it in-memory only.

So in my case, I used AOLserver's nsv's for this. (The Tcl Thread
Extension has "tsv" which is just like "nsv", except better.) Nsvs
are basically just associative arrays for storing key/value pairs
(like Tcl arrays or Perl hashes), but nsv operations are both atomic
and highly concurrent, as they were designed for inter-thread
communication in AOLserver. (Mutex locking is automatic. Nsvs are
assigned to buckets with one mutex lock per bucket, and the number of
buckets is tunable.)
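For readers unfamiliar with nsv, the bucketing scheme described above can be sketched roughly like this (Python used for brevity; the class and method names are illustrative, not AOLserver's actual API):

```python
import threading
import zlib

class NsvStore:
    """Toy nsv-style store: each named array hashes to a bucket,
    with one mutex per bucket (the bucket count is tunable)."""

    def __init__(self, n_buckets=8):
        self.locks = [threading.Lock() for _ in range(n_buckets)]
        self.buckets = [{} for _ in range(n_buckets)]

    def _index(self, array):
        # CRC32 as a stand-in for whatever hash AOLserver actually uses
        return zlib.crc32(array.encode()) % len(self.buckets)

    def set(self, array, key, value):
        i = self._index(array)
        with self.locks[i]:                 # locking is automatic, per bucket
            self.buckets[i].setdefault(array, {})[key] = value

    def get(self, array, key):
        i = self._index(array)
        with self.locks[i]:
            return self.buckets[i][array][key]

store = NsvStore(n_buckets=4)
store.set("query_requests", "q1", ["req_17", "req_18"])
print(store.get("query_requests", "q1"))
```

Two arrays that land in different buckets can be updated fully concurrently; contention only occurs within a bucket.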

Using nsvs worked for me, but being limited to only key/value pairs
was UGLY. Much of the housekeeping data had 1-to-many or even
many-to-many mappings (e.g., each "query" might have many
"request_ids"), and being limited to key/value pairs was both painful
and the source of some bugs. And I believe that the pain and bugs
could have been a lot worse if my program and its housekeeping data
had been more complicated.

What I REALLY wanted was an in-memory relational database, but I
didn't have one. When I need on-disk persistence I'd normally use
Oracle or PostgreSQL, but certainly it'd be convenient if I could use
the same lightweight tool both in-memory and on-disk.

So there's a hole in my toolkit, and I'm looking for software to fill
it. For that, SQLite seems to have the following important
advantages:

- Relational, transactional, ACID.
- Simple and very high quality code (so I am told; I have not yet read
any sources).
- Option of running either in-memory or on-disk.
- Pretty good support for SQL features, joins, etc. (No correlated
subqueries, but I can live with that.)
- Thread safe.
- Easily embeddable anywhere C is.

For my needs SQLite seems to have only one problem: Both readers and
writers must lock the whole database. Unfortunately, that's a big
one.

Maybe, in that one particular application I would have been able to
get away with using a single thread for all SQLite access, and having
all other threads talk to that one db thread. I have not measured
SQLite performance and concurrency, nor compared it to nsv/tsv, so I
don't really know, and I would certainly want to make those tests
before hacking on any SQLite code.

But it seems clear that whatever the exact numbers are, SQLite MUST
have much lower concurrency than either Tcl's simple and lightweight
nsv/tsv key value pairs or heavier weight databases like Oracle or
PostgreSQL - and that limits the number of places I'd be able to use
SQLite.


Solutions other than SQLite:
------------------------------------

One possible alternative to SQLite is Erlang's Mnesia RDBMS:

http://www.erlang.se/doc/doc-5.3/doc/toc_dat.html

It also works both in-memory and on-disk, apparently already has high
concurrency, and can be distributed. I have not (yet) investigated
it, and I could not tell from its docs what locking model it uses, how
powerful its query language is, etc. But most importantly, it
definitely requires a running Erlang interpreter to work at all, and
it seems very unclear whether or how well any sort of access from
non-Erlang environments would work.

On this list, Mark D. Anderson <mda-***@public.gmane.org> recently
recommended looking at the "HUT HiBASE/Shades" papers:

http://hibase.cs.hut.fi/publications.shtml

The fact that Nokia funded and then canceled the HiBase project is
interesting, as Ericsson developed Erlang, which sounds similar in
some ways.

Unfortunately, it sounds as if the HiBASE project is dead and the code
was never finished. It also appears to be strictly a main-memory
database, so I'm not sure how applicable any of that work would be to
a database like SQLite (or Mnesia for that matter) which may be used
either in-memory or on-disk.

Anyone here aware of any other good alternatives?
--
Andrew Piskorski <atp-Iii/6jn3a/***@public.gmane.org>
http://www.piskorski.com/

---------------------------------------------------------------------
To unsubscribe, e-mail: sqlite-users-unsubscribe-CzDROfG0BjIdnm+***@public.gmane.org
For additional commands, e-mail: sqlite-users-help-CzDROfG0BjIdnm+***@public.gmane.org
D. Richard Hipp
2004-04-14 12:13:39 UTC
Post by Andrew Piskorski
How feasible would it be to add support for higher concurrency to
SQLite, especially via MVCC?
My thoughts on BlueSky have been added to the wiki page:

http://www.sqlite.org/wiki?p=BlueSky

The current plan for version 3.0 is as follows:

* Support for a modification of the Carlyle method for
allowing writes to begin while reads are still pending.
All reads must finish before the write commits, however.

* Support for atomic commits of multi-database transactions,
which gives you a limited kind of table-level locking,
assuming you are willing to put each table in a separate
database.

Business constraints require that version 3.0 be working
no later than May 31. So if you have any alternative
suggestions, you should get them in quickly.
--
D. Richard Hipp -- drh-***@public.gmane.org -- 704.948.4565


D. Richard Hipp
2004-04-14 12:20:34 UTC
Post by D. Richard Hipp
http://www.sqlite.org/wiki?p=BlueSky
That URL should have been:

http://www.sqlite.org/cvstrac/wiki?p=BlueSky

Left out the "cvstrac". Sorry for the confusion.
--
D. Richard Hipp -- drh-***@public.gmane.org -- 704.948.4565


Doug Currie
2004-04-14 15:41:02 UTC
Post by Andrew Piskorski
http://www.sqlite.org/cvstrac/wiki?p=BlueSky
I added some responses; I do not agree with Richard's concerns about
Shadow Paging, and I corrected some mistaken conclusions. I apologize
if my paper was not clear enough in these areas.

Thank you, Richard, for taking the time to review the Shadow Paging
option.

Regards,

e


Mark D. Anderson
2004-04-15 09:59:21 UTC
Post by D. Richard Hipp
* Support for atomic commits of multi-database transactions,
which gives you a limited kind of table-level locking,
assuming you are willing to put each table in a separate
database.
and also a limited form of concurrent writers, as a consequence,
right?
assuming that table locks are acquired in a consistent order
to avoid deadlock, there could be concurrent writers that do
not touch the same tables (in this database-per-table model).

btw, what about offering better behavior about throwing away
cache pages? one approach would be something like a
commit_begin() function which is offered by some rdbms native
apis. It says "commit what i've done, but at the same time
attempt to acquire the write lock".
Failure to "win" and actually be able to retain the write
lock might not be reported -- the idea is that the application
can at least indicate its desire.

This could also be done as some sort of connection option.

So in the case that a single writer is keeping up with all
requests, it can do so efficiently without throwing away
its pages.
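As a sketch of the suggested semantics: commit_begin() here is hypothetical (it does not exist in SQLite), and the lock and cache handling are invented purely to illustrate the idea of committing while trying to retain the write lock:

```python
import threading

write_lock = threading.Lock()   # stand-in for the database-wide write lock

def commit_begin(conn):
    """Hypothetical call: commit, then immediately try to retake the
    write lock.  Failure to win the lock is deliberately not reported;
    the call merely expresses the application's desire, as described
    above.  Winning means the connection's cached pages stay valid."""
    conn["committed"] = True
    write_lock.release()                         # make the commit visible
    conn["holds_lock"] = write_lock.acquire(blocking=False)
    if not conn["holds_lock"]:
        conn["cache"].clear()                    # someone else wrote; pages stale
    return conn["holds_lock"]

conn = {"cache": {"page 1": b"..."}, "committed": False, "holds_lock": True}
write_lock.acquire()            # we are the current writer
commit_begin(conn)
print(conn["committed"], conn["holds_lock"], bool(conn["cache"]))
```

With no competing writer, the lock is immediately retaken and the page cache survives the commit, which is exactly the single-writer-keeping-up case described above.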

-mda

Doug Currie
2004-04-14 16:01:37 UTC
[...]
Doug Currie's "Shadow Paging" design sounds promising.
Unfortunately, I have not been able to download the referenced
papers at all (where can I get them?),
There are three sources for the papers. The two links on the wiki page
have been reliably available.
http://www.sqlite.org/cvstrac/wiki?p=BlueSky
but as far as I can tell, it seems to be describing a system with
the usual Oracle/PostgreSQL MVCC semantics, EXCEPT of course that
Currie proposes that each Write transaction must take a lock on the
database as a whole.
I agree with your summary.
[...]
Since Currie's design has only one db-wide write lock, it is
semantically equivalent to PostgreSQL's "serializable" isolation
level, correct?
I believe that is true.
How could this be extended to support table locking and PostgreSQL's
default "read committed" isolation level? Would the smallest locking
granularity possible in Currie's design be one page of rows, however
many rows that happens to be?
Things get *much* more complicated once you have multiple simultaneous
write transactions. I didn't want to go there.

One way to get table level locking without a great deal of pain is to
integrate the shadow paging ideas with BTree management. Rather than
using page tables for the shadow pages, use the BTrees themselves.
This means that any change to a BTree requires changes along the
entire path back to the root so that only free pages are used to store
new data, including the BTree itself. Writing the root page(s) of the
BTree(s) commits the changes to that table (these tables).
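The copy-to-root idea can be illustrated with a toy two-level tree of pages (a sketch only, not SQLite's actual btree layout; the routing rule is deliberately simplistic):

```python
import itertools

free_page = itertools.count(1)   # hands out fresh ("free") page numbers
pages = {}                       # page no -> ("leaf", {key: val}) or ("node", [children])

def alloc(node):
    n = next(free_page)
    pages[n] = node
    return n

def cow_set(pno, key, value):
    """Never modify an existing page: copy every page on the path from
    the leaf back up to the root into freshly allocated free pages."""
    kind, body = pages[pno]
    if kind == "leaf":
        return alloc(("leaf", {**body, key: value}))
    i = 0 if key < "m" else 1                        # toy routing rule
    children = list(body)
    children[i] = cow_set(children[i], key, value)   # child rewritten first...
    return alloc(("node", children))                 # ...then parent, up to the root

old_root = alloc(("node", [alloc(("leaf", {"a": 1})), alloc(("leaf", {"x": 2}))]))
new_root = cow_set(old_root, "b", 9)   # writing out new_root is the commit

# a reader still holding old_root sees the tree exactly as it was
print(pages[pages[old_root][1][0]][1])   # {'a': 1}
print(pages[pages[new_root][1][0]][1])   # {'a': 1, 'b': 9}
```

Note that a one-key write rewrote the whole path (leaf plus root), which is the cost Currie mentions later in the thread: small writes touch several pages, proportional to the depth of the BTree.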
The one process, many threads aspect of Currie's design sounds just
fine to me. The one write lock for the whole database, on the other
hand, could be quite limiting.
It happens to fit my applications well. I have many short duration
write transactions (data logging) and lots of long duration analysis
transactions.
[...] Currie's design also seems to defer writing any data to disk
until the transaction commits
This is not really true. The design does attempt to defer the writing
of data until commit, to (a) "batch write" to as many contiguous
sectors as possible (to reduce seek/rotation time latency), and (b)
prevent writing pages more than once (in case there are multiple
modifications to a page in a transaction). However, if the in-memory
cache fills, data pages are spilled to disk, and are written only
once if there are no further changes to the page.

e


Mark D. Anderson
2004-04-15 01:19:55 UTC
as far as I can tell, it seems to be describing a system with
the usual Oracle/PostgreSQL MVCC semantics, EXCEPT of course that
Currie proposes that each Write transaction must take a lock on the
database as a whole.
Well, i suppose from a sufficient distance they look alike,
but in practice MVCC and shadow paging are rather different.

In MVCC, each row typically has two hidden fields identifying the first
and last transaction ids for which the row is relevant.
The last transaction id is to skip rows that are deleted.
There are many variants of MVCC, but you get the idea.

Any reader (or writer) knows its own transaction id, and just
ignores rows that are not applicable.

A "vacuum" process is necessary to periodically reclaim space
taken by rows whose last transaction id is lower than any live
transaction.
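A minimal sketch of that visibility rule (illustrative only; PostgreSQL's actual rules, with subtransactions, hint bits, and so on, are far more involved):

```python
COMMITTED, IN_PROGRESS, ABORTED = "committed", "in-progress", "aborted"
txn_status = {}                     # transaction id -> status

def visible(row, my_txn):
    """A row is visible if its creator (xmin) committed before our
    transaction started, and no earlier committed transaction has
    deleted it (xmax)."""
    xmin, xmax = row["xmin"], row["xmax"]
    if txn_status.get(xmin) != COMMITTED or xmin >= my_txn:
        return False                # creator uncommitted or too new
    if xmax is not None and txn_status.get(xmax) == COMMITTED and xmax < my_txn:
        return False                # deleted before we started
    return True

txn_status[10] = COMMITTED
txn_status[20] = IN_PROGRESS
rows = [
    {"xmin": 10, "xmax": None, "val": "live"},         # visible to txn 30
    {"xmin": 20, "xmax": None, "val": "uncommitted"},  # writer still running
    {"xmin": 10, "xmax": 10, "val": "deleted"},        # already deleted
]
print([r["val"] for r in rows if visible(r, 30)])      # ['live']
```

The vacuum described above can reclaim any row whose xmax committed before the start of every live transaction, since no current or future reader can ever see it.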

In shadow paging, the basic idea is that any reader or writer
gets a view onto the data based on reachability from "pointers"
in a particular root block. Pages that are reachable from any
live root block are never modified. A vacuum process is required
to collect the space from blocks that are no longer reachable.
Updates to indexes must be treated in roughly the same way as
data pages, because they contain pointers to different data.
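A sketch of that root-block scheme, with the page table playing the role of the root's "pointers" (names invented for this sketch; a real system keeps the root block and page table on disk):

```python
import itertools

free_phys = itertools.count(100)    # allocator for free physical pages
disk = {1: b"v1"}                   # physical page number -> contents

def shadow_write(root, logical, data):
    """Reachable pages are never modified: new data goes to a free
    physical page, and a *copy* of the root block's page table points
    at it.  Publishing the new root block is the commit."""
    phys = next(free_phys)
    disk[phys] = data
    shadow = dict(root)             # copy-on-write of the page table
    shadow[logical] = phys
    return shadow

root_a = {0: 1}                     # logical page 0 -> physical page 1
root_b = shadow_write(root_a, 0, b"v2")

# a reader still holding root_a is unaffected; a vacuum pass could
# reclaim physical page 1 once no live root block reaches it
print(disk[root_a[0]], disk[root_b[0]])
```

This is the page-table-indirection variant; the BTree-integrated variant discussed elsewhere in this thread drops the page table and copies the tree path instead.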

Shadow paging can be used for a table-based database, or
a persistent object store.
It certainly is much older than the HUT work; see for example Lorie 77,
"Physical Integrity in a Large Segmented Database."
It falls into the general class of logless transaction systems,
as opposed to the log-based approach that predominates in
current day non-academic database implementations.

-mda

Christian Smith
2004-04-15 13:16:01 UTC
Post by Doug Currie
How could this be extended to support table locking and PostgreSQL's
default "read committed" isolation level? Would the smallest locking
granularity possible in Currie's design be one page of rows, however
many rows that happens to be?
Things get *much* more complicated once you have multiple simultaneous
write transactions. I didn't want to go there.
Right tool for the job. Multiple writers has client/server database
written all over it. KISS.
Post by Doug Currie
One way to get table level locking without a great deal of pain is to
integrate the shadow paging ideas with BTree management. Rather than
using page tables for the shadow pages, use the BTrees themselves.
This means that any change to a BTree requires changes along the
entire path back to the root so that only free pages are used to store
new data, including the BTree itself. Writing the root page(s) of the
BTree(s) commits the changes to that table (these tables).
Actually, this gets my vote. It keeps the pager layer the same, and only
requires a cache of the root btree page for each object (table/index) in
the database to be maintained on a per-transaction basis. That reduces the
complications of what to do under memory pressure when pages are spilled
from the cache, as we should be able to keep the root pages in memory all
the time.

Committing a transaction would then be an atomic update of the root
btree page number in the catalog table.

Christian
--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
X - AGAINST MS ATTACHMENTS
/ \

Andrew Piskorski
2004-04-15 13:30:06 UTC
Post by Christian Smith
Right tool for the job. Multiple writers has client/server database
written all over it. KISS.
No, not true, at least not when the multiple writers are all threads
within one single process, which appears to be the common case for
people who'd like greater concurrency in SQLite.

Also, if multiple writers worked well for the one-process many-threads
case, then if you wished you could write a small multi-threaded
client/server database using SQLite as the underlying storage engine.
As things stand now, the concurrency limitations mean there isn't much
point to doing that.

Simplicity however, is of course an important concern.
--
Andrew Piskorski <atp-Iii/6jn3a/***@public.gmane.org>
http://www.piskorski.com/

Christian Smith
2004-04-15 14:45:42 UTC
Post by Andrew Piskorski
Post by Christian Smith
Right tool for the job. Multiple writers has client/server database
written all over it. KISS.
No, not true, at least not when the multiple writers are all threads
within one single process, which appears to be the common case for
people who'd like greater concurrency in SQLite.
SQLite is an embedded database engine, which I guess is its main use
(I'm using it at my place to implement a package management tool) and
the desired concurrency from what I gather on the list is read access
while writing, not multiple writers.
Post by Andrew Piskorski
Also, if multiple writers worked well for the one-process many-threads
case, then if you wished you could write a small multi-threaded
client/server database using SQLite as the underlying storage engine.
As things stand now, the concurrency limitations mean there isn't much
point to doing that.
But why re-invent the wheel? There are plenty of client/server databases
that support concurrent writes.
Post by Andrew Piskorski
Simplicity however, is of course an important concern.
Absolutely, that's my main point. I'm all for the concurrent
readers/writer, but multiple writers is a whole new ball game, probably
best left to the big boys.

Cheers,
Christian
--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
X - AGAINST MS ATTACHMENTS
/ \

Doug Currie
2004-04-16 00:16:32 UTC
Post by Christian Smith
Post by Doug Currie
One way to get table level locking without a great deal of pain is to
integrate the shadow paging ideas with BTree management. Rather than
using page tables for the shadow pages, use the BTrees themselves.
This means that any change to a BTree requires changes along the
entire path back to the root so that only free pages are used to store
new data, including the BTree itself. Writing the root page(s) of the
BTree(s) commits the changes to that table (these tables).
Actually, this gets my vote. Keeps the pager layer the same,
The pager gets *much* simpler because it doesn't need to make a log
file. The log file is not necessary because writes only go to free
pages.

Well, there would be one write-ahead log. It's needed to prevent
partial updates to the page number pointers to the root page(s) of
the BTree(s) at commit. This log is created at commit time, and is
much simpler and much smaller than the present log file.
Post by Christian Smith
and only requires a cache of the root btree for each object
(table/index) in the database to be maintained on a per-transaction
basis
Yes, you need to cache the page number of each BTree root at
transaction start.

You'd also need a forest of free pages organized by transaction so
they can be freed at the right time (when the oldest read-transaction
that can reference them has completed).
Post by Christian Smith
, reducing the complications of what to do under memory pressure
when pages are spilled from the cache as we should be able to keep
them in memory all the time.
Yes.
Post by Christian Smith
Committing of a transaction would then be an atomic update root btree page
number in the catalog table.
Yes, atomically for all the BTrees modified. This is probably a single
page of data (4 to 8 bytes of root page number per BTree, i.e., per
table and per index). Well, I usually assume fairly large pages
compared with SQLite's default of 1K. Using larger pages also
decreases the depth of the BTree which reduces the number of pages
written.
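The arithmetic behind "probably a single page of data": even with SQLite's default 1K page and the low end of the 4-to-8 byte range for a root page number (both figures from the message above), one page covers a lot of BTrees:

```python
page_size = 1024         # SQLite's default page size, per the message above
bytes_per_root = 4       # low end of the 4-to-8 byte range mentioned above

roots_per_page = page_size // bytes_per_root
print(roots_per_page)    # 256 tables/indexes committable with one page write
```

So barring a schema with hundreds of tables and indexes, the commit is a single atomic page write.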

This design works well. It has the advantage (compared with shadow
pager) that reads are not burdened with page table indirection. It has
the potential disadvantage (compared with SQLite 2.8) that small
writes can modify several pages (based on the depth of the BTree).

I used this design in a proprietary database in the late 1980s. The
only reason I didn't consider modifying SQLite this way up until now
is that I was anticipating BTree changes for 3.0, so I confined my
efforts to the pager layer.

e


Mark D. Anderson
2004-04-16 01:40:52 UTC
Post by Doug Currie
I used this design in a proprietary database in the late 1980s. The
only reason I didn't consider modifying SQLite this way up until now
is that I was anticipating BTree changes for 3.0, so I confined my
efforts to the pager layer.
btw, another example of this class of approach, with a bsd-style
license, is GigaBASE from the prolific Konstantin Knizhnik:
http://www.garret.ru/~knizhnik/gigabase/GigaBASE.htm

It does not offer anything approaching full SQL.
It does however have several features not available in sqlite:
- online backup [1]
- master-slave replication
- group commit [2]
- parallel query (multiple threads for full table scans)

[1] There is a kind of support in sqlite for online backup, via
echo '.dump' | sqlite ex1 > backup.sql
though this would result in a largish file and block
everything else.

[2] Grouping commits is a mechanism that allows for pending transactions
to get fsync'd together.
This allows for greater performance with a risk only of losing
some transactions (at most the size of the group), but not
greater risk of a corrupted database.
This is more flexibility than sqlite's big knob of OFF/NORMAL/FULL.
It is also offered by DB2, Oracle, and MySQL.
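A toy illustration of group commit (the class and its fixed-count grouping policy are invented for this sketch; real implementations typically group by time window and by whichever transactions happen to be waiting, not by a fixed count):

```python
import os
import tempfile

class GroupCommitLog:
    """Append each transaction's log record, but fsync only once per
    group: a crash loses at most the unsynced tail of the batch, and
    never corrupts what was already synced."""

    def __init__(self, path, group_size=4):
        self.f = open(path, "ab")
        self.group_size = group_size
        self.pending = 0
        self.fsyncs = 0

    def commit(self, record):
        self.f.write(record + b"\n")
        self.pending += 1
        if self.pending >= self.group_size:
            self.flush()

    def flush(self):
        self.f.flush()
        os.fsync(self.f.fileno())    # one disk sync covers the whole group
        self.fsyncs += 1
        self.pending = 0

log = GroupCommitLog(os.path.join(tempfile.mkdtemp(), "wal"))
for i in range(8):
    log.commit(b"txn %d" % i)
print(log.fsyncs)   # 2 syncs for 8 commits
```

The durability knob is thus continuous (group size / window) rather than the three discrete sqlite settings, at the cost of possibly losing the last partial group on power failure.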



In idle moments I've toyed with what it would take to splice
GigaBASE with the query parser and planner from
lambda-db or sqlite.
But then I wake up....

-mda

Christian Smith
2004-04-17 00:39:44 UTC
Post by Doug Currie
Post by Christian Smith
Post by Doug Currie
One way to get table level locking without a great deal of pain is to
integrate the shadow paging ideas with BTree management. Rather than
using page tables for the shadow pages, use the BTrees themselves.
This means that any change to a BTree requires changes along the
entire path back to the root so that only free pages are used to store
new data, including the BTree itself. Writing the root page(s) of the
BTree(s) commits the changes to that table (these tables).
Actually, this gets my vote. Keeps the pager layer the same,
The pager gets *much* simpler because it doesn't need to make a log
file. The log file is not necessary because writes only go to free
pages.
Well, there would be one write-ahead log. It's needed to prevent
partial updates to the page number pointers to the root page(s) of
the BTree(s) at commit. This log is created at commit time, and is
much simpler and much smaller than the present log file.
I'd have thought it'd be better to preserve the pager layer as is. If it
ain't broke...
Post by Doug Currie
[...]
This design works well. It has the advantage (compared with shadow
pager) that reads are not burdened with page table indirection. It has
the potential disadvantage (compared with SQLite 2.8) that small
writes can modify several pages (based on the depth of the BTree).
So for reads there is basically no extra burden (other than the caching
of the initial tree roots), and writing will be slightly slower, but with
decreasing penalty as updates get bigger. That penalty is probably
insignificant next to the dumping of the page cache when transactions
finish, and it is all of course in parallel with reads, so overall
performance should improve in many scenarios.

It would of course be limited, like shadow paging, to a single address
space (writes would block reads in other address spaces.)
Post by Doug Currie
I used this design in a proprietary database in the late 1980s. The
only reason I didn't consider modifying SQLite this way up until now
is that I was anticipating BTree changes for 3.0, so I confined my
efforts to the pager layer.
Given this design, if it is adopted, it would also be trivial (and free in
terms of IO) to maintain a running count of records in a given btree, as
was requested some weeks back, since any new/deleted records would update
the btree path to the root anyway.

Is this design feasible given the time constraints on 3.0? I've not
studied the btree layer in much detail, so don't know how much existing
code would need to change.

Christian
--
/"\
\ / ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL
X - AGAINST MS ATTACHMENTS
/ \
