[Gllug] A programming question. long.

Bruce Richardson brichardson at lineone.net
Fri Jul 6 11:19:01 UTC 2001


On Fri, Jul 06, 2001 at 10:07:41AM +0100, home at alexhudson.com wrote:
> On Thu, Jul 05, 2001 at 10:37:56PM +0100, Bruce Richardson wrote:
> > > so we need some sort of locking. Enter the spin lock.
> > 
> > No, you don't.
> 
> Yes he does, as you later state:
> 
> > To *guarantee* that no two people see the same record, you need to run
> > this inside a transaction and perform an exclusive table lock
> 
> I don't necessarily agree with 'exclusive table lock' (bit of a waste for
> one row!)

Which is why I said the lock should be placed on the view, not the
underlying table.  The view only has one (notional) row.  And the lock
need only be in place until the genuine row is marked as "in use".
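
To make that concrete, here is a minimal sketch of what I mean.  The
table and column names are made up for illustration, and I'm using
PostgreSQL-flavoured SQL throughout:

    -- Appointment slots: each is 'free', 'in use' (claimed but not yet
    -- confirmed) or 'used'.
    CREATE TABLE slots (
        slot_id   integer PRIMARY KEY,
        slot_time timestamp NOT NULL,
        status    text NOT NULL DEFAULT 'free'
    );

    -- The selection view only ever exposes the earliest free slot, so a
    -- lock taken through it covers one notional row, not the whole table.
    CREATE VIEW next_free_slot AS
        SELECT slot_id, slot_time
          FROM slots
         WHERE status = 'free'
         ORDER BY slot_time
         LIMIT 1;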

>, but transactions are the answer. I don't think setting a trigger
> on a select would do anything useful - in fact, it would be dangerous,
> because if the trigger is removing the information from the table it doesn't
> yet know whether the client dealt with the information it received
> successfully, which is silent data loss. Ick. It's up to the application,
> not the database, to remove items from the store.

No, it's the database's job IMO.

In my example the select trigger does something essential - marking
the selected record "don't touch".  By having it on the view, rather
than the table, you minimise the time spent in the trigger.  I don't see
any other place from which the required function can sensibly be
performed.

My select trigger doesn't delete anything.   One way to do the deletion
is to place a post-insert trigger on the destination table.  Works like
this:

Insert record into destination table - this triggers the pre-insert
trigger described in my earlier message.  The pre-insert trigger pulls
the top (only) record from the view.  To provide that record, the view
goes to the underlying table(s) and the view's select trigger then marks
the relevant record "in use".  The pre-insert trigger then returns the
correct data to be inserted.

At which point the insertion takes place and either succeeds or fails.

If it succeeds then a post-insert trigger could then delete the "in use"
row from the other table - or just mark it "used", for later clean-up.
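
A rough sketch of those two triggers, with the same illustrative names
as above.  One caveat: most engines (PostgreSQL included) have no
triggers on SELECT, so here the "mark it in use" step is folded into a
single atomic UPDATE inside the pre-insert trigger instead of hanging
off the view:

    CREATE TABLE appointments (
        appt_id serial PRIMARY KEY,
        slot_id integer NOT NULL REFERENCES slots,
        client  text NOT NULL
    );

    -- Pre-insert: claim the top record from the view and mark it "in use".
    -- If another process claimed it first the UPDATE touches no rows and
    -- we raise an error (the client can simply retry).
    CREATE FUNCTION claim_slot() RETURNS trigger AS $$
    BEGIN
        UPDATE slots
           SET status = 'in use'
         WHERE slot_id = (SELECT slot_id FROM next_free_slot)
           AND status = 'free'
        RETURNING slot_id INTO NEW.slot_id;

        IF NEW.slot_id IS NULL THEN
            RAISE EXCEPTION 'no free slot available';
        END IF;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER appointments_claim
        BEFORE INSERT ON appointments
        FOR EACH ROW EXECUTE FUNCTION claim_slot();

    -- Post-insert: the insertion succeeded, so retire the "in use" row
    -- (or DELETE it, if you prefer immediate removal to later clean-up).
    CREATE FUNCTION retire_slot() RETURNS trigger AS $$
    BEGIN
        UPDATE slots SET status = 'used' WHERE slot_id = NEW.slot_id;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER appointments_retire
        AFTER INSERT ON appointments
        FOR EACH ROW EXECUTE FUNCTION retire_slot();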

What is missing is what happens when the insert fails - the "in use"
field needs to be cleared.  Well...

Instead of having the client insert an appointment record directly into
the appointments table, I'd have them insert into a view (again, this
means they don't need rights to the appointments table and minimises the
damage an error can do).  Because a view has no data, you can only
insert data into a view if an insert trigger is present.  I would attach
to the view the insert trigger which was attached to the appointments
table in my previous example.  This trigger would run through the whole
process (select from the select view, insert into the appointments
table) and if the insert failed it would immediately release the "in use"
record.  (As a backup I would also run a regular tidy-up process to
release hanging records - you can never be too careful).
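
Sketched out, still with the same illustrative names and PostgreSQL
flavour.  The claim/retire triggers from the earlier sketch move in
here, so I drop them first:

    DROP TRIGGER IF EXISTS appointments_claim ON appointments;
    DROP TRIGGER IF EXISTS appointments_retire ON appointments;

    -- Clients insert into this view, so they need no rights on the
    -- underlying tables.
    CREATE VIEW book_appointment AS
        SELECT client FROM appointments;

    CREATE FUNCTION book_appointment_fn() RETURNS trigger AS $$
    DECLARE
        v_slot integer;
    BEGIN
        -- Claim the top record from the selection view, marking it "in use".
        UPDATE slots
           SET status = 'in use'
         WHERE slot_id = (SELECT slot_id FROM next_free_slot)
           AND status = 'free'
        RETURNING slot_id INTO v_slot;

        IF v_slot IS NULL THEN
            RAISE EXCEPTION 'no free slot available';
        END IF;

        BEGIN
            INSERT INTO appointments (slot_id, client)
                 VALUES (v_slot, NEW.client);
        EXCEPTION WHEN OTHERS THEN
            -- The insert failed: release the "in use" record at once.
            UPDATE slots SET status = 'free' WHERE slot_id = v_slot;
            RAISE;
        END;

        UPDATE slots SET status = 'used' WHERE slot_id = v_slot;
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER book_appointment_trg
        INSTEAD OF INSERT ON book_appointment
        FOR EACH ROW EXECUTE FUNCTION book_appointment_fn();

    -- The belt-and-braces tidy-up would be something along the lines of
    --   UPDATE slots SET status = 'free'
    --    WHERE status = 'in use' AND <claimed longer ago than some threshold>;
    -- run on a schedule.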

This way the database takes care of all data validation tasks.  All the
application has to do is insert one record into one table (view, in
fact).  The insertion either succeeds or fails.  If it succeeds, the
application can check the table to see what appointment date was
actually fixed, knowing that all steps required to keep the data
consistent have already been taken.
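
In other words the client side reduces to something like this
(illustrative names again; the read-back could equally go through a
read-only view if the client has no rights on the table):

    -- One insert, then read back which date was actually fixed.
    INSERT INTO book_appointment (client) VALUES ('A. Customer');

    SELECT a.appt_id, s.slot_time
      FROM appointments a
      JOIN slots s USING (slot_id)
     WHERE a.client = 'A. Customer';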

There is an alternative to the (very brief) exclusive lock on the
selection view.  That is to not worry about whether multiple clients
see the top record at all.  Instead you place an exclusive lock on the
*insertion* view, so that only one client process may insert an
appointment at any one time.  Then the insert trigger can check whether
someone else has used that record between your looking at it and
deciding to use it.  But that is scrappy and the lock is in place
for longer.
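
Most engines won't let you lock a view directly, but you get roughly the
same effect by serialising bookings with an application-level lock (here
PostgreSQL's advisory locks; the key 42 and slot number are arbitrary,
and the "did someone beat me to it" check is shown as a guarded UPDATE
rather than inside a trigger):

    BEGIN;
    -- Only one client may book at a time; the lock is released at COMMIT.
    SELECT pg_advisory_xact_lock(42);

    -- The slot this client looked at earlier; the status check catches
    -- the case where somebody else used it in the meantime.
    UPDATE slots
       SET status = 'in use'
     WHERE slot_id = 123
       AND status = 'free';

    -- ... insert the appointment, mark the slot 'used' ...
    COMMIT;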

> 
> Another problem presents itself from the design of this system. Firstly, the
> database is being used as a queue. Time to look into clustered indexes, I
> think, and combine that with a row-level locked transaction, because
> otherwise the performance is really going to bite. I presume the design is
> intended to be FIFO, which is going to hurt the transaction cache if any get
> left open too. If we're talking a large number of rows (3 million was it?)
> it's going to need some proper design..

No arguments there.
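
For what it's worth, in PostgreSQL terms the FIFO lookup can be kept
cheap with a partial index (a clustered index on (status, slot_time)
being the Sybase/SQL Server way of saying much the same thing):

    -- Keeps "find the earliest free slot" cheap even with millions of rows.
    CREATE INDEX slots_free_fifo ON slots (slot_time) WHERE status = 'free';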

-- 
Bruce

If the universe were simple enough to be understood, we would be too
simple to understand it.