trying to minimize Write Conflicts in a multi-user database

Marshall Barton

Banana said:
I don't doubt there will be many variables that could skew the tests,
which is why I wanted to cross-reference David's test to check what I
could have missed.

FWIW- using OLE automation to start up an Access application and run
only one query using CurrentDb.Execute (e.g. no saved query), then
immediately closing, quitting and cleaning up does not appear to
materially affect the time. The Showplan seems to suggest that a new
instance of the engine was started up via OLE Automation, so this
should reflect the boot time/cold start/no caching, I'd think.

I would not make that assumption because I have no idea how
various versions of Windows handle their cache when a file is
closed. I have seen many/most caching schemes that kept the
file in cache until the space was needed and then chose the
least recently used to abandon. Some schemes even go so far
as to keep usage statistics and will drop the file with the
lowest usage stats regardless of its age.
I'm not so sure about multi-core/CPU processing. AFAIK, Access
continues to be single-threaded, so it'd always use one core; the
benefit of multiple cores/CPUs would be indirect, in that the OS may
be able to allocate competing resources to other cores/CPUs, but
beyond that, not much difference. Note that while JET/ACE has a
setting for the number of threads, KB articles suggest that thread
safety was achieved only if you use DAO or the OLEDB provider, and
even so, thread safety is limited.

I was thinking more in terms of the OS and other processes
competing for CPU cycles.
I also tried tweaking the threads setting from 1 to 4 (default is 3)
and the timing was not affected in the slightest, suggesting to me
that threading was not relevant here.

KB Article:
http://support.microsoft.com/kb/169395
http://support.microsoft.com/kb/299973

It looks to me that these articles are referring to using Jet
outside Access, so I don't see their relevance to our
discussion.
As for processor & memory enabling a table scan to be as fast as an
index scan, this was the reason why I did the extra check of doing a
For...Next loop to get a rough idea of how much time it would take to
do the false comparisons that many times. If a table scan actually
were being done, the time should have increased when I expanded the
tables. It didn't, while the For...Next loop's time increased just as
much as the increase in records. As a further comparison, doing a
"Not PK = PK" was ridiculously slow, around 40,000 milliseconds.
It's also not an entirely fair comparison because it compares two
fields rather than two hard-coded values, but I think the point
remains- if 1=0 actually did do a table scan, it should have been at
least as slow as the For...Next loop and certainly four times slower
after the table expanded fourfold.

"Not PK = PK was ridiculously slow" is very telling, if your
test environment was nearly the same in both cases. The
drastic difference seems to clearly imply that a "real"
table scan is different from the WHERE False case, so maybe
the optimizer (in Access/Jet version ??) is now using the
information to skip the table scan. While Not PK = PK is
not as trivial as False or 1=0, it is still something that
can be reduced to False, so it does appear that WHERE False
has been dealt with in some version post A97. This is very
good to know and I think I can forget about posting comments
like the one that started this subthread.
I suppose I could try the test again by re-allocating only 512 MB to the
VM and see what happens.

If your test table were wide as well as tall, that might(?)
provide useful information, but with 2 GB ram, it may not.
Well, I've kind of found Showplan to be quite lacking in the details.
For instance, it still does not support showing plans when subqueries
are used, despite having gone through 2 more JET versions and 2 ACE
versions. Another telltale sign is that when the ShowPlan starts up,
it records a Date of 0, suggesting to me that any bugs, issues and/or
enhancements associated with ShowPlan weren't considered
high-priority, and for those reasons I think it is prudent to take
the Showplan's output with a grain of salt.

That strikes me as odd. I thought Showplan was originally a
debugging tool used by the query optimizer developers.
Either they have a different tool now or they are very
confident of their ability to make code changes ;-)
 
Marshall Barton

Tom Wickerath said:
I sent a private e-mail to David late last night, inquiring about any past
test results. He replied as follows:

"I don't recall anything of the sort, and don't see it in Google
Groups. Nor do I have archives of my posts from way back when.

Sorry I can't help. Maybe Marshall has a better reference?"

Not with the Google archives in such disarray. I definitely
remember the gist of David's tests because I tried it on a
client's A97 db that was kind of slow and it made a big
difference.
I agree. When I am doing such testing at work, I typically reboot
several times, between each test, and I make sure to shut down as many
applications as I can, including Outlook, to try to establish a
somewhat stable baseline. On my work PC, I cannot shut off the
antivirus, so I just have to live with the possibility that it may be
adding some network load.

I think a good way to test the 1=0 question might be to start with a really
large table, like Banana did, and monitor the amount of data transferred for:

1.) An intentional table scan
Run some query that forces all records to be brought over the network
wire, perhaps an aggregate query to sum a numeric field.

2.) WHERE conditions that include 1=0, WHERE False, etc.

One needs to first display the network monitor icon in the System Tray. In
Windows XP, one does this via:

Control Panel | Network Connections

Double-click the connection of interest, and then left-click on Properties.
Select the check box for the option that reads "Show icon in notification
area when connected". When you double-click the resulting System Tray icon,
you will see Packets Sent and Packets Received values. With a baseline that
is as repeatable as possible (i.e. Outlook and any other networked
applications shut down), read the Packets Received value before and after
each individual test. The difference (delta) represents how many packets of
data were transferred to satisfy each query. Of course, one must keep in mind
that some other application that you may not be able to shut down may have
caused some of the traffic for a given test. So, one can run the test several
times, in an attempt to make sure there is not a contributing influence from
some other application. You need a split application, with the BE on a
network share, in order to use this method. Several years ago, John Viescas
recommended this method to me as a way of testing how much data actually
comes over the network wire.
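The before/after bookkeeping described here reduces to a simple delta
per test, averaged over repeated runs to damp background traffic. A
sketch of that arithmetic, with made-up counter values rather than
real measurements:

```python
def packet_delta(before: int, after: int) -> int:
    """Packets attributable to one test = counter after - counter before."""
    return after - before

def average_delta(runs) -> float:
    """Average the per-run deltas over several repeats of the same query."""
    return sum(packet_delta(b, a) for b, a in runs) / len(runs)

# Three hypothetical runs of one query; each tuple is the Packets
# Received counter read just before and just after the query executed.
runs = [(10_000, 10_083), (10_083, 10_164), (10_164, 10_247)]
print(average_delta(runs))  # mean of the deltas 83, 81, 83
```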


Good to know. I think that means the test needs to use a
wide table so you can tell the difference between index
retrieval and data retrieval.
 
Banana

Marshall said:
I would not make that assumption because I have no idea how
vaeious versions of Windows handle its cache when a file
closed. I have seen many/most chacing schemes that kept the
file in cache until the space was needed and then chose the
least recently used to abandon. Some schemes even go so far
as to keep usage statistics and will drop the file with the
lowest usage stats regardless of its age.

I could be dead wrong, but I would think that how the OS caches files
is irrelevant, because the caching JET would do would be in memory and
thus destroyed when JET is disposed of ??
It looks to me that these articles are referring to using Jet
outside Access so I don't see their relevance to our
discussion.

Those are the only two places where you could find 'Jet' and
'multi-threading' / 'thread safety' together, and yes, they deal with
non-Access uses, but what I didn't explicitly state was that I
wondered whether Jet was capable of breaking a query up between
threads for parallel execution. If my tests tweaking threads are
reliable, it doesn't seem to be the case, which was what I wanted to
rule out.
"Not PK = PK was ridiculously slow" is very telling, if your
test environment was nearly the same in both cases. The
drastic difference seems to clearly imply that a "real"
table scan is different from the WHERE False case so maybe
the optimizer (in Access/Jet version ??) is now using the
information to skip the table scan. While Not PK = PK is
not as trivial as False or 1=0, it is still something that
can be reduced to False, so it does appear that WHERE False
has been dealt with in some version post A97.

It seems to suggest this, but I should try Tom's idea of monitoring the
network traffic to be sure we didn't miss anything here.
This is very good to know and I think I can forget about posting comments
like the one that started this subthread.

Well, actually I was glad you posted it. It's too easy to get in a
rut and keep repeating the old truisms. Indeed, I was quite shocked
when I was told that for SQL Server, an existence check runs equally
well if not slightly better than the frustrated join, while I've been
spouting this "truism" for a while. (With JET, I'm inclined to say
'it depends.')

Anyway, it's always good to check the facts as time passes.

If your test table were wide as well as tall, that might(?)
provide useful information, but with 2 GB ram, it may not.

Well, the table contains five columns, 3 Longs and 2 Dates, or 28
bytes per row. There's only one index, and that's the primary key.
It would seem to me that an index page would contain about seven
times as many records as a data page for the same table.
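That seven-to-one figure follows from simple page arithmetic. A
sketch of the estimate, assuming 4 KB Jet 4.0 pages and a bare 4-byte
Long key per index entry, and ignoring page headers, bookmarks and
per-row overhead (all of which would pull the real ratio down):

```python
PAGE_SIZE = 4096    # bytes per Jet 4.0 page (Jet 3.x used 2048)
ROW_SIZE = 28       # 3 Longs + 2 Dates = 3*4 + 2*8 bytes
KEY_SIZE = 4        # a bare Long primary key entry (overhead ignored)

rows_per_data_page = PAGE_SIZE // ROW_SIZE      # 146 rows
keys_per_index_page = PAGE_SIZE // KEY_SIZE     # 1024 entries
ratio = keys_per_index_page / rows_per_data_page
print(round(ratio, 1))  # ~7: one index page covers ~7 data pages' rows
```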
That strikes me as odd. I thought Showplan was originally a
debugging tool used by the query optimizer developers.
Either they have a different tool now or they are very
confident of their ability to make code changes ;-)

But that's exactly my point - if it was a tool they were using, they
would have enhanced it over time. It hasn't been, which leads me to
think they moved on to something else for their internal tests.
Indeed, I hope the latter isn't true! ;)

On the flip side, though, I _think_ there wasn't any serious
performance enhancement between JET 3.0 and ACE 14.0 (Access 2010).
It seemed to me that the changes in between were usually more
features rather than enhancements or bug fixes. At least, I've yet
to hear of such an enhancement in between.
 
Paul

John,

Here are both my office and home addresses in case one of them filters out
the attachment.

Work - my first and last names (below) separated by a dot, followed by the
"at."
The remainder is dgs ca gov, but with dots instead of spaces between the
three strings.

Home - my last name only, followed by the "at."
The remainder is surewest net, again with a period instead of the space.

I'm looking forward to receiving your Email.

Thanks so much.

Paul Ponzelli
Staff Real Estate Officer
DGS Central Leasing
 
Marshall Barton

Banana said:
I could be dead wrong, but I would think that how the OS caches files
is irrelevant, because the caching JET would do would be in memory and
thus destroyed when JET is disposed of ??

This is getting deeper than my knowledge, but there may very
well be multiple layers of caching going on.
Those are the only two places where you could find 'Jet' and
'multi-threading' / 'thread safety' together, and yes, they deal with
non-Access uses, but what I didn't explicitly state was that I
wondered whether Jet was capable of breaking a query up between
threads for parallel execution. If my tests tweaking threads are
reliable, it doesn't seem to be the case, which was what I wanted to
rule out.


It seems to suggest this, but I should try Tom's idea of monitoring the
network traffic to be sure we didn't miss anything here.


Well, actually I was glad you posted it. It's too easy to get in a
rut and keep repeating the old truisms. Indeed, I was quite shocked
when I was told that for SQL Server, an existence check runs equally
well if not slightly better than the frustrated join, while I've been
spouting this "truism" for a while. (With JET, I'm inclined to say
'it depends.')

Anyway, it's always good to check the facts as time passes.



Well, the table contains five columns, 3 Longs and 2 Dates, or 28
bytes per row. There's only one index, and that's the primary key.
It would seem to me that an index page would contain about seven
times as many records as a data page for the same table.

Yes, but - if the table was already in RAM, the time to scan
the indexes would be the same as scanning the table.
However, they probably use a B-tree or hashing to search the
indexes instead of scanning them, but a million cycles to
loop through the table in RAM is still only a millisecond
compared to less than a microsecond to check the index. Big
difference, but a human would probably not notice it. Way
back when, I tried David's idea on a real table with about
50K records of about 1K each on a 1 GHz machine with 256 MB,
and the difference went from about 2 or 3 seconds to near 0,
very noticeable.
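That millisecond-versus-microsecond point is easy to reproduce for an
in-memory table; a rough sketch with a Python dict standing in for a
B-tree index (absolute times are machine-dependent and purely
illustrative):

```python
import time

N = 1_000_000
table = list(range(N))                          # the whole "table" in RAM
pk_index = {pk: pos for pos, pk in enumerate(table)}  # stand-in for the PK index

target = N - 1                                  # worst case for a scan

t0 = time.perf_counter()
scan_pos = next(pos for pos, pk in enumerate(table) if pk == target)
scan_time = time.perf_counter() - t0            # ~milliseconds

t0 = time.perf_counter()
index_pos = pk_index[target]
index_time = time.perf_counter() - t0           # ~microseconds

# Same row found either way; the scan is orders of magnitude slower,
# though a human wouldn't notice a single occurrence of either.
print(scan_pos == index_pos, scan_time, index_time)
```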
But that's exactly my point - if it was a tool they were using, they
would have enhanced it over time. It hasn't been, which leads me to
think they moved on to something else for their internal tests.
Indeed, I hope the latter isn't true! ;)

On the flip side, though, I _think_ there wasn't any serious
performance enhancement between JET 3.0 and ACE 14.0 (Access 2010).
It seemed to me that the changes in between were usually more
features rather than enhancements or bug fixes. At least, I've yet
to hear of such an enhancement in between.

I don't know the details, but there were a lot of bug fixes
in A2007, maybe some even in Jet. ACE had a lot of changes
from Jet, and who knows what they might have done with
performance while adding all those new multi-value field,
attachment, ... thingies.
 
Tom Wickerath

In my last reply, I mentioned monitoring network traffic by looking at the
packets received data before and after running a query. Earlier this
afternoon, after I posted that reply, I set up a quick test to check the
packets data. I wanted to use a large table that anyone else could also have
access to, so I downloaded Sean Lahman's Baseball database:

http://www.baseball1.com/

The largest table in this database is the Fielding table, with 158,188
records. Still not huge, but at least it's publicly available, so that
others can use it, if interested, to compare results. I created a new BE
database, disabled Name Autocorrect, and imported the Fielding table. Then I
copied this database to a shared folder.

~~~~~~~~~~~~~~~~~~~~~~~~
Special Note:

I originally was thinking most of the shared folders I have access to are
located in Bellevue, WA., but it turns out that this particular server is
located in Kent, WA. I was able to get the building number, and street
address, using an internal search page. I also got the address for my
building from the same page, and then plugged these two addresses into
MapQuest. MapQuest shows a distance of 10.15 miles between my location and
the location of the file server. So, as much as I'd like to use a LAN only,
the fact is we are using a WAN, albeit a very fast and rock solid one.

~~~~~~~~~~~~~~~~~~~~~~~~

I used this aggregate query to produce an intentional table scan:

SELECT PlayerID, YearID, POS, Sum(G) AS GSum
FROM Fielding
GROUP BY PlayerID, YearID, POS;

and then I added these two WHERE clauses:

WHERE 1 = 0
and
WHERE False

I had not read Marsh's suggestion of using a wide table at the time I ran
these tests, otherwise, I would have included all fields from this table. I
ran each query three times, but started with a different query for each set.
This is indicated in the spreadsheet with the test number: Set 1 = {1a, 1b,
1c}, Set 2 = {2a, 2b, 2c} and Set 3 = {3a, 3b, 3c}. I closed the FE
application in-between sets 1 & 2 and sets 2 & 3. So, hopefully, this would
have cleared out any records cached in local memory, but I don't know that
for a fact.

I have uploaded a 30 KB .zip file, which includes the FE.mdb file, a
Results.xls spreadsheet, and a text file with the URL to the baseball site:

http://www.accessmvp.com/TWickerath/downloads/LargeTableTest.zip

The Excel spreadsheet shows that WHERE conditions of 1=0 and WHERE FALSE
resulted in an average of 83 and 82 packets transferred (n=3, with Access
closed after each set of tests), respectively, for queries with these
criteria. The query without a WHERE clause, run to get an idea of how many
packets of data would be transferred, resulted in an average of 9531 packets.
Thus, the two constrained queries resulted in about 0.87% and 0.86% of
the full table scan's traffic. I think it is safe to say, based on these
results, that the JET ShowPlan indication of a scan for these criteria is
not honest.
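For the record, those percentages are just the ratio of the averaged
packet counts reported above:

```python
full_scan = 9531    # avg packets, aggregate query with no WHERE clause
where_1_0 = 83      # avg packets, WHERE 1 = 0
where_false = 82    # avg packets, WHERE False

for label, packets in (("1 = 0", where_1_0), ("False", where_false)):
    print(f"WHERE {label}: {100 * packets / full_scan:.2f}% of full-scan traffic")
# → 0.87% and 0.86%, the figures quoted above
```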


Tom Wickerath
Microsoft Access MVP
http://www.accessmvp.com/TWickerath/
__________________________________________
 
John Spencer

DEMO database zipped and sent.

By the way, after thinking about this, I don't think that the write
conflicts have all that much to do with the number of records you have
in the recordset.

Also, note that combo boxes are limited to a maximum of 64 K records
(binary K). So if you have more than that, you would need to come up
with some scheme to limit the number of records being returned in the
combo box's list.


John Spencer
Access MVP 2002-2005, 2007-2010
The Hilltop Institute
University of Maryland Baltimore County
 
Marshall Barton

Tom Wickerath said:
In my last reply, I mentioned monitoring network traffic by looking at the
packets received data before and after running a query. Earlier this
afternoon, after I posted that reply, I set up a quick test to check the
packets data. I wanted to use a large table that anyone else could also have
access to, so I downloaded Sean Lahman's Baseball database:

http://www.baseball1.com/

The largest table in this database is the Fielding table, with 158,188
records. Still not huge, but at least it's publicly available, so that
others can use it, if interested, to compare results. I created a new BE
database, disabled Name Autocorrect, and imported the Fielding table. Then I
copied this database to a shared folder.

~~~~~~~~~~~~~~~~~~~~~~~~
Special Note:

I originally was thinking most of the shared folders I have access to are
located in Bellevue, WA., but it turns out that this particular server is
located in Kent, WA. I was able to get the building number, and street
address, using an internal search page. I also got the address for my
building from the same page, and then plugged these two addresses into
MapQuest. MapQuest shows a distance of 10.15 miles between my location and
the location of the file server. So, as much as I'd like to use a LAN only,
the fact is we are using a WAN, albeit a very fast and rock solid one.

~~~~~~~~~~~~~~~~~~~~~~~~

I used this aggregate query to produce an intentional table scan:

SELECT PlayerID, YearID, POS, Sum(G) AS GSum
FROM Fielding
GROUP BY PlayerID, YearID, POS;

and then I added these two WHERE clauses:

WHERE 1 = 0
and
WHERE False

I had not read Marsh's suggestion of using a wide table at the time I ran
these tests, otherwise, I would have included all fields from this table. I
ran each query three times, but started with a different query for each set.
This is indicated in the spreadsheet with the test number: Set 1 = {1a, 1b,
1c}, Set 2 = {2a, 2b, 2c} and Set 3 = {3a, 3b, 3c}. I closed the FE
application in-between sets 1 & 2 and sets 2 & 3. So, hopefully, this would
have cleared out any records cached in local memory, but I don't know that
for a fact.

I have uploaded a 30 KB .zip file, which includes the FE.mdb file, a
Results.xls spreadsheet, and a text file with the URL to the baseball site:

http://www.accessmvp.com/TWickerath/downloads/LargeTableTest.zip

The Excel spreadsheet shows that WHERE conditions of 1=0 and WHERE FALSE
resulted in an average of 83 and 82 packets transferred (n=3, with Access
closed after each set of tests), respectively, for queries with these
criteria. The query without a WHERE clause, run to get an idea of how many
packets of data would be transferred, resulted in an average of 9531 packets.
Thus, the two constrained queries resulted in about 0.87% and 0.86% of
the full table scan's traffic. I think it is safe to say, based on these
results, that the JET ShowPlan indication of a scan for these criteria is
not honest.


Very nice job, Tom. The packet counts appear to conclusively
demonstrate that the query optimizer has definitely been
improved since David's post and my simple test. This is
very good news, and we can now forget about playing the
impossible-PK-criteria game that I brought up earlier.

I missed where you said which version of Jet and/or ACE you
used in the tests??
 
David W. Fenton

[BTW, I completely missed this interesting thread, because I have my
newsreader configured to kill crossposts to more than 2 newsgroups.
It's really unwise to crosspost excessively, as it makes your post
look like spam; in this case, I can't see that anything other than
m.p.access and m.p.formscoding was really appropriate, and my
philosophy is that if you post in m.p.access, i.e., the general
Access newsgroup, you shouldn't crosspost the same article to the
specific newsgroups -- instead, *don't* post it in the general
newsgroup and crosspost to 1 or more groups with specific
non-overlapping topics that are appropriate; but definitely keep the
crossposts to a minimum]

I have been told by several developers that one way to minimize
the occurrence of the Write Conflict is to put the main form's
controls into a subform and remove the Record Source from the main
form. You then set Child and Master Field links in the subforms
to the value returned by the record selection combo box on the
main form (stored in a text box on the main form).

I would suggest that you've perhaps misunderstood the
recommendation. It is never a good idea to have the same table
loaded in more than one editable form simultaneously. If you do
that, you're definitely setting yourself up for write conflicts, as
opposed to ameliorating write conflict errors.

My first question for you is to wonder if you're using optimistic or
pessimistic locking -- the first choice in all cases is OPTIMISTIC,
but it sounds to me like you're using pessimistic.
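For readers unsure of the distinction: under pessimistic locking the
record (or its page) is locked as soon as editing begins; under
optimistic locking nothing is locked until save time, and the engine
raises a write conflict if the record changed underneath you. A
language-neutral sketch of the optimistic check (Python; the function
and field names are illustrative, not any Access API):

```python
def optimistic_save(stored: dict, snapshot: dict, edits: dict) -> bool:
    """Save edits only if the stored row still matches the edit-time
    snapshot. Returns True on success, False on a write conflict (the
    moment Access would pop up its Write Conflict dialog)."""
    if stored != snapshot:      # someone else saved first
        return False
    stored.update(edits)
    return True

row = {"ProjectID": 1, "Status": "Open"}
my_snapshot = dict(row)         # taken when my edit began

row["Status"] = "Closed"        # another user saves before I do

print(optimistic_save(row, my_snapshot, {"Status": "On Hold"}))  # → False
```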

Secondly, if you're updating a record in a different subform, save
any edits to the first subform before navigating to the other
subform. This is the only way to avoid write conflicts.

Third, if you're running SQL updates in code against a table that is
also loaded in a form/forms, save the form/forms *before* running
the SQL update. Failure to do so guarantees write conflicts.

Last of all, further down the thread there's substantial discussion
of record-level locking. I've never used it and I've never had
issues.

Second, even if you use the tricks cited downthread to make sure
you're using record-level locking, if you then use DAO to update,
you're back to page-level locking for the DAO update, as DAO was
never updated by MS to be able to use record-level locking (because
of the stupid anti-DAO/pro-ADO campaign, which caused a whole bunch
of the Jet 4 innovations to be supported in ADO but not in DAO -- we
are still living with the after-effects of that bloody stupid
decision on MS's part).
 
David W. Fenton

Side note. There was a cdma thread on this issue many years
ago where David Fenton posted the results of extensive
performance testing of using anything that boils down to
WHERE False. In my mind, he conclusively demonstrated that
Jet query optimization is not smart enough to recognize that
no records will be retrieved and consequently does a full
table scan.

As Tom reported, I can't find any record of this.

Based on further discussion downthread, I can't imagine that Jet is
actually really doing a full table scan. Expressions that don't use
data from a field in a table get evaluated by Jet just once. For
instance, Rnd() will be evaluated once, while Rnd([FieldFromTable])
will get evaluated for each row.

Thus, it seems to me that WHERE 0=1 or any other variation on WHERE
FALSE that doesn't use data that changes row-by-row will be
evaluated only once. My guess is that SHOWPLAN is WRONG and there is
no table scan. Banana's test results bear that out, I think.

That said, I think I'd rather use something that I know what it
does, and I know that AutonumberPK = .5 is going to use the index
and is going to be blazingly fast (it doesn't have to do anything
with the index except examine the metadata to find out that it's an
impossible value). It's also going to be portable without causing
performance problems, whereas there's no guarantee that other db
engines will smartly optimize WHERE FALSE.

And last of all, I never use empty recordsources for forms like this
because the forms come up blank. Instead, I use a recordsource with
one record that returns Null for all the fields used in the
ControlSources on the form:

SELECT TOP 1 Null As Field1, Null As Field2, Null As Field3
FROM SmallestTableInMyDatabase

This ensures no performance hit and that the form displays with no
data and is completely uneditable. I think that's a much better way
to start with a form where you are setting the recordsource based on
user choice.
 
Paul

What a clever use of ListIndex, ListCount and ItemData. Thanks so much for
taking the time to create that navigation bar for me, John. The forms in
which I'd use it don't allow for any additions, so I won't experiment with
that command for the time being.
Also, note that combo boxes are limited to a maximum of 64 K records
(binary K).

My largest combo boxes have four fields with no more than a couple of
thousand records and they all seem to be working just fine.
By the way after thinking about this, I don't think that the write
conflicts have all that much to do with the number of records you have in
the recordset.

Now that you mention it, none of the experts ever told me the two were
related; I just imagined that if you loaded multiple records into a
form and began editing one, the other records were more likely to be
involved in a write conflict than if you only loaded the record you
were working on. If write conflicts aren't dependent on the number of
records in the recordset, I might just revert to loading the entire
recordset when the form loads, because I'm only dealing with hundreds,
not thousands, of records.

Thanks again for the nice applet with the record navigation bar, John.

Paul
 
Paul

Great information, Dave.

I try to observe good posting rules, and after reading your
suggestions I now realize it's best to keep the number of groups to a
necessary minimum.
the first choice in all cases is OPTIMISTIC locking

Understood. I am using Optimistic locking.
if you're running SQL updates in code against a table that is also loaded
in a form/forms, save the form/forms *before* running the SQL update.

This could be a problem, because I haven't been doing that. I am
running action queries without first saving the record, to enforce
some of our business rules. Armen and Banana have pointed out that
such queries can cause write conflicts. However, Tom Wickerath's page
on performance includes a link to an MS Web page that talks about how
DAO 3.6 can produce write conflicts. So does that mean it's best to
use ADO in place of those update queries? (And is that something that
can be done with ADO?)
if you're updating a record in a different subform, save any edits to the
first subform before navigating to the other subform. This is the only way
to avoid write conflicts

That's something else I'm not doing. I just assumed that when you leave a
subform, the record would be saved automatically. Are you saying that you
need to write a line of code to save a record when you click another tab or
close the form?
It is never a good idea to have the same table loaded in more than one
editable form simultaneously.

My main form has a single field, the PK ProjectID from tblProject, but
it's Locked, so it can't be edited. I keep the editable data from
tblProject in a subform. I originally had all the data from that
table in the Main form, but I ran into a problem which, at the moment,
I can't remember, that was solved by moving it into a subform. If
users can't edit that single field in the main form, then it
shouldn't create a write conflict problem. Am I right about that?

Thanks for your comments,

Paul
 
Armen Stein

That's something else I'm not doing. I just assumed that when you leave a
subform, the record would be saved automatically. Are you saying that you
need to write a line of code to save a record when you click another tab or
close the form?

Paul, your assumption is correct. When your focus moves between a
main form and subform, or from one subform to another subform on the
same main form, Access implicitly saves the dirty record of the form
you are leaving. In other words, within a main form and its subforms,
only one record can be dirty at a time. No extra code is needed
for this.

However, the explicit save *is* necessary if your focus is not leaving
the form, but running an update operation in code on the same records
as that form.

Armen Stein
Microsoft Access MVP
www.JStreetTech.com
 
David W. Fenton

Great information, Dave.

Er, my name is not "Dave", it's DAVID.

Pet peeve of mine: when someone uniformly identifies themselves as
"David" why would anyone presume to call them "Dave"? I wouldn't
think to call you "Paulie," unless that is what you signed your
posts with.

In actuality, I'm the least Dave-like person you're ever likely to
encounter!

<end of peevish rant>

[quoting me:]
This could be a problem, because I haven't been doing that. I am
running action queries without first saving the record to enforce
some of our business rules.

This almost guarantees write conflicts, particularly if the updates
are to records loaded in the form and locked for write.
Armen and Banana have pointed out that such queries can
cause write conflicts. However, Tom Wickerath's page on
performance includes a link to a MS Web page that talks about how
DAO 3.6 can produce write conflicts.

Any data interface can produce write conflicts if you're trying to
update a record in two different places (or the same data page if
you're not using record locking, which you're not when you use DAO
for executing SQL updates).
So does that mean it's best to use ADO in place of those
update queries? (And is that something that can be done with
ADO?)

It's best to do all your updates in the form itself and keep SQL
updates to a minimum (although it's OK if you're updating different
tables than are loaded in your form).
That's something else I'm not doing. I just assumed that when you
leave a subform, the record would be saved automatically. Are you
saying that you need to write a line of code to save a record when
you click another tab or close the form?

Whether the form updates or not depends on a number of
things, but since you can't necessarily control how your user departs
your subform, you really have to save the edits.

Frankly, I don't understand the design you're talking about. I have
never encountered a situation where I thought it was a good idea to
have the same table open in more than one editable subform at a time.
On the other hand, I do have an app where there's an abbreviated
subform on the opening summary tab, and then larger, more detailed
subforms on other tabs. But in my tab's OnChange event, if I'm going
to the detail tab, I will check the summary tab's subform's .Dirty
property and save it if it's dirty, so by the time the user gets to
the detailed subform, it's free of any write locks from the other
subform on the same table.

But that's actually a fairly rare situation for my apps. I tend to
make subforms read-only if I'm displaying them in multiple locations
on the same parent form.

And I can't think of a case where I'd use subforms to display data
from the same table(s) as the parent form, read-only or not.
My main form has a single field, the PK ProjectID from tblProject,
but it's Locked, so it can't be edited. I keep the editable data
from tblProject in a subform.
Why?

I originally had all the data from that table in the Main form,
but I ran into a problem which, at the moment, I can't remember
what it was, that was solved by moving it into a subform. If
users can't edit that single field in the main form, then it
shouldn't create a write conflict problem. Am I right about that?

Not between the parent and child forms, but if you've got more than
one child form, you can have write conflicts between those.

But I don't see any justification at all for the architecture you've
described.
 

David W. Fenton

Paul, your assumption is correct. When your focus moves between a
main form and subform, or from one subform to another subform on
the same main form, Access implicitly saves the dirty record of
the form you are leaving. In other words, within a main form and
its subforms, only one record can be dirty at a time. No extra
code is needed for this.

I'm not certain this is correct. I would never leave it to chance --
if I know I have a possibility of two subforms based on the same
table and both are editable in different tabs, I would use the tab
control's OnChange event to make sure none of the subforms are
dirty.

I have seen so many problems with implicit saves that I always like
to have a line of code whose purpose is saving the data, so that if
an error occurs in the process of saving, I know exactly what line
of code produced the error. Relying on an implicit save will cause
some other line of code to produce the error, one that is not
explicitly about saving the data.
However, the explicit save *is* necessary if your focus is not
leaving the form, but running an update operation in code on the
same records as that form.

One should avoid updating the same table in code when it is open in
an editable form -- that is not the Access way -- do all your
editing in the form itself, and save SQL updates for updating data
that's not loaded in your form(s).
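When you do have to run an update in code on the same records a form
has loaded, the pattern above amounts to this (a minimal sketch; the
table and field names borrow tblProject/ProjectID from earlier in the
thread, and the SET clause is invented for illustration):

```vba
' Save the form's pending edit explicitly BEFORE executing SQL
' against the same table, so the update can't collide with the
' form's own write lock.
If Me.Dirty Then
    Me.Dirty = False   ' any save error is raised on this line
End If

' Hypothetical update; dbFailOnError makes Execute raise an error
' instead of silently failing.
CurrentDb.Execute "UPDATE tblProject SET Status = 'Closed' " & _
    "WHERE ProjectID = " & Me!ProjectID, dbFailOnError
```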
 

Tom Wickerath

David,
Second, even if you use the tricks cited downthread to make sure
you're using record-level locking, if you then use DAO to update,
you're back to page-level locking for the DAO update, ....

What evidence can you point to in support of the above statement?
KB 306435 (http://support.microsoft.com/kb/306435) includes the following
quotes:

Resolution Section:
"To resolve this problem, use ActiveX Data Objects (ADO) to enable row-level
locking on an Access database, and then open DAO connections to the database.
All subsequent attempts to open DAO connections to the database will respect
the locking mode that you set."

More Information Section:
<Begin Quote>
To enforce DAO to use the row-level locking that ADO sets, follow these
steps:
Use row-level locking to open an ADO Connection to the database as follows:
Set the ADO Connection object's Provider property to Microsoft.JET.OLEDB.4.0.
Set the Connection object's dynamic Properties("Jet OLEDB:Database Locking
Mode") to 1.
Open the ADO Connection.
Use the OpenDatabase method to open the same database from DAO.

Because the locking mode is reset when you close and reopen the database,
use a DAO database that remains open as long as you need row-level locking.
For example, use Form or Module level scope in Visual Basic for the DAO
database.
Close the ADO Connection.

<End Quote>
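The KB 306435 sequence quoted above looks roughly like this in code
(a sketch only; the file path is a placeholder, and it assumes
references to both the ADO and DAO libraries are set):

```vba
' Sketch of the KB 306435 sequence: open an ADO connection with
' row-level locking, then open the same database via DAO so the
' DAO handle inherits that locking mode.
Dim cn As ADODB.Connection
Dim db As DAO.Database

Set cn = New ADODB.Connection
cn.Provider = "Microsoft.JET.OLEDB.4.0"
cn.Properties("Jet OLEDB:Database Locking Mode") = 1  ' 1 = row-level locking
cn.Open "Data Source=C:\Data\MyDatabase.mdb"          ' hypothetical path

' Open the same database from DAO while the ADO connection is open.
' Per the KB, keep this DAO database open (e.g. module-level scope)
' for as long as row-level locking is needed, because the locking
' mode resets when the database is closed and reopened.
Set db = DBEngine.OpenDatabase("C:\Data\MyDatabase.mdb")

cn.Close   ' the DAO database can now be used with row-level locking
```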


Tom Wickerath
Microsoft Access MVP
http://www.accessmvp.com/TWickerath/
__________________________________________

David W. Fenton said:
[BTW, I completely missed this interesting thread, because I have my
newsreader configured to kill crossposts to more than 2 newsgroups.
It's really unwise to crosspost excessively as it makes your post
look like spam; in this case, I can't see that anything other than
m.p.access and m.p.formscoding was really appropriate, and my
philosophy is that if you post in m.p.access, i.e., the general
Access newsgroup, you shouldn't crosspost the same article to the
specific newsgroups -- instead, *don't* post it in the general
newsgroup and crosspost to 1 or more groups with specific
non-overlapping topics that are appropriate; but definitely keep the
crossposts to a minimum]

I have been told by several developers that one way to minimize
the occurrence of the Write Conflict is to put the main form's
controls into a subform and remove the Record Source from the main
form. You then set Child and Master Field links in the subforms
to the value returned by the record selection combo box on the
main form (stored in a text box on the main form).

I would suggest that you've perhaps misunderstood the
recommendation. It is never a good idea to have the same table
loaded in more than one editable form simultaneously. If you do
that, you're definitely setting yourself up for write conflicts, as
opposed to ameliorating write conflict errors.

My first question for you is to wonder if you're using optimistic or
pessimistic locking -- the first choice in all cases is OPTIMISTIC,
but it sounds to me like you're using pessimistic.

Secondly, if you're updating a record in a different subform, save
any edits to the first subform before navigating to the other
subform. This is the only way to avoid write conflicts.

Third, if you're running SQL updates in code against a table that is
also loaded in a form/forms, save the form/forms *before* running
the SQL update. Failure to do so guarantees write conflicts.

Last of all, further down the thread there's substantial discussion
of record-level locking. I've never used it and I've never had
issues.

Second, even if you use the tricks cited downthread to make sure
you're using record-level locking, if you then use DAO to update,
you're back to page-level locking for the DAO update, as DAO was
never updated by MS to be able to use record-level locking (because
of the stupid anti-DAO/pro-ADO campaign, which caused a whole bunch
of the Jet 4 innovations to be supported in ADO but not in DAO -- we
are still living with the after-effects of that bloody stupid
decision on MS's part).
 

Armen Stein

I'm not certain this is correct. I would never leave it to chance --
if I know I have a possibility of two subforms based on the same
table and both are editable in different tabs, I would use the tab
control's OnChange event to make sure none of the subforms are
dirty.

Well, I am. :) It isn't chance - it's normal Access behavior.

That said, we don't normally have two *editable* subforms on the same
table on the same form anyway. If that's the only time you're doing
the explicit save, then I think that's fine to control the saves more
precisely. I just wouldn't recommend them as a standard practice -
only when we need them for some other reason. Basically we always try
for the simplest code possible.

Armen Stein
Microsoft Access MVP
www.JStreetTech.com
 

David W. Fenton

Well, I am. :) It isn't chance - it's normal Access behavior.

What about the point of having a line of code that explicitly saves
as the point of origin of any failure in the save process? That
seems pretty compelling to me.
That said, we don't normally have two *editable* subforms on the
same table on the same form anyway. If that's the only time
you're doing the explicit save, then I think that's fine to
control the saves more precisely. I just wouldn't recommend them
as a standard practice - only when we need them for some other
reason. Basically we always try for the simplest code possible.

I try for the most reliable code. The explicit save seems absolutely
required to me in order to troubleshoot save errors.

I was badly burned by an unheralded aspect of the bookmark bug,
i.e., that setting the bookmark to move the record pointer
implicitly saves any edits to the departed record, but errors in
that save were getting eaten by Access. Thus, this code was
dangerous:

With Me.RecordsetClone
    .FindFirst "[PK]=" & Me!cmbComboBox
    If Not .NoMatch Then
        Me.Bookmark = .Bookmark
    End If
End With

The line where the bookmarks are set (which does the navigation)
implicitly saves any edits, and the bug was that in certain
circumstances I could never identify, those edits would *not* be
saved, and no error would be reported.

This code is safe:

With Me.RecordsetClone
    .FindFirst "[PK]=" & Me!cmbComboBox
    If Not .NoMatch Then
        If Me.Dirty Then
            Me.Dirty = False
        End If
        Me.Bookmark = .Bookmark
    End If
End With

....and the reason it's safe is because you're not relying on the
implicit save -- you're explicitly telling Jet to save the edit, and
if any errors happen in that save, they will be reported.

This is the same reason I'd always explicitly save dirty subforms
when changing a tab, because it's essential to do the save
EXPLICITLY, rather than relying on an implicit save which
historically has been buggy in other circumstances.
 
