Changing Query Defs in VBA

David W. Fenton

I've encountered occasions (rarely, mind you) where the last chunk
of record changes hadn't been written to disk yet and the next
query errored because it couldn't find the data that I was
expecting to be there already. That's the problem with a desktop
database engine as opposed to a client/server database engine.
The client/server does these operations in memory and has a
snapshot of the data that includes the results of the previous
data manipulations and uses that snapshot for subsequent
operations which read from the snapshot, not from the disk, while
the database engine writes to disk the earlier transactions. (I'm
not sure I'm explaining this very clearly. Sorry.) The desktop
database engine doesn't have a snapshot to read from and always
reads what's already been written to disk. The lack of ability to
do these "will be done" operations in memory is one of the reasons
one sees lightning speed in client/server database engines and Jet
seems to take its sweet time.

I've never seen such a thing.

Jet *does* use memory, in the form of temp files on disk. And Jet
keeps track of which are the currently valid data pages, so it knows
that a data page in the MDB file has been superseded by a data page
in the temp file. Surely this is kept track of in an in-memory list,
which all query processing will access in order to get the latest
data. And it shouldn't matter if it's actually been written to disk
or is just in the disk cache queue to be written, since the disk
cache should also be keeping its own list of which sectors are to be
read from disk and which from the cache (because they haven't yet
been flushed to disk).

I would suggest that the problem you've described is caused by one
of a number of possible things:

1. using a different connection/database variable for the second
query than for the first.

2. running them in separate transactions without committing the first
before running the second.
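
For what it's worth, the pattern I'd expect to be safe looks something
like this (an untested sketch; the table and field names are invented):

Dim ws As DAO.Workspace
Dim db As DAO.Database

Set ws = DBEngine.Workspaces(0)
Set db = ws.Databases(0)   ' one Database variable reused for every Execute

ws.BeginTrans
db.Execute "INSERT INTO tblTarget ( CustomerID ) " & _
    "SELECT CustomerID FROM tblSource;", dbFailOnError
ws.CommitTrans             ' commit before anything depends on the insert
db.Execute "UPDATE tblTarget SET Checked = True;", dbFailOnError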

These seem to me like the kinds of problems that could lead to a
query not seeing the updates made in the immediately previous query.
I guess the one thing that *may* happen is something about the disk
writes that DoEvents allows, but I think that's very unlikely, as
disk I/O should be running at a level much higher than Access
itself, so ought to happen anyway. Perhaps it frees up something in
Access itself to communicate with the OS in a way that makes the
write happen.

In all my years of programming in Access, I've never encountered
such a problem, and I've written lots of code where queries are
executed successively and later ones depend on writes that happened
in the earlier ones. I'd be interested to know what conditions lead
to this problem, since I've never run into it. Perhaps ADO is the
culprit (I don't use ADO)?
 
Larry Linson

Jamie Collins said:
But what are we trying to achieve here? If Larry
Linson Microsoft Access MVP truly does use one
query for each task then he would presumably have
N queries, one for each value of TOP N he anticipates;
that would give great performance but at a
cost e.g. a bloated schema that is harder to use than
it need be.

The database applications I have done do not include many TOP N Queries, and
I don't recall a single case where the requirements called for multiple
Queries or multiple searches varying only in "N". The idea of saving
multiple queries varying only in "N" seems outlandish to me. On the other
hand, I didn't begin using SQL until sometime in the 1980s, so maybe I have
missed some important points.
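
To illustrate why saving multiple Queries varying only in "N" seems
outlandish (a sketch only; every name here is invented): a single
dynamic SQL statement covers whatever value of N comes up:

Dim rs As DAO.Recordset
Dim lngN As Long
Dim strSQL As String

lngN = 25   ' whatever "N" the user asked for this time
strSQL = "SELECT TOP " & lngN & " CustomerID, OrderTotal " & _
         "FROM tblOrders ORDER BY OrderTotal DESC;"
Set rs = CurrentDb.OpenRecordset(strSQL, dbOpenSnapshot)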

Unless you are querying very large numbers of records, the performance
advantage of saved Queries versus dynamic SQL is not detectable by the user
at the keyboard in the Access environment.
The fact you quote the above TOP N as an exceptional
case suggests that like me, but unlike Larry it would seem,
you regard a parameterized query (a.k.a. a procedure) as
the norm. I'd go further and say that dynamic SQL (the
term usually applied in the wider SQL community, where
it is pejorative - google it) should be avoided.

Interestingly, the very first article retrieved by googling "dynamic SQL"
included comments about both the benefits and drawbacks of dynamic SQL, but
the lead-in to the drawbacks discussion qualified that with "not carefully
handled". Perhaps some in the "wider SQL community" exercise little or no
care in "handling" dynamic SQL -- that would be a good reason to regard it
as dangerous and a pejorative, or to exercise more care in hiring.

There may be other good reasons for avoiding dynamic SQL in the "wider SQL
community" but it has not been a stumbling block in my Access work, whether
individual databases, multiuser environment with MDB-JET, or Access clients
to server databases.

Generalizations from the "wider SQL community" are not necessarily
universally applicable, nor necessarily applicable in the Access world. In
my observation, those who speak of "the wider SQL community" are primarily
speaking about server databases, not file-server databases, and often aren't
aware of the differences.

Larry Linson
Microsoft Access MVP
 
David W. Fenton

The fact you quote the above TOP N as an exceptional case suggests
that like me, but unlike Larry it would seem, you regard a
parameterized query (a.k.a. a procedure) as the norm.

You would be completely wrong.

I don't use parameter queries at all because they are not flexible
enough.
I'd go further
and say that dynamic SQL (the term usually applied in the wider
SQL community, where it is pejorative - google it) should be avoided.

I really don't care what the assholes in the greater SQL world think
(and there are many of them). Applying WHERE clauses at runtime and
writing SQL on the fly to fit the situation works great, and in an
Access application, it doesn't have any significant downsides,
especially in regard to performance, except in a small number of
situations that I don't encounter often.
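
By "writing SQL on the fly" I mean nothing fancier than this kind of
thing (a sketch with invented names, assuming a form with an optional
filter textbox named txtCity):

Dim db As DAO.Database
Dim rs As DAO.Recordset
Dim strSQL As String

strSQL = "SELECT * FROM tblCustomers"
If Len(Me!txtCity & vbNullString) > 0 Then
   strSQL = strSQL & " WHERE City = """ & Me!txtCity & """"
End If

Set db = CurrentDb()
Set rs = db.OpenRecordset(strSQL, dbOpenDynaset)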

That said, if you're not using a Jet back end, then all bets are off
-- a different database engine changes the balance of benefits to
costs completely.
 
David W. Fenton

Interestingly, the very first article retrieved by googling
"dynamic SQL" included comments about both the benefits and
drawbacks of dynamic SQL, but the lead-in to the drawbacks
discussion qualified that with "not carefully handled". Perhaps
some in the "wider SQL community" exercise little or no care in
"handling" dynamic SQL -- that would be a good reason to regard it
as dangerous and a pejorative, or to exercise more care in hiring.

And many databases, such as SQL Server, will cache the optimization
of an ad hoc query so that if the same SQL statement is sent again,
it will use the cached optimization rather than recomputing it. With
MySQL, I use the ADODB add-in that caches recordsets and thus speeds
up performance a great deal (instead of writing frequent queries to
text files, I just let ADO handle caching it, which works very well,
because the data often doesn't change).
 
Larry Linson

Jamie Collins said:
First, more users consume the results of SQL than write SQL
themselves. Second, try thinking in terms of code maintenance, code
reuse, coupling and cohesion, etc; Access development is not totally
unrelated to the general principles of good software engineering.

The word "access" in the name of this newsgroup is there for a reason... the
subject is Microsoft Access, not OOP programming, not other client or
front-end apps. I've been a computer programmer since before someone
invented the somewhat pretentious title of "software engineer". Access can
be a nice client for many server databases, so questions about Access as a
client to server databases are generally on-topic.
Perhaps, Larry, you always work alone and have no need to write code
that can be maintained by another coder, or write 'code for coders'
e.g. coding business object model using class modules for other coders
to use in their own code, etc (I think I know what might be coming
next: "In my experience people who work in teams/choose an OOP
approach in VBA are [aim low here]." <g>).

You are probably correct that my view is not the same as yours... I've only
been working in teams, sometimes very large, sometimes small, since the late
1950s. Prior to that time, all my work had been "alone", and, it is true,
that in the following years, some of my work has been "alone". Not only
that, I wasn't involved in managing teams and establishing management and
development procedures and guidelines until sometime in the 1970s.

As you, no doubt, are aware, VBA supports only limited OOP. Neither Access
nor Jet is likely to be a tool aimed at the enterprise architecture with
which you seem to be most familiar. OOP does have its place, and one place
for it is in code-intensive environments (which is often the environment for
enterprise applications). A place where it may not fit so well is in the RAD
environment, where a typical application can often be developed by
pointing-and-clicking one's way to a friendly UI and then sprinkling just
enough code behind the UI to make it work and work smoothly (which, often,
is surprisingly little code). But, even there, there are times that it is
useful. And, we often point out that even Access developers who never write
their own classes are Object Oriented Programmers, Consumer-side.

I do know that many, as you may also, fervently believe that "if it's not
enterprise, it's not development" but there are many more individual
applications, and modest-sized multiuser and client-server applications
than enterprise applications. In my later years in the corporate
environment, I had a colleague whose motto was, "if it isn't water-cooled,
it doesn't compute" -- a slightly earlier take on a similar view.

But, just so you won't be totally disappointed: In my experience, those
single-user and modest-group applications stand a significantly better
chance of being successful, on the average, than enterprise applications; I
suspect it is because far too many approach vast projects with half-vast
ideas, or, to put it another way, because the difficulty of managing a
project increases geometrically, not linearly, with its size.

Larry Linson
Microsoft Access MVP
 
David W. Fenton

You seem to be having trouble imagining good reasons for avoiding
dynamic SQL, so try this. Say you have one Jet mdb and N number of
'front end' applications (Access, VB6, .NET, whatever)

A ridiculous circumstance that hardly ever happens. For that
scenario, it would make *no* sense to put the data in a Jet MDB,
while it would make perfect sense to put the data in SQL Server,
using views and stored procedures in common for all the apps.

In any event, if you're doing that kind of development, it ought to
have an abstraction layer between the front end (in whatever format)
and the back end (in whatever db engine) precisely because that's
one of the best ways to handle multiple front ends to the same data
store.

But that's a very unrealistic *Access/Jet* scenario.
using the
database as a shared application store and each application has
its own 'local' version of some SQL code (i.e. it is held as text
compiled into the application etc) which each return the exact
same resultset. Consider the scenario where the table structure
changes, requiring the SQL code to be changed in order to return
the same resultset. In this scenario you must find the code in N
number of applications and alter it using 1 to N 4GL languages.
Now consider a single application (e.g. Access) that holds the
same SQL multiple times within its VBA code; you have the same
maintenance issue i.e. finding and amending essentially the
same code multiple times (and what if you miss one...?)

I've never had many problems with this. First off, Speed Ferret does
a darned good job of fixing such issues, but mostly, I don't change
field names after an app has been built, simply because it causes
maintenance issues. So, for all the dynamic SQL I use, it's pretty
easy to avoid issues with this. And there are other ways to fix it,
such as replacing base source tables with stored queries of the same
name that alias the underlying data to be consistent with the code.
That's usually not something I'd do as a long-term solution, but
it's a great way of porting existing code quickly to new
requirements.
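
As a sketch of that porting trick (all names invented): say tblCustomer
was renamed tblCustomers and CustName became CustomerName; a saved query
created under the old table name keeps the old code working:

Dim db As DAO.Database
Dim qdf As DAO.QueryDef

Set db = CurrentDb()
' Old code can keep referring to tblCustomer.CustName.
Set qdf = db.CreateQueryDef("tblCustomer", _
    "SELECT CustomerID, CustomerName AS CustName FROM tblCustomers;")
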
Both scenarios can be ameliorated by storing the SQL code in
a database object as a VIEW (or perhaps a PROCEDURE, although I
recognise that executing procedures while optionally passing
strongly-typed parameter values is not popular with Access
'purists' because it doesn't fit well with the "always connected
bound controls" approach, and I commonly see names of front end
Forms/controls hard-coded into backend database PROCEDURES, which
strikes me as lacking the necessary level of code separation
between front- and backends). This way, the SQL code can be
considered as part of the schema, as indeed it should, IMO.

You seem to be under the impression that Larry and I are advocating
no stored querydefs, whereas what I've been arguing (I'll let Larry
speak for himself) is that there's little need to edit QueryDefs in
code. That, of course, has exactly the same problems as dynamic SQL
for the scenarios you're talking about, so doesn't really address
*any* of the issues you've raised above.
Likewise, IMO those who fixate on the physical implementation
("file based", "fields and records", etc) are missing the point of
a SQL product (hint: the 'S' used to be for 'Standard').

The S has always stood for "Structured" so far as I'm aware.
No one said that the ideas are universally applicable or applicable
in the Access world, but please do open your mind to the fact that
some SQL concepts and practises may have benefit in the Access world
some of the time.

....And others may be completely irrelevant.
Unless you are querying very large numbers of records, the
performance advantage of saved Queries versus dynamic SQL is not
detectable by the user at the keyboard in the Access environment.

First, more users consume the results of SQL than write SQL
themselves. Second, try thinking in terms of code maintenance, code
reuse, coupling and cohesion, etc; Access development is not
totally unrelated to the general principles of good software
engineering. Perhaps, Larry, you always work alone and have no
need to write code that can be maintained by another coder, or
write 'code for coders' e.g. coding business object model using
class modules for other coders to use in their own code, etc (I
think I know what might be coming next: "In my experience people
who work in teams/choose an OOP approach in VBA are [aim low
here]." <g>).

I think you're mistaking our position for one that rejects *all*
saved queries, whereas what we're rejecting is the insistence on
using stored QueryDefs for performance reasons (and thus, the
necessity to edit them in code). The performance reasons don't
justify editing the stored queries in code, in my opinion (and this
appears to me to be what Larry has clearly argued, as well).

This has little or nothing to do with the issues of maintenance,
because I (and, I suspect, Larry) use stored queries for some
purposes and then often use those stored queries as recordsources
for forms and reports, and apply WHERE clauses with the default
openform/report arguments, as well as occasionally altering the SQL
in the open/load events of the objects themselves.
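
That is, something along these lines (the names are invented), rather
than rewriting the saved query itself:

Dim dtStart As Date

dtStart = DateSerial(Year(Date), Month(Date), 1)

' frmInvoices has the saved query qryInvoices as its recordsource; the
' runtime criteria go in OpenForm's WhereCondition argument instead of
' being edited into the QueryDef.
DoCmd.OpenForm "frmInvoices", acNormal, , _
    "InvoiceDate >= #" & Format$(dtStart, "mm\/dd\/yyyy") & "#"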

For data manipulation that doesn't involve presentation in a UI, I
often use pure dynamic SQL; however, that is not uncommonly based on
saved queries instead of directly on tables.
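
A typical shape for that (invented names), with the saved query doing
the joins and only the criteria built at runtime:

Dim db As DAO.Database
Dim lngYear As Long

Set db = CurrentDb()
lngYear = Year(Date) - 1

' qryOrderTotals is a saved query that handles the joins and grouping.
db.Execute "INSERT INTO tblArchive SELECT * FROM qryOrderTotals " & _
    "WHERE OrderYear = " & lngYear & ";", dbFailOnError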

But I never need to edit those stored queries for the purposes of
changing criteria -- and that's the usual justification for editing
stored QueryDefs (for performance reasons).
 
Matt Lockamy

The main reason is that I am working with a system through a Lawson-Oracle
database that I do not have direct access to. I'm using ODBC drivers to
access the information. Querying the tables for each report would take
forever, so I'd rather archive the summary data I want into tables for
reporting on later. The indexes in Lawson are not set up for users to easily
get at custom information, so rather than be limited to canned reports, I
work with the tools I have. As for changing the SQL, several of the tables
in Lawson are set up such that each month has its own field (for example
LAWSON_ICHISTORY!ADJOUT_01 is for adjustments out that occur in January). I
want to be able to track those seamlessly. I'm definitely open for
suggestions on how I could change my approach.

I'm happy for all of your help ~ and a little surprised by how much interest
there is in the topic = )
 
James A. Fortune

Jamie said:
Sure, from that perspective there is no difference between a VIEW and
the equivalent ad hoc query. However, there are still advantages to
having the query stored in a VIEW. I used the example of code maintenance
(the Proliferation Avoidance Rule) but there are others, especially in
products such as SQL Server that support WITH CHECK OPTION and INSTEAD
OF triggers. See:

A View Review
by Craig S. Mullins
http://www.dbazine.com/db2/db2-mfarticles/mullins-viewreview

Another Use for Views
by Joe Celko
http://www.dbazine.com/ofinterest/oi-articles/celko28

Jamie.

--


I use both dynamically built SQL strings and saved queries. Your point
about maintenance is a good one to keep in mind, but a public function
in a module should eliminate problems with the Proliferation Avoidance
Rule. I'll be interested in learning what the dbazine people can offer.
Thanks for posting the links.
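
To illustrate the public-function idea (an invented example), the SQL
lives in exactly one place and every form or report that needs it gets
it from the function:

' Single point of maintenance for the order-lookup SQL.
Public Function OrdersByCustomerSQL(ByVal lngCustomerID As Long) As String
    OrdersByCustomerSQL = "SELECT OrderID, OrderDate, OrderTotal " & _
        "FROM tblOrders WHERE CustomerID = " & lngCustomerID & _
        " ORDER BY OrderDate DESC;"
End Function

Anything that needs that recordset can then call, say,
CurrentDb.OpenRecordset(OrdersByCustomerSQL(lngID)), so the SQL itself
is maintained in one spot.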

James A. Fortune
(e-mail address removed)
 
David W. Fenton

The main reason is that I am working with a system through a
Lawson-Oracle database that I do not have direct access to. I'm
using ODBC drivers to access the information. Querying the tables
for each report would take forever, so I'd rather archive the
summary data I want into tables for reporting on later. The
indexes in Lawson are not set up for users to easily get at custom
information, so rather than be limited to canned reports, I work
with the tools I have. As for changing the SQL, several of the
tables in Lawson are set up such that each month has its own field
(for example LAWSON_ICHISTORY!ADJOUT_01 is for adjustments out
that occur in January). I want to be able to track those
seamlessly. I'm definitely open for suggestions on how I could
change my approach.

I honestly *still* don't see any reason for editing querydefs in
code. Why not just write dynamic SQL? In this case, there's nothing
to be gained performance-wise from the compiled saved query (that's
only a Jet issue), so I just don't quite see what the point is.
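
For instance, the month-per-field layout you describe can be handled by
building the field name into the SQL string at runtime (only
LAWSON_ICHISTORY and ADJOUT_01 come from your post; the summary table
and everything else here is invented):

Dim db As DAO.Database
Dim intMonth As Integer
Dim strField As String

Set db = CurrentDb()
intMonth = Month(Date)
strField = "ADJOUT_" & Format$(intMonth, "00")   ' e.g. ADJOUT_01 for January

db.Execute "INSERT INTO tblAdjustmentSummary ( AdjOut ) " & _
    "SELECT Sum(" & strField & ") FROM LAWSON_ICHISTORY;", dbFailOnError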
 
David W. Fenton

Sure, from that perspective there is no difference between a VIEW
and the equivalent ad hoc query. However, there are still
advantages to having the query stored in a VIEW. I used the example of
code maintenance (the Proliferation Avoidance Rule) but there are
others, especially in products such as SQL Server that support
WITH CHECK OPTION and INSTEAD OF triggers.

You apparently have not read any of my replies very carefully, as
I've never argued against using saved QueryDefs. What I've argued
against is *altering* saved QueryDefs in code. I simply see no
reason whatsoever to do that, and none of the points you are making
address that issue at all.

And *I* thought the problem space was limited to the context of
Access-to-Jet applications, since so many of the issues involving
queries are a completely different animal depending on which back
end you're using. Of *course* views in a server back end are
superior to any variety of dynamic SQL. That doesn't mean that you
need a saved query to utilize that server-side view, or that if you
had the saved query, there was ever any justification for altering
it in code.

And that was what the subject of discussion was, so far as I
understood it. You seem to be providing evidence for arguments that
are simply not in dispute, and never once seem to have addressed (or
maybe you haven't comprehended?) the points that Larry and I have
been attempting to make.
 
Larry Linson

And that was what the subject of discussion
was, so far as I understood it. You seem to be
providing evidence for arguments that are simply
not in dispute, and never once seem to have
addressed (or maybe you haven't comprehended?)
the points that Larry and I have been attempting
to make.

David, although it is entirely possible that Mr. Collins has not
comprehended the points, it is also possible that he's doing nothing more
than trolling. In light of his past postings, one might, indeed, lean toward
the latter interpretation. Either, of course, is good enough reason not to
be overly concerned -- other than to keep newsgroup participants from being
misled.

And, in any case, he is entirely incorrect on one point: there are
newsgroups about the particular client languages that cover using Jet as a
datastore with those languages, which would be appropriate venues for such
discussion, as this one is not. But, of course, there are rarely any
consequences resulting from posts which drift off-subject either in the
sponsored newsgroups or in USENET.

Larry
 
'69 Camaro

Hi, David.
I would suggest that the problem you've described is caused by one
of a number of possible things:

1. using a different connection/database variable for the second
query than for the first.

2. running them in separate transactions without committing the first
before running the second.

Those are good guesses, but not the scenario I encountered.
In all my years of programming in Access, I've never encountered
such a problem, and I've written lots of code where queries are
executed successively and later ones depend on writes that happened
in the earlier ones.

I write code on a different scale than most people. So, when people ask me,
"Have you ever seen this happen?" I often say, "Yup. I tripped over that
one, too."
I'd be interested to know what conditions lead
to this problem, since I've never run into it. Perhaps ADO is the
culprit (I don't use ADO)?

Not ADO. I use SQL statements with the Execute method of the Database
Object. For example:

Dim db As Database

Set db = CurrentDb()
db.Execute "INSERT INTO blahblah . . . ", dbFailOnError
db.Execute "INSERT INTO clangclang . . . ", dbFailOnError
db.Execute "INSERT INTO jinglejingle . . . ", dbFailOnError
' et cetera

The tables were in a split database, so they're linked, but the back end is
on the same hard drive on my workstation. The linked tables resided in the
same back end, where referential integrity was enforced and appropriate
foreign keys were defined. There were primary keys and appropriate indexes
to make the queries run as fast as possible. The file sizes were from 600 MB
to 900 MB before the record insertions, so well within Jet's boundaries both
before and afterward, where they grew by 50 MB to 300 MB or a little more.

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
Blog: http://DataDevilDog.BlogSpot.com
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 
'69 Camaro

Hi, David.

Sorry. I forgot to put the update query with the dependency in my example:

Dim db As Database

Set db = CurrentDb()
db.Execute "INSERT INTO blahblah . . . ", dbFailOnError
db.Execute "UPDATE clangclang INNER JOIN blahblah ON . . . ",
dbFailOnError
db.Execute "UPDATE jinglejingle INNER JOIN clangclang ON . . . ",
dbFailOnError
' et cetera

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
Blog: http://DataDevilDog.BlogSpot.com
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.


 
James A. Fortune

Jamie said:
I couldn't disagree more. Doing so would mean persisting your database
logic in the 'front end' (Access) rather than the 'back end' (Jet). A
non-Access user (Excel, VB6, .NET, etc) wouldn't be able to use the
function (would require automating Access) therefore would have to
write ad hoc queries. Therefore, the UDF approach IMO *encourages*
proliferation.

So your point is that the same public function would have to be copied
into modules in each of the separate programs that are connecting? I
was thinking more along the lines of separate Access forms needing the
same parameter query. In:

http://groups.google.com/group/comp.databases.ms-access/browse_frm/thread/c3baeb7fea00a21


I suggested a managed code dll for UDF when working with SQL Server.
Would that solve your proliferation problem?


UDFs are so handy in Access that I'd like to see something similar when
using SQL Server without creating unwieldy T-SQL functions. In Access,
you're simply trading one proliferation problem (dynamic SQL) for
another (saved queries). In practice I don't often run into a problem of
six different uses for the same parameter query. Dynamic SQL is often
a way for me to cut down on proliferation at minimal expense.

I am not as bothered about putting database logic in the front end as
others are but perhaps that will change after reading the articles or
after I start using Excel and other programs to hook up with SQL Server
simultaneously. I have changed QueryDefs in code but I see that as
front end logic also. In my case, usually one program handles any
updating of the information and any others are simply reading the data.

What is the real issue about where the database logic is stored?
They had some 'big name' SQL authors (Joe Celko, Chris Mullins, Fabian
Pascal, etc) but I heard the site ran out of funds for new articles :
( Good archive, though.

I haven't gotten to these yet, but I will soon.

James A. Fortune
(e-mail address removed)
 
Larry Linson

Jamie Collins said:
I can use Jet via VBA in Word but why do you
think a Word MVP would be able to tell me
more about referential actions in Jet than an Access
MVP? To me, that makes no sense.

The discussion about appropriate venues for particular topics is due to your
seeming to spend most of your time here arguing against perfectly valid
answers to Access-Jet questions, often by introducing spurious issues based
on using Jet in some different environment, some other database entirely, or
just databases in general. It is not due to your asking questions -- you
should certainly feel free to ask valid questions, but do not be
disappointed if you are directed to more appropriate newsgroups.

If you are not intentionally trolling, your responses often make no sense,
either -- and, if you are trolling, I, for one, would be happy if you sent
your troll posts to alt.rant, where they would be even more useful. That
would let you get whatever you wanted to say off your chest, but wouldn't
cause a diversion from actual, useful answers to the question asked here.

Larry
 
David W. Fenton

For example:

Dim db As Database

Set db = CurrentDb()
db.Execute "INSERT INTO blahblah . . . ", dbFailOnError
db.Execute "INSERT INTO clangclang . . . ", dbFailOnError
db.Execute "INSERT INTO jinglejingle . . . ", dbFailOnError
' et cetera

The tables were in a split database, so they're linked, but the
back end is on the same hard drive on my workstation. The linked
tables resided in the same back end, where referential integrity
was enforced and appropriate foreign keys were defined. There were
primary keys and appropriate indexes to make the queries run as
fast as possible. The file sizes were from 600 MB to 900 MB before
the record insertions, so well within Jet's boundaries both before
and afterward, where they grew by 50 MB to 300 MB or a little more.

What about *not* using CurrentDB(), but instead using a db variable
pointing directly to the back end? No, it shouldn't make a
difference, but it's one more thing to look at.
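
That is, something like this (the path is invented):

Dim dbBE As DAO.Database

' Open the back end directly instead of relying on CurrentDb() and the
' linked tables.
Set dbBE = DBEngine.Workspaces(0).OpenDatabase("C:\Data\MyBackEnd.mdb")

dbBE.Execute "INSERT INTO blahblah . . . ", dbFailOnError
dbBE.Close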

I'd also check the disk caching settings on the local machine. But I
don't even know how to do that any more!
 
'69 Camaro

Hi, David.
What about *not* using CurrentDB(), but instead using a db variable
pointing directly to the back end? No, it shouldn't make a
difference, but it's one more thing to look at.

I hadn't thought to try that. Thanks for the suggestion.
I'd also check the disk caching settings on the local machine. But I
don't even know how to do that any more!

Thanks for this suggestion, too. I thought to try that at the time, but I
didn't have the boss's authorization to change any of these settings. (For
some reason, they didn't want me to experiment on the company's computers.)
But I
don't even know how to do that any more!

Once upon a time I knew how, too. ;-)

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
Blog: http://DataDevilDog.BlogSpot.com
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 
Larry Linson

Jamie Collins said:
I feel I spend most of my time here raising awareness:

You may feel you have a "mission" to enlighten less-fortunate participants,
by being an advocate or champion, and only you can decide whether you have
such a "mission". But the "access" in the name of this newsgroup is not
there by chance -- this is a newsgroup for questions and answers about
Microsoft Access; it is not an advocacy newsgroup.

I assure you that you are not the only "enlightened one" who participates
here and knows about the subjects you list -- but most responders are here
with the purpose of providing answers to the real questions that posters
raise.

And, of course, however you view your intentions here, quite a lot of what
you post could as easily be understood as attempts to convince users of
Access to change and use something different -- which is definitely not the
purpose of this newsgroup.

Larry Linson
(FYI: MVPs do not, and are not authorized to, "speak for Microsoft")
 
James A. Fortune

Jamie said:
On Feb 2, 6:18 pm, "James A. Fortune" <[email protected]>
wrote:

...

Jamie.

--

Jamie,

You've given me a lot to think about as usual. I'll have to spend some
time thinking about your points.

Thanks again,

James A. Fortune
(e-mail address removed)
 
Larry Linson

Jamie Collins said:
...which encompasses Jet, which encompasses Jet SQL, which encompasses
ANSI-92 Query Mode. Same applies to ACE: can be used without the
Access UI via the OLE DB provider, natively supports ANSI-92 Query
Mode, etc.

You have a broad view of Access... IMNSHO, it does not "encompass" JET nor
"encompass" ACE. It uses them as database engines, and those are the
database engines it uses by default. However, it can also use Informix, or
DB2, or Oracle as its database engine, but that doesn't make this an
appropriate place for questions about using Informix with some other UI like
PowerBuilder, nor Oracle with Oracle Forms, nor DB2 with Java.
That much I know. Do MVPs speak _to_ Microsoft and
are they authorized to respond in these groups?

Yes, MVPs do speak to various people at Microsoft. I don't understand what
you mean by "are they authorized to respond in these groups?"; no
"authorization" is required to respond in the sponsored newsgroups.
If so, please could you ask why ON UPDATE
CASCADE, CREATE TEMPORARY TABLE
and multiple-field NOT NULL constraints are
mentioned in the Access Help but do not appear
to actually be supported?

You can let Microsoft know of your dissatisfaction with the Help; in Access
2003, almost every Help topic asks you if the Help has been useful. If you
check No, then you have a chance to respond to "How can we make this
information more helpful?". In general, "why" questions are the ones least
likely to get an answer; they sometimes do result in a change, though.

I wouldn't likely run across such items because I use Access to create my
Tables, and DAO statements to create Tables at runtime (temporary tables in
the sense that mine have to be deleted when I am done with them, not by the
definition of CREATE TEMPORARY TABLE). Thus, I do not know from my own
personal knowledge whether the SQL DDL statements you mention work or not;
and because I don't have occasion to need them in what I do, my motivation
to explore the subject is low. Given that, it is not a question that I would
put to Microsoft as you have stated it.
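
For what it's worth, the runtime table creation I'm referring to is plain
DAO along these lines (a sketch; the names are invented):

Dim db As DAO.Database
Dim tdf As DAO.TableDef

Set db = CurrentDb()

' "Temporary" only in the sense that I delete it when I am done with it.
Set tdf = db.CreateTableDef("ztblWorkTemp")
tdf.Fields.Append tdf.CreateField("ItemID", dbLong)
tdf.Fields.Append tdf.CreateField("ItemTotal", dbCurrency)
db.TableDefs.Append tdf

' ... use the table ...

db.TableDefs.Delete "ztblWorkTemp"
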
(Is that not a question about Microsoft Access
that begs an answer?)

It seems to me that it is a question about Microsoft Access Help. One
possibility would be that those SQL DDL statements were documented but then
the functionality did not make it into the product, in which case, the best
outcome you are likely to see would be a change to the Help text.
 