All:
Each of the BE files is structurally
identical
If all the back ends are identical, then all you have to do is add one
additional column to the table that captures whatever distinguished the
20 BE's in the first place (presumably a PlantID). Remember, any design
problem, or anything that makes things a little bit more difficult in your
current design, will also make it more difficult when you move to SQL server.
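A minimal sketch of that idea (table and column names here are made up, just to illustrate): instead of 20 structurally identical BE files, you have ONE table, and a PlantID column says which plant each row belongs to.

```sql
-- Hypothetical merged back-end table (Jet DDL sketch; names are placeholders).
-- One table replaces the 20 identical BE files; PlantID distinguishes them.
CREATE TABLE tblProduction (
    ProductionID COUNTER PRIMARY KEY,   -- Jet autonumber
    PlantID      LONG NOT NULL,         -- which of the 20 plants this row is for
    RunDate      DATETIME,
    Quantity     LONG
);
```

Every query, form, and report then simply filters on PlantID instead of pointing at a different file.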
What I don’t understand is how to limit the data pulled down via the
link to a single PlantID (the user will never need to work with more
than one plant’s data at a time). My efforts include:
• Linking directly to the tables – but then I get all the data, not
just one PlantID worth
The above is not true at all. Linking to a table or view does NOT cause ANY
records to come down the wire. Even in an environment where you are using a
network share and not SQL server, if you load up a form with a where clause
for a given PlantID then ONLY that one plant's records come down the wire
over the network. This is true BOTH for a linked table to a back-end database
(mdb) AND for a front end linked to SQL server.
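To make that concrete, here is a sketch in VBA (form and field names are made up for illustration). The form is bound to the linked table, but the where clause means only one plant's rows ever travel over the wire:

```vb
' Hypothetical example: open a form bound to the linked table, restricted
' to ONE plant. Only the matching records come down the network, whether
' the link points at a back-end mdb or at SQL Server via ODBC.
Dim lngPlantID As Long
lngPlantID = 17   ' however you determine the user's current plant

DoCmd.OpenForm "frmPlantData", acNormal, , "PlantID = " & lngPlantID
```

The same pattern works from a switchboard, a combo box on a menu form, or code behind a button.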
The fact that you link to a table actually has NO relationship to how many
records come down the pipe over the network. How many records transfer over
the network comes down to how you design your application.
The above means that it is your application design that limits the number of
records that travel over the network. In fact, a good Access design right
now, for a split database that runs well, will ALSO tend to work well with
SQL server. And a current poor design, or one that was built without any
intention of limiting the records that travel across the network, will also
run poorly even when you don't use SQL server. So, I can't stress enough
that SQL server will NOT magically speed things up. If your application is
designed well and runs well now, then it'll tend to do so with SQL server.
SQL server is indeed a high performance system, and also a system that can
scale to many many users.
If you write your application in C++, or VB, or in your case with ms-access,
in GENERAL the performance of all of these tools will BE THE SAME.
In other words...sql server is rather nice, and is a standard system used in
the IT industry.
However, before you convert..how well does your application run now?
We often see posts here that an application is too slow with one user. If the
application is too slow with one user..then what can one expect when they
try to run 10 users? That is now 10 times the load..
The other issue is how well is the database setup?
Further..how well are the forms designed?
How well does the application work with 5 users..and then when you jump to
10 users...how much of a slowdown do you notice?
A few things:
Having a table with 75k records is quite small. Let's assume you have 12
users. With just a 100% file-based system (JET), and no SQL server, the
performance of that system should really have screamed. If that
application performed poorly, then it's the design that was the problem.
Before Microsoft started "really" selling SQL server, they rated that JET
could easily handle 50 users. We have credible reports here of people
running 100 users. However, in those cases everything must be
"perfect".
I have some applications out there with 50, or 60 HIGHLY related tables.
With 5 to 10 users on a network, response time is instant. I don't think any
form load takes more than one second. Many of those 60+ tables are highly
relational..and in the 50 to 75k record range.
So, with my 5 users..I see no reason why I can't scale to 15 users with
such small tables in the 75,000 record range.
If the application did not perform with such small tables of only 75k
records..then upsizing to SQL server will do absolutely nothing to fix
performance issues. In fact, in the SQL server newsgroups you see weekly
posts by people who find that upgrading to sql actually slowed things down.
I have even seen some very cool numbers showing that some queries were
actually MORE EFFICIENT in terms of network use with JET than with SQL
server. So just moving to SQL server can often result in slower performance,
not faster.
My point here is that technology will NOT solve performance problems.
However, good designs that make careful use of limited bandwidth resources
are the key here. So, if the application was not written with good
performance in mind..then you are kind of stuck with a poor design!
I mean, when using a JET file share, you grab an invoice from the 75k record
table..only the one record is transferred down the network with a file share
(and SQL server will also only transfer one record). So, at this point, you
really will NOT notice any performance difference by upgrading to sql
server. There is no magic here.
Sql server is a robust and more scalable product than JET. And security,
backup, and a host of other reasons make sql server a good choice.
However, sql server will NOT solve a performance problem when dealing
with tables as small as 75k records.
Of course, when efforts are made to utilize sql server well, then
significant advances in performance can be realized.
I will give a few tips...these apply when using ms-access as a file
share (without a server), or with odbc to sql server:
** Ask the user what they need before you load a form!
The above is so simple, but so often I see this concept ignored.
For example, when you walk up to an instant teller machine, does it
download every account number and THEN ASK YOU what you want to do? In
access, it is downright silly to open up a form attached to a table WITHOUT
FIRST asking the user what they want! So, if it is a customer invoice, get
the invoice number, and then load up the form with the ONE record (how can
one record be slow!). When done editing the record...the form is closed, and
you are back at the prompt, ready to do battle with the next customer. You
can read up on how this "flow" of a good user interface works here (and this
applies to both JET and sql server applications):
http://www.members.shaw.ca/AlbertKallal/Search/index.html
My only point here is to restrict the form to only the ONE record the user
needs. Of course, sub-forms and detail records don't apply to this rule,
but I am always dismayed how often a developer builds a nice form, attaches
it to a large table, opens it..throws this form attached to some huge table
at the users..and then tells them to go have at it and have fun. Don't
we have any kind of concern for those poor users? Often, the user will not
even know how to search for something! (So, prompting and asking the user
first also makes a HUGE leap forward in usability. And the big bonus is
reduced network traffic too!...Gosh...better and faster, and less network
traffic....what more do we want!)
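The "ask first, then load" flow above can be sketched in a few lines of VBA (form and field names here are invented, and this assumes a numeric invoice number):

```vb
' Sketch of the prompt-first pattern: ask the user WHICH record they want
' BEFORE any form opens, then load the form with just that ONE record.
Dim strInvoice As String
strInvoice = InputBox("Enter the invoice number:")

If Len(strInvoice) > 0 Then
    ' Only the one matching record comes down the network pipe.
    DoCmd.OpenForm "frmInvoice", acNormal, , "InvoiceNumber = " & CLng(strInvoice)
End If
```

In a real application the InputBox would usually be a small search/prompt form, but the principle is the same.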
** Use caution with queries that require more than one linked table
This mostly applies to odbc to sql server.
(note that most of us tend to avoid using ADP projects for existing
applications; it's not recommended to go down that road, and you're better
off to use ODBC linked tables to SQL server for *existing* applications)
Thus, while JET does its best..these types of joins can often be
real slow (this is especially the case with aggregate
queries, since those aggregates (sum, count) are values
that are the result of "many" records). The simple solution in these
cases is to change the query to a view and link to that. This is the
least amount of work compared to other approaches that can reduce
bandwidth. This also means that the aggregates occur on the
server side. This is also an example where sql server uses LESS
bandwidth than JET, since a simple row of something like CustomerID
+ a total of sales results in only one row of data being sent across the
network - despite the fact that the aggregate of customer sales
could be built from 1000's of records. In the case of JET, every single
record has to travel down the network pipe, whereas with SQL server, only
one row of data travels down the pipe. So if you had only ten
customers, but each customer had 3,000 sales records, in the case of JET
you would see 30,000 records travel across the network, but in the case of
SQL server you would see only ten rows travel down the network pipe
(30,000 - 10 = 29,990 records saved. In terms of network reduction that's a
lot of saving here). So in some cases SQL server can substantially increase
the performance of things, but it's not very often that you're viewing
30,000 records at one time, the exception here being totals etc. in a
report.
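A sketch of such a server-side view (table and column names are made up for illustration):

```sql
-- Hypothetical server-side aggregate view. The SUM and GROUP BY run on
-- SQL Server, so only one row per customer crosses the network instead
-- of every underlying sales record.
CREATE VIEW vCustomerSalesTotals AS
SELECT CustomerID, SUM(SaleAmount) AS TotalSales
FROM Sales
GROUP BY CustomerID;
```

You then link to vCustomerSalesTotals from Access via ODBC just like a table, and point the report or form at the linked view.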
This also applies to combo boxes. Most
combo boxes have sql embedded in them. That sql has to be processed, and
then thrown at a linked odbc table. This is a bit sluggish. (A form can have
maybe one or two combos..but after that..it will start to load slow). So,
remove the sql from the combo box, build a view..and link the combo box
direct to that view (JUST USE the view name...the sort, and any sql, need to
be in the view). The result is quite good combo box load performance. (And
again, not very much work. There are other approaches that can speed
this up even more..but we have to balance the benefits vs. the amount of
work and coding. I don't think one should re-code all combo boxes to a
call-back with a pass-through recordset..but that can be a solution also.)
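A sketch of such a combo box view (names invented for illustration):

```sql
-- Hypothetical view feeding a combo box: the criteria and sort live on
-- the server instead of in the combo's RowSource SQL.
CREATE VIEW vCustomerList AS
SELECT TOP 100 PERCENT CustomerID, CompanyName
FROM Customers
WHERE Active = 1
ORDER BY CompanyName;
```

You then set the combo's RowSource to just the linked view name (vCustomerList). One caveat: newer versions of SQL Server do not guarantee that the ORDER BY inside a view is honored when the view is read, so if the sort order matters you may still need to apply it on the Access side.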
Note that if you link to SQL server and the form load time is
acceptable, then often you don't have to do anything at all. As always, the
trick in these upsizing projects is to do only the amount of work you
need to get acceptable performance.
** Of course, if you do have sql with more than one table..then a
pass-through query is best if using odbc.
This is usually done for reports as pass-through queries are read
only.
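A pass-through query can be built in the query designer, or in code like this sketch (the connect string, query name, and SQL are made-up examples):

```vb
' Sketch: create a pass-through query with DAO. The SQL string is sent to
' SQL Server exactly as written, so the join/aggregate work happens there.
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("qryPassSalesTotals")

qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
              "DATABASE=MyDb;Trusted_Connection=Yes;"
qdf.SQL = "SELECT CustomerID, SUM(SaleAmount) AS TotalSales " & _
          "FROM Sales GROUP BY CustomerID"
qdf.ReturnsRecords = True   ' pass-through results are read-only
qdf.Close
```

A report can then use qryPassSalesTotals as its record source like any other saved query.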
** You can continue to use bound forms..but as mentioned..restrict the form
to the one record you need. You can safely open up a single invoice, and
even continue to use the "where" clause of OpenForm. Bound forms are way
less work than un-bound forms...and performance is generally just as good
anyway when done right. So, just use OpenForm with the "where" clause
option to restrict the form to ONE record. This suggestion applies equally
well to a SQL server based system, or one that is a split file share to a
mdb file on a network. A bound form with a "where" clause will ONLY pull
records down the pipe that satisfy the where clause.
** Large loading of combo boxes. A combo box is good for about 100
entries. After that..you are torturing the user (what..they have to look
through 100s of entries?). So, keep things like combo boxes down
to a min size. This is both faster..and MORE importantly it is
kinder to your users.
After all, at the end of the day..what we really want is to make
things easy for the users...and treat them well. It seems that
treating the users well, and reducing the bandwidth
(amount of data), go hand in hand. So, better applications
treat the users well..and run faster! (This is good news!)