Speed of ADO vs DAO - what about SQL Server?

nova

I have been reading a few interesting threads about DAO versus ADO with regard
to speed, because I wanted to find out why my uploads were going so slowly.

Here's what I want to do: I have 3 big tables that I want to fill with data
for a number of users to use in their own queries. I'm talking about tables
with up to 10 million records (a database of up to 8 GB). I'm using VBA code
to read text files and load the tables. This is supposed to be a one-time
operation. After that, I will do daily updates of around 10 to 20 thousand
records per day.

Now, because of the large amount of data (60 files of up to 200 MB each
month), I cannot have this database on our network server. Since our company
has a SQL Server, I wanted to experiment with that (I'm not familiar with
SQL Server). Because I'm only used to working in Access and DAO, I had to
convert my code to ADO, which went fine after checking a few newsgroups.

I currently have the SQL Server tables linked in an Access database using
ODBC, but it takes a very long time to update these tables (approx. 25 min.
per file). Therefore I'm now trying to load the data into local tables first,
in order to move them to the SQL Server later. So far everything works fine
and the upload time is acceptable (less than 1 min. per file).

What I wanted to know is whether the slow connection with the SQL Server is
going to be a problem for the end users when they link these tables into
their own databases to run their queries. Is there an alternative to ODBC?
Or is ODBC not the cause of the slow speed? What else can I do to query
these SQL Server tables easily and (especially) quickly? You might need to
know that the users must be able to filter the tables (actually, I still
have to create the SQL 'views') on all possible fields. Should I put an
index on each field then?
 
Alex White MCDBA MCSE

Hi,

Due to the large volumes of data you are talking about, a couple of things.
Why not move the import files locally to the SQL Server machine and then do
the import from there? This would improve your import speeds, as nothing has
to travel across network cables. I always try to do imports locally on the
server if I can; it is so much faster.

To access SQL Server data you do NOT need to use ADO; you can still use DAO.
The ADO model is better suited to SQL Server, but DAO is not too bad for
accessing it.

SQL Server is clever (not the right word) enough to optimize its queries.
How fast it is does not come down to how much data is stored, but to how
much data is returned to the client by a query. I take the simple view that
queries returning more than 5%-10% of the records in a large table are badly
written. What is the point in returning 2000+ records from a query when the
user is most likely looking for 1 or 2 specific records? Good queries are
the answer here.

Within SQL Server I never write views, as I find stored procedures more
flexible; they can produce the same results, if not better.

Indexes are a double edged sword, great for finding things, bad for updates.

Have you looked into BCP for bulk copying of data?
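In case you haven't seen it, the bcp utility is run from the command line on the server. A minimal sketch, assuming a character-format file and a trusted (Windows) connection; the database, table, and server names here are placeholders:

```
rem Bulk-copy a character-format text file into a table with bcp.
rem -c = character data, -S = server name, -T = trusted connection.
bcp myDatabase.dbo.tblMyDestinationTable in c:\myTextFile.txt -c -S MYSERVER -T
```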

You can also drop SQL logging whilst you are doing a major import of data.
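In practice that means switching the database's recovery model around the load, something along these lines (the database name is a placeholder):

```sql
-- Minimise transaction-log growth during a bulk load.
ALTER DATABASE myDatabase SET RECOVERY BULK_LOGGED;

-- ... run the BULK INSERT / bcp load here ...

ALTER DATABASE myDatabase SET RECOVERY FULL;
-- Take a full or log backup afterwards, because bulk-logged
-- operations break the point-in-time restore chain.
```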

One trait I see in lots of the systems I come across is this: a screen with
loads of records, for you to select one to edit. The idea of showing you all
the data so you can pick through it seems a bit backward to me.

The use of query hints within SQL Server can make a big difference to
concurrency issues when lots of users are accessing lots of data.

In my personal view SQL Server is the best product for processing large
volumes of data; it will do 8 GB of data standing on its head!

It's all about how much memory you have in your SQL Server machine. For the
volumes of data you are talking about, you want at least a couple of GB of
RAM, if not more.
 
Graham R Seach

If you're doing this in DAO or ADO, then *that's* the cause of the slow
speed.

If you're talking about a database that's 8 GB in size, then you pretty much
have to use SQL Server as the backend. The best way to get data into SQL
Server from a file is to use the BULK INSERT statement from within SQL
Server.

USE myDatabase
BULK INSERT tblMyDestinationTable
FROM 'c:\myTextFile.txt'
WITH (DATAFILETYPE = 'CHAR')

If you try to do it from Access, it'll be slow; doing it from SQL Server
using the BULK INSERT statement will be a great deal faster.

Regards,
Graham R Seach
Microsoft Access MVP
Sydney, Australia
 
nova

As said before, I'm still a newbie with SQL Server, but I doubt I can use
this. The file I have to upload is not tab-delimited. It's a plain text file
with 3 different record types. I have to identify each record type first,
and each type has a different layout to split into fields. About 70% of the
fields need to be validated in VBA first before I can store the record in a
table.
But this statement might be interesting for future use; thanks for the reply.
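One thing I might still try is bulk-loading the raw lines into a one-column staging table and splitting them by record type on the server. Just a sketch of the idea; the table names, column positions, and the type marker in the first character are all invented:

```sql
-- Load the raw lines into a single-column staging table first.
CREATE TABLE tblStaging (RawLine varchar(500));

BULK INSERT tblStaging
FROM 'c:\myTextFile.txt'
WITH (DATAFILETYPE = 'CHAR');

-- Then split by record type server-side, e.g. for type '1':
INSERT INTO tblType1 (Field1, Field2)
SELECT SUBSTRING(RawLine, 2, 10), SUBSTRING(RawLine, 12, 8)
FROM tblStaging
WHERE LEFT(RawLine, 1) = '1';
```

The VBA validation would still have to happen somewhere, of course, so this only works for the checks that can be expressed in SQL.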
 
nova

The main purpose of having this data in tables is to have a better tool for
analyzing the data. We currently receive these files as flat text files with
3 different record types. Each record type goes into its own table, linked
to the others with IDs. Since we want the possibility to do this analysis,
we're not just looking for 1 or 2 records, and we don't need to edit or
update either; it's read-only. I may have to check if I can do something
with procedures; I have never used them before (SQL Server newbie, you know).
The main thing I was worried about was whether other users could easily link
these tables into their own databases. I just found out that using an ADP,
they can connect directly to SQL Server, so there's no need to use ODBC
then, right? Or doesn't that make any difference? I currently feel like I'm
seeing all these new things and I don't know what to try first.
Thanks
 
Alex White MCDBA MCSE

As long as there are no edits/updates in the normal running of things,
indexes will speed things up. Especially look into clustered indexes. What a
clustered index does is this: if you have a clustered index on surname, then
the data in that table is physically stored ordered by that index. If you
find yourself doing queries like surname LIKE 'WH%' and you are nearly
always querying on that field, this will produce results faster.
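As a sketch (the table and column names are just examples):

```sql
-- Physically order the table by Surname so prefix scans are cheap.
CREATE CLUSTERED INDEX IX_tblCustomer_Surname
ON tblCustomer (Surname);

-- A prefix search can then seek straight into the ordered pages:
SELECT CustomerID, Surname, Forename
FROM tblCustomer
WHERE Surname LIKE 'WH%';
```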

Precompiled select statements such as views/stored procedures are faster
than dynamically created select statements, because part of the work when
you send a dynamic select statement to the server is that it has to be
compiled. Don't use ODBC, as it is not required when you use ADP projects.

Get some data onto the SQL Server box and look into how your users are going
to query the data.
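A parameterised stored procedure along these lines is what I mean (the object names are illustrative):

```sql
-- The execution plan is compiled once and cached for reuse.
CREATE PROCEDURE dbo.uspCustomersBySurname
    @Prefix varchar(20)
AS
    SELECT CustomerID, Surname, Forename
    FROM tblCustomer
    WHERE Surname LIKE @Prefix + '%';
GO
```

From an ADP your users would then just run EXEC dbo.uspCustomersBySurname 'WH' instead of sending the whole select statement across.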
 
