Is splitting the backend DB in 2 recommendable?


Glenn

hi all,

First, a brief (kind of) description of the problem:

I developed a multi-user application using Access 2000.
The data is stored in a backend database.
The backend database is stored on a network drive on a Windows NT4
server.

About 15 people use the application simultaneously.

There are 3 different front-end applications:
- customer account information
- timesheets of employees
- invoicing of employee time to customers.

The timesheet/invoicing apps have been in production for a month;
the customer info app for about 4 years.

Users can run the 3 front-ends at the same time, but mostly it's just
the info & timesheet apps.

In the worst case, about 40-50 connections are active to the backend
database.
On the NT server, the number of file locks on the backend database can
be as high as 300.
The database edit scheme is (or should be) record-level locking.

The timesheet main table grows by about 4,000 records each month.
The two main invoice tables grow by about the same number of records;
those records are derived from the timesheet data.

The invoices application frequently writes & reads a month's worth of
data to and from the backend database.

The size of the backend database containing 4 years of customer
information and 2 months of timesheet-related information is about 6 MB
(after a compact), which, I gather, shouldn't pose a problem?

Rather too frequently, the backend database gets corrupted.
I suspect this is due to the number of simultaneous connections to,
and file locks on, the same file on the server.

What are the workable limits of an Access database for size, number of
simultaneous users, connections, ...?

* Would it help if I split the backend database into two databases?
One database would have the tables used in the customer information app,
the other the ones used in the timesheet/invoicing app.

There are some common tables which would be stored in the first db and
linked to by the second db.
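
For illustration, a minimal sketch of how the timesheet/invoicing front
end could relink its attached tables to the two backends at startup,
using DAO. The backend paths and the tblCust* naming convention for the
shared customer tables are assumptions for the example, not taken from
this thread:

' Relink every attached table to the appropriate backend at startup.
' Backend paths and the table-name prefix are hypothetical examples.
Sub RelinkTables()
    Const CUSTOMER_BE As String = "\\server\data\customer_be.mdb"
    Const TIMESHEET_BE As String = "\\server\data\timesheet_be.mdb"
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb
    For Each tdf In db.TableDefs
        If Len(tdf.Connect) > 0 Then          ' linked tables only
            If tdf.Name Like "tblCust*" Then  ' shared customer tables
                tdf.Connect = ";DATABASE=" & CUSTOMER_BE
            Else                              ' timesheet/invoice tables
                tdf.Connect = ";DATABASE=" & TIMESHEET_BE
            End If
            tdf.RefreshLink
        End If
    Next tdf
End Sub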

Suggestions and remarks welcome...
Best regards,
Glenn
 

Arvin Meyer

You may have an OpLocks problem on your server:

http://www.granite.ab.ca/access/corruptmdbcausesoplocks.htm

You may also have a bad NIC.

You may also have a user that's just shutting down instead of backing out
properly.

The size and number of users should not be a problem if your database is
designed properly. I have a database with an average of 42 users running
11 different front-ends on a 63 MB database. I have another one with 35
data entry users on a 10 MB database. And I have a single user that
updates a 30 GB (yes, you read that correctly: GIGABYTE) system with 80
backend databases. The first hasn't corrupted since we dumped the flaky
Dell wireless card about 8 months ago. The 2nd has never corrupted (2
years), and the 3rd hasn't corrupted in the last 3.5 years.
--
Arvin Meyer, MCP, MVP
Microsoft Access
Free Access downloads:
http://www.datastrat.com
http://www.mvps.org/access
 

Albert D. Kallal

When you say a split mdb, are you saying that each PC gets its own copy
of an mdb front end... right? Actually, each user should get an mde...
right?

You do NOT want to share the front end. While splitting an mdb certainly
allows separate development, so you can work on a new front end while
users work with the production front end, a LARGER REASON to split is
that you reduce network traffic and keep multiple users OUT of the
application code. With a front end on each PC, you reduce network load,
as the forms and application code do not have to be sent to each PC.
Further, you keep multiple users OUT of the same code mdb/mde.

If your setup does not give each user their OWN front end, and that
front end is NOT being placed on EACH PC, then you missed the major
point of splitting, and have been running an incorrect setup for some
time. You most definitely want to ensure that each user gets a copy of
the application on their PC. This setup will generally fix your
corruption problems.
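
As one illustration (not from this thread), a small launcher routine
could copy the master front end down to the PC whenever the server copy
is newer, then open the local copy. All paths here are assumptions:

' Hypothetical launcher, e.g. run from a tiny local mdb or a shortcut.
' Copies the master front end locally if the server copy is newer.
Sub LaunchFrontEnd()
    Const MASTER As String = "\\server\apps\timesheet_fe.mdb"  ' assumed
    Const LOCAL_FE As String = "C:\Apps\timesheet_fe.mdb"      ' assumed

    If Dir(LOCAL_FE) = "" Then
        FileCopy MASTER, LOCAL_FE          ' first run: pull a copy
    ElseIf FileDateTime(MASTER) > FileDateTime(LOCAL_FE) Then
        FileCopy MASTER, LOCAL_FE          ' newer build on the server
    End If

    ' Open the local copy (Office 2000 default install path assumed).
    Shell """C:\Program Files\Microsoft Office\Office\MSACCESS.EXE"" """ _
        & LOCAL_FE & """", vbNormalFocus
End Sub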

The whole idea of splitting means that you never have multiple users running
the same application.

So, make sure you are running a correct setup, and that should fix most
corruptions.

For further reading on corruption and its causes, check out:

http://www.granite.ab.ca/access/corruptmdbs.htm

And, to ease distribution of a new front end to each PC, check out
Tony's free auto front end updater:

http://www.granite.ab.ca/access/autofe.htm


And, for info on splitting:

http://www.granite.ab.ca/access/splitapp.htm
 

Glenn

Thanks for replying so quickly.
Answers within.

glenn

> When you say a split mdb, are you saying that each PC gets its own
> copy of an mdb front end... right? Actually, each user should get an
> mde... right?

Each PC gets its own copy of the mdb.

I haven't used an mde yet. I'll look into that, but until now,
protection of the code has not been an issue.
8<

> For further reading on corruption and its causes, check out:
>
> http://www.granite.ab.ca/access/corruptmdbs.htm
>
> And, to ease distribution of a new front end to each PC, check out
> Tony's free auto front end updater:
>
> http://www.granite.ab.ca/access/autofe.htm

I have my own code using login scripts on the server. The only problem
with this is that some users do not log out (they forget, or are lazy)
each day.

If a new version is present, they don't get it. I'm going to include a
version check in a next evolution (I haven't had the time for it yet).
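
For what it's worth, a minimal sketch of such a version check, assuming
a hypothetical one-row tblVersion table in the backend (the table, field
and constant names are illustrative, not from this thread):

' Compare a constant compiled into the front end against the backend.
Const FE_VERSION As Long = 12   ' bump this on every release

Function VersionOK() As Boolean
    ' tblVersion is a hypothetical one-row table in the backend.
    VersionOK = (DLookup("[Version]", "tblVersion") = FE_VERSION)
    If Not VersionOK Then
        MsgBox "A newer version of this front end is available. " & _
               "Please log out and back in to fetch it.", vbCritical
        Application.Quit
    End If
End Function

Called from the startup form, a check like this refuses to run an
out-of-date front end even when the user never logged out overnight.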

I'll look into Tony's approach; I'm sure to learn from it :)

My gut feeling tells me the problem lies with overloading the
connection to the server. Another answer also suggests this possibility.
 

Glenn

Thanks for replying so quickly.
Answers within.

glenn

sligojoeS_PAM_2 said:
> I have had some larger databases with a like number of users and more
> records (300,000+). I have not had any corruption problems. It has
> been on NT and 2000 servers and on Ethernet and... heck, I can't
> remember what we had before.
>
> In any case, I have had only one corruption with a half dozen such
> databases over about five years.

This eases my worries on that front. Although I find my own effort quite
grand ;), it does not measure up to 300,000+ records.
> Hopefully someone will have some suggestions for your corruption
> problems.
>
> One thing might be different. Due to record-locking problems, I have
> all my users who make a large number of changes use a batch system, so
> there are few conflicts. I have also eliminated all memo field types,
> as I had problems with them.

Record locking has been a worry. At the start, one PC had not been
upgraded to the minimum Access version to allow this, costing me over 2
weeks trying to figure out the problem with my code :( instead of
obliterating that PC (yes, I'm very, very vengeful :)
> Another option is to switch to a SQL server, which seems to have fewer
> problems.

Well, that's a route I am not going to take immediately. I gather there
would be a substantial rewrite involved. More of a last resort...
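
As an aside, the batch idea quoted above might look something like this
sketch: edits accumulate in a local staging table and are pushed to the
shared backend in one short transaction, so locks on the shared file are
held as briefly as possible. Table names are illustrative assumptions:

' Post locally staged rows to the linked backend table in one
' transaction. (A production version would add an error handler
' that calls ws.Rollback on failure.)
Sub PostBatch()
    Dim ws As DAO.Workspace
    Dim db As DAO.Database

    Set ws = DBEngine.Workspaces(0)
    Set db = ws.Databases(0)        ' the database Access has open

    ws.BeginTrans
    db.Execute "INSERT INTO tblTimesheet " & _
               "SELECT * FROM tblTimesheetLocal", dbFailOnError
    db.Execute "DELETE FROM tblTimesheetLocal", dbFailOnError
    ws.CommitTrans
End Sub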
 

Glenn

Thanks for replying so quickly.
Answers within.

glenn

> You may have an OpLocks problem on your server:
>
> http://www.granite.ab.ca/access/corruptmdbcausesoplocks.htm

Definitely gonna read this.

> You may also have a bad NIC.

Would this point to the NIC in the server, or any NIC on the network?

I mean, if a PC with a faulty NIC accesses the DB, could the problem
show up at another, innocent PC?

I've read a suggestion that a large number of write locks (30, 50+) by
the same person/PC on the ldb is a sign of corruption.
I've seen this sometimes on the server, but (I think) not always from
the same PC.

I really hate network problems; it's really hard to track down the
culprit!
> You may also have a user that's just shutting down instead of backing
> out properly.

Sometimes an error "Operation not supported by transactions" comes up
when somebody tries to delete a record. This then causes Access to go
into an infinite loop, reshowing that same error. The only way out then
is to terminate the Access process.

It is as if the record-level locking does not work all the time.
Due to the nature of the queries, it is impossible for two users to edit
the same record.

I feel this error is a consequence of corruption rather than its cause.
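
One thing worth verifying on that point: in Access 2000, record-level
locking is requested per session rather than stored in the file, so
every front end must have the option switched on, or that session falls
back to page locking. A minimal sketch of checking it at startup (the
option string is the Access 2000 "record-level locking" setting; the
procedure itself is illustrative):

' Make sure this client asks Jet for row-level locking.
Sub EnsureRowLocking()
    If Not Application.GetOption("Use Row Level Locking") Then
        Application.SetOption "Use Row Level Locking", True
        MsgBox "Row-level locking was off; restart Access to apply it."
    End If
End Sub
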
> The size and number of users should not be a problem if your database
> is designed properly. I have a database with an average of 42 users
> running 11 different front-ends on a 63 MB database. I have another
> one with 35 data entry users on a 10 MB database. And I have a single
> user that updates a 30 GB (yes, you read that correctly: GIGABYTE)
> system with 80 backend databases. The first hasn't corrupted since we
> dumped the flaky Dell wireless card about 8 months ago. The 2nd has
> never corrupted (2 years), and the 3rd hasn't corrupted in the last
> 3.5 years.

GB... hum, so I have some leeway there :)

And heck, wireless! The network I'm dealing with is supposed to be
100 Mbit with only switches. If wireless works, the network should not
be the problem.

thanks for the info
glenn
 

Glenn

sligojoeS_PAM_2 said:

> I use a version check and it seems to work well for me.

Yes, I know this to be a weak part of my application for the moment, but
until now I've badgered the customer into some good tableside (erm)
workstation manners, so this has yet to cause serious inconvenience.

But as I lose the will to talk sense into the customer, in future I'll
resort to the version check to impose my rule (haha <evil laugh> hey,
that bottle of wine I just had really helps my prose).

thanks for replying
good night & weekend
Bye
glenn
 
