Convert a Number field to Autonumber?


David Anderson

I have an Access 2003 back-end database that was upsized to SQL Server so
that I could use it on a website. I still need to use the original Access
front-end application for offline processing and have used the Import and
Export Wizard in SQL Server Management Studio to export the SQL Server 2005
data back to an Access 2003 Jet database.

However, this Import and Export Wizard appears to have some major
limitations. I have already had to write some code to recreate all my PKs
and indexes (all were lost in the transfer). I now find that none of my SQL
Server Identity columns have been converted to Autonumber fields in Access
(they are converted to Number fields instead).

I am left with several tables containing vital numeric data in Primary Key
fields, which are used to link to other tables, but without the convenience
of the autonumber process for new records. Unfortunately, Access does not
let you directly convert a Number field to an Autonumber field.

Can anyone suggest a workaround that would have the effect of making such a
conversion, while retaining all the current data values?

David
 

Ken Snell (MVP)

You can write existing values into an Autonumber field using an Append
query. Create empty tables with the structure that you want -- with an
Autonumber field in place of the unwanted Number field. Then run an append
query to copy the data from the original table to the new table, mapping the
Number field to the Autonumber field.
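
For example, run from VBA it might look something like this (just a sketch;
the table and field names below are placeholders, not your actual ones):

    ' Copy the existing key values straight into the new table's Autonumber column.
    ' tblContacts_New has an Autonumber ContactID; tblContacts still has the Number field.
    CurrentDb.Execute _
        "INSERT INTO tblContacts_New (ContactID, FirstName, LastName) " & _
        "SELECT ContactID, FirstName, LastName FROM tblContacts", dbFailOnError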
 

Ken Sheridan

David:

Copy and paste each table as a new table with the 'structure only' of the
original. Delete the number column from the empty table and add an
autonumber column of the same name as the original number column. Then
insert all rows from the original table into the copy with an 'append' query.
Once you are satisfied that all the data has been appended correctly, you can
delete the original table and rename the copy back to the original name.
Finally you'll need to recreate the relationships between the tables.
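
If you prefer to script the above rather than work through the user interface,
something along these lines would do it (a rough sketch only; tblOrders,
OrderID and the other column names are just examples):

    Dim db As DAO.Database
    Set db = CurrentDb

    ' 1. Create an empty copy of the table (columns only; indexes and keys are not copied).
    db.Execute "SELECT * INTO tblOrders_New FROM tblOrders WHERE 1 = 0", dbFailOnError

    ' 2. Swap the Number column for an Autonumber (COUNTER) column of the same name,
    '    making it the primary key at the same time.
    db.Execute "ALTER TABLE tblOrders_New DROP COLUMN OrderID", dbFailOnError
    db.Execute "ALTER TABLE tblOrders_New ADD COLUMN OrderID COUNTER " & _
               "CONSTRAINT PK_tblOrders PRIMARY KEY", dbFailOnError

    ' 3. Append the existing rows, writing the old key values into the Autonumber column.
    db.Execute "INSERT INTO tblOrders_New (OrderID, CustomerID, OrderDate) " & _
               "SELECT OrderID, CustomerID, OrderDate FROM tblOrders", dbFailOnError

    ' 4. Once the data checks out, drop the original and rename the copy.
    db.Execute "DROP TABLE tblOrders", dbFailOnError
    DoCmd.Rename "tblOrders", acTable, "tblOrders_New"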

When importing data like this you can prevent this sort of problem from
arising by first creating the empty tables in the Access file, then linking to
the external source tables and inserting rows from them with 'append' queries.
If you set up the relationships in the Access file, it's important that the
tables are filled in the correct order or referential integrity will be
violated. In essence this means inserting rows into the referenced tables
(those on the one side of a relationship) before inserting rows into the
referencing tables (those on the many side of a relationship).
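
For example, with the SQL Server tables linked into the Access file (the
linked table names dbo_Customers and dbo_Orders below are hypothetical, and
this assumes the local and linked tables have matching column layouts), the
appends would be run in this order:

    Dim db As DAO.Database
    Set db = CurrentDb

    ' Fill the referenced (one side) table first...
    db.Execute "INSERT INTO Customers SELECT * FROM dbo_Customers", dbFailOnError
    ' ...then the referencing (many side) table, so every foreign key value already exists.
    db.Execute "INSERT INTO Orders SELECT * FROM dbo_Orders", dbFailOnError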

Ken Sheridan
Stafford, England
 

David Anderson

Ken,
Thanks for that. I'm very relieved to hear that there is a solution to this
problem!

David
 

David Anderson

Hi Ken,
I hit problems associated with relationships in some of my early experiments
with exporting from SQL Server to Access, so I already had some awareness of
that issue. However, your post very usefully clarifies how to work around
such problems. Thanks.

After some false starts, I did more or less what you describe, i.e. I
initiated my first export by creating empty tables in Access and then
appending the data from SQL Server. However, I had not realised at that time
the problems with PKs, indexes and (in particular) the Autonumber fields.
It's been a learning experience.....

BTW, in addition to new records, any of the records in the SQL Server
version of my database may have been changed since the last export. At
present, my regular export job therefore deletes all records in the Access
tables and then rewrites all the records from scratch (leaving the table
structures unchanged). Is there a better way?

David
 

Ken Sheridan

David:

If there is some realistic way of identifying which rows have been updated
in the source tables since the last import (i.e. other than comparing every
column in every row in each) then you could delete and re-import just those
rows by restricting the delete query and append query accordingly. If not,
then replacing everything is probably the only solution. This will of course
lose any changes which have been made to the data in the Access tables since
the last import, but from what you say it doesn't sound like that's an issue.
If you are happy with the way it's working at present then: 'if it ain't
broke, don't fix it'.
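
For instance, if the SQL Server tables had something like a LastModified
date/time column (they may well not, so treat this purely as a sketch, with
made-up table and column names), the job could be restricted to the changed
rows:

    Dim db As DAO.Database
    Dim dtLastExport As Date

    dtLastExport = DMax("ExportedOn", "tblExportLog")   ' hypothetical log of previous export runs
    Set db = CurrentDb

    ' Delete only the local copies of rows changed at source since the last export...
    db.Execute "DELETE FROM Orders WHERE OrderID IN " & _
               "(SELECT OrderID FROM dbo_Orders WHERE LastModified > #" & _
               Format(dtLastExport, "yyyy-mm-dd hh:nn:ss") & "#)", dbFailOnError

    ' ...then re-import just those rows from the linked SQL Server table.
    db.Execute "INSERT INTO Orders SELECT * FROM dbo_Orders WHERE LastModified > #" & _
               Format(dtLastExport, "yyyy-mm-dd hh:nn:ss") & "#", dbFailOnError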

Ken Sheridan
Stafford, England
 

David Anderson

Ken,
I just wondered if I was missing some smarter method. As there is no simple
way to identify the changed records, I'll stick with my current process.

Thanks again to you and the other Ken S. for your help on this matter.

David
 

David W. Fenton

You can write existing values into an Autonumber field using an
Append query.

This is possible, of course, because the Autonumber field is not a
special type of field, but just a Long Integer field with a special
kind of default value. Edits to an existing value are prohibited
(unlike in a regular Long Integer field), but since the Autonumber value
is only a special kind of default value, values other than the next
Autonumber can be appended to that field, just as is the case with
any field with a default value.
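
You can see this in DAO (a small illustration only; tblContacts and ContactID
are made-up names): an Autonumber reports itself as an ordinary Long Integer
with the auto-increment attribute set, which is why an append query can write
explicit values into it.

    Dim fld As DAO.Field
    Set fld = CurrentDb.TableDefs("tblContacts").Fields("ContactID")

    Debug.Print fld.Type = dbLong                           ' True: it's just a Long Integer
    Debug.Print (fld.Attributes And dbAutoIncrField) <> 0   ' True: the auto-increment flag is on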
 

Larry Kahm

David,

I have a client that pulls data from a FoxPro-based package into Access on a
daily basis. Using the delete all records/insert all records approach has
led to significant bloat in the back-end (current size is 1.8 GB). I am in
the process of doing what Ken mentioned, building a routine to compare
records to identify only what changed, and updating the field(s)
accordingly. I am taking advantage of the temporary tables document
provided by Tony Toews (http://www.granite.ab.ca/access/temptables.htm) and
using code similar to the following:

' open the temp table for "read" and the base table for update
Dim RS1 As DAO.Recordset      ' temp (source) table
Dim RS2 As DAO.Recordset      ' base (target) table
Dim i As Integer
Dim blnUpdated As Boolean

' strTempTableName and strBaseTableName are assigned elsewhere in the routine
Set RS1 = CurrentDb.OpenRecordset(strTempTableName, dbOpenDynaset, dbReadOnly)
Set RS2 = CurrentDb.OpenRecordset(strBaseTableName, dbOpenDynaset)

' loop through each of the records
' a) find matches and update individual fields
' b) no match, add the record

With RS1
    .MoveFirst
    While Not .EOF
        ' the key is a text field; otherwise, no quotes
        RS2.FindFirst "KeyField='" & !KeyFieldID & "'"
        If RS2.NoMatch Then
            ' Insert this record into base
            RS2.AddNew
            ' loop through fields and copy them
            For i = 0 To RS2.Fields.Count - 1
                RS2.Fields(i) = Trim(.Fields(i))
            Next i
            RS2.Update
        Else
            blnUpdated = False
            RS2.Edit
            ' loop through fields, find any differences, and copy them
            For i = 0 To RS2.Fields.Count - 1
                If RS2.Fields(i).Type <> dbMemo Then
                    If RS2.Fields(i) <> .Fields(i) Then
                        RS2.Fields(i) = Trim(.Fields(i))
                        blnUpdated = True
                    End If
                Else
                    ' memo fields: compare the first and last 255 characters
                    If Left(RS2.Fields(i), 255) <> Left(.Fields(i), 255) Or _
                       Right(RS2.Fields(i), 255) <> Right(.Fields(i), 255) Then
                        RS2.Fields(i) = .Fields(i)
                        blnUpdated = True
                    End If
                End If
            Next i
            If blnUpdated Then
                RS2.Update
            End If
        End If
        .MoveNext
    Wend
End With

RS2.Close
RS1.Close

Set RS2 = Nothing
Set RS1 = Nothing

On my local machine, processing 18,000 records took a couple of minutes. I
have to test this on the client's network in another two weeks, but even if
it takes a while to run, having a 500 MB back-end is better than a 1.8 GB
monster.

I hope this can help!

Larry
 

David Anderson

Larry,
When you talk about "bloat in the back-end" in this situation do you mean
that the Access database grows significantly during the transfer process
from FoxPro? Is this the sort of bloat that is fixed by a Compact & Repair?

I'm not aware of this problem in my own case but maybe that's because I
haven't been checking file sizes before and after the delete/reinsert export
process.

David
 

Larry Kahm

David,

Yes and yes.

In the one situation I am trying to correct, the back-end grows from about
450 MB to 1.8 GB in the course of one week. The existing update process has
a series of delete/insert query pairs that removes and replaces roughly
334,000 records every day in five tables. I am convinced that they only
modify 1% of those records and add possibly one-tenth of what they modify.
The database should be about 500 MB by the end of one week if this plan
works.

The compact process takes the better part of half an hour, mostly from
dragging the huge back-end down off the network to a local workstation to
compact and repair. Putting it back on the server doesn't take that long.
They do this once a week. I hope to reduce that to once a month.

Larry
 

Arvin Meyer [MVP]

Since Access 2.0, the space used by deleted data doesn't get filled by new
data. That space remains allocated until the database is compacted, so if
there's a significant amount of data, or a number of repetitions of the
delete/import process, bloat will occur faster than usual. Yes, it's fixed by
compaction.
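
If you want to compact from code rather than through the menus,
DBEngine.CompactDatabase will do it (a sketch only; the paths are
placeholders, it must be run from a database other than the one being
compacted, and the target file must not already exist):

    Dim strSrc As String, strDst As String

    strSrc = "C:\Data\Backend.mdb"           ' placeholder path to the back-end
    strDst = "C:\Data\Backend_compact.mdb"   ' placeholder path for the compacted copy

    DBEngine.CompactDatabase strSrc, strDst
    Kill strSrc                 ' remove the bloated original
    Name strDst As strSrc       ' put the compacted copy in its place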
 

Arvin Meyer [MVP]

Unless the network is fairly saturated, it shouldn't take a half hour to
compact. I would think a 1.8 GB database would compact over a network in 12
to 13 minutes, and less than half of that locally. Even those times can be
reduced by copying the database to a local machine, compacting, and copying
it back.
 

Larry Kahm

The half hour includes copying the data down from the server, running the
compact and repair, and uploading (replacing) the file.

It is a fairly slow network, and (unfortunately) an extremely
resource-constrained workstation.

Larry
 
