Massive Large Query Issues

JimS

I have a query that brings together data from 12 queries into a "shadow" flat
table to reduce form load times for certain forms. Recently, it's begun
running out of system resources. I think it's because all 12 queries (and
some of the subqueries...) call an Access (2007) user function I wrote:
______________________________
Public Function WeekNumber(ArgumentDate As Date, _
        Optional WeekEndDate As Date = #12/30/2000#) As Long
    ' Company's default week-end day of the week is Saturday, hence the
    ' default of 12/30/00, which was a Saturday.
    ' NOTE: This will not work well with dates prior to 12/30/00! Don't use it!
    ' Modified 4/28/09 to correct wrong results for Saturday argument dates.

    WeekNumber = CLng((ArgumentDate - 1 - WeekEndDate) \ 7)
End Function
__________________________________
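For anyone who wants to check the arithmetic outside VBA, here is a minimal Python sketch of the same formula (my translation, not part of the original module); `ANCHOR` mirrors the #12/30/2000# default, and floor division matches the VBA `\` operator for dates on or after the anchor:

```python
from datetime import date

ANCHOR = date(2000, 12, 30)  # a Saturday: the company's default week-end day

def week_number(d: date, week_end: date = ANCHOR) -> int:
    """Mirror of the VBA formula (ArgumentDate - 1 - WeekEndDate) \\ 7.

    Weeks run Sunday through Saturday; week 0 is 12/31/2000 - 1/6/2001.
    Not valid for dates before the anchor, matching the VBA caveat.
    """
    return ((d - week_end).days - 1) // 7
```

So a Saturday lands in the same week number as the six days before it, which is what the 4/28/09 fix was about.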

Anyway, I would guess I need to somehow stop all these myriad queries and
subqueries from calling this routine literally 300,000 times or more each
time I run this shadow-table create query.

What's the best way to do this?

I could create a reference table that has two columns: ReferenceDate and
WeekNbr. If I did that, could I join on columns that are dates which may have
fractional (time) components? How do I do that? Do I trim off the time first? How?
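The lookup-table idea can be sketched like this (a Python illustration of the logic, with hypothetical names; in Access the time-trimming step would be joining on something like DateValue([SomeDate]) or Int([SomeDate]) rather than the raw date/time value):

```python
from datetime import date, datetime, timedelta

ANCHOR = date(2000, 12, 30)  # the Saturday week-end anchor

# Build the reference table once: one row per calendar date,
# covering roughly ten years after the anchor
week_lookup = {ANCHOR + timedelta(days=n): (n - 1) // 7
               for n in range(1, 3653)}

def week_for(ts: datetime) -> int:
    # Trim the fractional (time) part first, then look up by whole date --
    # the join key must be a pure date for the match to work
    return week_lookup[ts.date()]
```

With the lookup built once, the per-row cost becomes an indexed join instead of a function call.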

I could go back to the queries and replace many of those references with
the formula shown in the function above... I think I've done that in some
cases...

Of course, I haven't asked the fundamental question: is there something
else that could be causing a query to run out of system resources (after
running for 45 minutes)?

Thanks for your advice!
 

Jeff Boyce

Jim

I certainly don't have enough information to be certain about this ...

... but if you are using "12 queries", is there a chance you are working
with something like "one query per month"? What's the significance of "12"?
Without some idea of how your data is structured, it's difficult to offer
any specific suggestions.

Regards

Jeff Boyce
Microsoft Office/Access MVP
 

JimS

Sorry, Jeff. It's not doing that.

It is gathering spending and budget data for several different classes of
spending (captured in different ways -- time sheets, imports, POs, etc.) and
several different budget periods (each budget has two separate "spend curves"
with pro-forma spend patterns that create weekly budgets by project...) It's
a very difficult thing to describe.

The queries aggregate spending and budgets by week, then marry them together
in a non-normalized flat table for use with pivot table forms. I run these
queries (two of them) each night. They run for a couple of hours unattended,
and create the shadow tables for use the next day.

WeekNumber was a perfect candidate for a public function since its
calculation is obscure but well-defined. I didn't want anyone working for
me to calculate it differently, or have a typo cause a miscalculation, so I
set up the function. All told, the WeekNumber function is probably
called nearly a million times to assign week numbers to all the aggregated
expenses and then combine them into a table that has about 200,000 rows when
it's done.

If only I could use SQL Server...

I'm not sure it's the culprit, though. Have you experienced running out of
system resources for other reasons?
 

Jeff Boyce

Jim

Thanks for the clarification... one down ...<g>

Queries that run "for a couple hours" seem highly unusual. Typically, a
query will take longer to run when the table(s) it uses don't have indexing.
Can you confirm that the table(s) involved in one of these long-running
queries are indexed on all fields used as: search criteria, sort order, and
join fields?

Queries that run long may use inefficient SQL -- please post the SQL
statement for a couple of those.

Next, queries that run long may be doing so because the data is far away and
'dribbles in' -- is this a local database or a split front-end/back-end? If
split, where's the back-end? How are you connecting?

More info, please...

Regards

Jeff Boyce
Microsoft Office/Access MVP
 

JimS

Well, I know I can make these queries run faster, but since I can let them
run overnight, it's not worth the effort (and cost) to my client. I'll clean
'em up slowly.

The reason this is such a massive query is that it must gather all the data,
then union all the data to gather a "master list" of all project/week
combinations that have either a budget or some spending or both. Then, it has
to regather the same data to build the final flat table. Think of it as a
two-way match -- not something SQL is good at. I used to do these in COBOL
all the time (but that reveals my age....)
The final flat table has about a quarter million records.
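The "master list, then regather" pattern described above is essentially a full outer join emulated by unioning the keys. A toy sketch of that shape (hypothetical data and names, just to illustrate the two-way match):

```python
# Hypothetical per-project/week figures standing in for the real queries
budget = {("P1", 10): 500.0, ("P1", 11): 500.0}
spend  = {("P1", 11): 420.0, ("P2", 11):  80.0}

# "Master list" of every project/week that has a budget, spending, or both
master = sorted(set(budget) | set(spend))

# Regather against the master list -- a hand-rolled full outer join,
# which is what the union-then-rejoin query structure does in Access
flat = [(proj, week,
         budget.get((proj, week), 0.0),
         spend.get((proj, week), 0.0))
        for (proj, week) in master]
```

Rows with a budget but no spending (and vice versa) survive with zeros, which is exactly what the flat table needs for the pivot forms.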

I could certainly restructure it to work faster, but it would take lots of
time.

The db is split. The front end is local (thank you Tony T -- AutoFE.) The
back end is on a busy network, as is the object table. It's connected via
Windows mapped drives. I assume Access SQL uses temp tables local to the
front end, so that can't be faster.

The largest source table is about 235,000 records. The tables are fully
normalized and indexed. Since table population is either "by hand" (as in
time sheets...) or by import, I've indexed the hell out of them, indexing any
conceivable search criteria.

When it's running, MSACCESS uses about 25-35% of the CPU.

Here's a SQL statement (you'll be sorry you asked....)

----------------------------

SELECT qryHours_SpendBudget.ProjectID,
       qryHours_SpendBudget.DisciplineID,
       WeekNumber(Nz([qryODDbyProject].[ODD], #1/1/2007#) + (7 * [SCPeriodOffset])) AS WeekNbr,
       [HoursBudget] * [SCPeriodPercent] AS PeriodHrsBudget,
       tblSCPeriod.SCPeriodOffset,
       tblSCPeriod.SCPeriodPercent,
       Nz([ODD], #1/1/2007#) AS ProjectODD
FROM ((qryHours_SpendBudget
       LEFT JOIN tblSCPeriod ON qryHours_SpendBudget.HCurve = tblSCPeriod.SCID)
       LEFT JOIN tblProjects ON qryHours_SpendBudget.ProjectID = tblProjects.ID)
       LEFT JOIN qryODDbyProject ON qryHours_SpendBudget.ProjectID = qryODDbyProject.ProjectID
ORDER BY qryHours_SpendBudget.ProjectID,
         qryHours_SpendBudget.DisciplineID,
         tblSCPeriod.SCPeriodOffset;

----------------------------
 

Jeff Boyce

How does sorting by ProjectID and DisciplineID help your final combination
of queries? I can see that having some order makes THIS query's results a
bit more readable, but did I understand you correctly that the only use you
make of the query (SQL) you provided is to roll it up into a subsequent
query? If so, why not hold off on the sort until everything is rolled up?

I didn't notice a WHERE clause in the SQL. Often, a collection of queries
that get rolled up use separate WHERE clauses to get separate sets of data.
Are you using separate queries (?against separate tables) to collect
'comparable' data before aggregating?

If the data is comparable, why is it in different tables?

(I'm still not getting a very clear picture ... got table structure, too?)

Regards

Jeff Boyce
Microsoft Office/Access MVP



JimS

Examples of data dimensions:

PCard Purchases: Imported from SAS.
Internal TimeSheets: Imported from Corporate WorkOrder System (Maximo)
RFI Changes: Hand-entered (used to stratify some costs)
Inventory Usage: Imported from Maximo
PO Data: Imported from SAS into a header-detail structure.
Time Sheets: Hand-entered and used to pay vendors. Each time sheet must
include a Work Order Number. Each Trade's target budgets in hours and dollars
are kept by work order.
Work Order: Tells me the Discipline, project, Equipment ID, etc.
Project Budgets: Entered at the Capital Request level, then hand-allocated
to individual projects (Typically a few projects...)

If I'd understood the whole system up front, and had access to the totality
they ultimately wanted to collect, of course I would have put the data in far
fewer structures, but I was fed the project piecemeal and my client is
maniacal about having everything foot to the penny. A couple of projects
tracked were $60 million projects!

Having said that, my client is the envy of all the other plants in the
company. None of the others has even close to this level of cost control and
visibility. I'm working on a new module now that permits the construction
trades superintendents to break down their budgeted hours by trade and date,
then compare them to a primitive "unit rate" analysis for the equipment being
installed (one install is typically $100,000 plus the cost of the equipment,
which is often $3 million). They do 2-3 a month!

My system also breaks down capital/expense, and allocates costs to cost
centers. Needless to say, they are not SAS fans.

All this with NO IT help or access. I can't use SQL to import from SAS or
Maximo, relying on pre-defined report exports (what a pain...), and I can't
install (or use their) SQL Server. It serves 5+ users at once with a universe
of 25 or so.

It's a living!
--
Jim


Jeff Boyce

Jim

Thanks for the clarification ...

I must be under-caffeinated this morning, as I am still not seeing the
relationships among these dimensions.

Even if the raw data is as you've described, have you built your own "ER"
model of how the data is inter-related?

More specific descriptions might help folks here in the newsgroup offer more
specific suggestions.

Regards

Jeff Boyce
Microsoft Office/Access MVP

John W. Vinson

PMFJI but... it sounds like weeknumber is also a good candidate for a lookup
table, with a Date/Time field and that date's corresponding weeknumber; this
could be joined (by an indexed date field) to any of your queries which need
it. This would be much faster and less resource-intensive than calling a
user-defined function 200000 times!
 
