Date Format Conversion?


Harlan Grove

(e-mail address removed) wrote...
UDFs run on SQL Server. I do shit like that on a DATABASE SERVER not
on the client.

?

It's a fair design question whether, e.g., a udf that performs a
simulation analysis should bog down a server or only the client's
machine. It seems to me it'd be better to run the more processor or
memory intensive udfs on client boxes so as not to degrade performance
for other users. [On the almost certain chance you don't know what a
simulation analysis is, it involves generating many thousands up to
many millions of random values and using them to produce many possible
alternative scenarios. They're always processor and memory intensive.]
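[A minimal sketch of the kind of udf at issue -- the names and the
payoff rule here are invented for illustration:

Function SimMean(nIter As Long) As Double
    Dim i As Long, j As Integer, z As Double, total As Double
    Randomize
    For i = 1 To nIter
        'crude standard normal draw: sum of 12 uniforms less 6
        z = -6
        For j = 1 To 12
            z = z + Rnd
        Next j
        'one scenario: a lognormal payoff driven by the draw
        total = total + Exp(0.05 + 0.2 * z)
    Next i
    SimMean = total / nIter
End Function

Run it with nIter in the millions and it's clear why you wouldn't want
100 users doing that on one server.]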
I am paid to build reports.

No, really?!

Wake up. Most people only produce reports as a small (and annoying)
aspect of their jobs. It's not the central focus, but I don't expect
you to understand that.
spreadsheets ARE the central focus of most spreadsheet dorks' job.. i
mean YOU SPEND ALL WEEK BUILDING THE SAME GODDAMN REPORT WEEK IN AND
WEEK OUT
....

Aaron's alternative 'reality'.
it's all numbers buddy. I mean seriously; do you think that you caress
the numbers better than I do?

Do I think you'd know what to do with any numbers without someone
handing you detailed specs? No. I think you'd be lost. You may know
some of the mechanics of working with numbers in databases, and I'll
stipulate that you know basic arithmetic. Other than that, the scope of
your ignorance appears limitless.
 

aaron.kempf

degrade performance?

the only people that have performance problems (im not talking about
sexual performance problems lol) are spreadsheet people.

because you guys use the wrong tool for 80% of the stuff that you do.

and just for the record, I dont get specs, I get problems and I fix
them. Most of the time i get called in 'to fix a little excel problem'
and we end up converting everything to a database.. so that multiple
people can use the numbers at the same time.

I'm just curious. You claim you dont make reports.. so uh.. what VALUE
do you bring to a company?

You really think that making the same damn spreadsheet 3 times a week
means that you have special numbers that can't possibly be replaced by
crystal??

-Aaron
 

Harlan Grove

(e-mail address removed) wrote...
degrade performance?
....

Yes, degrade server performance. If you have a udf used by one user of,
say, 100 users, and that udf performs 100 million floating point
operations (not unreasonable for a serious simulation analysis, noting
that each simulation iteration could involve dozens of FLOPs), either
that user's apparent performance will be worse if the udf runs on the
server in 1% of the server's user time slices or everyone else's
performance will suffer if this one user's udf gets any sort of
priority.

No matter how wonderful you believe databases may be, the processors on
which they run can only handle one operation at a time.
and just for the record, I dont get specs, I get problems and I fix
them. Most of the time i get called in 'to fix a little excel problem'
and we end up converting everything to a database.. so that multiple
people can use the numbers at the same time.

For applications that multiple people would use, that's a sensible
thing to do. For applications that only one person at a time would use,
that's a waste of resources.
I'm just curious. You claim you dont make reports.. so uh.. what VALUE
do you bring to a company?

I'll let you ponder that. Hint: some people at Ford Motors design cars
rather than write reports - do they bring any value? Some people at
financial services companies hedge currency exchange risk and don't
write reports themselves (they delegate such tasks) - do they bring any
value?

You really don't understand that most companies mainly produce things
other than reports, do you? Very few businesses make their money by
selling reports to customers, and only a subset of them sell reports
that could be fully generated automatically by databases. Maybe you've
only worked for overhead departments that mostly produce reports. Sad,
but a lack of perspective seems to be one of your manifold
deficiencies.
You really think that making the same damn spreadsheet 3 times a week
means that you have special numbers that can't possibly be replaced by
crystal??

The same spreadsheet? As I've said before in other threads, I use
common templates. To the extent that the formulas are the same in each
workbook produced from these templates, there's storage redundancy. Is
that bad? Depends. It means I can work on these files offline, and it
means others can load them without having to add database software and
settings which aren't standard where I work.

One big advantage of this is that I can check the formulas actually
used in each workbook. If the financial details of a particular
contract were based on faulty formulas, it's necessary to recalculate
the numbers using the corrected formulas, but it's also necessary to
retain the original calculations. It gets messy keeping multiple
versions of canned queries and reports to produce results using
incorrect and corrected calculations.
 

aaron.kempf

Harlan

I'm not saying the companies dont add values.

I'm saying that you spreadsheet idiots need to get a real program.
I mean-- you guys are 'scared to be seen as a software developer' but
all you do is sit around and create the same friggin spreadsheet week
in and week out.

I just think that it's a load of crap that you guys sit around loafing
while we get all the work done.

And 100m repetitions of a function?

I'd MUCH rather do that on a database server than in Excel lol
It's faster in TSQL than anywhere else.




You can CHECK the formulas used in each worksheet?

YOU CAN PERSONALLY CHECK 65k formulas times 100 sheets times 100
columns?

that's my big beef with excel is that you have 10 billion DIFFERENT
copies of the same formula in order to get anything done.

If i could PIN a column to a certain formula-- without copying and
pasting that formula 65k times-- then i would be fine with letting
Excel reside on my machine.

You sit there and say 'multiple copies of canned queries and reports'--
for starters if you have a NAMING CONVENTION it isn't a problem.

You use common templates; so you take your little XLS and create it
from an XLT.. and then you have 20 copies of that XLS--- so that when
you need to change the logic in your XLT then you have to copy it into
20 different worksheets?

ARE YOU FOR REAL?

DO PEOPLE REALLY PAY YOU TO MAKE SUCH BAD DECISIONS??

I've worked for a lot of different departments. And every time I see
an overpaid Excel dork that makes the same friggin spreadsheet every
week-- it just makes me sick.

you guys deserve to be homeless; living on the streets-- for you are
not adapting to technology.

Time to lose the training wheels kids

and oh yeah-- all your worry about 'database settings and software' and
all that other crap. I dont do 'dsns or any custom software'.

I DO ACCESS. IT COMES WITH OFFICE, REMEMBER?
 

Harlan Grove

(e-mail address removed) wrote...
....
I just think that it's a load of crap that you guys sit around loafing
while we get all the work done.
....

By your own admission, you only help others produce reports. That's
overhead. Do you and your ilk burn through SG&A expense? Sure, but
that's not the same as getting any valuable work done.
And 100m repetitions of a function?

No, bozo, 100 million floating point operations (FLOPs). Since you know
so little about mathematical programming, it's no surprise you have
difficulty with this.
I'd MUCH rather do that on a database server than in Excel lol
It's faster in TSQL than anywhere else.

I'd suspect optimized FORTRAN or C would be faster still.
You can CHECK the formulas used in each worksheet?

YOU CAN PERSONALLY CHECK 65k formulas times 100 sheets times 100
columns?

If I were stupid enough to build workbooks that large, yes, I could
still check them. Print the formulas to text files in R1C1 address
format, then use a scripting language to collect cells containing the
same formula text in R1C1 address format (there'll be far fewer than in
A1 address format), then use a procedure to accumulate those cells into
rectangular blocks of cells. Produce a (WTH) report showing formula
text followed by an indented list of all rectangular blocks containing
that formula. Check for inconsistencies. And a simple fc call (fc is a
console mode program that comes with Windows) checks that the formulas
in two different workbooks are the same or shows where they differ.
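
A sketch of the listing step in VBA (file name and layout are
arbitrary; the grouping afterward is a job for a scripting language):

Sub ListFormulasR1C1()
    Dim ws As Worksheet, cell As Range, f As Integer
    f = FreeFile
    Open ThisWorkbook.Path & "\formulas.txt" For Output As #f
    For Each ws In ThisWorkbook.Worksheets
        For Each cell In ws.UsedRange
            'one line per formula cell: sheet, R1C1 address, formula
            If cell.HasFormula Then Print #f, _
                ws.Name & "!" & cell.Address(ReferenceStyle:=xlR1C1) & _
                vbTab & cell.FormulaR1C1
        Next cell
    Next ws
    Close #f
End Sub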

It does help to know which are the best tools to use for any given
task. Clearly you have no clue how to check spreadsheets.
You sit there and say 'multiple copies of canned queries and reports'--
for starters if you have a NAMING CONVENTION it isn't a problem.

You just have to remember which name to use. I'm not saying it's
impossible to reproduce older versions of reports in DBMSs, just that
it's messy.
You use common templates; so you take your little XLS and create it
from an XLT.. and then you have 20 copies of that XLS--- so that when
you need to change the logic in your XLT then you have to copy it into
20 different worksheets?
....

Only when it makes sense to change the logic. Did you fail to grasp the
need to preserve how calculations were actually performed? For deals
already on the books, you wouldn't change the existing XLS files. You'd
just use their data in the new template. And just because you can't
figure out how to use macros to enter formulas in New.XLS of the form

SheetX!Y99:
='X:\Y\Z\[Old.XLS]SheetX'!Y99

in order to automate loading data from one workbook into another
doesn't mean Excel can't. Hint: it requires iterating through
worksheets and unprotected cells in the used ranges of those
worksheets. Not very difficult, but perhaps beyond your skill set.
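
A rough sketch of such a macro (path and names from the example above;
'unprotected' read as Locked = False):

Sub LinkToOld()
    Const OLDREF As String = "'X:\Y\Z\[Old.XLS]"
    Dim ws As Worksheet, cell As Range
    For Each ws In ThisWorkbook.Worksheets
        For Each cell In ws.UsedRange
            'point each input cell at the same cell in the old workbook
            If Not cell.Locked Then _
                cell.Formula = "=" & OLDREF & ws.Name & "'!" & _
                    cell.Address(False, False)
        Next cell
    Next ws
End Sub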
I DO ACCESS. IT COMES WITH OFFICE, REMEMBER?

No, it comes with Office *PROFESSIONAL*. It's as absent as your wit in
Office Standard. As I've pointed out before, in the department in which
I work (22 people currently), I'm the *only* one with Access. How does
a business PC user use Access if they don't have it? If they have *ANY*
version of Office, they have Excel. The same isn't true for Access. You
continue to fail to understand that.
 

aaron.kempf

scripting?

you really do scripting-- like with perl right?

you're just an idiot harlan; scripting is built into excel-- it's
called VBA

and the point of the matter is; if you dont have your formulas
DUPLICATED in thousands of different places; then it's not an issue.

DATABASES ALLOW YOU TO DEFINE YOUR FORMULAS ONCE FOR A FIELD INSTEAD OF
ONCE FOR EACH CELL.

i swear to god a few months ago you told me that you didnt have
access... right??
I'm glad that you've finally come to your senses harlan

now you've just got to start using Access Data Projects and maybe i'd
give you a job someday.

the end users that dont have access can use ACCESS RUNTIME to
add/edit data or run reports.

Office Web Components against Analysis Services gives a VASTLY superior
pivotTable experience to your little baby spreadsheet program.
Anything that is realllllllllllllllly dynamic-- the special numbers--
should be used in a pivotTable.. i mean.. this pivotTable allows you to
display collapsed numbers-- and hierarchies... much like drilldown
SHOULD work in Excel.

I just know that Excel is a disease and I hope that all you lepers come
to your senses soon and start using a real program.

I mean-- the WORST THING that can happen to you spreadsheet dorks is
the removal of the 65k limit in the next version-- i mean-- that's the
only thing that's been keeping you guys out of trouble so far lol

-Aaron
 

Harlan Grove

(e-mail address removed) wrote...
scripting?

you really do scripting-- like with perl right?

you're just an idiot harlan; scripting is built into excel-- it's
called VBA

Well, actually I use gawk for this. I wrote the awk scripts in the late
1990s, before Windows Script Host was available. VBA is a very blunt
tool when it lacks regular expressions and associative arrays. But you
have no idea what these mean, do you?

Also, I really don't like VBA/BASIC. Too verbose and klunky. Why
reinvent the wheel? Oh, that's right. That's what you do.
and the point of the matter is; if you dont have your formulas
DUPLICATED in thousands of different places; then it's not an issue.
....

Flexibility! A concept you don't understand.
DATABASES ALLOW YOU TO DEFINE YOUR FORMULAS ONCE FOR A FIELD INSTEAD OF
ONCE FOR EACH CELL.

OK, here's another chance for you to show just how much you know about
replacing Excel formulas with ones that would work in SQL code. Linear
interpolation. Given a table named TBL with fields for X and Y values
named X and Y, respectively, also being the first and second fields in
the table. The linear interpolation estimate of a yy value given an xx
value that might not have a match in TBL.X is

=TREND(OFFSET(TBL,MATCH(xx,INDEX(TBL,0,1)),1,2,1),
OFFSET(TBL,MATCH(xx,INDEX(TBL,0,1)),0,2,1),xx)

How would you handle this calculation in pure Access SQL?
i swear to god a few months ago you told me that you didnt have
access... right??
....

Wrong again. See http://makeashorterlink.com/?N1EB22BFB
the end users that dont have access can use ACCESS RUNTIME to
add/edit data or run reports.

They just can't create anything. Again, you're foolishly assuming that
most spreadsheet users do what you or your clients do - generate
periodic canned reports. You just can't conceive of anyone using a
computer to do anything that hasn't already been canned, can you?
Office Web Components against Analysis Services gives a VASTLY superior
pivotTable experience to your little baby spreadsheet program.

If I used pivot tables, I'd still need to buy Analysis Services (no,
most companies don't use it), and only you could believe programming
OWC is comparable in ease of development to Excel.
Anything that is realllllllllllllllly dynamic-- the special numbers--
should be used in a pivotTable.. i mean.. this pivotTable allows you to
display collapsed numbers-- and hierarchies... much like drilldown
SHOULD work in Excel.
....

You're assuming all data structures are a mix of hierarchy and n-cubes.
Ain't always so. It may be common in company data already in central
databases, but it's seldom true for customer data.
I mean-- the WORST THING that can happen to you spreadsheet dorks is
the removal of the 65k limit in the next version-- i mean-- that's the
only thing that's been keeping you guys out of trouble so far lol

Finally something we agree on!

Nothing has done so much harm in spreadsheets as the increase from 8K
in old versions of 123 to 16K rows in XL5 and prior then to 64K rows in
XL8 (where K = 2^10 = 1024). The new 2^20 row worksheets will be much,
much worse. As I've written before, anything that requires more than 5K
rows shouldn't be done in a spreadsheet. Only a fraction of the
workbooks that use more than 5K rows actually need them, which only
goes to show that there are Excel users who don't know what they're
doing. That doesn't mean they'd do any better using Access.

More columns, OTOH, are needed, but 512 would have been a much safer
number than 2^14.

And I wonder whether Access will have the same quantum increase in
screen clutter/eye wash, er, New & Improved Conditional Formatting
that's due for Excel 12.
 

aaron.kempf

hahaha yeah good stuff

yeah.. there are a lot of rules of thumb like that.. i've done some
really really really wide spreadsheets (and tables for that matter)..
like forecasting and budgeting stuff.. for microsoft finance dept..

correct, customers aren't always arranged into neat little hierarchies..
for starters though; i would separate customers (if i always knew their
first/last name) by the first couple of letters of their name

like for olap dims and all that; you can only have 64k members in a
child (because of the limit in excel they thought that they can get
away with it) but it makes it really hard to manage dims like
'customer'.

it's not programming in OWC; i make a pretty little site where you can
drag and drop-- but it's a hierarchical drilldown type environment..
excel doesnt display hierarchies half as well..

linear interpolation; anything that is recursive like that-- i think
would be easy to do similar to this code: www.mvps.org/access and
search for BOM-- as in bill of materials. these types of joins make
databases quite friggin powerful and easy to use.

I have done some work with regular expressions and all that.. I think
that they're way too complex for most shit; i mean-- with Access we
have input masks and format strings.. i mean.. input masks take care of
everything i've ever needed out of life; i dont need to deal with
regexp for most stuff..

and to be honest; im strong enough with parsing and all that that i
dont need to regexp anything.. i just loop through stuff-- build my own
expressions. that way i have more control over things.

you know what they really need to do -- if they're increasing columns
this much-- they need to make it easier to import a spreadsheet

A B C D E F G H I J K L M
1
2
3
4
5
6
7
8
9
0

into this shape

1A
1B
1C
1D
1E
1F
1G
1H
2A
2B
2C
3D
4D
5D
6D

just give me the non-empty cells; and make the cells one column and the
formulas another.. maybe i'll need to parse xml to do that; it's
alright; i should just be able to use access since it does all the hard
work for me on that anyways lol

i just want to store that in a table like this

XLSID, ROWID, COLUMNID

I'm really looking forward to that one-- i just hope that it's easy to
do.. I've built a half-dozen spreadsheet scrubbing tools over the
years.. basically bring in a whole spreadsheet and then automate taking
sections out of it.. you know what i mean?

im really hoping that the new format does that easier; i mean-- since
it's just xml it's going to be in that format anyways right??

then i can go through and automate 'which cells have changed' and 'how
many formulas changed'-- i've had to do that too many times by hand;
and it would be like impossible to do if more columns get added

do you have any idea what im talking about?

then i scrub through my db and say 'is cell 17 = 36 on these 47
different spreadsheets' or whatever i want to do



-aaron
 

Harlan Grove

(e-mail address removed) wrote...
....
correct, customers aren't always arranged into neat little hierarchies..
for starters though; i would separate customers (if i always knew their
first/last name) by the first couple of letters of their name

?

We must mean different things by the phrase 'customer data'. I mean
data on a customer's operations and financial condition, not just the
customer's name, primary headquarters address, etc. That is, I focus on
data from a single customer at a time, and that single customer's data
doesn't entirely fit into hierarchies of n-cubes.
like for olap dims and all that; you can only have 64k members in a
child (because of the limit in excel they thought that they can get
away with it) but it makes it really hard to manage dims like
'customer'.
....

So slot customers by name? I'd have thought it'd make more sense to
slot them dynamically by cumulative historical numbers or total values
of orders or contracts, age from initial transaction, current financial
rating (S&P, Moody's, or whatever). Slotting by name would produce an
essentially useless random ordering. If you need multiple dims, might
as well make them meaningful.
linear interpolation; anything that is recursive like that-- i think
would be easy to do similar to this code: www.mvps.org/access and
search for BOM-- as in bill of materials. these types of joins make
databases quite friggin powerful and easy to use.

Linear interpolation isn't recursive, unless you interpret any logical
sequence of values to be recursive. In terms of languages that use
arrays, I just mean find a record and the next record. One BIG problem
with SQL-based RDBMSs is that there's no standard way to limit query
results to a set number of records. Each RDBMS has its own syntax. Even
when you do pull just the two records of interest, it appears you need
another query to join the two records so you can use field values from
both records in the same calculation. That's simpler than the
spreadsheet formula I showed?
I have done some work with regular expressions and all that.. I think
that they're way too complex for most shit; i mean-- with Access we
have input masks and format strings.. i mean.. input masks take care of
everything i've ever needed out of life; i dont need to deal with
regexp for most stuff..

Input masks are useful though often inadequate when dealing with manual
input, and they aren't much use when importing from CSV files that need
data cleansing. Validation rules, on the other hand, would be improved
with pattern matching more advanced than what the Like operator
supports.
Add-on functions could provide access to Windows Script Host regular
expressions, so there's some hope for Access.
and to be honest; im strong enough with parsing and all that that i
dont need to regexp anything.. i just loop through stuff-- build my own
expressions. that way i have more control over things.

You don't understand regular expressions if you believe you have more
control writing your own parsing code. By the same reasoning, you'd
have more control fabricating brake shoes for your car rather than
buying them from an auto parts store. Regular expressions fully and
completely describe ANY text more compactly than any code you could
write on your own. There's a reason *ALL* scripting languages now
provide them, and .Net too. What do you suppose that reason might be?
you know what they really need to do -- if they're increasing columns
this much-- they need to make it easier to import a spreadsheet

A B C D E F G H I J K L M
1
2
3
4
5
6
7
8
9
0

into this shape

1A
1B
1C
1D
1E
1F
1G
1H
2A
2B
2C
3D
4D
5D
6D

How do you believe cell records are stored in the current BIFF format?
All files are just long linear strings of bytes, so cell contents can't
be stored in any sort of grid. They're stored in sequential records.
just give me the non-empty cells; and make the cells one column and the
formulas another.. maybe i'll need to parse xml to do that; it's
alright; i should just be able to use access since it does all the hard
work for me on that anyways lol

Good luck parsing XML files in Access without regular expressions. Or
state machines.
then i can go through and automate 'which cells have changed' and 'how
many formulas changed'-- i've had to do that too many times by hand;
and it would be like impossible to do if more columns get added

Not so hard. As I mentioned before, I write formula listings to text
files. In the past I've then saved such listings in RCS logfiles, so I
could get RCS diff listings. Not all that helpful when rows or columns
have been inserted or deleted, but it's not all that big a deal to
compare workbooks. Heck, all you need is a udf returning the formula in
a given cell, and you can use cell formulas in one worksheet to compare
cell formulas in any two other worksheets possibly in different
workbooks.
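
Such a udf is nearly a one-liner (a sketch; Volatile because editing a
formula elsewhere doesn't otherwise trigger recalculation):

Public Function FormulaText(rng As Range) As String
    Application.Volatile
    FormulaText = rng.Cells(1, 1).FormulaR1C1
End Function

Then =FormulaText(x)=FormulaText(y), filled down, flags differences
cell by cell between two open workbooks.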
do you have any idea what im talking about?
....

Having had more experience doing this in spreadsheets than you, yup. I
haven't compared spreadsheets manually since the late 1980s. I may not
fully appreciate your self-inflicted agony.
 

aaron.kempf

yeah.. you're right

I really honestly think that housing customer information and sales
information and contact information in SPREADSHEETS is a better
architecture.

What was I thinking?
hahahahahahaha rofl

harlan-- do you really think that your data is SOOOOOOOOOOOOOOOO
unstructured that I couldn't shape it into a database in about 10
minutes?

i mean-- while you've been jerking off to spreadsheets for 20 years;
I've been getting work done-- keeping DATA IN A DATABASE

do you really think that im scared of data that 'doesnt fit in a
database'?

shit i'll either make it fit; or push it into a new table.

but it WILL relate to each other.

And I won't have to go through hundreds of spreadsheets to find the
answer.


im just sick and tired of copying and pasting formulas.

And just for the record, i dont parse XML files using regular
expressions-- Access does all the hard work for me. I dont do that
much with XML; i prefer CSV or TSV or something else; even binary
because it's faster than CSV

Re: 'omg, every database has a different SELECT TOP 1000 FIRSTNAME FROM
EMPLOYEES' syntax

harlan you idiot

there aren't multiple database vendors anymore.

SQL Server has taken over the world and your company is too slow to
know about it yet.

and your whole thing about slotting customers by name-- yeah.. i would
slot them by name for a lot of operations.. because it makes drilldown
a lot better

you-- as a spreadsheet dork that smokes DB2 pole-- you would have no
idea what DRILLDOWN is all about.

I swear to god harlan-- all this recursive shit that you're sooooo
worried about.

I can do anything recursive in the world using Analysis Services--
which again-- it comes free with SQL Server.

I can chew on parent-child relationships more than you would possibly
know

select NONEMPTYCROSSJOIN(Location.Children) would be all that you would
need to do to get a 'whole complex recursive list; oh my god; it's
recursive.. what are we going to do'.

there are hundreds of ways to deal with recursive queries.

a) recurse them-- using a cursor or something on the client side
b) WRITE A FUCKING JOIN THAT RECURSES FOR YOU
c) cartesian your data
d) build a temp table
e) write a VB function to do the heavy math

It's really not that hard Harlan; im sorry that you smoke IBM pole and
play with AOL at night.

your life would be a lot easier if you weren't so damn worried about
shit that isn't that big of a deal.

a) I have _NEVER_ had to use a regular expression-- because YES i can
write my own damn loop and it's faster and more extensible.

b) Excel doesnt have DATATYPES; it just has a whole bunch of strings

c) protection in Excel can't withstand a simple copy & paste

d) of course i understand that a database isn't stored in 2
dimensions-- except for things like AS/400 and btrieve (have i ever
asked you how many well-designed as/400 apps it takes to change a
lightbulb) and a couple of other isam formats.

e) if you didnt have 20,000 copies of the same formula; maybe you could
focus on the numbers instead of worrying about 'oh my god, the parsing
is so complex'

DID YOU EVER HEAR OF THE LIKE CLAUSE HARLAN? It's about 60% as
powerful as regexp.. i've been using it for 10 years now.

I'm technically migrating apps out of btrieve into a database right
now; and I've worked on a dozen AS/400s over the past 8 years.

what you dont comprehend harlan... is that there is a better way than
all this unstructured BS that you think is 'soooooo complex'.

get a decent db developer and your shit won't be unstructured too much
longer

then take a couple of classes on how to build dummy reports-- like
crystal for example. take a couple of classes on crystal and now--
presto change-o.. you've got a CAREER.

as it is-- as an Excel developer-- you are the equivalent of a retarded
kid and a paintbrush that is trying to paint the world.

you might get a little bit of paint here and there-- shit you might get
a lot of paint all over the world.

but is the green going to match the yellow (oh my god, it just doesn't
fit-- it's soooo unstructured it'll never be able to be adequately
described by computers)

or are you just going to end up with paint everywhere and no real
design?

it's time to stop being a kid without a direction; it's time to start
BUILDING SOLUTIONS harlan.

start contributing to the solution.

stop playing with crayons and spreadsheets.

-Aaron
 

Harlan Grove

(e-mail address removed) wrote...
I really honestly think that housing customer information and sales
information and contact information in SPREADSHEETS is a better
architecture.

What was I thinking?
....

I didn't say customer sales and contact information, which are
obviously the only kind of 'customer data' with which you've ever
worked.

And who says you ever think?! Who says you can?!!
harlan-- do you really think that your data is SOOOOOOOOOOOOOOOO
unstructured that I couldn't shape it into a database in about 10
minutes?
....

Since I can structure it as I need it in less than a minute usually,
it's probably a fair estimate that it'd take you more than 10 times
longer to handle it in Access. Good estimate. So why would I want to
use Access?
but it WILL relate to each other.

Everything in any structured data file relates to each other. The issue
is retaining referential integrity.

Any data structure can be reduced to a table of 2 fields: key field and
value field, with the key field allowed to be an arbitrarily long and
complex text string. That's Relational Theory 101. It's also
meaningless in terms of efficient access and use.
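
For instance, a worksheet grid flattens to exactly that (an
illustration, not a recommendation):

Key                          Value
Book1|SheetX|R2C3|formula    =RC[-1]*2
Book1|SheetX|R2C3|value      42

'Relational' in the trivial sense, and useless to query efficiently.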
And I won't have to go through hundreds of spreadsheets to find the
answer.
....

So you don't know how to organize files efficiently? Sad.
And just for the record, i dont parse XML files using regular
expressions-- Access does all the hard work for me. I dont do that
much with XML; i prefer CSV or TSV or something else; even binary
because it's faster than CSV
....

If Access can handle the particular XML format, I'd be surprised if it
couldn't import it into table format. However, not all XML formats are
likely to be supported. Can Access extract embedded tables from Word
documents saved as XML?
there aren't multiple database vendors anymore.

SQL Server has taken over the world and your company is too slow to
know about it yet.

Tell it to Oracle, IBM, MySQL, the PostgreSQL developers, etc.

When are you going to stop acting like a moron? Oh, maybe it's not an
act.
and your whole thing about slotting customers by name-- yeah.. i would
slot them by name for a lot of operations.. because it makes drilldown
a lot better

Golly. Drilldown on all customers whose names begin with ACM, since the
Association for Computing Machinery would have so much in common with,
say, Acme Septic Services.

Drilldown only makes sense when the entities are related by something
other than the random coincidence of the first 3 letters of their
names. But since you don't ever USE data, just build databases, you
don't understand this.
I swear to god harlan-- all this recursive shit that you're sooooo
worried about.

It's not recursive. It's sequence. You don't understand the difference.
I can do anything recursive in the world usign Analysis Services--
which again-- it comes free with SQL Server.

OK, so show IN DETAIL how you'd handle linear interpolation with it.
IOW, stop CLAIMING you know how to do X or Y and show us how you'd do
it.
I can chew on parent-child relationships more than you would possibly
know

Array references like a[i] and a[i+1] aren't parent-child, though I
suppose they could be kludged to behave like it. Extremely inefficient
to do so, however, but that wouldn't stop you, would it?
select NONEMPTYCROSSJOIN(Location.Children) would be all that you would
need to do to get a 'whole complex recursive list; oh my god; it's
recursive.. what are we going to do'.
....

Again, you're misunderstanding. It's not a parent-child relationship.
It's proximity in a queue or array.

Also, are you sure you have any idea what you're doing?

http://blogs.msdn.com/bi_systems/articles/162841.aspx

along with the fact that for a *single* x value xx, you only want 2
ordered (x,y) pairs: the one with the largest x value <= xx and the one
with the smallest x value > xx. It could be done with 2 fairly simple
SELECT queries.

SELECT Top 1 T.x, T.y
FROM T
WHERE T.x <= [xx]
ORDER BY T.x DESC;

SELECT Top 1 T.x, T.y
FROM T
WHERE T.x > [xx]
ORDER BY T.x ASC;

To you this is recursive?
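
And if you want the arithmetic done for you too, the glue between those
two queries ends up procedural anyway. A sketch in Access VBA with DAO,
using the table T and fields x and y above (untested):

Function LinInterp(xx As Double) As Double
    Dim db As DAO.Database, lo As DAO.Recordset, hi As DAO.Recordset
    Set db = CurrentDb
    'the two TOP 1 queries: bracketing records below and above xx
    Set lo = db.OpenRecordset("SELECT TOP 1 x, y FROM T " & _
        "WHERE x <= " & xx & " ORDER BY x DESC")
    Set hi = db.OpenRecordset("SELECT TOP 1 x, y FROM T " & _
        "WHERE x > " & xx & " ORDER BY x ASC")
    'standard two-point linear interpolation
    LinInterp = lo!y + (hi!y - lo!y) * (xx - lo!x) / (hi!x - lo!x)
End Function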
there are hundreds of ways to deal with recursive queries.

No doubt, if what you need to do is recursion.
a) recurse them-- using a cursor or something on the client side

Golly. Procedural programming. And this is an advantage over
spreadsheet formulas how?
b) WRITE A FUCKING JOIN THAT RECURSES FOR YOU

In order to work with data from N records in a single calculation,
you'd need N-1 joins. This is simple?!
c) cartesian your data

Geez, what a waste of system resources for something that could be done
with 2 SELECT queries of 4 simple clauses each. You database guys
really get paid for gross inefficiency like this, do you?
d) build a temp table

So you can process it? This doesn't solve anything, it only adds
unnecessary additional steps in the process.
e) write a VB function to do the heavy math

Gee, procedural programming. And this is better than the spreadsheet
formula how? Especially since you'd need to parametrize the VB udf in
order to avoid having a separate udf for each combination of table and
X and Y fields. That's simple?
It's really not that hard Harlan; . . .

Then show how you'd do it, IN DETAIL, with and without Analysis
Services. Here's a simple table for you to use.

x y
1 1
3 2
5 3
7 4
9 5
11 6
13 7
15 8

and given the x value 5.5, the answer is 3.25, which is given by

3 * (7 - 5.5) / (7 - 5) + 4 * (5.5 - 5) / (7 - 5)

I ask for details (yet again) because you've never provided them
before. You *SAY* you know how to do this, but there's no proof you
know how to do anything other than rave.
a) I have _NEVER_ had to use a regular expression-- because YES i can
write my own damn loop and it's faster and more extensible.

Ignorance really isn't bliss. You just don't know how regular
expressions work. You've never used LIKE conditions in WHERE clauses
that use * or ? wildcards? They're very, very elementary types of
regular expressions.
b) Excel doesnt have DATATYPES; it just has a whole bunch of strings

Just like all scripting languages. There are advantages to strong data
typing, but flexibility isn't one. There may be too many people who
misuse that flexibility, but that doesn't mean it has no value.

Databases are definitely not always the best tool for the task. I use
them when they are. You, who seem to know nothing else, are stuck
having to use them for everything. Pity.
c) protection in Excel can't withstand a simple copy & paste
....

You mean data validation? Agreed. The 'feature' is a toy. If you mean
referential integrity and are too ignorant to know the terminology,
then you'd seem to mean *CUT* & paste. It's dangerous, but it has its
uses.
e) if you didnt have 20,000 copies of the same formula; maybe you could
focus on the numbers instead of worrying about 'oh my god, the parsing
is so complex'

Having thousands of instances of the same formula floating around is of
no more concern than having thousands of resistors and capacitors in
the electronic devices I use. They're just part of the landscape.

As for complex parsing, you ever isolated and converted tables in PDF
files into Excel or Access tables without resorting to GUI copying from
Acrobat and pasting into some other app?
DID YOU EVER HEAR OF THE LIKE CLAUSE HARLAN? It's about 60% as
powerful as regexp.. i've been using it for 10 years now.
....

It's about 25% as powerful half the time, and less than 1% as powerful
the other half. Given a field containing full names (first middle last
plus generational qualifier, Jr, Sr, III, etc.), you try to use them to
find all people with last name Taylor. "LIKE '* TAYLOR'" would miss all
entries with a generational qualifier. Yes, you could use several LIKE
statements linked with ORs, but '^.* TAYLOR(,? ([JS]r\.?|[IV]+))?$' is
more compact and much less prone to error or omission once you learn
how to use regular expressions.
then take a couple of classes on how to build dummy reports-- like
crystal for example. take a couple of classes on crystal and now--
presto change-o.. you've got a CAREER.
....

Why would I want to retrogress to a job where I had to produce reports
all the time? You think that's the pinnacle of corporate success,
writing reports? You need to figure out which way is up.
 

aaron.kempf

harlan

i'm just saying that EXCEL DOESN'T LET YOU DRILLDOWN.

I _ALWAYS_ store first and last names in different fields.

and i'd rather play with real data than screw with PDFs.

PDFs are a piece of crap app-- sort of like Excel.
Whoever wrote them has no understanding of the real world.

Closed systems-- like Excel and PDFs-- aren't reliable ways to move
data around.

so you really aren't worried about having 20,000 copies of formulas
that are 80% the same?

HOW DO YOU HAVE ANY CONFIDENCE IN YOUR NUMBERS?

You have 20k copies of every formula to do everything.. I mean-- for
starters; no wonder you try to email 60mb spreadsheets

shit man aren't you tired of having multiple copies of the same data
everywhere?

i mean-- come on

i dont believe that you use databases when you SHOULD.

you SHOULD use databases for every single friggin thing you do. but
you're a simpleton who plays with excel

i'd just much rather have some structure to my data

im tired of cutting and pasting and copying spreadsheets in order to
'make report for October'

duplicating EVERYTHING once a month is the biggest joke ever; and you
guys do this on a regular basis.

in a perfect world; all you spreadsheet dorks would have been fired in
2001 when the bubble burst and you guys would have had to get a real
skillset.

re:

------------
Ignorance really isn't bliss. You just don't know how regular
expressions work. You've never used LIKE conditions in WHERE clauses
that use * or ? wildcards? They're very, very elementary types of
regular expressions.
------------

I disagree with you.

Just because a horse and your mom each have 4 legs-- that doesn't mean
that your mom's a horse; does it?

does that mean that your mom came from a horse?

does it mean that the horse 'invented' your mom?

LIKE clauses are quite powerful; they have all of the complexity that I
need. Or else; I've got the instr and the split-- those are plenty
fast enough to do anything i need to do.

I'm not going to run out and be trendy and learn RegExp just because a
bunch of Java dorks think that it's the bomb.

and harlan-- i dont need N-1 joins

I NEED ONE JOIN DIPSHIT

One join can recurse all you want.

-Aaron
 

aaron.kempf

From
http://www.mvps.org/access/queries/qry0023.htm

As presented in "Joe Celko's SQL for smarties" and discussed in many
newsgroups, the nested set solutions are up to 1000 times faster than
their equivalent methods (mainly based on cursors or recordsets).

While the standard discussions turn around maintenance (adding and
deletion of nodes in the "graph"), there was no example about the BOM
problem based on that kind of solutions, at least, up to now, since
now, you can find one, in the Jet-Access 2000 zipped database included
here. That database has one table, the nested set (we assume you are
familiar with the notion), a single form showing graphically the tree
represented in the table, and one query returning the list of the
required elements, and in which quantity, to make an arbitrary item
described in the nested set.

As you can see by yourself, there is no VBA, no recursion, only plain
SQL is used to solve that kind of problem. The query can surely be
re-used for any nested-set, not just for the one given as example.

-----------------------

SELECT
    P_2.MemberName,
    Exp(Sum(Log(P_1.Qty))) AS RequiredQty
FROM
    P,
    P AS P_1,
    P AS P_2
WHERE
    ((P.MemberID)=[RootNodeID])
    AND ((P_1.lft) Between [P].[lft]+1 And [P].[rgt])
    AND ((P_2.lft)=[P_2].[rgt]-1
        And (P_2.lft) Between [P_1].[lft] And [P_1].[rgt])
GROUP BY P_2.MemberName;


yeah.. so uh-- now that you can write recursive queries in plain old
sql are you going to stop bitching about all this 'oh, it's sooo
complex.. it's recursive'

shit kid lose the training wheels
 

Harlan Grove

(e-mail address removed) wrote...
i'm just saying that EXCEL DOESN'T LET YOU DRILLDOWN.

Outlines. Not quite the same, perhaps, but most people would recognize
it's similar functionality.
I _ALWAYS_ store first and last names in different fields.

Good for you.

Not all data is stored like that. Some of us need to use what we're
sent. I often need to parse 'subfields' into separate fields. We just
work with different data, so (SURPRISE!) we use different tools.
and i'd rather play with real data than screw with PDFs.

That's nice.

For me, real data comes from customers or intermediaries, as often in
PDF files as in XLS and DOC files. Rarely in plain text or CSV files.
Never in database file formats.

I need to pull data out of PDF files, so I recognize the need for tools
that let me do that efficiently. If you've chosen jobs in which you can
avoid PDF files, maybe that's nice for you, but it shows how far
removed you are from the customer end of most businesses.
PDFs are a piece of crap app-- sort of like Excel.
Whoever wrote them has no understanding of the real world.

One possibility. The more likely possibility is that it's you who has
no conception of reality.
Closed systems-- like Excel and PDFs-- aren't reliable ways to move
data around.

XLS files are more closed than PDF files. The PDF format is open and
widely understood (though apparently not by you). PDF files can contain
embedded images that look like pages of text. If that's the case, then
OCR is the only way to pull data. Same would be true for fax image
files. If those are what a customer or intermediary sends, that's what
you have to work with. When PDF files contain formatted text, then it's
not all that hard to extract that formatted text, though it's obvious
you have no idea how to do so.

XLS files are harder to work with if you don't have Excel.
so you really aren't worried about having 20,000 copies of formulas
that are 80% the same?

Nope.

HOW DO YOU HAVE ANY CONFIDENCE IN YOUR NUMBERS?

I use TESTED templates for the repetitive stuff. I don't worry about
the formulas in the sense that I don't check the formulas in every
instance against some archived formula listing. Now I also know better
than to *cut* and paste or drag and drop unless I'm building new
models, so I don't get into trouble that way. When I build models other
people may use, I know how to add code like

Private Sub Workbook_SheetSelectionChange( _
        ByVal Sh As Object, _
        ByVal Target As Range _
    )
    '-------------------------------------------
    If Application.CutCopyMode = xlCut Then _
        Application.CutCopyMode = False
End Sub

to keep them out of trouble.
You have 20k copies of every formula to do everything.. I mean-- for
starters; no wonder you try to email 60mb spreadsheets

Nah, I e-mail PDF files. And I only send summaries, 3 printed pages
max.
shit man aren't you tired of having multiple copies of the same data
everywhere?

I don't work with static data. Client names and addresses are about the
only things that remain the same quarter to quarter. There's some
redundancy in the formulas, but very little in the data.

I'll admit I haven't been successful convincing my company's powers
that be to use data extracts with formula templates. In the few models
I still maintain, that's the approach I've used, but it requires more
work, so no one else likes doing it.
i dont believe that you use databases when you SHOULD.

'course not, since you seem to believe that everything up to and
possibly including wiping your butt is the right time to use a
database.
you SHOULD use databases for every single friggin thing you do.
....

Point proven.
LIKE clauses are quite powerful; they have all of the complexity that I
need. Or else; I've got the instr and the split-- those are plenty
fast enough to do anything i need to do.

Again, only because you don't know much if anything about regular
expressions. Consider a relatively simple task like checking for the
presence of phone numbers in a string of text. For the moment, just
consider US/Canada formats, ###-###-####, ### ###-####, ### ### ####,
(###) ###-####, (###) ### ####, ###.###.####. The regular expression to
find them is

(\(\d{3}\) ?|\d{3}[- .])\d{3}[- .]\d{4}

Given a Windows Script Host RegExp object re, the VBA code would be
just

re.Global = True
re.Pattern = "(\(\d{3}\) ?|\d{3}[- .])\d{3}[- .]\d{4}"
If re.Test(SomeString) Then 'do something
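
Spelled out as a self-contained test (late binding the WSH object; the
sample string is invented):

Sub FindPhone()
    Dim re As Object
    Set re = CreateObject("VBScript.RegExp")
    re.Global = True
    re.Pattern = "(\(\d{3}\) ?|\d{3}[- .])\d{3}[- .]\d{4}"
    If re.Test("call me at (555) 123-4567") Then _
        Debug.Print "found a phone number"
End Sub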

And your best, most efficient code would be?


I'm not going to run out and be trendy and learn RegExp just because a
bunch of Java dorks think that it's the bomb.

And Perl dorks, and Python dorks, and Ruby dorks, and awk dorks, and
.Net dorks, so C# and VB.Net dorks, and so on & so on. Either all these
other programmers are all dorks or for a fine example of one all you'd
need is a mirror.
and harlan-- i dont need N-1 joins

I NEED ONE JOIN DIPSHIT

One join can recurse all you want.

Really?

If I want to perform *one* calculation requiring data from one
particular field in, e.g., 21 separate records, how do I collect all
those values into a single expression without either 20 joins or
resorting to procedural programming? For example, if I had a table of
cashflows with one field for period and another field for cashflow,
what would it take to be able to put the cashflows from 21 records
(periods 0 through 20) into an Access NPV function call? Use any
interest rate you want.
 
A

aaron.kempf

procedural programming.. gosh

i never said i was anti-procedural programming.. im more of a
procedural guy than most people-- most people do more object oriented
stuff; i just build small light apps so i dont have to worry about
building components most of the time.. and the only components i ever
use are from microsoft.. things like ADO and Office Web Components--
light fun stuff.

and the rest of the time; i just prefer using plain old html; dont need
much in the way of objects to spit out some pretty asp reports

between the two; i feel quite powerful.

it's frustrating to me that we have people that are falling in love
with sharepoint-- when it's the wrong architecture-- just because it's
a simple easy to use frontend to a database.

It's like... People are going from Texas to Atlanta.. when all the jobs
are really in California. So it's like-- I just feel bad

for NPV, i'll just use a prewritten UDF

http://www.sqlteam.com/Forums/post.asp?method=Reply&TOPIC_ID=23307&FORUM_ID=11

SELECT @npv = SUM(cf*power(1+@rate,-pid))
FROM test
return(@npv)



here is the full text:
-------------------------------------------------
create table test (pid int not null, cf money not null)
go
set nocount on
insert test (pid,cf) values (0,-100)
insert test (pid,cf) values (1,10)
insert test (pid,cf) values (2,10)
insert test (pid,cf) values (3,10)
insert test (pid,cf) values (4,10)
insert test (pid,cf) values (5,10)
insert test (pid,cf) values (6,10)
insert test (pid,cf) values (7,10)
insert test (pid,cf) values (8,10)
insert test (pid,cf) values (9,10)
insert test (pid,cf) values (10,10)
insert test (pid,cf) values (11,10)
insert test (pid,cf) values (12,10)
insert test (pid,cf) values (13,10)
insert test (pid,cf) values (14,10)
insert test (pid,cf) values (15,10)
insert test (pid,cf) values (16,10)
insert test (pid,cf) values (17,10)
insert test (pid,cf) values (18,10)
insert test (pid,cf) values (19,10)
insert test (pid,cf) values (20,10)
set nocount off
go
create function dbo.npv (@rate real) returns real
begin
    declare @npv real -- return value

    SELECT @npv = SUM(cf*power(1+@rate,-pid))
    FROM test

    return(@npv)
end
go


of course, i'd make it a little more flexible.. in order to be able to
pull from any table.

cheers


-Aaron

ps - looks to me like irr is a lot more complex.. im pretty sure I
could force some cartesian in order to generate enough test data in
order to make lookups on this a lot faster. I just know that I could
house some huge cartesian based on days and investments and it would be
pretty cut and dry to cut through it with olap.

i mean.. it's fast as snot
 

Harlan Grove

(e-mail address removed) wrote...
....
i never said i was anti-procedural programming.. . . .
....

Then you fail to understand that most spreadsheet users aren't
programmers. For nonprogrammers, the essentially functional programming
nature of spreadsheet formulas is easier to learn and use than
procedural code. That's one reason why so many people use spreadsheets.
for NPV, i'll just use a prewritten UDF

http://www.sqlteam.com/Forums/post.asp?method=Reply&TOPIC_ID=23307&FORUM_ID=11

SELECT @npv = SUM(cf*power(1+@rate,-pid))
FROM test
return(@npv)

I deserved that. Your simplified approach may work when the period
field (pid) contains integers starting at 0. It won't when the period
field contains dates or when there's no period field but the cashflows
are ordered properly anyway.
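
(With a date field the exponent just becomes elapsed time, along the
lines of

npv = SUM of cf * (1 + rate) ^ -((date - date0) / 365)

so the SUM/POWER shape survives, but not the function as written.)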
here is the full text:
-------------------------------------------------
create table test (pid int not null, cf money not null)
go
set nocount on
insert test (pid,cf) values (0,-100)
insert test (pid,cf) values (1,10)
insert test (pid,cf) values (2,10)
insert test (pid,cf) values (3,10)
insert test (pid,cf) values (4,10)
insert test (pid,cf) values (5,10)
insert test (pid,cf) values (6,10)
insert test (pid,cf) values (7,10)
insert test (pid,cf) values (8,10)
insert test (pid,cf) values (9,10)
insert test (pid,cf) values (10,10)
insert test (pid,cf) values (11,10)
insert test (pid,cf) values (12,10)
insert test (pid,cf) values (13,10)
insert test (pid,cf) values (14,10)
insert test (pid,cf) values (15,10)
insert test (pid,cf) values (16,10)
insert test (pid,cf) values (17,10)
insert test (pid,cf) values (18,10)
insert test (pid,cf) values (19,10)
insert test (pid,cf) values (20,10)
set nocount off
go
create function dbo.npv (@rate real) returns real
begin
    declare @npv real -- return value

    SELECT @npv = SUM(cf*power(1+@rate,-pid))
    FROM test

    return(@npv)
end
go

The only problem with this is that it depends on the table test which
you created at the top of the code. How do you generalize it to take
cashflows from fields of any name from any given table?
of course, i'd make it a little more flexible.. in order to be able to
pull from any table.

So what's stopping you?
ps - looks to me like irr is a lot more complex.. im pretty sure I
could force some cartesian in order to generate enough test data in
order to make lookups on this a lot faster. I just know that I could
house some huge cartesian based on days and investments and it would be
pretty cut and dry to cut through it with olap.

In the IRR calculation the cashflows are static and NPVs are repeatedly
calculated at different interest rates. Only the interest rate varies,
and deciding what to use for the next interest rate follows some
variant of the Newton-Raphson approach to finding zeros of functions.
It looks like the code at the URL you gave uses the secant method. Your
cartesian would be a colossal waste of processing time and system
resources since the algorithm used is mathematically optimal barring
pathological data.
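
That iteration is only a few lines of code anyway. A sketch in VBA,
with arbitrary starting rates and tolerance:

Function NPVat(cf() As Double, rate As Double) As Double
    Dim p As Long, total As Double
    For p = LBound(cf) To UBound(cf)
        'discount each cashflow back from its period
        total = total + cf(p) / (1 + rate) ^ p
    Next p
    NPVat = total
End Function

Function SecantIRR(cf() As Double) As Double
    Dim r0 As Double, r1 As Double, r2 As Double
    Dim f0 As Double, f1 As Double, i As Long
    r0 = 0.05: r1 = 0.1    'arbitrary starting rates
    f0 = NPVat(cf, r0)
    For i = 1 To 50
        f1 = NPVat(cf, r1)
        If Abs(f1) < 0.0000001 Then Exit For
        'secant step toward NPV = 0
        r2 = r1 - f1 * (r1 - r0) / (f1 - f0)
        r0 = r1: f0 = f1: r1 = r2
    Next i
    SecantIRR = r1
End Function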
 

aaron.kempf

most spreadsheet developers ARE programmers.

they just use a crap language. (EXCEL).

I dont think that Excel does procedural programming better than
Access, other than the bs that you can record macros in Excel but you
can't record macros in Access.

i just dont think that you have any comprehension of how fast olap is
harlan

that is the root of the problem

olap solves every database performance problem in the world; and it is
free with SQL Server (standard edition).

and OLAP-- Analysis Services-- provides the most powerful
PivotTables. OLAP pivotTables-- from Analysis Services-- are superior
to normal pivotTables by a factor of about 1000. OWC with OLAP is
about 1,000,000 times more powerful than normal silly excel
pivotTables.

And i think that it's ridiculous that you guys havent grown into olap
and MDX and SQL. I mean-- your whole diseased spreadsheet family needs
to-- as a whole-- grow up and start using real programs.

you spreadsheet dorks need to all learn MDX-- that is your calling in
life. That is how you guys will be worth the air that you breathe.

MDX is the first step in an exciting new world for PivotTables.

We database people learn SQL. You excel kids learn MDX.
And maybe someday we'll all get along.

As it is; you guys still do jack shit and we have to do all the SQL
_AND_ all the MDX.

These MDX-- OLAP-- OLAP is the technology-- similar to the word
'databases' and MDX is the language-- similar to the word SQL.

these pivotTables really are about 1000 times more powerful than
normal-- relational or worksheet based (as in using a worksheet to have
columns and rows for the source)

ps - i'll rewrite that UDF soon. the point of the matter is that we're
not going to have an unlimited number of periods... or even if we are;
we just store the date range in a table and then you can do a
non-equijoin in order to loop through it. you have 1 date table and
then you join to it when you need to use it. it's not rocket science.
a single table that lists all the periods; make the interest a
parameter instead of stored in the table; and you're set.

MDX and SQL are simply more powerful than Excel.

It's like.. you guys started out great-- you know how to use the
functions SUM, MAX, MIN, VLOOKUP

you guys really could get out of the whole morass by just moving your
whole world to Access.

I mean-- things are better here.

I would rather make my queries off in lala land and never keep them in
a FILE. I just dont like having 20 million copies of the same thing--
it makes it hard to deal with (even if you dont stress about having
wrong numbers). I just dont agree with your p2p method of working--
it's too unstructured and will never be more efficient than client
server.

Since MDX is FREE with SQL Server and SQL Server is FREE with Windows--
then you're pretty much an idiot for not using it.

I just swear you guys.. time to grow up-- ding a ling a ling
 

Harlan Grove

(e-mail address removed) wrote...
From
http://www.mvps.org/access/queries/qry0023.htm

As presented in "Joe Celko's SQL for smarties" and discussed in many
newsgroups, the nested set solutions are up to 1000 times faster than
their equivalent methods (mainly based on cursors or recordsets).
....

How is this remotely relevant to linear interpolation? [And how is this
continued back & forth remotely relevant to this thread or perhaps even
this newsgroup? But I digress.]

Nice link. Note the error in the Summary form that supposedly explains
the structure: the second G appearing as a child of F. It should be I
rather than G.

Hierarchical structures are useful, and they're fast. Trees and doubly
linked lists have been used in most programming languages for decades.
That they can be implemented in SQL as well is no big deal.

However, they're at best beside the point when it comes to simple array
indexing, which is pretty much all that's required to fetch the values
from a table that are needed for any type of interpolation. Like using
a freight train to go grocery shopping.
yeah.. so uh-- now that you can write recursive queries in plain old
sql are you going to stop bitching about all this 'oh, it's sooo
complex.. it's recursive'

All you have to do is create the nested set to begin with. And how
simple is that? And would average users immediately understand how to
set them up? And do you really believe debugging errors in the nested
set table would be easier than debugging spreadsheet formulas?! What
you smokin'?!!

Recursive queries (walking trees) aren't the same as recursive function
evaluation. And they're nowhere close to simple array indexing.
 

Harlan Grove

(e-mail address removed) wrote...
most spreadsheet developers ARE programmers.
....

I agree that spreadsheet *developers* (people who write spreadsheet
models used by other people) are programmers, but a substantial
fraction of them wouldn't call themselves programmers, and fewer still
have any formal training as programmers.
I dont think that Excel does procedural programming better than
Access, other than the bs that you can record macros in Excel but you
can't record macros in Access.

Since both use VBA, they wouldn't differ, would they? But there's
seldom the need to resort to VBA for calculations in Excel. As long as
one's willing to use an arbitrary number of cells, almost any
calculation can be done using worksheet formulas exclusively.
and OLAP-- Analysis Services-- provides the most powerful
PivotTables. OLAP pivotTables-- from Analysis Services-- are superior
to normal pivotTables by a factor of about 1000. OWC with OLAP is
about 1,000,000 times more powerful than normal silly excel
pivotTables.

OK. I don't use pivot tables. Never found them useful. Next point?
Since MDX is FREE with SQL Server and SQL Server is FREE with Windows--
then you're pretty much an idiot for not using it.

I suppose you mean SQL Server in the form of MSDE is free. Other
editions of SQL Server aren't free.

http://www.microsoft.com/sql/howtobuy/default.mspx

MSDE comes with MDX?
 

aaron.kempf

because I can interpolate with real data using olap

i can cartesian a couple of tables--

i just think that it's funny

the only thing that you're talking about doing is trying data for lots
and lots of time slices

you store those time slices in a TABLE and then you can cartesian date
and flow where you want and you'll get whatever kind of ratio you're
looking for.

and it wont be iterative. you wont write a loop. it'll be a JOIN.

yes, i believe that

a) let db developers write the functions
b) you guys can use these functions in a sql database-- MSDE if you
want

the only single reason that you think that databases 'arent as
powerful' is because they dont have all the cheesy little functions
you're looking for.

I will try to compile some list of functions for making SQL behave more
like Excel.

Ok?

someday I'll try to write that.. just a couple of googles; and you'll
be able to use any of these cheesy Excel functions you want-- but
you'll do it in a database

i mean-- if that is what it takes to wean you off of excel; i will do
it-- throw together a list someday and share it with all you folk
 