How to copy non-contiguous columns to a text file

joeu2004

Jennifer Murphy said:
I just want a simple macro that only I will use.

What you asked for is hardly "simple".

See the macro implemented in the Excel file "copy to textfile.xls" at
http://www.box.net/shared/tgi09xdin8nr77jb36bd.

I believe it does what you want, namely:

1. You select the first row of data. Then press Alt+F8 to execute the
macro.

2. Presumptions:
a. The first column contains data (e.g. text).
b. The last column contains integers.
c. Interstitial columns are ignored.
d. The data are in contiguous rows.

3. The macro prompts for the text file name. If it already exists, you can
elect to overwrite it, select another file name, or cancel.

4. The macro copies the two columns to a new temporary worksheet. So the
original data is not altered at all, as requested.

5. The data is sorted by the first column.

6. The macro ignores rows in which the first column contains specified "bad
characters".

Note: I chose to implement a VBA function to check for "bad characters",
calling InStr successively. This might prove to be too time-consuming for
large data (e.g. 500,000 rows).
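
To give the flavor of it, the check looks something like this (a
simplified sketch; the function name and character list here are
illustrative, not the actual code in the workbook):

Private Function hasBadChar(ByVal s As String) As Boolean
    ' Test s against each "bad character" in turn
    Dim badChars As String, i As Long
    badChars = "%$#@.&*"   ' illustrative list
    For i = 1 To Len(badChars)
        If InStr(s, Mid$(badChars, i, 1)) > 0 Then
            hasBadChar = True
            Exit Function
        End If
    Next
End Function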

Arguably, it would be better to use VBScript Regular Expression
operations. I chose not to because: (a) I am not sure we can rely on it;
(b) I don't know if VB regexpr is indeed faster; and (c) it would add
significant complexity to automate setting up the VBScript Reg Expr
"reference" (i.e. Tools > References).

But if that is something that you require, it can be done.
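
(For completeness, a late-bound sketch would sidestep the "reference"
setup in point (c), though points (a) and (b) would still stand.
Illustrative only; the variable names are placeholders, not the macro's
actual code:

Dim re As Object
Set re = CreateObject("VBScript.RegExp")
re.Pattern = "[%$#@.&*]"   ' illustrative bad-character class
If re.Test(CStr(data(i, 1))) Then rowIsBad = True
)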

7. The macro sums the integers in the second column for like data in the
first column.

Note: "like data" means "exactly equal", as requested. The comparison
is case-sensitive; for example, "aBc" is considered not "exactly equal" to
"AbC".

Caveat: the text "1234" is not considered "exactly equal" to the number
1234 in the first column. Presumably, this is not a problem because you
said that the first column is expected to contain only text.

8. The macro verifies that the second column contains positive integers
(i.e. >0).

9. If there are no errors, the summary data is written to the text file,
separating the two columns with one tab character.
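
For reference, the write step amounts to something like this (a
simplified sketch; the variable names are placeholders, not the macro's
actual code):

Dim f As Integer, i As Long
f = FreeFile
Open txtFileName For Output As #f
For i = 1 To nSummary
    ' one tab character between the two columns
    Print #f, summaryText(i) & vbTab & summarySum(i)
Next
Close #f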
 
GS

Not bad for someone who said they didn't have time for this!<g>
Thanks for taking this off my plate!
 
Jennifer Murphy

Hi Jennifer,

Remove the Filter and Try. I am sure the Auto Filter is getting
applied in your data.

I have never used the Excel Autofilter mechanism, so I doubt that's the
problem unless it's easy to invoke it by accident. Besides, I'm getting
too much data, not too little...
 
Jennifer Murphy

Jennifer Murphy wrote on 7/20/2011 :

Simple is what Claus suggested (as joeu2004 stated). What you asked for
was a macro that "did it all in one click". The number of tasks you
asked to be done no longer qualifies the solution as 'a simple macro'.

My original post was not for a macro of any kind. I just wanted to be
able to copy non-contiguous ranges. You pointed out, correctly, that my
piecemeal approach was kludgey and inefficient and suggested a macro
which could be tailored as needed. The code you posted suggested, again,
correctly, that doing the whole job in the macro would be better all
around. That's when it got to be non-simple.
I will give you a no-nonsense, get-the-job-done solution that will also
allow you to choose what gets written to file, and in what order it
gets written, and how you want it sorted. The flexibility doesn't add
to the workload of the project, but it does add to the
user-friendliness of using it.

I appreciate anything you have time to post. I'll then set break points
and step through it until I understand it and can modify it as needed.
No point doing anything if you only want half the job done!<g>
(no offense intended)

I guess that's where we will have to agree to disagree. ;-) This is a
quick and dirty solution to a short-term problem. I'm willing to spend a
little time writing decent code and learning new techniques, but this is
not production code that will ever be used by anyone other than me, so I
am not willing to bother with "best practices". If it works, I'm happy.

Again, I appreciate anything that you care to post and the effort you
have already expended.
 
Rick Rothstein

Note: I chose to implement a VBA function to check for "bad characters",
calling InStr successively. This might prove to be too time-consuming
for large data (e.g. 500,000 rows).

Arguably, it would be better to use VBScript Regular Expression
operations. I chose not to because: (a) I am not sure we can rely on
it; (b) I don't know if VB regexpr is indeed faster; and (c) it would add
significant complexity to automate setting up the VBScript Reg Expr
"reference" (i.e. Tools > References).

I thought you might be interested in knowing that we can do this check
without using RegExp in a single line of code...

Private Function badText(d) As Boolean
    badText = d Like "*[%$#@.&*]*"
End Function

Since this is a single line of code, you could eliminate the function
housing and do the testing directly inside your main code.
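
For example, inlined against the macro's data array it might read
something like this (the array and flag names are assumptions on my
part):

skipRow = data(i, 1) Like "*[%$#@.&*]*"   ' True if a "bad character" is present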

Rick Rothstein (MVP - Excel)
 
GS

It happens that Rick Rothstein formulated :
I thought you might be interested in knowing that we can do this check
without using RegExp in a single line of code...

Private Function badText(d) As Boolean
    badText = d Like "*[%$#@.&*]*"
End Function

Brilliant! Once again you continue to shine. Thanks so much for posting
this really fine example of using 'Like'. I've saved other examples you
posted but didn't think to use this here because I have text filtering
wrappers I use regularly that iterate each character to remove invalid
ones. If the returned length matches the original length then all is
good. You just made that entire process way more efficient! Thanks
again...
 
GS

Some suggestions...

You might want to experiment with your sample data by changing B2:B8 to
include decimals so you can see how your 'badint' MsgBox reports
incorrect cell addresses.

Also, I believe Jennifer requested the final output be sorted by the
integer column in descending order.

Finally, you might find it more efficient to build an output string
rather than write one line at a time to the file. I think the line that
writes the file will dump the entire string in one shot. I also suggest
you end the line with a semi-colon so the file doesn't contain a blank
line at the end.
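
For instance, both ideas together would look something like this (a
rough sketch; the variable names are placeholders):

Dim out As String, i As Long
For i = 1 To nSummary
    out = out & summaryText(i) & vbTab & summarySum(i)
    If i < nSummary Then out = out & vbCrLf
Next
Print #f, out;   ' trailing semicolon suppresses the final newline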
 
Rick Rothstein

' skip any initial bad text
For i = 1 To n
    If Not badText(data(i, 1)) Then Exit For
Next
If i > n Then GoTo noData
If Not IsNumeric(data(i, 2)) Then GoTo badInt
If data(i, 2) <= 0 Or Int(data(i, 2)) <> data(i, 2) _
    Then GoTo badInt

Joe... are the last two statements above placed correctly? Being outside the
i-loop, it looks to me like they will only test the last value for being an
integer, not every value.

Anyway, we can do the above without using loops at all. Here is the Like
operator code I posted as a function earlier used directly here coupled with
another Like operator statement to handle the integers; this can be used to
replace all the code quoted above...
' skip any initial bad text
If Join(WorksheetFunction.Transpose(Range(txtRng, txtRng.End(xlDown)))) _
   Like "*[%$#@.&*]*" Then GoTo noData
If Join(WorksheetFunction.Transpose(Range(intRng, intRng.End(xlDown))), "") _
   Like "*[!0-9]*" Then GoTo badInt

I did a quick look, but unless I missed it, I did not see any Range
variables set to each column of data individually which is why I formed them
"on the fly" in my code via those Range() calls. If you, in fact, do have
Range variables declared individually for those columns of data, then by all
means use them instead.
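
(In case it helps to see why that works: Transpose flattens the
one-column range into a 1-D array, and Join then collapses that array
into a single string, so one Like test covers the whole column at once.
For example, with an illustrative range address:

Dim v As Variant
v = WorksheetFunction.Transpose(Range("A1:A5").Value)   ' 1-D array, 5 elements
Debug.Print Join(v, "")                                 ' one concatenated string
)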

Rick Rothstein (MVP - Excel)
 
Rick Rothstein

Thanks so much for posting this really fine example of using 'Like'.
I've saved other examples you posted but didn't think to use this
here because I have text filtering wrappers I use regularly that
iterate each character to remove invalid ones. If the returned
length matches the original length then all is good. You just
made that entire process way more efficient! Thanks again...

You are quite welcome. I'm glad you are finding my postings useful. If you
haven't seen it yet, you might find my second message to Joe interesting
(and hopefully useful) as well.

Rick Rothstein (MVP - Excel)
 
Rick Rothstein

If Not IsNumeric(data(i, 2)) Then GoTo badInt

The use of IsNumeric to "proof" an entry as being all digits used to come up
a lot back when I was volunteering in the compiled VB newsgroups. Here is a
message I used to post in response that is applicable to Excel's VBA as
well...

From a previous post of mine...

I usually try and steer people away from using IsNumeric to "proof"
supposedly numeric text. Consider this (also see note below):

ReturnValue = IsNumeric("($1,23,,3.4,,,5,,E67$)")

Most people would not expect THAT to return True. IsNumeric has some "flaws"
in what it considers a proper number and what most programmers are looking
for.

I had a short tip published by Pinnacle Publishing in their Visual Basic
Developer magazine that covered some of these flaws. Originally, the tip was
free to view but is now viewable only by subscribers. Basically, it said
that IsNumeric returned True for things like -- currency symbols being
located in front or in back of the number as shown in my example (also
applies to plus, minus and blanks too); numbers surrounded by parentheses as
shown in my example (some people use these to mark negative numbers);
numbers containing any number of commas before a decimal point as shown in
my example; numbers in scientific notation (a number followed by an upper or
lower case "D" or "E", followed by a number equal to or less than 305 -- the
maximum power of 10 in VB); and Octal/Hexadecimal numbers (&H for
Hexadecimal, &O or just & in front of the number for Octal).

NOTE:
======
In the above example and in the referenced tip, I refer to $ signs and
commas and dots -- these were meant to refer to your currency, thousands
separator and decimal point symbols as defined in your local settings --
substitute your local regional symbols for these if appropriate.

As for your question about checking numbers, here are two functions that I
have posted in the past for similar questions... one is for digits only
and the other is for "regular" numbers (the code is simple enough that it
can be pulled from the function "housing" and used directly inside your own
code):

Function IsDigitsOnly(Value As String) As Boolean
    IsDigitsOnly = Len(Value) > 0 And _
                   Not Value Like "*[!0-9]*"
End Function

Function IsNumber(ByVal Value As String) As Boolean
    ' Leave the next statement out if you don't
    ' want to provide for plus/minus signs
    If Value Like "[+-]*" Then Value = Mid$(Value, 2)
    IsNumber = Not Value Like "*[!0-9.]*" And _
               Not Value Like "*.*.*" And _
               Len(Value) > 0 And Value <> "."
End Function
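
A few illustrative calls (the first shows the IsNumeric flaw described
above; the rest follow from the definitions):

Debug.Print IsNumeric("($1,23,,3.4,,,5,,E67$)")   ' True (the flaw)
Debug.Print IsDigitsOnly("01234")                 ' True
Debug.Print IsDigitsOnly("12.3")                  ' False (contains a dot)
Debug.Print IsNumber("12.3")                      ' True
Debug.Print IsNumber("1.2.3")                     ' False (two decimal points)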

Here are revisions to the above functions that deal with the local settings
for decimal points (and thousand's separators) that are different than used
in the US (this code works in the US too, of course).

Function IsNumber(ByVal Value As String) As Boolean
    Dim DP As String
    ' Get local setting for decimal point
    DP = Format$(0, ".")
    ' Leave the next statement out if you don't
    ' want to provide for plus/minus signs
    If Value Like "[+-]*" Then Value = Mid$(Value, 2)
    IsNumber = Not Value Like "*[!0-9" & DP & "]*" And _
               Not Value Like "*" & DP & "*" & DP & "*" And _
               Len(Value) > 0 And Value <> DP
End Function

I'm not as concerned by the rejection of entries that include one or more
thousand's separators, but we can handle this if we don't insist on the
thousand's separator being located in the correct positions (in other words,
we'll allow the user to include them for their own purposes... we'll just
tolerate their presence).

Function IsNumber(ByVal Value As String) As Boolean
    Dim DP As String
    Dim TS As String
    ' Get local setting for decimal point
    DP = Format$(0, ".")
    ' Get local setting for thousand's separator
    ' and eliminate them. Remove the next two lines
    ' if you don't want your users being able to
    ' type in the thousands separator at all.
    TS = Mid$(Format$(1000, "#,###"), 2, 1)
    Value = Replace$(Value, TS, "")
    ' Leave the next statement out if you don't
    ' want to provide for plus/minus signs
    If Value Like "[+-]*" Then Value = Mid$(Value, 2)
    IsNumber = Not Value Like "*[!0-9" & DP & "]*" And _
               Not Value Like "*" & DP & "*" & DP & "*" And _
               Len(Value) > 0 And Value <> DP
End Function
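
Some illustrative calls, assuming US settings (DP = "." and TS = ","):

Debug.Print IsNumber("1,234,567.89")   ' True (separators tolerated anywhere)
Debug.Print IsNumber("-42")            ' True (leading sign allowed)
Debug.Print IsNumber("1.2.3")          ' False (two decimal points)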

Rick Rothstein (MVP - Excel)
 
joeu2004

Jennifer Murphy said:
Thanks, I'll check it out.

Re-download the file at the same link,
http://www.box.net/shared/tgi09xdin8nr77jb36bd.

I have implemented some improvements suggested by Rick and Garry.

Note that I added some test cases to the Excel worksheet. See the
instructions in the textboxes. If they are unclear, simply delete rows 15
through 23.

You might also want to review my responses to Rick and Garry to be sure that
you agree with my judgment.
 
joeu2004

This is a response to a number of your postings.


Rick Rothstein said:
I thought you might be interested in knowing that we
can do this check without using RegExp in a single line
of code...

Private Function badText(d) As Boolean
    badText = d Like "*[%$#@.&*]*"
End Function

Great! Thanks. I have implemented the inline Like comparisons.

One downside: it appears that "]" cannot be included between "*[" and "]*".
There appears to be no "escape" character (e.g. backslash).

If I am wrong, please let me know. Otherwise, I have documented
instructions to change the logic as follows if Jennifer chooses later to
make "]" a "bad character":

data(i,1) Like badChar Or data(i,1) Like "*]*"


Rick Rothstein said:
Joe... are the last two statements above placed correctly?
Being outside the i-loop, it looks to me like they will only
test the last value for being an integer, not every value.

Well, the (first) i-th value that does not have "bad characters". See the
Exit For.

And yes, that is intentional. Jennifer said that "bad character" rows are
to be ignored. I interpreted that to mean that we should ignore any
possible error in the integer column as well. It's a judgment call.

Anyway, we can do the above without using loops at all. [....]
If Join(WorksheetFunction.Transpose(Range(txtRng, txtRng.End(xlDown)))) _
   Like "*[%$#@.&*]*" Then GoTo noData
If Join(WorksheetFunction.Transpose(Range(intRng, intRng.End(xlDown))), "") _
   Like "*[!0-9]*" Then GoTo badInt

Clever! However, I would not do that in this instance.

Jennifer indicated that she is processing a large amount of data -- perhaps
500,000 rows, she wrote. Since we must examine each row anyway, I prefer to
make only one pass through the data. It's a judgment call.

If Not IsNumeric(data(i, 2)) Then GoTo badInt
[....]
I usually try and steer people away from using IsNumeric
to "proof" supposedly numeric text. Consider this [...]:
ReturnValue = IsNumeric("($1,23,,3.4,,,5,,E67$)")
Most people would not expect THAT to return True.

Good point! Since I have set On Error Resume Next, the fix for my
implementation is simple:

' On Error Resume Next is in effect here
If Int(data(i, 2)) <> data(i, 2) Then GoTo badInt   ' not a whole number
If Err.Number <> 0 Then GoTo badInt                 ' Int() errored: non-numeric
If data(i, 2) <= 0 Then GoTo badInt                 ' must be positive

(I could have combined the latter two with Or. I didn't because initially,
I was trying to keep lines short. But later I eschewed that objective for
the most part.)

Aside.... Originally, I intended to permit "text integer"; that is, text
that VBA recognizes as integers in arithmetic expressions. I learned later
that VBA has the same dubious "inconsistency" that Excel does: "text
integers" are treated as text, not numbers, in comparison expressions. So
the expression Int(data(i, 2)) <> data(i, 2) returns True when data(i,2) is
a "text integer", even though Int(data(i,2)) returns the proper integer.

Although I could work around that, I decided to allow the error. It's a
judgment call. It might be what Jennifer prefers anyway. If not,
presumably she will let me know, since I added it to her test cases.

I'm not as concerned by the rejection of entries
that include one or more thousand's separators

Neither am I. It's a judgment call. If Jennifer wants that feature, I
presume she will mention it.
 
Rick Rothstein

You might also want to review my responses to Rick and
Garry to be sure that you agree with my judgment.

Joe,

The last message I see by you, other than the message I am replying to now,
was posted more than 7 hours ago. In other words, I do not see a reply from
you to messages from either Garry or myself. I had a system crash this
morning, but I don't think I had this newsgroup open at the time, so I'm not
sure why I can't see the messages you mentioned to Jennifer. Anyway, could
you post those messages again to see if they show up in my newsreader? Or,
if you prefer, you could send them to me via email (remove the NO.SPAM stuff
from the email address that shows attached to my messages). Thanks.

Rick Rothstein (MVP - Excel)
 
Rick Rothstein

I do not see a reply from you to messages from either Garry or myself.

NEVER MIND... wouldn't you know it, two minutes after I posted my message,
your reply to me showed up. I don't see your reply to Garry yet, but I guess
it will come through when my newsreader service is good and ready to let it
through.<g>

Rick Rothstein (MVP - Excel)
 
joeu2004

GS said:
You might want to experiment with your sample data
by changing B2:B8 to include decimals so you can see
how your 'badint' MsgBox reports incorrect cell addresses.

Good point! I forgot that rows are sorted. I fixed the problem by adding a
column with the original order number.

Unfortunately, that might increase memory usage and processing time
unnecessarily because presumably bad-integer errors are unlikely since they
are fatal errors. But I thought it would be difficult to find such errors
otherwise. It's a judgment call.

Also, I believe Jennifer requested the final output
be sorted by the integer column in descending order.

I disagree with that interpretation. For her sample data, Jennifer wrote:

Here's the output in a text file. The "->" character indicated a tab.
abc->100
abcd->250
abd->90
def->600

Obviously that is not sorted by the integer column.

I believe you got confused when Jennifer wrote her idea of the
implementation, to wit:

* Read the two columns
* Sort by the text column
* Do the processing
* Resort by the numeric column
* Write the results to a text file

My interpretation is that the "re-sort" there is to put the data back into
order based on the numeric column, assuming that the initial sort modified
the original data. If I misinterpreted, presumably Jennifer will let me
know.

Finally, you might find it more efficient to build an
output string rather than write one line at a time to
the file. I think the line that writes the file will
dump the entire string in one shot.

I disagree. Jennifer indicated that she might be processing large amounts
of data -- 500,000 rows, she wrote. In that case, there is a potential that
each row of data is unique. Consequently, a single output string could
theoretically exceed the limits of the String data type (2^31 characters).
Unlikely as all that might seem, I prefer to design algorithms that do not
break when expected limits are exceeded.

More importantly, I am concerned about any smaller limitations that the
Print# statement might have. Off-hand, I cannot find any documentation to
that effect. But I thought I discovered one empirically long ago. Anyway,
it's a judgment call.

Aside.... I could build an output string up to an arbitrary small limit,
effectively buffering the output myself, and reduce the number of Print#
statements. But I believe Print# output is buffered anyway. If Jennifer
expresses concern about the execution time, I might then consider this change.

I also suggest you end the line with a semi-colon so
the file doesn't contain a blank line at the end.

Okay, I implemented this suggestion even though I am skeptical of the need
to do it. I note that Jennifer's model is copy-and-paste into Notepad, and
that does result in an empty line at the end.

After-thought.... I vaguely remember that some other applications (Access?)
require the empty line at the end when they import the file. Of course, it
is equally possible, perhaps even more likely, that some applications might
misbehave due to the empty line, which is why I acquiesced to this suggestion.
 
Rick Rothstein

One downside: it appears that "]" cannot be included
between "*[" and "]*". There appears to be no "escape"
character (e.g. backslash).

If I am wrong, please let me know.

Nope, you are correct; however, I don't remember Jennifer mentioning
square brackets as being "bad" characters and I don't remember your code
checking for them either.
Otherwise, I have documented instructions to change the logic
as follows if Jennifer chooses later to make "]" a "bad character":

data(i,1) Like badChar Or data(i,1) Like "*]*"

Yes, we are forced to check for the right square bracket separately. You
would think Microsoft could have implemented an escape character kind of
easily, wouldn't you?
Well, the (first) i-th value that does not have "bad characters".
See the Exit For.

Yep, I see that now... I originally read your code (way) too quickly.
If Join(WorksheetFunction.Transpose(Range(txtRng, txtRng.End(xlDown)))) _
   Like "*[%$#@.&*]*" Then GoTo noData
If Join(WorksheetFunction.Transpose(Range(intRng, intRng.End(xlDown))), "") _
   Like "*[!0-9]*" Then GoTo badInt

However, I would not do that in this instance. Jennifer indicated
that she is processing a large amount of data -- perhaps 500,000 rows, she
wrote.

The Worksheetfunction.Transpose seems to work quite quickly and if I
remember old, old, old testing I once did, Join (as well as its counterpart
Split) are fairly efficient for String values up to about 15 to 20 Megs in
size, so I think Jennifer's data might not bog down too much. However....
Since we must examine each row anyway, I prefer to make only one pass
through the data. It's a judgment call.

.... that is a good point and a good reason to handle things the way you are
now doing.

Rick Rothstein (MVP - Excel)
 
GS

joeu2004 wrote :
I disagree with that interpretation. [....]

My interpretation is that the "re-sort" there is to put the data back into
order based on the numeric column, assuming that the initial sort modified
the original data. If I misinterpreted, presumably Jennifer will let me
know.
My understanding is the integer column IS the numeric column, not the
text column.
I disagree. [....]

Aside.... I could build an output string up to an arbitrary small limit,
effectively buffering the output myself, and reduce the number of Print#
statements. But I believe Print# output is buffered anyway. If Jennifer
expresses concern about the execution time, I might then consider this change.

Good point! I'm in the habit of writing large amounts (beyond the
'one-shot' limit) in blocks. Just my preference since I do a lot of
this in VB6 apps.
Okay, I implemented this suggestion even though I am skeptical of the need to
do it. I note that Jennifer's model is copy-and-paste into Notepad, and that
does result in an empty line at the end.

After-thought.... I vaguely remember that some other applications (Access?)
require the empty line at the end when they import the file. Of course, it
is equally possible, perhaps even more likely, that some applications might
misbehave due to the empty line, which is why I acquiesced to this suggestion.

Actually, I believe it's the reverse but I won't go to the bank with
that since I don't do much work with Access and its db files.

My point is that good db practice is to not deliberately create empty
records. A blank line at the end of a text file used for this purpose
violates that rule. That said, worksheets with blank/empty rows after
the last record also violate the rule, but the JET/ACE providers are
configured to handle that internally <AFAIK> in both cases.

Different story when using arrays instead of ADO/DAO! First line should
<IMO> always contain fieldnames as that's normally what you get
whenever a db table 'dump' is done. This line would occupy element(0)
of the array. The last line (UBound), then, should be the Record.Count
since the first record (line2) is element(1) of the array. Otherwise,
you'd have to iterate the array to check for the last line that's not
empty. Of course, it would make sense to start at UBound and Step -1
because appending data just adds more empty lines. Point is we never
know how many blank records there are. Optionally, if the array
variable is a Variant (undimmed) then we can also use VB's Filter
function to erase empty elements (if a 1D array) without the need to
iterate. This will also remove all blank elements regardless of where
they're positioned.

I don't think it matters how you write the text file, really, but I
wonder if using SaveAs and specifying ".txt" as the output filetype
would be any faster than using VBA. If so then using a new wkb instead
of a temp wks would be appropriate.
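
Something like this, perhaps (untested; the path and workbook variable
are placeholders):

With newWkb
    .SaveAs Filename:="C:\temp\output.txt", FileFormat:=xlText
    .Close SaveChanges:=False
End With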

Thanks for your feedback. You always seem to put tremendous effort and
valuable content into your replies! -Much appreciated...
 
GS

Rick Rothstein formulated on Thursday :
You are quite welcome. I'm glad you are finding my postings useful. If you
haven't seen it yet, you might find my second message to Joe interesting
(and hopefully useful) as well.

Rick Rothstein (MVP - Excel)

Yes, Rick, I read those replies! I recall you posting content similar
to this in a Classic VB group. I think it's quite valuable and I do use
it whenever I encounter dubious concerns about using IsNumeric.
Probably, though, I should just include these in my helper-function
.bas and persist consistency in my work habits!<bg>
 
joeu2004

GS said:
I don't think it matters how you write the text file,
really, but I wonder if using SaveAs and specifying
".txt" as the output filetype would be any faster than
using VBA.

Funny you mention that. That was indeed my original design. But I
discovered that it alters the current workbook name; and from what I read, the
only way to name it back again is to do a SaveAs with the original name.
Too presumptuous, not to mention other issues.

(If anyone knows how to do something like a SaveAs without changing the
workbook name, I'm all ears. I'm thinking of perhaps some object methods --
like FileSystemObject -- that would write the worksheet or selection to a
tab-delimited text file.)
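
Something along these lines is roughly what I have in mind (an untested
sketch; the path is a placeholder):

Dim fso As Object, ts As Object, r As Long, c As Long, s As String
Set fso = CreateObject("Scripting.FileSystemObject")
Set ts = fso.CreateTextFile("C:\temp\output.txt", True)   ' True = overwrite
With Selection
    For r = 1 To .Rows.Count
        s = ""
        For c = 1 To .Columns.Count
            If c > 1 Then s = s & vbTab
            s = s & CStr(.Cells(r, c).Value)
        Next
        ts.WriteLine s
    Next
End With
ts.Close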

If so then using a new wkb instead of a temp wks would be appropriate.

Right. That is what I concluded as well. And another benefit is: it might
avoid triggering a recalculation cycle of the current workbook, which I
believe adding and deleting a worksheet does. (In my implementation, that
happens only once when I restore oldCalc, if it was Automatic.)

But at that point, I was unsure what the performance issues might be; and I
wrestled with the question of whether to create the new workbook in the same
instance of Excel or in a new instance, which would further isolate any other
unforeseen side-effects.

In the final analysis, I felt that I had worked long enough on the
implementation, and it would be better to defer such changes (and
investigations) until Jennifer provides feedback, if any, about the behavior
and performance of the current implementation.

You always seem to put tremendous effort and valuable content into your
replies! -Much appreciated.

Thanks. That is welcomed feedback right now. Some golem in another forum
characterized my motivations as just "to sound clever". Nothing could be
further from the truth. I do this first because it is fun, second for my
own edification, and third to help others -- although it is a close tie
among all three.
 
