G'day "leightonwalter" <[email protected]>,
Ahh sorry m8, missed that part of the spec in my haste. As for your
solution, you've nailed it on ya own! Good job! It does seem
repetitive, but it is spot on. At best you could tighten it to:
With Selection
    ' Reset the selection's font back to whatever its style defines
    .Font.Name = ActiveDocument.Styles(.Style).Font.Name
End With
Hardly worth it, though.
FYI only: Word _interprets_ the source into an in-memory Word
document thingy. Thus your source architecture is unattainable from
within Word. However, it CAN be inferred from the document, as the
document is created from the source using a consistent (well, sorta,
not as consistent as we would really like) methodology.
As CSS defines styles, the results can be found inside Word's
equivalent, as you discovered for yourself.
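For instance, a quick sketch that dumps what Word holds for each
style - the nearest thing it has to your stylesheet (purely
illustrative):

Sub ListStyleFonts()
    Dim sty As Style
    ' Walk every style definition in the document and print its font basics
    For Each sty In ActiveDocument.Styles
        Debug.Print sty.NameLocal, sty.Font.Name, sty.Font.Size
    Next sty
End Sub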
However, there ARE times when we really do need the proper definition.
There are a few ways of attaining this.
VBA Open File
____________
Rant: We step back 30 years in technology levels and manually process
every line via an Input# after an Open File For BoringRead as
AfghaniHashOne (minimal sketch below).
Pros: Maximum flexibility and guaranteed transparency to the data
Cons: Massive amounts of yestercentury coding, S L O W
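That minimal sketch, assuming your source lives in a plain text file
(the path is a placeholder, and I'm using Line Input # so each read
grabs a whole line):

Sub ReadSourceTheOldWay()
    Dim fileNum As Integer
    Dim lineText As String

    fileNum = FreeFile
    Open "C:\docs\source.htm" For Input As #fileNum   ' placeholder path
    Do While Not EOF(fileNum)
        Line Input #fileNum, lineText   ' one raw line of the source
        ' ...process lineText here...
    Loop
    Close #fileNum
End Sub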
Open into Word Doc as a Text file
__________________________
Rant: Interesting halfway house, this one (sketch below).
Pros: Simplicity of coding, great speed (assuming sensible
architecture for the environment), good transparency to the data
Cons: Still yestercentury sequential processing, upgraded to a query
search methodology (pre-RDB).
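Something like this (the path and the search string are placeholders;
the point is that Word never interprets the source, it just holds the
raw text for querying):

Sub OpenSourceAsText()
    Dim doc As Document
    Dim rng As Range

    ' Open the raw source as plain text so Word does NOT interpret it
    Set doc = Documents.Open(FileName:="C:\docs\source.htm", _
                             Format:=wdOpenFormatText, _
                             ConfirmConversions:=False)
    Set rng = doc.Content
    With rng.Find
        .Text = "font-family"   ' placeholder search string
        .Forward = True
        .MatchCase = False
        If .Execute Then Debug.Print "Found at character " & rng.Start
    End With
End Sub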
InfoPath
______
Rant: MS are trying to give us an XML processing interface. It's kinda
cool, but I personally find it a bit yestercentury. That is natural
though, coz the modern equivalent is XSLT and this is the 'other side'
to the XML processing story (rough XML sketch below).
Pros: Architect correctly and achieve light speed.
Cons: OK transparency to the data, potentially short-lived base
technology, re-coding the same wheel as many others due to the
current skeletal nature of the supporting API.
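If you'd rather poke the XML side from plain VBA instead of InfoPath
itself, a rough sketch using MSXML (the file path and the XPath query
are made-up placeholders; needs a reference to "Microsoft XML, v6.0"):

Sub QuerySourceAsXml()
    Dim xmlDoc As MSXML2.DOMDocument60
    Dim node As MSXML2.IXMLDOMNode

    Set xmlDoc = New MSXML2.DOMDocument60
    xmlDoc.async = False
    If xmlDoc.Load("C:\docs\source.xml") Then          ' placeholder path
        ' XPath straight to the definition we are after - placeholder query
        Set node = xmlDoc.SelectSingleNode("//style[@name='Heading 1']")
        If Not node Is Nothing Then Debug.Print node.XML
    Else
        Debug.Print xmlDoc.parseError.reason
    End If
End Sub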
So really, all these approaches are kinda amusing from an information
access POV.
<It IS Friday here already in Aus, so beware!>
I find it all perfectly analogous to quantum mechanics compared to
Newtonian physics.
Any given document, from a document processor's frame of reference, is
in an unknown state, and it is only our act of observation that
defines it. Indeed, we have so many manners of observation that we can
form contradictory conclusions about some documents, depending on the
ways in which we examine them. Our observation predicates the event.
If we take a traditional sequential processing approach, we view
paragraphs. Long live RTF 1, 2, HTML and friends.
If we take a structured information approach, we view rigidly
hierarchical models with a static baseline that are capable of
identifying without context. Long live XML, SmartTags and friends.
If we take a natural language processing approach, we view an almost
chaotic blend of boundaries interweaving to present a woven thread of
knowledge when read sequentially.
Several authors, including myself, have tried to describe indexing
documents by using a colour analogy, with each colour being an
indexable topic. My particular take, Reader's Digest version, is that
it is reasonably fractal. Long live NLP; it will outlive all the
rest.
So, in any given situation in the horrible NOW we exist in, you HAVE
to decide whether you want the equations to be wavelike (XML) or
particulate (RTF/HTML), because we do not have the document
equivalent of SQUIDs yet; NLP is still in its infancy. Most of the
NLP gurus seem to be caught up in neural network learning
architectures in order to solve the dynamics of the instinctive human
language mechanism instead of applying the rules they themselves
learnt. Be taught everything in context.
Steve hawking up a juicy golly Hudson - Word Heretic
steve from wordheretic.com (Email replies require payment)
Without prejudice
leightonwalter reckoned: