Hi, Brendan.
So based on the number of posts marked as answers via Microsoft's
web-based interface, one would conclude that about 9.3% of my posts
answer the question asked.
You've made four assumptions that make your conclusion about your success
rate unfair and unrealistic.
Assumption #1: Every post has a chance to earn one answer.
It doesn't. For example: a question is posted and you reply to it. The OP
adds more information in a second post. You reply again, and the OP marks
your response as the answer to his question. So there's one question, one
answer marked, and four posts, two of which are yours.
What is your success rate?
A. 0%
B. 25%
C. 50%
D. 100%
The answer is D. No matter how many posts are made to the thread, and no
matter how many of those posts are yours, only one answer can be awarded
per poster per question -- and you earned it, so you have a 100% success
rate in this example.
So when calculating your success rate, you need to count the total number
of threads you participated in, not the total number of posts you made.
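To make the counting rule concrete, here's a minimal sketch. The thread
data is hypothetical (invented for illustration); each entry records how
many posts you made in a thread and whether one of your replies there was
marked as the answer:

```python
# Hypothetical thread records: posts you made, and whether one of your
# replies in that thread was marked as the answer. Only one answer can be
# awarded per poster per question, so we score per thread, not per post.
threads = [
    {"your_posts": 2, "your_reply_marked": True},   # the example above
    {"your_posts": 1, "your_reply_marked": False},
    {"your_posts": 3, "your_reply_marked": True},
]

total_posts = sum(t["your_posts"] for t in threads)
threads_participated = len(threads)
answers_earned = sum(t["your_reply_marked"] for t in threads)

# The per-post rate understates your success; the per-thread rate is fair.
print(f"per-post:   {answers_earned / total_posts:.0%}")           # 33%
print(f"per-thread: {answers_earned / threads_participated:.0%}")  # 67%
```

Same person, same answers earned, yet dividing by posts instead of threads
cuts the apparent success rate in half.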
Assumption #2: That every question posted can have replies marked as
answers.
Unless you've discriminated against questions posted by UseNet newsgroup
subscribers, Google Groups, AccessMonster.com, et cetera, and only
answered questions posted via the Microsoft Web newsreader, there are
quite a number of questions for which you are relying on an MVP to come
around and mark the replies. The original poster can't mark replies to
his question as answers unless he was signed into the Microsoft Online
Community when he posted it. Historically, the chance of an Access MVP
marking a non-Microsoft Online Community post approaches 0%, so
realistically one can never count on it.
Allowing into the statistical pool a large number of questions that will
always have a 0% success rate deceives one into thinking that one failed
on those questions. It's like my claiming, "I can't get
Brendan Reynolds to shake my hand," when in fact we've never been in the
same room, so there's never been an opportunity. It would be totally
unfair of me to make such a claim.
So, to be fair in calculating your success rate, only consider those
questions you participated in that have a reasonable opportunity to be
marked by the OP as having an answer. Those questions would be the ones
submitted via the Microsoft Web newsreader by members of the Microsoft
Online Community.
Assumption #3: That Google Groups gives an accurate count of posts (or
threads).
It's not accurate, but it does give a ballpark count. Google's search
engine is built to use keyword indices to optimize searches, but since
the upgrade to the "Google Groups Beta" version, a search on an
individual poster drops dozens or even hundreds of threads unless one
searches in two-week increments and accumulates those counts over a
period of time to calculate the total.
Assumption #4: That no conclusions may be drawn from the data unless the
sample consists of the entire population, or most of it.
We don't need everyone who can give feedback to actually do so before we
can determine trends and draw reasonable conclusions from their feedback.
I just checked the most recent data downloaded (1 Jan. '06 through 30 Mar.
'06 for the 12 Access newsgroups mentioned earlier), and for the 13,268
questions where the OP _could_ have marked answers, 2,811, or 21.2%,
actually did. That's a fairly large sample size, but when we draw
conclusions about a sample of the data, we need to also calculate the
theoretical margin of error so that we can determine how reliable those
conclusions are.
For example, if we took a poll of a group of registered voters and asked
them how they would vote on a law legalizing the death penalty if an
election were held today, and the poll results were as follows:
65% against the death penalty
30% for the death penalty
5% I don't know
+/- 3.5% margin of error
. . . then we can conclude that it would take a miracle for this law to
pass today. Even if we subtracted the margin of error from the
respondents against the death penalty, added it to the respondents for
the death penalty, and generously added all of the "I don't knows" to the
"for" group (61.5% vs. 38.5%), there are still too many against the death
penalty in this example for the law to have any hope of passing.
That said, we can calculate the reliability of the figures (the
theoretical margin of error) we have available with mathematical
equations. (One can use a calculator if one doesn't know the equations
off-hand.) When I plug the numbers (2,811 for the sample size of
"questions marked as answers" in the population of 13,268 "questions that
could be marked by the OP") into the "Margin of Error Calculator" on the
following Web page, it comes out to +/- 1.64% theoretical margin of error,
with a confidence level of 95%, meaning this sample size is sufficient for
most purposes:
http://www.americanresearchgroup.com/moe.html
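As a sanity check on that calculator, here's a sketch of the standard
margin-of-error formula with a finite population correction, which is
what calculators of this kind typically compute. It assumes a 95%
confidence level (z = 1.96) and the worst-case response proportion
p = 0.5; with those assumptions it reproduces the +/- 1.64% above:

```python
import math

def margin_of_error(n, population, z=1.96, p=0.5):
    """Margin of error (as a fraction) for a sample of size n drawn from
    a finite population, at confidence z with response proportion p."""
    # Basic margin of error for an infinite population...
    moe = z * math.sqrt(p * (1 - p) / n)
    # ...shrunk by the finite population correction.
    fpc = math.sqrt((population - n) / (population - 1))
    return moe * fpc

# 2,811 marked questions out of the 13,268 that could have been marked:
print(round(margin_of_error(2811, 13268) * 100, 2))  # -> 1.64
```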
If you don't think a confidence level of 95% is high enough to be certain
that any conclusions can be drawn from the data, then you can require a
higher level by using another calculator, such as the one on the following
Web page:
http://www.raosoft.com/samplesize.html
If you plug in a 2% acceptable margin of error (at the top) and 98 for
the confidence level (down at the bottom right corner), you only need a
sample size of 2,696 (so our sample has more than enough); but if we
wanted a 99% confidence level, then we would need a larger sample of at
least 3,160.
We only have a sample large enough for a maximum 98.3% confidence level
for the past three months' questions. (We can use a longer period of
time, but I figure we should deal with a manageable time period that can
even out any short-term anomalies, such as everyone being off for the
holidays or all the MVP's lounging at the Summit.) Therefore, if we use
at least a three month period, we have a statistically significant count
of which replies the questioners think are answers to their questions.
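For the curious, the required-sample-size figures can be reproduced with
the usual formula behind such calculators (again with a finite population
correction). This is a sketch under the same assumptions as before:
worst-case proportion p = 0.5, and the standard normal z-scores 2.3263
for 98% confidence and 2.5758 for 99%:

```python
import math

def required_sample_size(population, moe, z, p=0.5):
    """Smallest sample size that achieves the desired margin of error
    (a fraction) for a finite population, at confidence z."""
    x = z * z * p * (1 - p)
    n = population * x / ((population - 1) * moe ** 2 + x)
    return math.ceil(n)

# 2% margin of error over the 13,268 markable questions:
print(required_sample_size(13268, 0.02, 2.3263))  # 98% confidence -> 2696
print(required_sample_size(13268, 0.02, 2.5758))  # 99% confidence -> 3160
```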
So for a fair calculation of your actual success rate in having your
replies marked as answers, my advice is to avoid the assumptions listed
above. And realize that there's some luck involved because some people
refuse to mark answers no matter how fabulous the responses are. I laugh
at one guy every time I see his name in the newsgroups, because he's here
for one thing, and one thing only, so don't get in his way: ;-)
http://groups.google.com/group/micr...03df/27b0ee45c49cbf38?&hl=en#27b0ee45c49cbf38
Please note that the guy was already signed in, so instead of clicking on
either of the "Yes" or "No" buttons for "Did this post answer your
question?" he posted a reply to explain why he wouldn't bother, which
takes much longer and is a lot more trouble than just clicking on a
button. And since he didn't mark a reply, anyone else with the exact same
question won't find it in the answer database (the thread has since
expired off the server), so he'll post the question again and wait for
someone else to post the same answer again.
And you can say that people can always find it in Google Groups, but look
closely and count how many different ways Google mangled that thread, like
losing a post and scrambling the order in which the messages were posted,
so you have to expand each post's options and jump from post to post based
upon the time posted, not the vertical sequential order of the thread's
posts. (Maybe Google will fix this and make me a liar, but I've seen this
enough times that I'll complain about it.)
These figures do not change my opinion that the number of posts marked as
answers via Microsoft's web-based interface is not an accurate reflection
of the poster's contribution.
I agree that a poster's actual contribution isn't just the number of posts
or the number of posts marked as answers, but the time, skills and
knowledge offered to others to help them solve problems. The number of
questions answered as indicated in Microsoft's Web newsreader reflects how
many times the questioner felt he'd received help and how often he took
the time to provide this feedback. So this is one method of measurement
of the "degree of helpfulness" in comparison to other posters in the
newsgroups, with those at the top of the list (higher numbers) being
indicative as helpful more often than those at the bottom of the list
(lower numbers).
In that context, the records in Microsoft's Web newsreader reveal that
Rick B is one of the most helpful posters over the past 22 months,
regardless of his attitude in some of those posts. And he has plenty of
good company.
HTH.
Gunny
See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.