Rick B....do you enjoy being rude?


Brendan Reynolds

Hi Gunny,

My mistake was that I was looking only at posts marked 'helpful', not at
posts marked as answers. I had not realised that there was a difference.

I followed the 'Why should I rate a post?' link and read what was said
there. It says that one can obtain more information about a poster by
clicking the poster's display name, but when I tried that with my display
name and with yours, all I got was a pop-up window that said 'no information
available'.

I think the subject is probably of interest to others, and we should try to
keep the discussion in the newsgroup in so far as that is possible, but you
are welcome to send e-mail if you wish. The address is the first four
letters of my given name followed by the first four letters of my family
name at brinkster dot net.
 

'69 Camaro

Hi, Brendan.
I had not realised that there was a difference.

It's rather confusing how and why they built the Web newsreader tools the
way they did. Reading the online help is a must unless one wants to do a
lot of trial and error.
when I tried that with my display name and with yours, all I got was a
pop-up-window that said 'no information available'.

Neither of us was signed in as a member of the Microsoft Online Community
via the Web newsreader when we posted our messages to UseNet, so the Web
newsreader doesn't have an assigned member profile to display on either one
of us. Here's a recent thread with two members, Tom Wickerath and Ken
Sheridan, who were signed in when they posted their messages, so you can see
what the authors' profiles look like.

http://www.microsoft.com/office/com...cess&mid=2e2f0cc4-7fa7-4e6f-b6b6-346c1f228224
you are welcome to send e-mail if you wish.

Thank you. I sent you the zipped spreadsheet.

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 

TC

'69 Camaro said:
Hi, TC.


The good news is that any MVP can sign in to the Microsoft Online Community
and mark answers as helpful. You and all of the other MVP's have had the
power to make that statistic significant.

Have you done so?

Nope. I find the MS web interface inefficient, so I am not inclined to
use it. I only found out about it fairly recently. Comparing it with
Google, there's no way that I would swap to it (from Google) yet.

If you have, since we can see it's a work in progress, do
you have an estimated date of completion? If not, then why do you complain
that it doesn't meet your standards when you haven't made an effort to bring
it up to your standards?

Huh? I'm not complaining to anyone, about anything. I've used the
google interface for years. I will continue using it until something
better comes along. The MS web interface is not "it" yet, IMHO. I am
under no personal obligation to use the MS interface, AFAIK.

TC (MVP Access)
http://tc2.atspace.com
 

Brendan Reynolds

Thanks Gunny.

According to the figures you sent me, which I do not doubt, 126 of my
messages posted to the microsoft.public.access newsgroup between 14 June
2004 and 28 March 2006 were marked as answers. According to Google, I posted
1,350 messages to that newsgroup in that timeframe. So based on the number
of posts marked as answers via Microsoft's web-based interface, one would
conclude that about 9.3% of my posts answer the question asked.
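The arithmetic behind that figure is just the quoted counts, sketched here in Python:

```python
# The counts quoted above: 126 posts marked as answers, 1,350 posts total.
marked, posted = 126, 1350
print(f"{100 * marked / posted:.1f}%")  # → 9.3%
```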

What is the reality? I'm not about to trawl through 1,350 posts to find out,
but I looked at a sample, and my estimate is that somewhere in the region of
75% of my posts answer the question asked.

These figures do not change my opinion that the number of posts marked as
answers via Microsoft's web-based interface is not an accurate reflection of
the poster's contribution.
 

aaron.kempf

i think that the whole MVP thing is a farce; i mean--- Microsoft only
grants MVPs to people that follow GROUPTHINK

when was the last time that we heard a MVP really call out Microsoft?
like, for example
-----------------------------------------------
Hey, Microsoft-- when are you going to fix SQL authentication?
-----------------------------------------------

I just think that the whole concept of an MVP is to be VALUABLE. And in
a closed, proprietary system, I argue that the only way to be VALUABLE
is to kick and scream at those lazy jerks out in Redmond -- since
Redmond has all the power, what is the point of <I>only awarding MVPs
to people that agree with you</I>?

I think that we should create a new category called
MVDS

Most
Valuable
Drill
Sergeant

And when we discover someone-- like myself for example-- that is
willing to STAND UP TO MICROSOFT AND DEMAND SOMETHING BETTER-- then we
should award them for their persistence.

When we find someone like 'Rick B' that might be rude here and there??
Maybe we should AWARD HIM INSTEAD OF TALKING TRASH TO HIM.

I think that MVPs should be independent of Microsoft; or the govt-- or
another company-- or SOMEONE should come out with some type of award to
recognize people that are HELPFUL but not pleasant and peachy and
innocent and bland.

I don't think that I've ever met a single Access MVP that knows jack
shit about SQL Server.

I've never met a single MVP that is willing to KICK AND SCREAM ABOUT
THIS ABUSIVE FAT LAZY COMPANY ACROSS THE POND (Lake Washington that is
lol)

Anyone that is developing Access without using SQL Server should be
drawn and quartered.
Literally you MDB assholes should be shot and fed in stew to the poor
people.

I believe that in its treatment of Access MVPs, Microsoft consistently
reinforces the MDB concept.

THIS MDB CONCEPT IS OBSOLETE AND YOU LAME ACCESS MVPs SHOULD BE SHOT
FOR SCARING PEOPLE AWAY FROM A REAL DATABASE.


WHY DOESNT MICROSOFT GIVE MVPs TO ADP DEVELOPERS?
WHY DOESNT MICROSOFT GIVE MVPs TO ADP DEVELOPERS?
WHY DOESNT MICROSOFT GIVE MVPs TO ADP DEVELOPERS?


Why doesn't Microsoft give MVPs to people that submit bugs? Why
doesn't Microsoft keep track of how many SqlWish each person has
submitted?

That seems 100 times more VALUABLE than this cheesy whiteboard where we
call each other names when we disagree

People that INNOVATE-- are the people that should be rewarded.



-Aaron
 

'69 Camaro

Hi, Brendan.
So based on the number of posts marked as answers via Microsoft's
web-based interface, one would conclude that about 9.3% of my posts answer
the question asked.

You've made four assumptions that make your conclusion about your success
rate unfair and unrealistic.

Assumption #1: Every post has a chance to earn one answer.

It doesn't. For example, a question is posted and you reply to it.
Additional information is added in the OP's second post. You reply again
and the OP marks your response as an answer to his question. So there's one
question, one answer marked, and four posts, two of which are yours. What
is your success rate?

A. 0%
B. 25%
C. 50%
D. 100%

The answer is D. No matter how many posts are made to the thread, and no
matter how many of those posts are yours, only one answer can be awarded per
poster per question -- and you earned it, so you have a 100% success rate in
this example.

So when calculating your success rate, you need to count the total number of
threads you participated in, not the total number of posts.

Assumption #2: That every question posted can have replies marked as
answers.

Unless you've discriminated against questions posted via UseNet newsgroup
subscribers, Google Groups, AccessMonster.com, et cetera, and answered only
questions posted via the Microsoft Web newsreader, there are quite a number
of questions for which you are relying on an MVP to come around and mark the
replies. The original poster can't mark replies to his question as answers
unless he was signed into the Microsoft Online Community when he posted it.
Historically, the chance of an Access MVP marking a non-Microsoft Online
Community post approaches 0%, so realistically one can never count on it.

Allowing a large number of questions into the statistical pool that will
always have a 0% success rate will deceive one into thinking that one always
failed on those questions. It's like my claiming, "I can't get Brendan
Reynolds to shake my hand," when in fact we've never been in the same room,
so there's never been an opportunity. It would be totally unfair of me to
make such a claim.

So, to be fair in calculating your success rate, only consider those
questions you participated in that have a reasonable opportunity to be
marked by the OP as having an answer. Those questions would be the ones
submitted via the Microsoft Web newsreader by members of the Microsoft
Online Community.

Assumption #3: That Google Groups gives an accurate count of posts (or
threads).

It's not accurate, but it does give a ballpark count. Google's search
engine is built around keyword indices to optimize searches, but since they
enhanced it with the "Google Groups Beta" version, it drops dozens or even
hundreds of threads for an individual poster being searched on, unless one
searches in two-week increments and accumulates those counts over a period
of time to calculate the total.

Assumption #4: That unless a data sample consists of the entire population,
or most of it, no conclusions may be drawn from the data.

We don't need everyone who can give feedback to actually do so before we can
determine trends and draw reasonable conclusions from their feedback. I
just checked the most recent data downloaded (1 Jan. '06 through 30 Mar. '06
for the 12 Access newsgroups mentioned earlier), and for the 13,268
questions where the OP _could_ have marked answers, 2,811, or 21.2%,
actually did. That's a fairly large sample size, but when we draw
conclusions about a sample of the data, we need to also calculate the
theoretical margin of error so that we can determine how reliable those
conclusions are.

For example, if we took a poll of a group of registered voters and asked
them how they would vote on a law legalizing the death penalty if an
election were held today, and the poll results were as follows:

65% against the death penalty
30% for the death penalty
5% I don't know
+/- 3.5% margin of error

. . . then we can conclude that it would take a miracle for this law to be
passed today. Even if we subtracted the margin of error from the group of
responders against the death penalty and added this to the group of
responders for the death penalty and generously added all of the "I don't
knows" to the group of responders for the death penalty, 61.5% vs. 38.5%,
there are still too many against the death penalty in this example for the
law to have any hope of passing.

That said, we can calculate the reliability of the figures (the theoretical
margin of error) we have available with mathematical equations. (One can
use a calculator if one doesn't know the equations off-hand.) When I plug
the numbers (2,811 for the sample size of "questions marked as answers" in
the population of 13,268 "questions that could be marked by the OP") into
the "Margin of Error Calculator" on the following Web page, it comes out to
+/- 1.64% theoretical margin of error, with a confidence level of 95%,
meaning this sample size is sufficient for most purposes:

http://www.americanresearchgroup.com/moe.html
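If you'd rather check the arithmetic than take the calculator's word for it, here's a rough sketch in Python. It assumes the standard normal-approximation formula with the worst-case proportion p = 0.5, z = 1.96 for a 95% confidence level, and a finite-population correction, which appears to match what that calculator does:

```python
import math

def margin_of_error(n, N, p=0.5, z=1.96):
    """Normal-approximation margin of error with finite-population correction."""
    moe = z * math.sqrt(p * (1 - p) / n)   # simple-random-sample term
    fpc = math.sqrt((N - n) / (N - 1))     # finite-population correction
    return moe * fpc

# 2,811 marked questions out of the 13,268 that could have been marked
print(round(100 * margin_of_error(2811, 13268), 2))  # → 1.64
```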

If you don't think a confidence level of 95% is high enough to be certain
that any conclusions can be drawn from the data, then you can require a
higher level by using another calculator, such as the one on the following
Web page:

http://www.raosoft.com/samplesize.html

If you plug in a 2% acceptable margin of error (at the top) and 98 for the
confidence level (down at the bottom right corner), you only need a sample
size of 2,696 (so our sample has more than enough), but if we wanted a 99%
confidence level, then we would need a larger sample of at least 3,160. We
only have a sample large enough for a maximum 98.3% confidence level for the
past three months' questions. (We can use a longer period of time, but I
figure we should deal with a manageable time period that can even out any
short-term anomalies, such as everyone being off for the holidays or all the
MVP's lounging at the Summit.) Therefore, if we use at least a three month
period, we have a statistically significant count of which replies the
questioners think are answers to their questions.
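Those two sample sizes can be reproduced with the same sort of back-of-the-envelope formula. This sketch again assumes the worst case p = 0.5 and the usual two-tailed z-scores (roughly 2.3263 for 98% confidence and 2.5758 for 99%); the Web calculator may round a little differently:

```python
import math

def required_sample_size(N, e, z):
    """Smallest sample giving margin of error e for a population of N,
    assuming the worst-case proportion p = 0.5."""
    n0 = z * z * 0.25 / (e * e)                # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction

print(required_sample_size(13268, 0.02, 2.3263))  # 98% confidence → 2696
print(required_sample_size(13268, 0.02, 2.5758))  # 99% confidence → 3160
```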

So for a fair calculation of your actual success rate in having your replies
marked as answers, my advice is to avoid the assumptions listed above. And
realize that there's some luck involved because some people refuse to mark
answers no matter how fabulous the responses are. I laugh at one guy every
time I see his name in the newsgroups, because he's here for one thing, and
one thing only, so don't get in his way: ;-)

http://groups.google.com/group/micr...03df/27b0ee45c49cbf38?&hl=en#27b0ee45c49cbf38

Please note that the guy was already signed in, so instead of clicking on
either of the "Yes" or "No" buttons for "Did this post answer your
question?" he posted a reply to explain why he wouldn't bother, which takes
much longer and is a lot more trouble than just clicking on a button. And
since he didn't mark a reply, if anyone else has the same exact question, he
won't find it in the answer database because the thread has since expired
off the server, so he'll post the question again and wait for someone else
to post the same answer again.

And you can say that people can always find it in Google Groups, but look
closely and count how many different ways Google mangled that thread, like
losing a post and scrambling the order in which the messages were posted, so
you have to expand each post's options and jump from post to post based upon
the time posted, not the vertical sequential order of the thread's posts.
(Maybe Google will fix this and make me a liar, but I've seen this enough
times that I'll complain about it.)
These figures do not change my opinion that the number of posts marked as
answers via Microsoft's web-based interface is not an accurate reflection
of the poster's contribution.

I agree that a poster's actual contribution isn't just the number of posts
or the number of posts marked as answers, but the time, skills and knowledge
offered to others to help them solve problems. The number of questions
answered as indicated in Microsoft's Web newsreader reflects how many times
the questioner felt he'd received help and how often he took the time to
provide this feedback. So this is one method of measuring the "degree of
helpfulness" in comparison to other posters in the newsgroups, with those at
the top of the list (higher numbers) being helpful more often than those at
the bottom of the list (lower numbers).

In that context, the records in Microsoft's Web newsreader reveal that Rick
B is one of the most helpful posters over the past 22 months, regardless of
his attitude in some of those posts. And he has plenty of good company.

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 

david epsom dot com dot au

The bottom line is, the first answer gets the prize.

So this is a good way of counting who is burning to get the answer up first.

But not very good for counting who puts up the best answer.

Statistically, I accept that your numbers are a good estimate of who put up
the first answer to questions placed through the Microsoft Web Interface. It
is accepted that users using the MWI are less sophisticated than users using
other interfaces, so statistically, I accept that your numbers are a good
estimate of who put up the first answer to simple questions.

Obviously, given the number of unmarked answers, MVPs could bias the result
in either direction.

I would be amazed if anyone with a serious interest in the news group
content would also like to spend time marking answers.

I think that if MS would like MVPs to provide this free service, they still
need to work on the interface, to make it attractive to sophisticated
users.

Actually, never mind that: they need to start by just fixing it so that it
is not unbearably slow.

(david)




'69 Camaro said:
[long quoted post snipped]
 

Brendan Reynolds

Hi Gunny,

I'm afraid I remain unconvinced - but I do stand in awe of your tenacity on
the subject! I'm afraid we're going to have to agree to disagree on this
one. I wish you well.

--
Brendan Reynolds
Access MVP

'69 Camaro said:
Hi, Brendan.
So based on the number of posts marked as answers via Microsoft's
web-based interface, one would conclude that about 9.3% of my posts
answer the question asked.

You've made four assumptions that make your conclusion about your success
rate unfair and unrealistic.

Assumption #1: Every post has a chance to earn one answer.

It doesn't. For an example, a question is posted and you reply to it.
Additional information is added in the OP's second post. You reply again
and the OP marks your response as an answer to his question. So there's
one question, one answer marked, and four posts, two of which are yours.
What is your success rate?

A. 0%
B. 25%
C. 50%
D. 100%

The answer is D. No matter how many posts are made to the thread, and no
matter how many of those posts are yours, only one answer can be awarded
per poster per question -- and you earned it, so you have a 100% success
rate in this example.

So you need to count the total number of threads participated in, not the
total number of posts posted when calculating your success rate.

Assumption #2: That every question posted can have replies marked as
answers.

Unless you've discriminated against questions posted via UseNet newsgroup
subscribers, Google Groups, AccessMonster.com, et cetera, and only
answered questions posted via the Microsoft Web newsreader, then there are
quite a number of questions that you are relying on an MVP to come around
and mark the replies. The original poster can't mark replies to that
question as answers unless he was signed into the Microsoft Online
Community to post the question. Historically, the chances of an Access
MVP marking a non-Microsoft Online Community post approaches 0%, so
realistically one can never count on it.

Allowing a large number of questions into the statistical pool that will
always have a 0% success rate will decieve one into thinking that one
always failed on those questions. It's like my claiming, "I can't get
Brendan Reynolds to shake my hand," when in fact we've never been in the
same room, so there's never been an opportunity. It would be totally
unfair of me to make such a claim.

So, to be fair in calculating your success rate, only consider those
questions you participated in that have a reasonable opportunity to be
marked by the OP as having an answer. Those questions would be the ones
submitted via the Microsoft Web newsreader by members of the Microsoft
Online Community.

Assumption #3: That Google Groups gives an accurate count of posts (or
threads).

It's not accurate, but it does give a ballpark count. Google's search
engine is built to use the key word indices to optimize searches, but
since they enhanced it with the "Google Groups Beta" version, it drops out
dozens or even hundreds of threads for an individual poster being searched
on unless one uses two week increments and accumulates those counts over a
period of time to calculate the total count.

Assumption #4: That unless a data sample consists of the entire
population, or most of it, that no conclusions may be drawn from the data.

We don't need everyone who can give feedback to actually do so before we
can determine trends and draw reasonable conclusions from their feedback.
I just checked the most recent data downloaded (1 Jan. '06 through 30 Mar.
'06 for the 12 Access newsgroups mentioned earlier), and for the 13,268
questions where the OP _could_ have marked answers, 2,811, or 21.2%,
actually did. That's a fairly large sample size, but when we draw
conclusions about a sample of the data, we need to also calculate the
theoretical margin of error so that we can determine how reliable those
conclusions are.

For example, if we took a poll of a group of registered voters and asked
them how they would vote on a law legalizing the death penalty if an
election were held today, and the poll results were as follows:

65% against the death penalty
30% for the death penalty
5% I don't know
+/- 3.5% margin of error

. . . then we can conclude that it would take a miracle for this law to be
passed today. Even if we subtracted the margin of error from the group of
responders against the death penalty and added this to the group of
responders for the death penalty and generously added all of the "I don't
knows" to the group of responders for the death penalty, 61.5% vs. 38.5%,
there are still too many against the death penalty in this example for the
law to have any hope of passing.

That said, we can calculate the reliability of the figures (the
theoretical margin of error) we have available with mathematical
equations. (One can use a calculator if one doesn't know the equations
off-hand.) When I plug the numbers (2,811 for the sample size of
"questions marked as answers" in the population of 13,268 "questions that
could be marked by the OP") into the "Margin of Error Calculator" on the
following Web page, it comes out to +/- 1.64% theoretical margin of error,
with a confidence level of 95%, meaning this sample size is sufficient for
most purposes:

http://www.americanresearchgroup.com/moe.html

If you don't think a confidence level of 95% is high enough to be certain
that any conclusions can be drawn from the data, then you can require a
higher level by using another calculator, such as the one on the following
Web page:

http://www.raosoft.com/samplesize.html

If you plug in 2% acceptable margin of error (at the top), 98 for the
confidence level (down at the bottom right corner), you only need a sample
size of 2,696 (so our sample has more than enough), but if we wanted a 99%
confidence level, then we would need a larger sample of at least 3,160.
We only have a sample large enough for a maximum 98.3% confidence level
for the past three months' questions. (We can use a longer period of
time, but I figure we should deal with a manageable time period that can
even out any short-term anomolies, such as everyone being off for the
holidays or all the MVP's lounging at the Summit.) Therefore, if we use
at least a three month period, we have a statistically significant count
of which replies the questioners think are answers to their questions.

So for a fair calculation of your actual success rate in having your
replies marked as answers, my advice is to avoid the assumptions listed
above. And realize that there's some luck involved because some people
refuse to mark answers no matter how fabulous the responses are. I laugh
at one guy every time I see his name in the newsgroups, because he's here
for one thing, and one thing only, so don't get in his way: ;-)

http://groups.google.com/group/micr...03df/27b0ee45c49cbf38?&hl=en#27b0ee45c49cbf38

Please note that the guy was already signed in, so instead of clicking on
either of the "Yes" or "No" buttons for "Did this post answer your
question?" he posted a reply to explain why he wouldn't bother, which
takes much longer and is a lot more trouble than just clicking on a
button. And since he didn't mark a reply, if anyone else has the same
exact question, he won't find it in the answer database because the thread
has since expired off the server, so he'll post the question again and
wait for someone else to post the same answer again.

And you can say that people can always find it in Google Groups, but look
closely and count how many different ways Google mangled that thread, like
losing a post and scrambling the order in which the messages were posted,
so you have to expand each post's options and jump from post to post based
upon the time posted, not the vertical sequential order of the thread's
posts. (Maybe Google will fix this and make me a liar, but I've seen this
enough times that I'll complain about it.)

These figures do not change my opinion that the number of posts marked as
answers via Microsoft's web-based interface is not an accurate reflection
of the poster's contribution.

I agree that a poster's actual contribution isn't just the number of posts
or the number of posts marked as answers, but the time, skills and
knowledge offered to others to help them solve problems. The number of
questions answered as indicated in Microsoft's Web newsreader reflects how
many times the questioner felt he'd received help and how often he took
the time to provide this feedback. So this is one method of measurement
of the "degree of helpfulness" in comparison to other posters in the
newsgroups: those at the top of the list (higher numbers) were marked as
helpful more often than those at the bottom of the list (lower numbers).

In that context, the records in Microsoft's Web newsreader reveal that
Rick B is one of the most helpful posters over the past 22 months,
regardless of his attitude in some of those posts. And he has plenty of
good company.

HTH.
Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 
F

Frustrated

You simply have too much time on your hands.....that is the problem. Though
you may think you have solved some great mystery you are wrong. And....as
much as I would like to continue to debate this matter with you, I can't...or
else it may seem I too have little else to occupy my time.
 
R

Rick Brandt

Brendan Reynolds said:
Hi Gunny,

I'm afraid I remain unconvinced - but I do stand in awe of your tenacity on
the subject! I'm afraid we're going to have to agree to disagree on this one.
I wish you well.

I agree with you Brendan. People in these groups should be well aware that
poorly sampled data is worse than no data in most cases. (GIGO) :)
 
6

'69 Camaro

Dear mdavis:

I have been reminded about another thread where young Hugh Betcha went
ballistic, and I have considered, "Would it have killed me to have given
mdavis some slack?"

No. Of course not.

Let's wipe the slate clean, and you can post your questions under any name
you'd like and I will not reply with a link to this thread as a reminder of
your conduct. And ma'am, I apologize for my conduct. I can't speak for
anyone else on whether or not they, too, will wipe the slate clean, but from
my experience I can say that the people who frequent these halls are not
only extremely knowledgeable, they are extremely generous, too.

I'd like to clear up an important point, though. You requested that I check
the facts. I did, and I reported them here where you would have an
opportunity to correct me. Someone suggested that it was an invasion of
your privacy to have disclosed your ISP, which is currently the company you
work for. To clarify, with few exceptions one is not entitled to privacy of
information after she has publicly disclosed that information, which you
did, ma'am, when you posted via the Microsoft Online Community's Web
newsreader. One's ISP is not one of those exceptions. (Remember that check
box "I agree" that you had to mark before you could post your first message?
The text preceding that check box explained that you were publicly
disclosing information to UseNet, and listed that information. Obviously,
you agreed, or you couldn't have posted any messages using Microsoft's Web
newsreader.)

I have spent much of my life in the South, and I can say that the ladies in
Texas are the most gracious. I know that you will reconsider the situation
that prompted this thread and grant Rick some slack, too.

Again, my apologies ma'am, and please have a nice day.

Gunny

See http://www.QBuilt.com for all your database needs.
See http://www.Access.QBuilt.com for Microsoft Access tips and tutorials.
http://www.Access.QBuilt.com/html/expert_contributors2.html for contact
info.
 
