Gary said:
Dave:
I was comparing the usefulness of PERT to the usefulness of negotiated
buffers when it comes to hitting schedule dates. Please don't take that
to mean I'm dismissing all the wisdom behind critical chain theory.
The problem with PERT analysis is that it is continuously performed against
bad and non-improving estimates.
I'm assuming that you're assuming that organizations don't keep
historical data on which to base future duration estimates. Is that
correct? If so, I agree, in the vast majority of cases.
Because estimates improve only when
actuals are collected, ninety percent of the battle is establishing and then
closing the feedback loop. Whatever happened to focusing on hitting dates?
Not sure what you mean here. My experience is that there is often too
much focus on hitting dates and not enough on creating a realistic CPM
schedule.
Why do we need more than one target?
Again, I'm unclear. Where are you getting more than one target in any of
this?
I agree that Monte Carlo simulation is
a better way to manage schedule risk, but it assumes that you have historic
data to work with. Most people don't have meaningful historic data so
they're stuck with simpler voodoo approaches such as PERT.
I'm not an expert on Monte Carlo, but I don't recall in my reading, or
the software demo I saw some years ago, any requirement for historic
data. From what I've seen and read, it requires the same three
estimates as PERT, just processes them differently and with additional
input from the user on what kind of statistical distribution to use for
the analysis. Feel free to correct me if I'm wrong on this.
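To make that concrete, here is a rough sketch in Python of both calculations run from the same three-point guesses. The task names and numbers are made up, and the triangular distribution is just one possible choice for the user-selected shape:

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic), in days.
tasks = {
    "Design": (4, 6, 12),
    "Build":  (10, 15, 30),
    "Test":   (3, 5, 10),
}

# PERT: a single weighted mean per task, summed along the path.
pert_total = sum((o + 4 * m + p) / 6 for o, m, p in tasks.values())

# Monte Carlo: sample each task from a user-chosen distribution (triangular here)
# and look at the spread of path totals instead of a single number.
trials = sorted(
    sum(random.triangular(o, p, m) for o, m, p in tasks.values())
    for _ in range(10_000)
)

print(f"PERT path estimate:          {pert_total:.1f} days")
print(f"Monte Carlo 50th percentile: {trials[len(trials) // 2]:.1f} days")
print(f"Monte Carlo 80th percentile: {trials[int(len(trials) * 0.8)]:.1f} days")
```

Either way, the output is only as good as the three guesses feeding it, which is where historical data would help.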
However, I do believe that either method would be more accurate if
historical data were available to replace the "guesstimates" that are
usually used. I harp to all my clients, as much as I can get away with,
on the value of "Lessons Learned" (aka the seventh of the "Seven Habits
of Highly Effective People" by Covey, "Sharpen the Saw"). In our
culture, though, it's a hard sell. But LL is the only way I know to achieve L5!
The gap we're discussing is between planning projects and tracking projects,
also known as the gap between Level 2 and Level 3 maturity in the UC
Berkeley and Carnegie Mellon models. The fact that anyone can successfully
assign buffers to work paths and manage them full cycle is, in itself,
evidence of Level 3 maturity, just as simply possessing the data to run
Monte Carlo simulations would be.
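As a side note on what managing buffers "full cycle" usually means in critical chain practice, here is a minimal sketch of the common buffer-consumption check; the thresholds and snapshot numbers are made-up assumptions, not anything from this thread:

```python
def buffer_status(chain_done_pct: float, buffer_used_pct: float) -> str:
    """Compare how much of the buffer has been consumed against how much
    of the protected chain of work is complete."""
    if buffer_used_pct <= chain_done_pct:
        return "green: consumption is keeping pace with progress"
    if buffer_used_pct <= chain_done_pct + 30:  # made-up tolerance band
        return "yellow: plan recovery actions"
    return "red: the buffer is burning faster than work completes"

# Hypothetical status snapshots taken over a project's life.
for done, used in [(20, 10), (50, 55), (70, 95)]:
    print(f"chain {done}% done, buffer {used}% used -> {buffer_status(done, used)}")
```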
I've never seen a correlation between planning (Level 1), tracking
(Level 2), and buffers (Level 3). The material I've seen, admittedly
limited, was more generic than that. L1 is a Disciplined Process. That
could include planning, tracking, some sort of quality, etc. L2 is a
Standard, Consistent Process. L2 also could include planning, tracking,
quality, etc. L3 is a predictable Process, which could include
planning, tracking, etc. And so forth. I'd be very curious as to where
you got the correlations between the steps on scheduling and the CMM. I
know there are newer models out there, but I didn't know they'd changed
that much, if in fact they have.
So, is it
the maturity achievement or the novel technique of using buffers that's
providing the payoff?
I don't understand how using buffers makes the process L3, so I can't
comment. I could sit down and use buffers tomorrow and not be anywhere
near an L3 CMM capability. In my understanding of CMMs, I'd have to use
them consistently to achieve L2, and predictably to be L3. So I don't
know how to reply to your question.
Regardless of CMM level, I think there is an answer that applies to
this or most any organized endeavor. And that is that the techniques
used are USUALLY indicative of the maturity level, in that field, of
the user implementing them. IOW, someone using buffers is probably of
higher maturity in scheduling than someone using PERT as normally
defined (there is a way to use PERT to create a buffer, though probably
not a very good way; a sketch appears at the end of this message). But I don't think that a technique alone,
especially if used out of context, indicates a given level of process
maturity or will consistently deliver the desired results. That is, if
I observe someone using buffers, it doesn't mean they are applying the
principles of PM in a "mature" manner (L3 or above), or that they will
consistently bring in projects on schedule. They may have picked up the
technique somewhere early in their experience but still not be achieving
any kind of repeatability, much less optimization. So the evident
improvement comes from true maturity, not technique. After coming
at it the long way, it's the maturity achievement that provides the
payoff. Make sense?