Implementing Buffers in MS Project


Andrew Lavinsky

As per a conversation w/ davegb, I'm throwing this question out to the community....

Anyone have a good method for implementing schedule buffers in MS Project
(that they would be willing to share w/ the group?)

 

Gary L. Chefetz [MVP]

Andrew:

Going all Goldratt, are you? Unless you want to add tasks to your
workpaths that represent the buffer value for each task, there's no way to
represent formal Goldratt-style buffer values in Project and have them
affect the actual schedule and show up in the Gantt chart. You do know, of
course, that before we had the "critical chain" brand, the Program Evaluation
and Review Technique (PERT) satisfied this need fairly well for most of us.
 

Andrew Lavinsky

Just dabbling in the TOC world....and trying to shirk my other duties which
at the moment are much less entertaining than this.

So you say adding tasks into the Critical Path (flagged, I suppose) would
do it. Hmmm... I guess that would mean I would have to continuously update
their durations to account for changing buffer amounts?

I might throw out an alternate technique that I have toyed w/ but not used
yet: use deadlines to depict when the buffer should end. I flag the
task before the buffer as a buffer task, set the Deadline to when the buffer
should end, and then depict the Finish-to-Deadline time span for that task
as a bar in the Gantt Chart. This allows me to use custom fields to track
current vs. baseline buffer, i.e. (([Deadline]-[Finish]) / ([Deadline]-[Baseline
Finish])).
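
(To make the arithmetic of that field concrete, here's a minimal sketch in plain Python rather than Project, with made-up dates. Inside Project itself the equivalent would presumably be a Number custom field, something along the lines of ProjDateDiff([Finish],[Deadline]) / ProjDateDiff([Baseline Finish],[Deadline]), if memory serves on the function name.)

    # Illustration only: the "remaining buffer vs. planned buffer" ratio
    # behind the custom field idea above. Dates are invented for the example.
    from datetime import date

    def buffer_remaining_ratio(finish, baseline_finish, deadline):
        """1.0 = buffer untouched, 0.0 = fully consumed, negative = deadline blown."""
        planned_buffer = (deadline - baseline_finish).days   # [Deadline] - [Baseline Finish]
        current_buffer = (deadline - finish).days            # [Deadline] - [Finish]
        return current_buffer / planned_buffer if planned_buffer else float("nan")

    # Hypothetical task: baselined to finish 1 Jun, buffer ends 15 Jun,
    # now forecast to finish 8 Jun -> half the buffer remains.
    print(buffer_remaining_ratio(date(2024, 6, 8), date(2024, 6, 1), date(2024, 6, 15)))  # 0.5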

This would necessitate then slapping a SNET constraint on the subsequent
task - and making sure to document/flag, etc. so I can figure out what my
logic was when I created it. It does kill my Critical Path calculations,
but in theory, any added buffer would do that anyway.

Is there a reason why that wouldn't work....other than yes, the issue that
I am using SNET constraints, which is generally not a great idea?
 

davegb

Gary said:
Andrew:

Going all Goldratt, are you? Unless you want to add tasks to your
workpaths that represent the buffer value for each task, there's no way to
represent formal Goldratt-style buffer values in Project and have them
affect the actual schedule and show up in the Gantt chart. You do know, of
course, that before we had the "critical chain" brand, the Program Evaluation
and Review Technique (PERT) satisfied this need fairly well for most of us.

Interesting comment. I'm surprised that you'd compare PERT (Variance
Analysis) with critical chain. While PERT may have been useful before
we had computers to factor in the uncertainty of doing projects, it
doesn't compare in reality to critical chain and the proper use of
buffers, for two main reasons.

The first reason is something I discovered on my own many years ago when
"padding" tasks to account for the probability of something somewhere
going wrong. I found that when I padded the duration, whether I used PERT
to determine how much padding to add or just my own guesstimate, the
resources, 90% of the time, simply started working on the task that much
later. The padding was thrown away; it only added to the original duration
estimate and had no significant impact on the probability of success of
the overall project. I've checked this observation with thousands of
students in my classes over the years, and found it to be nearly
universally true. YMM have Vd.

Schedule buffer, OTOH, works much better. When you put a finite amount
of time at the end of the project, or of a major phase, it's there for
everyone to see and, if explained properly, for everyone to realize that
it's shared, not owned by any one resource working on a single task. And
anyone who uses a significant part of that shared resource of time will
have to face their co-workers. My experience is that most of us are used
to the wrath of our supervisors; that's part of what we're paid to put up
with at work. But few of us want to deal with the disapproval of our peers.
Again, you may disagree.
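
(As an aside, for anyone wanting to put a number on that "finite amount of time": the critical chain literature usually sizes such a shared buffer from the padding stripped out of the individual task estimates. A rough sketch of the two sizing rules I've seen most often, with invented numbers:)

    from math import sqrt

    safe = [10, 8, 12, 6]        # padded, "pretty sure" estimates (days)
    aggressive = [6, 5, 7, 4]    # stripped-down, ~50%-confidence estimates (days)

    removed = [s - a for s, a in zip(safe, aggressive)]   # padding taken out of each task

    cut_and_paste = sum(removed) / 2                      # half the removed padding
    root_sum_squares = sqrt(sum(p * p for p in removed))  # treats each task's padding as independent risk

    print(cut_and_paste, round(root_sum_squares, 1))      # 7.0 and about 7.3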

Second, I read an article a few years ago, which I lost somewhere along
the way. It was by a statistician who had done a statistical analysis
of the PERT method. (Not being much of a statistician myself, I can
only paraphrase him in layman's terms.) Basically, he claimed that PERT
is seriously flawed because it treats each task as a separate event,
rather than as part of a sequence of events, the sequence of course
being the task and all its predecessors and successors. The real
schedule risk in a project is not the likelihood of a single event
failing to meet expectations, but that of a chain of events failing to
meet them. And the probable damage (schedule slip and accompanying costs)
is far greater than PERT could possibly account for. In other words, a
PERT analysis would account for only a small fraction of the
statistically probable result of a significant schedule slip. It's the
opposite of the elephant-gun-to-kill-a-fly problem; it's using a fly
swatter to kill an elephant. (I'm sure you'd agree that going after an
elephant with a fly swatter is a bigger problem than going after a fly
with an elephant gun!)
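
To put rough numbers on that compounding (a made-up example, since I've lost the article): if each of ten tasks in a chain independently has a 90% chance of finishing on time, the chain as a whole finishes on time only about 0.9^10, or roughly 35%, of the time; at 80% per task it drops to around 11%. A task-by-task analysis never surfaces that.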

Now that we have computers to do the grunt work, Monte Carlo Analysis
is a far better way to deal with schedule risk because it directly
deals with exactly the issue of a chain of events missing their
deadlines and the overall, much larger impact that is likely. In fact,
it amazes me that PERT is even considered by PMI in certification. All
I can say is, old dogs hate giving up old tricks, even if the audience
is bored!
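
(For the curious, here's a bare-bones sketch of what such a simulation does; generic Python, made-up task estimates, and a triangular distribution picked arbitrarily, so treat it as an illustration rather than anyone's actual tool.)

    # Each task gets the same three estimates PERT uses; the whole chain is
    # then rolled thousands of times and the risk is read off the results.
    import random

    tasks = [  # (optimistic, most likely, pessimistic) durations in days -- invented
        (3, 5, 10),
        (4, 6, 14),
        (2, 4, 9),
        (5, 8, 20),
    ]

    def simulate_chain(runs=10_000):
        return [sum(random.triangular(o, p, m) for o, m, p in tasks) for _ in range(runs)]

    # PERT's single-number answer for the same chain: sum of (O + 4M + P) / 6.
    pert_expected = sum((o + 4 * m + p) / 6 for o, m, p in tasks)

    totals = sorted(simulate_chain())
    within_pert = sum(t <= pert_expected for t in totals) / len(totals)
    print("PERT expected duration: %.1f days" % pert_expected)
    print("Simulated odds of finishing within that: %.0f%%" % (100 * within_pert))
    print("80th-percentile finish: %.1f days" % totals[int(len(totals) * 0.8)])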

Hope this helps in your world.
 

Gary L. Chefetz [MVP]

Andrew:

I don't like the concept of negotiated buffers. I learned a long time ago
that every task, workpath or assignment (however you want to slice it) will
consume as many resource hours as it is given. Task deadlines are very
useful, and I have no problem with SNET constraints when applied
appropriately.




 

Gary L. Chefetz [MVP]

Dave:

I was comparing the usefulness of PERT to the usefulness of negotiated
buffers when it comes to hitting schedule dates. Please don't extend that
meaning to say I'm dismissing all the wisdom behind critical chain theory.
The problem with PERT analysis is that it is continuously performed against
bad, and non-improving estimates. Because estimates improve only when
actuals are collected, ninety percent of the battle is establishing and then
closing the feedback loop. Whatever happened to focusing on hitting dates?
Why do we need more than one target? I agree that Monte Carlo simulation is
a better way to manage schedule risk, but it assumes that you have historic
data to work with. Most people don't have meaningful historic data so
they're stuck with simpler voodoo approaches such as PERT.

The gap we're discussing is between planning projects and tracking projects,
also known as the gap between Level 2 and Level 3 maturity in the UC
Berkeley and Carnegie Mellon models. The fact that anyone can successfully
assign buffers to work paths and manage them full cycle is, itself, evidence
of level-three maturity, as is simply possessing the data to execute
Monte Carlo simulations. So, is it the maturity achievement or the novel
technique of using buffers that's providing the payoff?
 

davegb

Gary said:
Dave:

I was comparing the usefulness of PERT to the usefulness of negotiated
buffers when it comes to hitting schedule dates. Please don't extend that
meaning to say I'm dismissing all the wisdom behind critical chain theory.
The problem with PERT analysis is that it is continuously performed against
bad, and non-improving estimates.

I'm assuming that you're assuming that organizations don't keep
historical data on which to base future duration estimates. Is that
correct? If so, I agree, in the vast majority of cases.

Because estimates improve only when
actuals are collected, ninety percent of the battle is establishing and then
closing the feedback loop. Whatever happened to focusing on hitting dates?

Not sure what you mean here. My experience is that there is often too
much focus on hitting dates and not enough on creating a realistic CPM
schedule.
Why do we need more than one target?

Again, I'm unclear. Where are you getting more than one target in any of
this?

I agree that Monte Carlo simulation is
a better way to manage schedule risk, but it assumes that you have historic
data to work with. Most people don't have meaningful historic data so
they're stuck with simpler voodoo approaches such as PERT.

I'm not an expert on Monte Carlo, but I don't recall, in my reading or
the software demo I saw some years ago, any requirement for historic
data. From what I've seen and read, it requires the same three
estimates as PERT, but processes them differently, with additional
input from the user on what kind of statistical distribution to use for
the analysis. Feel free to correct me if I'm wrong on this.
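
For anyone following along, the textbook version of the difference (standard formulas, not specific to any one tool): classic PERT collapses the three estimates into a single number per task, E = (O + 4M + P) / 6, with a standard deviation of roughly (P - O) / 6, and then simply adds the E values along the path. A Monte Carlo run instead keeps all three estimates, fits whatever distribution the user picks (triangular, beta, etc.), samples a duration for every task, totals the chain, and repeats that thousands of times, so the chain-level risk comes out of the simulation rather than out of a per-task formula.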

However, I do believe that either method would be more accurate if
historical data were available to replace the "guesstimates" that are
usually used. I harp to all my clients, as much as I can get away with,
on the value of "Lessons Learned" (aka the seventh of the "Seven Habits
of Highly Effective People" by Covey, "Honing the Axe"). But in our
culture, it's a hard sell. Still, LL is the only way I know to achieve L5!

The gap we're discussing is between planning projects and tracking projects,
also known as the gap between Level 2 and Level 3 maturity in the UC
Berkeley and Carnegie Mellon models. The fact that anyone can successfully
assign buffers to work paths and manage them full cycle is, itself, evidence
of level-three maturity, as is simply possessing the data to execute
Monte Carlo simulations.

I've never seen a correlation between planning, Level 1, tracking,
Level 2, and buffers, Level 3. The material I've seen, admittedly
limited, was more generic than that. L1 is a Disciplined Process. That
could include planning, tracking, some sort of quality, etc. L2 is a
Standard, Consistent Process. L2 also could include planning, tracking,
quality, etc. L3 is a predictable Process, which could include
planning, tracking, etc. And so forth. I'd be very curious as to where
you got the correlations between the steps on scheduling and the CMM. I
know there are newer models out there, but I didn't know they'd changed
that much, if in fact they have.

So, is it
the maturity achievement or the novel technique of using buffers that's
providing the payoff?

I don't understand how using buffers makes the process L3, so I can't
comment. I could sit down and use buffers tomorrow and not be anywhere
near an L3 CMM capability. In my understanding of CMMs, I'd have to use
them consistently to achieve L2, and predictably to be L3. So I don't
know how to reply to your question.

Regardless of CMM level, I think there is an answer that applies to
this or most any organized endeavor. And that is that the techniques
used are USUALLY indicative of the level of maturity in that field of
the user implementing them. IOW, someone using buffers is probably of
higher maturity in scheduling than someone using PERT as normally
defined (there is a way to use PERT to create a buffer, though probably
not a very good way!). But I don't think that a technique alone,
especially if used out of context, indicates a given level of process
maturity or will consistently deliver the desired results. That is, if
I observe someone using buffers, it doesn't mean they are applying the
principles of PM in a "mature" manner (L3 or above), or that they will
consistently bring in projects on schedule. They may have picked up the
technique somewhere early in their experience but still not be achieving
any kind of repeatability, much less optimization. So the evident
improvement comes from true maturity, not technique. So, after coming
at it the long way, it's the maturity achievement that provides the
payoff. Make sense?
 

Gary L. Chefetz [MVP]

Dave:

We could tear up the airwaves with this discussion. I never suggested that
technique, alone, indicates a maturity level. That you can use one
consistently does. Even if it's a technique that I don't agree with.<g> For
instance, buffers don't make your schedule more realistic, but they do give
you something to talk about. Perhaps it's the conversation that matters?

Immature organizations do not maintain historical data because they don't
have any to maintain. If they did, they'd be somewhat more mature. I think
the original CMM model, which UC Berkeley adapted for the first PMM ever
proffered, is the closest to representing the essence of "Process Maturity."
The Berkeley PMM and original CMM model followed these steps: Ad hoc,
Planned, Managed, Integrated, Continuous Improvement. IMO, the key
difference between planned and managed is whether a plan is made or a plan
is worked. The latter dictates tracking. Closing the gap between planning
projects and tracking them (aka managing) is where most organizations
struggle the most and most often fail.

IMO, recent entrants into the Maturity Model fray, mostly aimed at selling
professional services, have obscured the original wisdom in applying one. I
also believe that level 5 doesn't exist. It's simply an idealized state that
is unachievable because to achieve it is to change its definition.
 

davegb

Gary said:
Dave:

We could tear up the airwaves with this discussion. I never suggested that
technique, alone, indicates a maturity level. That you can use one
consistently does.

Having no experience actually applying CMMs, I can't say if this is
true. I could argue that consistently applying a poor, or even faulty,
technique is indicative of low maturity, no matter how consistently it's
applied. What do Berkeley and Carnegie Mellon say about this? I've never
gotten into it in enough detail to have a valid opinion. Just shooting
from the hip here.

Even if it's a technique that I don't agree with.<g> For
instance, buffers don't make your schedule more realistic, but they do give
you something to talk about. Perhaps it's the conversation that matters?

We do disagree on that. I've had much greater success on projects since
I started using buffers in the mid 80s (long before Goldratt and TOC)
than I had before. I'd argue they're just like budget contingency, and
very useful. I'm aware that many people disagree, including some of the
long-time participants here. For me, it gets back to "what works best
for me". If I believe that using schedule buffers helps me bring in
more projects on schedule, then it probably does. Some might then argue
the placebo effect ("Let go of the feather, Dumbo!"), but I don't
really care why it works, only that it does, at least for me.
Immature organizations do not maintain historical data because they don't
have any to maintain. If they did, they'd be somewhat more mature. I think
the original CMM model, which UC Berkeley adapted for the first PMM ever
proffered, is the closest to representing the essence of "Process Maturity."
The Berkeley PMM and original CMM model followed these steps: Ad hoc,
Planned, Managed, Integrated, Continuous Improvement.

I think this explains a lot of the differences in our perceptions of
CMM. I've never heard of the Berkeley CMM so I can't comment on that
except to contrast what you've told me with the Carnegie model. My
earliest experiences were with the CM SEI CMM, and the levels for that
were Initial, Repeatable, Defined, Managed and Optimizing. "Planned"
and "Initial" seem to me miles apart, as do "Managed" and "Repeatable".
In the Berkeley model, "Managed" is L2; in the CM model, L3, so they are
quite different, more different than I would have thought from other CMMs
I've seen. The main similarity is L5; both sound the same to me. So
we've been talking, as I suspected, about 2 very different models. Not much
real communication happening under these circumstances! No one's fault,
just different reference points.

IMO, the key
difference between planned and managed is whether a plan is made or a plan
is worked. The latter dictates tracking. Closing the gap between planning
projects and tracking them (aka managing) is where most organizations
struggle the most and most often fail.

In the Berkeley model, this is probably true. I doubt that in the CM
model, it would be. I can't imagine that a very incomplete process, say
just sitting down and doing a schedule without tracking it or some kind
of feedback loop, would be anything more than L1, if that. (I've seen
references to an L0, not from any of the actual modelers, but in third
party literature, that says there's a level so low, it's not even on
this scale.) From what I've read, L1 is initial, or ad-hoc, meaning PM
is done like we're ducks, waking up in a new world every morning.
What's happened on past projects is entirely ignored, except maybe on
rare occasions when an individual on the project thinks, "I did it with
method x last time, and wow, did I blow it. I think I'll give method y
a try this time". It doesn't mean that no effort is made to plan, but
only a minimal level. That would include, I believe, a partial process
where a plan was drawn up but not followed up on.
IMO, recent entrants into the Maturity Model fray, mostly aimed at selling
professional services, have obscured the original wisdom in applying one. I
also believe that level 5 doesn't exist. It's simply an idealized state that
is unachievable because to achieve it is to change its definition.

Again, we see it very differently. I think L5 is the level at which the
process becomes self-optimizing, which is a very real condition. Rare,
but real. The process undergoing the CMM analysis is being continuously
improved, and thereby redefined, but not the CMM. Its definition at L5 is
continuous improvement or optimization. That doesn't change.
 

davegb

Gary said:

They are pretty much the same. I went back and looked at your earlier
post and realized I had misread the levels in the Berkeley Model, and
my previous reply reflected that. So I guess that means we have rather
different interpretations of them. I'd love to participate in the
implementation of one, as I know I'd learn a lot, and I'm sure I'd find
that many of the things I've assumed to be true are, in fact, not. Very
interesting subject to me! I have always thought it would be very
challenging to take an organization up that ladder.
 

Gary L. Chefetz [MVP]

Dave:

The UC Berkeley model is pretty much a rip-off of the original CMM, no?<g>
My problem with the models is that they all suggest that "continuous
improvement" occurs at level 5. I find this a bit self-serving. IMO,
continuous improvement *is* the maturity cycle. That's why I lop off level 5
from all of them. You can't walk the ladder of maturity without engaging in
continuously improving activities. Similarly, some of the more recent "XMM"
implementations suggest that measuring comes after standardizing. How is it
possible to standardize without measurement? This doesn't make sense to me.

The problem is we always focus on "process" maturity. Even though more
recent permutations give lip service to "organizational" maturity, it's
always tied to process and not maturing people and the organization. IMO,
that's why the primary challenge and struggle is bridging the gap between
Level 2 and Level 3. This is the point where most improvement efforts fail.
I don't believe it's a process thing, rather it's a people thing.
Unfortunately, none of our current models explain this or give us insight
into how to make that leap in the organization. Until we figure this out, we
just keep wagging the dog.

As you can tell, I'm a business theory junkie and I've enjoyed discussing
this with you.
 

St Dilbert

I like your take on the "Level 5" issue ;-)... my interpretation is
that nobody has come up with a good idea to generalize a level above 4
- many "step" maturity models are quite similar to CMMI in this regard.
If you're a "business theory junkie" you have probably come across the
"Tipu ake ki te ora" already? (http://www.tipuake.org.nz/index.htm) I
attended a workshop with a guy from New Zealand once and really enjoyed
the fresh ideas (all levels present simultaneously, all that metaphor
and imagery with the native tribe background, hippie atmosphere vs. the
usual academic CMMI) - never really saw anything so different again
with regards to maturity levels ;-)...

As for "How is it possible to standardize without measurement?"
My view is: in many organizations there are status reports, and they
already contain statements on, say, % complete. When you're still at a
level with individual (per-project) reporting cultures, you can't
compare or add numbers from different projects and make sense of the
result. So you have "numbers" that mean something, e.g. when comparing
reports from one project over time, but you don't have a real "metric"
yet. With this picture in mind, you would have to "standardize" the
reporting culture before trying your hand at "measurement" (managing by
numbers).
 

davegb

Gary said:
Dave:

The UC Berkeley model is pretty much a rip-off of the original CMM, no?<g>
Yes!

My problem with the models is that they all suggest that "continuous
improvement" occurs at level 5. I find this a bit self-serving. IMO,
continuous improvement *is* the maturity cycle. That's why I lop off level 5
from all of them. You can't walk the ladder of maturity without engaging in
continuously improving activities.

Again, shooting from the hip here. Wouldn't the difference between L5
and the others be that, up to L5, the improvement is continuous because
you're following the CMM model? At L5, the improvement is built into the
process itself. Theoretically, you no longer need the CMM model, as it's
been replaced by a continuously improving process.

Similarly, some of the more recent "XMM"
implementations suggest that measuring comes after standardizing. How is it
possible to standardize without measurement? This doesn't make sense to me.

The problem is we always focus on "process" maturity. Even though more
recent permutations give lip service to "organizational" maturity, it's
always tied to process and not maturing people and the organization. IMO,
that's why the primary challenge and struggle is bridging the gap between
Level 2 and Level 3. This is the point where most improvement efforts fail.
I don't believe it's a process thing, rather it's a people thing.
Unfortunately, none of our current models explain this or give us insight
into how to make that leap in the organization. Until we figure this out, we
just keep wagging the dog.

I'd always assumed that the people part of all this has to be handled
concurrently with the process change. Obviously, no change occurs in an
organization without some change in the people. In order for any model
to work, you have to get the participants on board first, then keep
them there. My guess is that many of these whiz-bang business models
fail because management didn't get the employees on board in the first
place. Do it by edict, like everything else. Most organizations can't
change, and therefore fail in any enterprise-wide endeavor, because
they don't know how to effect change effectively. The book on what
Welch did at GE (not Welch's own book), "The GE Way", is a great book on
effecting change in a large organization. So is Collins's "Good to
Great". I don't know if this part of the equation should be included in
the CMM model itself, or is just the "how it's really done" part that's
nearly always omitted in technical descriptions.

The PMBOK is pretty much the same. It identifies that a PM needs to
understand Human Resources, but pretty much gives lip service to this
part of the job. In actuality, if a PM isn't a smart people person,
s/he's in deep kimchi.

I agree that trying to effect this kind of change without understanding
the human aspects of change is a "tail wagging the dog" kind of thing,
and is destined to fail.

As you can tell, I'm a business theory junkie and I've enjoyed discussing
this with you.

My condolences. Why don't we start a new 12 Step program for
"Recovering Business Model Junkies Anonymous"? I'm not sure if knowing
any of this has any use in my life, but I find it fascinating!
 

Gary L. Chefetz [MVP]

SD:

Thank you. Why can't we accept that level 5 doesn't exist and can't exist in
reality? I believe it is an ideal state, something akin to praying for
salvation. It's good to have goals that are beyond our reach, if only to
keep us reaching. However, the notion that the system improves itself
suggests sentience in the system and leads to the inevitable conclusion
that people are irrelevant. At best it's an overstep, and at worst it's an
excuse not to focus on the human element. Your Tipu Ake example certainly
reflects my way of "organic" thinking. I believe that an organization is
fully baked at level 4.

The real problem with our maturity models, like many of our business models,
is that they are mostly level 2 solutions that are more taxonomic than they
are systematic. If our solutions are stuck between level 2 and level 3, how
can we expect our organizations to cross that divide?

Isn't status reporting, in itself, measuring? While the reports may not be
as qualitative as we'd like, I don't see how we can deny that they're
quantitative in nature.
 

Gary L. Chefetz [MVP]

Dave:

I'm a "recovering PMO director," but I'm going to stick with my business
theory addiction for now.<g>
 

davegb

Gary said:
Dave:

I'm a "recovering PMO director," but I'm going to stick with my business
theory addiction for now.<g>

Good thinking! My theory is that one should have a list of addictions
to switch between when the need arises, like when someone catches you
in one of them! :)
 
