David L.
Is the MS-Project resource leveling algorithm published anywhere?
Hi David,
As a matter of interest, some 25 years ago I attended the launch of
Computer Associates' Superproject for Windows. At the end of the
presentation I asked the presenter if he would tell us the algorithm they
used for the levelling process. He looked me in the eye and said: "That's
proprietary information - next question please?" This is part of the
scheduling engine that makes one product different from another, and thus
a jealously guarded secret!

However, one can make some educated guesses. Project obeys the logic
links and starts at the first minute of the project, looking down the
task list to see if there's any overallocation. It will then look at the
slack and delay a non-critical task in favour of a critical task. If more
than one non-critical task is overallocated, it will delay the one with
the most slack first. And so on... And then the trail stops - what if
there are 2 critical tasks: which one gets delayed? My guess is the one
with the higher Task ID, as there is an option to level by ID Only. Now
consider more than one resource being assigned - which one gets delayed?
Again, my guess is the resource with the higher Resource ID. I'm sure you
can see how complicated the algorithm can become with multiple resources
assigned! Nevertheless, this knowledge, plus the use of priorities, gives
us plenty of scope for tailoring levelling to optimise our requirements
should we so desire.
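Just to make those guesses concrete, here's a rough Python sketch of what
such a heuristic might look like - purely my speculation based on the
behaviour described above, not Microsoft's actual algorithm, and every
name in it is made up:

from dataclasses import dataclass

@dataclass
class Task:
    task_id: int
    resource: str      # one resource per task, to keep the sketch simple
    start: int         # day the task starts
    duration: int      # working days
    slack: int         # total slack; 0 = critical

    @property
    def finish(self) -> int:
        return self.start + self.duration

def overallocated(tasks: list[Task], day: int) -> list[Task]:
    """Return tasks whose shared resource is double-booked on `day`."""
    active: dict[str, list[Task]] = {}
    for t in tasks:
        if t.start <= day < t.finish:
            active.setdefault(t.resource, []).append(t)
    return [t for ts in active.values() if len(ts) > 1 for t in ts]

def level(tasks: list[Task], horizon: int = 365) -> None:
    """Scan from day 0 and delay tasks until no day is overallocated."""
    day = 0
    while day < horizon:
        clash = overallocated(tasks, day)
        if not clash:
            day += 1
            continue
        # Guessed tie-break: delay the task with the most slack first;
        # if slack is equal (e.g. two critical tasks), delay the one
        # with the higher Task ID. A real engine would also recompute
        # slack after each delay - omitted here for brevity.
        victim = max(clash, key=lambda t: (t.slack, t.task_id))
        victim.start += 1      # push it out one day and re-check

tasks = [
    Task(1, "Ann", start=0, duration=5, slack=0),  # critical
    Task(2, "Ann", start=0, duration=3, slack=4),  # non-critical
]
level(tasks)
print([(t.task_id, t.start) for t in tasks])  # the slack-rich task moves

Of course the real engine works minute-by-minute and handles units,
contours and calendars as well, which is exactly why it gets so
complicated.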
Mike Glen
MS Project MVP
See http://tinyurl.com/2xbhc for Project Tutorials
Jan,
Personally, I wouldn't want ANY algorithm to replace my resources or change
the units. That is a decision that only a resource manager or a project
manager should make. The only service I would expect from the algorithm is
to respect the dependencies and the assignments (name and units), and to
resolve the overallocations by automatically changing the suggested start
dates of the tasks, which in my view MSP does sufficiently well.
Also, there is so much variation during project execution anyway that one
should question the over-optimization of a leveling algorithm - basically we
are just tweaking a model, not reality. As you execute the project, task
durations are subject to variation, resources are not always available when
they are needed, new tasks are created and existing tasks become obsolete -
so why try to over-optimize up front something that will change drastically
anyway? The key, in my view, is to find the right way to prioritize tasks so
that a given resource knows where they are needed, and to progressively
re-level (in a forward-looking way) to gauge the true expected completion
date of the project.
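To show what I mean by forward-looking re-leveling, here is a hypothetical
Python sketch. It assumes a 0-1000 Priority field like MSP's (higher =
scheduled first); everything else is made up for illustration. Tasks
already under way are left alone, and only unstarted work is re-queued per
resource:

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    resource: str
    duration: int             # remaining working days
    priority: int             # 0-1000, as in MSP; higher queues first
    start: int | None = None  # None = not yet started

def relevel_forward(tasks: list[Task], status_date: int) -> int:
    """Re-queue unstarted tasks after status_date; return projected finish."""
    # Seed each resource's next free day with its in-progress work.
    free: dict[str, int] = {}
    for t in tasks:
        if t.start is not None:
            free[t.resource] = max(free.get(t.resource, status_date),
                                   t.start + t.duration)
    # Queue the rest, highest priority first within each resource.
    for t in sorted((t for t in tasks if t.start is None),
                    key=lambda t: -t.priority):
        t.start = free.get(t.resource, status_date)
        free[t.resource] = t.start + t.duration
    return max(free.values(), default=status_date)

tasks = [
    Task("Design", "Bea", duration=3, priority=500, start=0),  # in progress
    Task("Build",  "Bea", duration=5, priority=900),
    Task("Docs",   "Bea", duration=2, priority=300),
]
print(relevel_forward(tasks, status_date=2))  # 10: Docs queues after Build

Run after each status update, something like this gives a rolling view of
the expected completion date without trying to over-optimize the far future.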
So I agree with you and I wonder what the true motivation of the disgruntled
users is (here I suspect that they are selling a competing piece of software).