Best Practice re: Hours per Day?


JEzell

Friends,

What's now considered Best Practice regarding setting the number of working
hours per day? For example, the work day is actually 8 hours, but some
recommend setting Calendar and Working Time options to show only 6 hours to
account for the fact that no one (well, almost no one) actually accomplishes
productive work for the full eight hours.

I have supported both scenarios, and am wondering what the current prevalent
stance is, particularly with the Microsoft Project scheduling engine in mind.

Thanks for your thoughts one way or the other.

Jim
 

Rod Gill

I would prefer leaving everything at 8h/day but setting the Max Units for the
resources to the % that represents the portion of an average day they can
spend on the project. That way, when you assign a realistic number of work
hours, Project calculates the realistic number of days for you.
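
To make the arithmetic behind Rod's suggestion concrete, here is a minimal
sketch of how Project derives a duration once Max Units is reduced. The 50%
figure and the 40h of work are made-up values for illustration, not numbers
from Rod's post:

    # Illustrative sketch only - the values below are assumptions.
    hours_per_day = 8    # calendar stays at a full 8-hour working day
    max_units = 0.50     # hypothetical: half of each day available for the project
    work_hours = 40      # realistic estimate of hands-on effort

    duration_days = work_hours / (hours_per_day * max_units)
    print(duration_days)  # 10.0 working days instead of 5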
 

Joe

I agree with Rod. I use 75% allocation for some groups and 90% allocation
for others. This works well for us.

Joe
 

davegb

Rod said:
I would prefer leaving all on 8h/day but setting the Max Units for the
resources to the % that represents the portion of an average day they can
spend on the project. That way when you assign a realistic number of work
hours Project calculates the realistic number of days for you.

I'm curious, Rod. What are the advantages of setting resource
availabilities to 75% vs. setting the calendar to 6h/d?
 

Rod Gill

In my mind it is more realistic. For example, a typical resource might have
25% of their time on support for existing systems or production, 20% for
general admin, team meetings, training and other non-project time, leaving
only 55% for project work. So they do an 8-hour day but can only spend 55%
of it on your project. Max Units is the more realistic way of defining this.

If Max Units is 55% and you click Assign in the Assign Resources dialog,
55% is what is assigned to the task. When Work is edited, Project then
calculates an accurate duration.

Compare:
Option 1: one person at 100% doing 40h of work with a calendar of 5h per day
Option 2: one person at 55% Max Units on an 8h per day calendar

I prefer the reality of option 2.
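
For comparison, a rough sketch of the durations the two options give for the
same 40h of work, using the numbers from the example above:

    # Option 1: shorten the calendar to 5h/day, keep units at 100%
    option1_days = 40 / (5 * 1.00)    # 8.0 days

    # Option 2: keep the 8h/day calendar, cut Max Units to 55%
    option2_days = 40 / (8 * 0.55)    # roughly 9.1 days

    print(option1_days, round(option2_days, 1))

The two models are close but not identical (a 5h day is 62.5% of an 8h day,
not 55%), which is worth keeping in mind when choosing between them.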
 

Steve House [Project MVP]

Remember that the basic purpose of the "hours per day" setting is to set a
conversion factor for your duration entries. The actual scheduling
calculations are based solely on the calendars that govern the tasks.
Project tracks all its times in hours (actually minutes if you want to get
picky). When I enter a task as requiring "5 days" it has to convert that
into the equivalent minutes for storage and calculation. So when you say
task X requires 5 days, how many hours are you really talking about?
Only you can answer that.
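
As a rough sketch of that conversion, assuming the default "hours per day"
setting of 8 (Project itself stores its times in minutes):

    # Illustrative only: how a "5 day" duration entry becomes stored minutes.
    hours_per_day = 8         # the "hours per day" conversion factor (assumed default)
    duration_entry_days = 5

    duration_minutes = duration_entry_days * hours_per_day * 60
    print(duration_minutes)   # 2400 minutes; with 6 h/day it would be 1800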

I must say I get a little uncomfortable at trying to adjust for the fact that
a resource is productive on the project's tasks for less than their total
workday. If I estimate a task is going to take 5 days, what am I basing
that on? Formally or informally we're going to base it on experience - last
year when we did something like this it took us 5 days from start to finish
so this year it'll probably be close to that. But that 5 days included time
spent by the resources that was both productive on the task in question and
"non-productive" time for email, phonemail, watercooler meetings, whatever.
If we micromanaged we might say it really wasn't 5 days but 4 days of
project with a day of non-project work interwoven to make it total 5. But
do we really have that level of detailed information about the history or do
we just have the bottom line end result? I'd say just assume the resources
worked 5 days at 100% and not to sweat the fact that what we're calling 100%
really isn't 100% if you get picky about the small stuff.
 

JEzell

Thanks, Rod, Joe & Steve. I had a feeling I'd get some good answers. I do
appreciate it!

je
 

davegb

I agree with everything you say, Steve. Many years ago (more than 20),
when I started using scheduling software, I learned about productivity
and started factoring that in. But after a while, I reached the same
conclusions you described, and stopped doing it. I just included it in
the duration estimate. I do it for an entirely different reason now,
and I'm not sure I can explain it well.

It has for years mystified me that production systems are always
designed around the realities of downtime and the real-world limits of
mechanical/electrical/hydraulic systems. When you build a copper
concentrator, if you want to process 100,000 T of ore a day, you build
a system with a peak capacity (everything available and working) of,
say, 110,000 T. We know things break, even in the best-designed systems.
There will be downtime. So just design for it. Strangely, in service
systems, there is rarely any attention paid to this same fact: that
people don't perform at 100% all the time, and that, even if they did,
unpredictable factors occur that no one could foresee. To schedule the
overall system (of people, in this case) to be operating at 100% is
just as naive as to schedule the copper concentrator at 100%. So what
I'm applying in this situation is not the individual's inefficiency,
but the system's inefficiency. The two are different. One could argue
that they both should be accounted for in the original estimate of
time, just as the individual's is. But it doesn't work out the same,
and I'm not sure why. I'd be happy if someone else jumped in and
rescued me on this one. I do know that I get better results this way,
so it's an empirical thing. I'm not a university professor, like
Goldratt, who needs everything to have a theoretical explanation to
implement a technique. If I find it works for my clients, I go with it
and try to figure out why it works later.

So now I'm trying to figure out why the system inefficiency needs to be
accounted for this way while the individual inefficiency is in the
duration estimate. Anyone have any ideas?
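
As a sketch of the sizing logic in the concentrator analogy (the 90%
availability figure is an assumption chosen to match the numbers above, not
something stated in the post):

    # Illustrative only: sizing peak capacity to allow for expected downtime.
    target_throughput = 100_000      # tonnes of ore per day actually required
    expected_availability = 0.90     # assumed fraction of time everything is running

    required_peak_capacity = target_throughput / expected_availability
    print(round(required_peak_capacity))   # ~111,111 T/day, in line with the 110,000 figure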
 

Steve House [Project MVP]

To sum up the last part of your post - when we say a resource is working at
a 100% assignment, he is generating all the work output possible given the
system he's working in. The inefficiencies of the system itself may mean
that a 100% commitment for the resource in this system results in a
different actual output than 100% for the same resource would generate when
placed in another system.

Joe & Bill are identical twins and they wax widgets with equal skill at the
rate of 10 widgets an hour when working full-tilt. Company X expects Joe to
mix his own wax fresh every day and that takes him one hour. Joe assigned
to waxing widgets 100% will wax 70 widgets in an 8-hour workday. Company Y
provides Bill with pre-mixed wax. Bill working 100% generates 80 widgets
during his 8-hour workday.

And that's where it aligns with my posting - when we look back and see that
when Fred did a software upgrade last year it took 10 days to upgrade 10
seats, that 10 days has the adjustments for systemic inefficiencies already
accounted for. So when we estimate that this year it will take 25 days to
roll out an upgrade to 25 seats, we can assign Fred 100% because the
inefficiencies of the system that have Fred going to meetings, answering
phonemail, and putting out fires are already included in the estimated
duration - we don't have to adjust and say that Fred only works 80% of each
day exclusively on software installation. The systemic inefficiencies that
eat up some of Fred's productivity and affected last year's project are
embedded in the historical data and will affect this year's project in
exactly the same way, so adjusting Fred's time for them can be ignored if
this year's estimate is based on last year's history (as it virtually always
will be).
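
A quick check of the widget arithmetic, as a small sketch using the numbers
from the example:

    rate_per_hour = 10     # widgets per hour when working full-tilt
    workday_hours = 8

    joe_output = rate_per_hour * (workday_hours - 1)   # one hour lost mixing wax -> 70
    bill_output = rate_per_hour * workday_hours        # pre-mixed wax -> 80

    print(joe_output, bill_output, joe_output / bill_output)   # 70 80 0.875

Both men are giving 100%; the 12.5% gap in output comes entirely from the
system they work in.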
 

davegb

Once again, Steve, our experiences are quite different. First, none of
my clients waxes widgets. In highly repetitive work, like
manufacturing, your analogy makes sense. But my clients are seldom doing
manufacturing, which really isn't project work at all. And when they
are doing manufacturing, it's not assembly line type. Each project is,
to some degree, unique. And therefore, even if there were good
historical records (see issue 2), they would have to be adjusted for
the new project.

Second, there are seldom good records of how long the "same" task, if
such a task exists (see issue 1), took. One of the battles I often face
is convincing my clients to keep good records and to use that
historical database to estimate new projects. Most of the estimating I
see is seat-of-the-pants. And they seem to consistently underestimate
over and over and over. Like the proverbial duck.

Third, even if they did keep good records and refer back to them when
they do a new project, they'd be inaccurate. The main reason being that
most of the work on my clients' projects is done by exempt employees,
where OT is deliberately neither reported nor tracked. These companies love
OT for exempt employees (they rarely allow non-exempt employees to do
OT, don't want to pay for it, and why pay for it, when exempt employees
have to do it for free?). This means that the actual hours it took to
do almost any given task are unknown, even if the reported hours are
known and referred to.

My guess is that in your experience, the OT is tracked meticulously,
even if not paid. I suspect that my clients don't track it at all
because they don't want the people doing it to know how much OT they
work without getting paid for it. It seems to go unnoticed when I
suggest they track this time.

Thanks for the reply. It's helped me to understand why it needs to be
done this way. It's because the real time it takes to do something is
seldom known, and so we have to estimate a "base" time, then factor in
the rest. Makes sense now.
 

Steve House [Project MVP]

RE unpaid overtime for exempt employees - there was a bumper sticker popular
in the 60's that read "Gas, Grass, or *** - Nobody Rides for Free!" that we
should make a PM adage. Whether exempt employees get an OT cheque or not,
it is still a real cost to the company. If Joe puts in a string of 12 days
to make a deadline, you can bet he's going to make it up somewhere, be it
overtly or covertly. Those extra hours definitely aren't going to be free
in the long run.

It's not so much that OT is tracked meticulously, it's that the real work
effort to accomplish the task needs to be evaluated to the most accurate
level possible. There's a real problem if one ignores the OT contribution
because THIS time you might not have as compliant a worker on the job. If
the last time we did a similar task it took 5 12-hour days, we really need
to know that this time with a non-exempt worker it's going to take almost 8
8-hour days to get the same work done. We really shouldn't try to get away
with pretending it was 5 8-hour days. One of the things that gets my goat
is the attitude of some people who seem to think management is an act of
willpower - that wanting something to be done in a certain way or by a
certain deadline badly enough is sufficient to make it possible for it to be
done that way and anyone who doesn't deliver on those arbitrary and
irrational standards is a slacker.
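
Spelled out, the arithmetic behind the 5-twelve-hour-day example looks like
this (just a sketch of the numbers given above):

    historical_days = 5
    historical_hours_per_day = 12   # exempt worker quietly putting in unpaid OT
    standard_hours_per_day = 8      # what a non-exempt worker will actually work

    work_hours = historical_days * historical_hours_per_day   # 60 h of real effort
    realistic_days = work_hours / standard_hours_per_day       # 7.5 days - "almost 8"
    print(work_hours, realistic_days)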

I use the "wax widgets" example not because I'm trying to apply PM
principles to manufacturing but rather to try to keep examples focussed on
the basic idea that project tasks are ALWAYS, without exception, observable
work carried out by a resource over an exact period of time that produces an
observable and measurable outcome. Dull widgets have been waxed, a program
has been written, a report has been submitted, software has been installed,
etc., etc. - a task always changes the state of the universe; the world is a
different place from what it was before the task began.
--
Steve House [MVP]
MS Project Trainer & Consultant
Visit http://www.mvps.org/project/faqs.htm for the FAQs


 

davegb

Exempt employees don't get OT checks, by definition. That's what
they're exempt from.
I agree about management by edict, though I'm not clear how it's
relevant here.
All I was saying is that, in my experience, it's seldom as simple as
going and looking up how long it took to do X last time and plugging
that into the new schedule. I would that it were...
 

luvtoscrab

How do I set default start and end times for each page in the Outlook
calendar? For instance, I want only the period from 6 a.m. to midnight to
show on the calendar page. Right now the times on the page are from 12:00
a.m. to 11:59 p.m. But I generally enter nothing from 12:00 a.m. to 6:00
a.m., so much of the page is dead space and I have to scroll down to find the
first entry.

I am able to change the workweek times so that, for instance, 6:00 a.m. to
11:59 p.m. is in another color, but the period from 12:00 a.m. to 6:00 a.m.
still takes up space on the calendar.
 

John

luvtoscrab,
This newsgroup is dedicated to questions/issues about Microsoft Project,
a planning and scheduling application. I suggest you post to a newsgroup
that deals with Outlook issues.

John
Project MVP
 
