Hawthorne Effect Coaching Dilemma

The Hawthorne Effect is something I wrote about over a year ago.  Previously as a Project Management Adviser, and now as an Enterprise Agile Coach, I’ve seen it numerous times.  To all those currently advising or coaching: do you tend to see clients trying to impress you? The Hawthorne Effect refers to the tendency of some people to modify their behavior when they know they are being watched, due to the attention they are receiving from researchers, auditors, or coaches.

This effect was first identified and named by researchers at Harvard University who were studying the relationship between productivity and work environment. The researchers conducted their experiments at the Hawthorne Works plant of Western Electric. The study was originally commissioned to determine whether increasing or decreasing the amount of light workers received would increase or decrease worker productivity. The researchers found that productivity temporarily increased regardless of whether the light was increased or decreased. They then realized the increase in productivity was due to the attention given to the workers by the research team, not to changes in the experimental variable.  (Thanks, Wikipedia)

This is one reason short-term engagements can be challenging.  People are on their best behavior until they get used to you being there.  This is also why I don’t believe in annual reviews.  How do you, as managers, leaders, coaches, or auditors, get past the effect?  How do you ensure you get a true representation of individual and team behavior and don’t suffer from the Hawthorne Effect?

Image Source: Pictofigo

5 Replies to “Hawthorne Effect Coaching Dilemma”

  1. The Hawthorne Effect was about the impact of attempts to measure work.  As my high school football coach put it, “Everybody runs faster if they see you holding a stopwatch.”

    In their classic book, “Peopleware,” Tom DeMarco and Timothy Lister warned us to be careful what we measure.  The classic example in project management is measuring whether you came in on time and within budget, rather than whether you created value in excess of the total project cost, in time to take advantage of that new capability.  Those who publish surveys about “failed projects” hardly ever address the value created, because it’s harder to measure than schedule and cost overruns.  A project that delivers dogshit, on time and within budget, is no success.


    1. Excellent points.

      Before starting a coaching engagement, I do a high-level assessment to report the current state of the team or project.  I do a subjective rating of areas like team collaboration, transparency, and adaptation.  I want the team or group to question whether what they are doing contributes to delivering value, or whether they are just doing things because that’s the way it’s always been done.  I do sometimes meet those who push back, because they have goals like increased team member utilization, while I want to see things like increased team throughput.

      My goal is to create or support an environment that is highly collaborative and transparent and allows the team or group to react quickly to change.

      Dave, when you refer to a project that delivers “dogshit”, do you mean it delivers low quality product or just product or features nobody wants or needs?

      1. Those are two good examples, but I’ll award the “dogshit” demerit badge for any of the following:

        No longer needed, but they didn’t kill the project and delivered it, anyway.
        Rolled out without any current or historical data loaded, and a suggestion that the users enter it by hand, “as a way to learn the system.”
        Poorly designed, overly complex (or no) business processes.
        Undefined user and support roles, or vague security models, especially for applications storing sensitive data.
        Rolled out without training or communications to the users.
        User-hostile UX, even with training.
        “Orphaned on delivery,” and just dumped into the user community’s collective lap without a clear support model.
        Insufficient documentation to meet regulatory compliance needs, where applicable.

        I’m sure there are others, but I’m caffeine-deficient today. 

        I think you have a good approach – even subjective measures are valuable, in the hands of an experienced assessor.  The trick is to avoid setting off their alarm bells.  A lot of organizations figure that everyone does things the way they do them. When an outsider asks why, they can get defensive: “Whaddaya mean, why?”  On Friday, I spent about an hour explaining that User Acceptance Testing is not about testing software, but about determining whether the user community should accept the system, including all of that stuff listed above, for delivery.  An IT person got very indignant …

    2. Like the “stopwatch = run faster” quote.  You might be interested in a blog post summarising some research showing that when success criteria are well defined and measured, a project is more likely to succeed, because you adjust the project toward achieving those defined criteria.  Makes sense, but how often is it done?

      1. Michelle, sounds like common sense, doesn’t it?  Define success criteria?  Vision, mission, goals, tasks…  Every step of the way, there should be success criteria.  If not, we risk wandering aimlessly.  Thank you for the blog post link.
