
    Facts, Feelings, and Forecasts

    The "Facts and Feelings" framework offers a process and language for sharing actionable, useful, and accurate status based on what we know (Facts) and what we think might happen next (Feelings or Forecasts).

    lukeh-fff-hero-image

    lukeh-fff-cover

    Teams often report progress using a ‘Red – Amber – Green’ (RAG) framework, in which Red indicates a current problem, Amber indicates a potential problem, and Green indicates no problems. Unfortunately, this overly simplistic approach fails to provide teams with a means to communicate their expectations of the future, often inadvertently creating a misleading summary of progress commonly known as ‘watermelon status’: green on the outside, red on the inside.

    The "Facts and Feelings" framework offers a process and language for sharing actionable, useful, and accurate status based on what we know (Facts) and what we think might happen next (Feelings or Forecasts).It enables teams to share useful, decision-relevant information that gives leaders a better chance to more fully support their teams or help them respond effectively to challenges.

    This paper covers facts, feelings, and how to use them.

    Facts

    lukeh-fff-red

    There is at least one significant problem.

    lukeh-fff-amber

    There might be a significant problem.

    lukeh-fff-green

    There are no known significant problems.

    Feelings

    lukeh-fff-smile

    We feel good about the future.

    lukeh-fff-uncertain

    We are uncertain about the future.

    lukeh-fff-problem

    We anticipate future problems.

     

    Facts

    A ‘fact’ should be used per the ‘natural meaning’ of the word: an empirically verifiable statement, something that can be shown to be true or false without doubt, usually through indisputable evidence such as measurement, observation, or mathematical proof. A fact is typically a single measure, not a combined set of measures.

    The assessment of the fact relative to a desired result is the color we ascribe to the fact. 

    Because there are many potential facts and associated assessments that can be made, the goal is to select the smallest set of facts that support effective decision-making. Practically, a collection of three to five facts tailored to the decisions that need to be made is sufficient. Decision-makers are encouraged to periodically review the facts and assessments they’re using to ensure they are serving their needs.

    We represent assessments using both a color and a shape to improve accessibility.

    Examples of facts and assessments:

    The result of an automated QA test. Typically:

    lukeh-fff-red The tests identified ‘must-fix’ defects.
    lukeh-fff-amber Some tests failed to execute or were otherwise inconclusive.
    lukeh-fff-green All tests passed and no escaped defects were observed.

    Whether or not a portfolio initiative is within accepted budget parameters.

    lukeh-fff-red The initiative is unacceptably over/under allocated budget (e.g. ≥15% over/under).
    lukeh-fff-amber The initiative is over/under budget by an amount causing concern (e.g. ≥5% but <15% over/under).
    lukeh-fff-green The initiative is on budget (e.g. <5% over/under).
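
    These example thresholds are simple enough to sketch in code. The fragment below is an illustrative sketch only; the function name and the 15%/5% cut-offs come from the example assessments above and are not prescribed by the framework:

        def assess_budget_fact(actual_spend: float, planned_spend: float) -> str:
            """Assess the budget 'fact' against the example thresholds above."""
            variance = abs(actual_spend - planned_spend) / planned_spend  # over or under plan
            if variance >= 0.15:
                return "red"    # unacceptably over/under allocated budget
            if variance >= 0.05:
                return "amber"  # over/under by an amount causing concern
            return "green"      # on budget

        # Example: 8% over plan is assessed as amber.
        print(assess_budget_fact(actual_spend=1_080_000, planned_spend=1_000_000))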

    Facts and their assessments are vital to decision-making. However, on their own, they are insufficient: A team may have a failing QA test and be confident that they can fix the remaining defects in an acceptable timeframe. An initiative may be over-budget for reasons that all stakeholders agree are desirable for the business. 

    To improve decision-making, we augment facts with feelings. 

    Feelings

    Feelings are intended to capture expectations about the likely future, based on one or more facts and the relevant context. These expectations draw on such things as the degree of agency or the plan that individuals may have. Here are some examples.

    lukeh-fff-smile We don’t anticipate future problems, or, we have a credible plan to resolve issues.
    lukeh-fff-uncertain We think there might be some problems; or, if there are problems, we’re unsure how to resolve them.
    lukeh-fff-problem We are confident there will be future problems, or, we have problems, and we believe our current mitigation plans are insufficient.

    Whether or not people’s feelings accurately represent facts, they are important to assess for the following reasons:

    • They represent useful intuition / wisdom / ‘expert knowledge’ that helps in understanding and managing the implications of the facts.
    • Feeling and emotion tend to drive human behavior at least as much as rational logic.
    • They are critical indicators of the well-being of our most important work-related assets: the people involved with the work.

    The Nine Combinations in Practice

    By pairing a Fact with a Feeling, we create a simple but powerful tool that supports honest conversations. Here are examples of each:

    lukeh-fff-red  lukeh-fff-smile

    We have known problems; they are non-recurring in nature, and/or we’re confident we can fix them.

    • Our QA tests have identified more defects than expected; however, we’re confident we can fix them in time to meet our release window.

    • We are under budget because the hiring of three key developers was delayed when the candidates took offers at other companies. We have several solid candidates who are progressing in the hiring process.

    • We are over budget because we accelerated onboarding of a dev team by one month to take advantage of their becoming available from another initiative earlier than anticipated. The overall portfolio costs have not changed, and we’ll resolve the discrepancies.


    lukeh-fff-red  lukeh-fff-uncertain

    We have known problems; we’re unsure how to resolve them.

    • Some of our QA tests failed to execute. We are unsure if the problem is in the tests themselves, the result of configuration errors, or something else. We’re going to explore the first two items in depth.

    • Cloud costs have been running higher than budgeted. We have adjusted our solution design in response, but we need to continue to monitor to verify we’ve fully addressed the root causes.


    lukeh-fff-red  lukeh-fff-problem

    We have known problems; we think this trend will continue.

    • We are building automated tests for a system that has never had test automation. Many tests are failing, and as we expand coverage, the trend suggests we will continue to see more before we start seeing a reduction.

    • Challenges with expanding automated test coverage are causing us to incur higher utilization of offshore developers than budgeted. Given the fixed release timeline, this will continue unless defect rates drop unexpectedly.


    lukeh-fff-amber  lukeh-fff-smile

    Results are outside the optimal range; however, we see no reason for concern.

    • QA did not complete because the integration server failed. However, the last three test runs were fine, and we don’t anticipate any problems. We’ve got a plan in place to address the integration server.

    • Labor costs are causing us to exceed budget; however, we are under budget on a different initiative, and finance is OK with adjusting budgets across both initiatives provided we stay under the total portfolio budget.


    lukeh-fff-amber  lukeh-fff-uncertain

    Results are outside the optimal range; we’re unsure why, and we want to investigate.

    • Some automated QA tests are failing inconsistently. Integration server settings appear to be correctly configured, so we are investigating other potential causes.

    • Cloud compute costs are spiking sporadically, causing minor overruns. We are trying to discern the underlying pattern so we can address the issue.


    lukeh-fff-amber  lukeh-fff-problem

    Results are outside the optimal range; we believe this indicates a deeper issue.

    • QA testing is failing sporadically. We’re pretty sure the problem is due to unexpected dependencies in the code base, which will require an unknown amount of effort to fully investigate and remediate.

    • The offshore development team has been requesting authorization for overtime more frequently, causing us to begin exceeding our budget. We are concerned about this negative trend and suspect they are not being forthright about the underlying causes.


    lukeh-fff-green  lukeh-fff-smile

    The initiative is on track. We expect it to continue being on track.

    • Code coverage continues to expand ahead of schedule with fewer defects being identified than planned for.

    • We’re on budget. We expect to stay on budget.


    lukeh-fff-green  lukeh-fff-uncertain

    The initiative is on track. We have some concerns going forward.

    • While test automation is on schedule, we are seeing a negative trend with defect rates as we begin to address the oldest modules.

    • We’re on budget. However, the hardware team is planning to build more prototypes than in our original forecast, which could put us over budget.


    lukeh-fff-green  lukeh-fff-problem

    The initiative is on track. We expect there will be a problem in the future.

    • Test automation is on schedule. However, it appears ongoing test maintenance will require a much higher level of effort than originally forecasted.

    • We’re on budget; however, a key software vendor has notified us of an upcoming pricing model change which may significantly impact our cost model.
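
    For teams that track status in a tool or spreadsheet, the nine combinations can be represented as a simple lookup keyed by the Fact and Feeling pair. The Python sketch below is illustrative only; the enum and dictionary names are hypothetical, and the headline readings are condensed from the examples above:

        from enum import Enum
        from itertools import product

        class Fact(Enum):          # what we know
            RED = "red"
            AMBER = "amber"
            GREEN = "green"

        class Feeling(Enum):       # what we expect about the future
            SMILE = "smile"
            UNCERTAIN = "uncertain"
            PROBLEM = "problem"

        # Headline reading for each of the nine combinations.
        HEADLINE = {
            (Fact.RED, Feeling.SMILE): "Known problems; we're confident we can fix them.",
            (Fact.RED, Feeling.UNCERTAIN): "Known problems; we're unsure how to resolve them.",
            (Fact.RED, Feeling.PROBLEM): "Known problems; we think the trend will continue.",
            (Fact.AMBER, Feeling.SMILE): "Outside the optimal range; no reason for concern.",
            (Fact.AMBER, Feeling.UNCERTAIN): "Outside the optimal range; we want to investigate.",
            (Fact.AMBER, Feeling.PROBLEM): "Outside the optimal range; likely a deeper issue.",
            (Fact.GREEN, Feeling.SMILE): "On track; we expect it to stay on track.",
            (Fact.GREEN, Feeling.UNCERTAIN): "On track; some concerns going forward.",
            (Fact.GREEN, Feeling.PROBLEM): "On track; we expect a future problem.",
        }

        for fact, feeling in product(Fact, Feeling):
            print(f"{fact.value}/{feeling.value}: {HEADLINE[(fact, feeling)]}")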

     

    How Managers Can Respond

    This model doesn’t just give you more insight; it guides what kind of support a team might need. Here are some suggested responses to the combinations:

    lukeh-fff-red  lukeh-fff-smile

    • Stay the course.
    • Check to make sure that this status is not used repeatedly without change.
    • Publicly recognize the team for their accomplishments as they move from this category into amber and ultimately green.

    lukeh-fff-red  lukeh-fff-uncertain

    • Ask questions of the team to help them build confidence in their options or explore alternative approaches.
    • Ask what indicators might have helped them see and react to this sooner.
    • Ask the team if they’d like additional outside help, from you or others.

    lukeh-fff-red  lukeh-fff-problem

    • You’re probably going to need to dig into this with the team.
    • Explore the implications of the problems and the future impact they’ll create.
    • Ask what indicators might have helped them see and react to this sooner.
    • Create/update contingency plans for worst-case scenarios.

    lukeh-fff-amber  lukeh-fff-smile

    • Stay the course.
    • Ask questions to confirm the validity of reported results and the team’s judgment (“How might we potentially be wrong here?” “Have you considered X?” “What about Y?”).
    • Monitor the situation over time to ensure judgments are accurate.

    lukeh-fff-amber  lukeh-fff-uncertain

    • Ask questions to encourage divergent thinking and create options to explore (“What are the most likely options we should consider exploring?” “What are some ways we can get better information?” “What indicators might have helped us see and react to this sooner?”).
    • Monitor the situation over time to ensure the team is iterating toward the best option and building confidence in its approach.

    lukeh-fff-amber  lukeh-fff-problem

    • Ask questions to encourage divergent thinking and create options to explore (“What are the most likely options we should consider exploring?” “What are some ways we can get better information?” “What indicators might have helped us see and react to this sooner?”).
    • Monitor the situation over time to ensure the team is iterating toward the best option and building confidence in its approach.
    • Ask the team if they’d like additional outside help, from you or others.

    lukeh-fff-green  lukeh-fff-smile

    • Stay the course.
    • If the team has consistently been reporting this status, consider asking the team to “pursue perfection” by adopting a skeptical mindset (“Are things going too well?”) and to continue improving (“How might we make this even better?”).
    • Encourage the team to share what’s working and why with other teams and look for opportunities to coach/mentor/support other teams.
    • Publicly recognize the team for consistently and objectively measurable good performance.

    lukeh-fff-green  lukeh-fff-uncertain

    • Generally, stay the course.
    • Explore the potential impact of any problems.
    • Ask questions to encourage divergent thinking and create options to investigate (“What are the most likely options we should consider?” “What are some ways we can get better information?” “What indicators might have helped us see and react to this sooner?”).
    • Monitor the situation over time to ensure the team is iterating toward the best option and building confidence in its approach. 

    lukeh-fff-green  lukeh-fff-problem

    • This is an ‘early warning’ status: use it to prepare for future problems.
    • Ask questions to encourage divergent thinking and create options to explore (“What are the most likely options we should consider exploring?” “What are some ways we can get better information?” “What indicators might have helped us see and react to this sooner?”).
    • Monitor the situation over time to ensure the team is iterating toward the best option and building confidence in its approach.
    • Ask the team if they’d like additional outside help, from you or others.

    Tracking Historical Values

    Facts and feelings support historical reporting and future forecasting, with the caveat that longer-range forecasts are less accurate. Choose a reporting cadence (weekly or monthly) that fits your context.
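
    As a minimal sketch of what such a history might look like, the fragment below records one entry per reporting period; the dates and notes are hypothetical examples drawn from the scenarios earlier in this paper:

        from datetime import date

        # One entry per reporting period: (date, fact, feeling, short note).
        status_history = [
            (date(2024, 3, 1),  "green", "uncertain", "On budget; prototype count may grow."),
            (date(2024, 3, 8),  "amber", "smile",     "Labor over plan; finance will rebalance within the portfolio."),
            (date(2024, 3, 15), "amber", "uncertain", "Cloud compute costs spiking sporadically; investigating."),
        ]

        for reported_on, fact, feeling, note in status_history:
            print(f"{reported_on}  fact={fact:<5}  feeling={feeling:<9}  {note}")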

    lukeh-forecast

    Choosing an Approach

    The environment you’re working in will determine how you should best refer to this approach (e.g., facts and feelings, facts and expectations, or something else entirely). What’s important is explicitly recognizing that our feelings about the future are a vital piece of managerial data.

    Summary

    This isn’t about layering on more process. It’s about creating a shared language that helps teams speak honestly about their concerns and equips leaders to respond proportionately.
    It also gives space for uncertainty — something we all feel but rarely have the words for in a traditional RAG or KPI dashboard.

    Initiatives rarely live in a world of perfect green lights. “Facts and Feelings” gives us a more authentic dashboard — one that honors the truth and the uncertainty of real work.


    Special thanks to Laura Caldie, Andrew Long, Jason Tanner, Harry Max, Scott Sehlhorst, and Kevin McCabe for their edits and feedback.
    Extra special thanks to James Bach, who co-developed this technique when we were working at Aurigin Systems, Inc. 

    About the Author


    Luke has been involved with Applied Frameworks since its founding in 2003. He later went on to start Conteneo, a collaboration software company which was acquired by Scaled Agile in 2019. While at Scaled Agile, Luke served as a SAFe® Framework Contributor and Principal Consultant, with significant contributions to the SAFe Agile Product Delivery (APD) and Lean Portfolio Management (LPM) competencies and the SAFe POPM, APM, and LPM courses. He is an author and cited as an inventor on more than a dozen patents. His books include Innovation Games (2006), Beyond Software Architecture (2003), Journey of the Software Professional (1996), and the upcoming Software Profit Streams™ (2023), co-written with Applied Frameworks CEO Jason Tanner.