Notes on Structured Analytic Techniques for Intelligence Analysis

Selections from Structured Analytic Techniques for Intelligence Analysis by Richards J. Heuer Jr. and Randolph H. Pherson.

In contrast to the bipolar dynamics of the Cold War, this new world is strewn with failing states, proliferation dangers, regional crises, rising powers, and dangerous nonstate actors—all at play against a backdrop of exponential change in fields as diverse as population and technology.

To be sure, there are still precious secrets that intelligence collection must uncover—things that are knowable and discoverable. But this world is equally rich in mysteries having to do more with the future direction of events and the intentions of key actors. Such things are rarely illuminated by a single piece of secret intelligence data; they are necessarily subjects for analysis.

Intelligence analysis differs from similar fields of intellectual endeavor: intelligence analysts must traverse a minefield of potential errors.

First, they typically must begin addressing their subjects where others have left off; in most cases the questions they get are about what happens next, not about what is known.

Second, they cannot be deterred by lack of evidence. As Heuer pointed out in his earlier work, the essence of the analysts’ challenge is having to deal with ambiguous situations in which information is never complete and arrives only incrementally—but with constant pressure to arrive at conclusions.

Third, analysts must frequently deal with an adversary that actively seeks to deny them the information they need and is often working hard to deceive them.

Finally, analysts, for all of these reasons, live with a high degree of risk—essentially the risk of being wrong and thereby contributing to ill-informed policy decisions.

The risks inherent in intelligence analysis can never be eliminated, but one way to minimize them is through more structured and disciplined thinking about thinking.

The key point is that all analysts should do something to test the conclusions they advance. To be sure, expert judgment and intuition have their place—and are often the foundational elements of sound analysis— but analysts are likely to minimize error to the degree they can make their underlying logic explicit in the ways these techniques demand.

Just as intelligence analysis has seldom been more important, the stakes in the policy process it informs have rarely been higher. Intelligence analysts these days therefore have a special calling, and they owe it to themselves and to those they serve to do everything possible to challenge their own thinking and to rigorously test their conclusions.

Preface: Origin and Purpose

 

Structured analysis involves a step-by-step process that externalizes an individual analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be shared, built on, and critiqued by others. When combined with the intuitive judgment of subject matter experts, such a structured and transparent process can significantly reduce the risk of analytic error.

Each step in a technique prompts relevant discussion and, typically, this generates more divergent information and more new ideas than any unstructured group process. The step-by-step process of structured analytic techniques structures the interaction among analysts in a small analytic group or team in a way that helps to avoid the multiple pitfalls and pathologies that often degrade group or team performance.

By defining the domain of structured analytic techniques, providing a manual for using and testing these techniques, and outlining procedures for evaluating and validating these techniques, this book lays the groundwork for continuing improvement of how analysis is done, both within the Intelligence Community and beyond.

Audience for This Book

 

This book is for practitioners, managers, teachers, and students of intelligence analysis and foreign affairs in both the public and private sectors. Managers, commanders, action officers, planners, and policymakers who depend upon input from analysts to help them achieve their goals should also find it useful. Academics who specialize in qualitative methods for dealing with unstructured data will be interested in this pathbreaking book as well.

 

Techniques such as Analysis of Competing Hypotheses, Key Assumptions Check, and Quadrant Crunching developed specifically for intelligence analysis are now being adapted for use in other fields. New techniques that the authors developed to fill gaps in what is currently available for intelligence analysis are being published for the first time in this book and have broad applicability.

Introduction and Overview

 

Analysis in the U.S. Intelligence Community is currently in a transitional stage, evolving from a mental activity done predominantly by a sole analyst to a collaborative team or group activity.

The driving forces behind this transition include the following:

  • The growing complexity of international issues and the consequent requirement for multidisciplinary input to most analytic products.
  • The need to share more information more quickly across organizational boundaries.
  • The dispersion of expertise, especially as the boundaries between analysts, collectors, and operators become blurred.
  • The need to identify and evaluate the validity of alternative mental models.

This transition is being enabled by advances in technology, such as the Intelligence Community’s Intellipedia and new A-Space collaborative network, “communities of interest,” the mushrooming growth of social networking practices among the upcoming generation of analysts, and the increasing use of structured analytic techniques that guide the interaction among analysts.

 

OUR VISION

 

Structured analysis is a mechanism by which internal thought processes are externalized in a systematic and transparent manner so that they can be shared, built on, and easily critiqued by others. Each technique leaves a trail that other analysts and managers can follow to see the basis for an analytic judgment.

This transparency also helps ensure that differences of opinion among analysts are heard and seriously considered early in the analytic process. Analysts have told us that this is one of the most valuable benefits of any structured technique.

Structured analysis helps analysts ensure that their analytic framework—the foundation upon which they form their analytic judgments—is as solid as possible. By helping break down a specific analytic problem into its component parts and specifying a step-by-step process for handling these parts, structured analytic techniques help to organize the amorphous mass of data with which most analysts must contend. This is the basis for the terms structured analysis and structured analytic techniques. Such techniques make an analyst’s thinking more open and available for review and critique than the traditional approach to analysis. It is this transparency that enables the effective communication at the working level that is essential for interoffice and interagency collaboration.

Structured analytic techniques in general, however, do form a methodology—a set of principles and procedures for qualitative analysis of the kinds of uncertainties that intelligence analysts must deal with on a daily basis.

There is, of course, no formula for always getting it right, but the use of structured techniques can reduce the frequency and severity of error. These techniques can help analysts mitigate the proven cognitive limitations, side-step some of the known analytic pitfalls, and explicitly confront the problems associated with unquestioned mental models (also known as mindsets). They help analysts think more rigorously about an analytic problem and ensure that preconceptions and assumptions are not taken for granted but are explicitly examined and tested.

Intelligence analysts, like humans in general, do not start with an empty mind. Whenever people try to make sense of events, they begin with some body of experience or knowledge that gives them a certain perspective or viewpoint which we are calling a mental model. Intelligence specialists who are expert in their field have well developed mental models.

If an analyst’s mindset is seen as the problem, one tends to blame the analyst for being inflexible or outdated in his or her thinking.

1.2 THE VALUE OF TEAM ANALYSIS

 

Our vision for the future of intelligence analysis dovetails with that of the Director of National Intelligence’s Vision 2015, in which intelligence analysis increasingly becomes a collaborative enterprise, with the focus of collaboration shifting “away from coordination of draft products toward regular discussion of data and hypotheses early in the research phase.”

 

Analysts have also found that use of a structured process helps to depersonalize arguments when there are differences of opinion. Fortunately, today’s technology and social networking programs make structured collaboration much easier than it has ever been in the past.

1.3 THE ANALYST’S TASK

 

We developed a taxonomy for a core group of fifty techniques that appear to be the most useful for the Intelligence Community, but also useful for those engaged in related analytic pursuits in academia, business, law enforcement, finance, and medicine. This list, however, is not static.

 

It is expected to increase or decrease as new techniques are identified and others are tested and found wanting. Some training programs may have a need to boil down their list of techniques to the essentials required for one particular type of analysis.

 

Willingness to share in a collaborative environment is also conditioned by the sensitivity of the information that one is working with.

 

1.4 HISTORY OF STRUCTURED ANALYTIC TECHNIQUES

 

The first use of the term “Structured Analytic Techniques” in the Intelligence Community was in 2005. However, the origin of the concept goes back to the 1980s, when the eminent teacher of intelligence analysis, Jack Davis, first began teaching and writing about what he called “alternative analysis.” The term referred to the evaluation of alternative explanations or hypotheses, better understanding of other cultures, and analyzing events from the other country’s point of view rather than by mirror imaging.

 

An early effort organized the techniques into three categories: diagnostic techniques, contrarian techniques, and imagination techniques.

This book proposes that most analysis be done in two phases: a divergent analysis or creative phase with broad participation by a social network using a wiki, followed by a convergent analysis phase and final report done by a small analytic team.

1.6 AGENDA FOR THE FUTURE

A principal theme of this book is that structured analytic techniques facilitate effective collaboration among analysts. These techniques guide the dialogue among analysts with common interests as they share evidence and alternative perspectives on the meaning and significance of this evidence. Just as these techniques provide structure to our individual thought processes, they also structure the interaction of analysts within a small team or group. Because structured techniques are designed to generate and evaluate divergent information and new ideas, they can help avoid the common pitfalls and pathologies that commonly beset other small group processes. In other words, structured analytic techniques are enablers of collaboration.

2 Building a Taxonomy

A taxonomy is a classification of all elements of the domain of information or knowledge. It defines a domain by identifying, naming, and categorizing all the various objects in this space. The objects are organized into related groups based on some factor common to each object in the group.

The word taxonomy comes from the Greek taxis meaning arrangement, division, or order and nomos meaning law.

 

Development of a taxonomy is an important step in organizing knowledge and furthering the development of any particular discipline.

 

“a taxonomy differentiates domains by specifying the scope of inquiry, codifying naming conventions, identifying areas of interest, helping to set research priorities, and often leading to new theories. Taxonomies are signposts, indicating what is known and what has yet to be discovered.”

 

To the best of our knowledge, a taxonomy of analytic methods for intelligence analysis has not previously been developed, although taxonomies have been developed to classify research methods used in forecasting, operations research, information systems, visualization tools, electronic commerce, knowledge elicitation, and cognitive task analysis.

 

After examining taxonomies of methods used in other fields, we found that there is no single right way to organize a taxonomy—only different ways that are more or less useful in achieving a specified goal. In this case, our goal is to gain a better understanding of the domain of structured analytic techniques, investigate how these techniques contribute to providing a better analytic product, and consider how they relate to the needs of analysts. The objective has been to identify various techniques that are currently available, identify or develop additional potentially useful techniques, and help analysts compare and select the best technique for solving any specific analytic problem. Standardization of terminology for structured analytic techniques will facilitate collaboration across agency boundaries during the use of these techniques.

 

 

2.1 FOUR CATEGORIES OF ANALYTIC METHODS

 

The taxonomy described here posits four functionally distinct methodological approaches to intelligence analysis. These approaches are distinguished by the nature of the analytic methods used, the type of quantification if any, the type of data that are available, and the type of training that is expected or required. Although each method is distinct, the borders between them can be blurry.

 

* Expert judgment: This is the traditional way most intelligence analysis has been done. When done well, expert judgment combines subject matter expertise with critical thinking. Evidentiary reasoning, historical method, case study method, and reasoning by analogy are included in the expert judgment category. The key characteristic that distinguishes expert judgment from structured analysis is that it is usually an individual effort in which the reasoning remains largely in the mind of the individual analyst until it is written down in a draft report. Training in this type of analysis is generally provided through postgraduate education, especially in the social sciences and liberal arts, and often along with some country or language expertise.

 

* Structured analysis: Each structured analytic technique involves a step-by-step process that externalizes the analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be reviewed, discussed, and critiqued piece by piece, or step by step. For this reason, structured analysis often becomes a collaborative effort in which the transparency of the analytic process exposes participating analysts to divergent or conflicting perspectives. This type of analysis is believed to mitigate the adverse impact on analysis of known cognitive limitations and pitfalls. Frequently used techniques include Structured Brainstorming, Scenarios, Indicators, Analysis of Competing Hypotheses, and Key Assumptions Check. Structured techniques can be used by analysts who have not been trained in statistics, advanced mathematics, or the hard sciences. For most analysts, training in structured analytic techniques is obtained only within the Intelligence Community.

 

* Quantitative methods using expert-generated data: Analysts often lack the empirical data needed to analyze an intelligence problem. In the absence of empirical data, many methods are designed to use quantitative data generated by expert opinion, especially subjective probability judgments. Special procedures are used to elicit these judgments. This category includes methods such as Bayesian inference, dynamic modeling, and simulation. Training in the use of these methods is provided through graduate education in fields such as mathematics, information science, operations research, business, or the sciences.

 

* Quantitative methods using empirical data: Quantifiable empirical data are so different from expert-generated data that the methods and types of problems the data are used to analyze are also quite different. Econometric modeling is one common example of this method. Empirical data are collected by various types of sensors and are used, for example, in analysis of weapons systems. Training is generally obtained through graduate education in statistics, economics, or the hard sciences.

 

 

2.2 TAXONOMY OF STRUCTURED ANALYTIC TECHNIQUES

Structured techniques have been used by Intelligence Community methodology specialists and some analysts in selected specialties for many years, but the broad and general use of these techniques by the average analyst is a relatively new approach to intelligence analysis. The driving forces behind the development and use of these techniques are:

(1) an increased appreciation of cognitive limitations and pitfalls that make intelligence analysis so difficult

(2) prominent intelligence failures that have prompted reexamination of how intelligence analysis is generated

(3) policy support and technical support for interagency collaboration from the Office of the Director of National Intelligence

(4) a desire by policymakers who receive analysis that it be more transparent as to how the conclusions were reached.

 

There are eight categories of structured analytic techniques, which are listed below:

Decomposition and Visualization (chapter 4)
Idea Generation (chapter 5)
Scenarios and Indicators (chapter 6)
Hypothesis Generation and Testing (chapter 7)
Assessment of Cause and Effect (chapter 8)
Challenge Analysis (chapter 9)
Conflict Management (chapter 10)
Decision Support (chapter 11)

 

3 Criteria for Selecting Structured Techniques

 

Techniques that require a major project of the type usually outsourced to an outside expert or company are not included. Several interesting techniques that were recommended to us were not included for this reason. A number of techniques that tend to be used exclusively for a single type of analysis, such as tactical military, law enforcement, or business consulting, have not been included.

In this collection of techniques we build on work previously done in the Intelligence Community.

3.2 TECHNIQUES EVERY ANALYST SHOULD MASTER

 

The average intelligence analyst is not expected to know how to use every technique in this book. All analysts should, however, understand the functions performed by various types of techniques and recognize the analytic circumstances in which it is advisable to use them.

 

Structured Brainstorming: Perhaps the most commonly used technique, Structured Brainstorming is a simple exercise often employed at the beginning of an analytic project to elicit relevant information or insight from a small group of knowledgeable analysts. The group’s goal might be to identify a list of such things as relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, potential responses by an adversary or competitor to some action or situation, or, for law enforcement, potential suspects or avenues of investigation.

 

Cross-Impact Matrix: If the brainstorming identifies a list of relevant variables, driving forces, or key players, the next step should be to create a Cross-Impact Matrix and use it as an aid to help the group visualize and then discuss the relationship between each pair of variables, driving forces, or players. This is a learning exercise that enables a team or group to develop a common base of knowledge about, for example, each variable and how it relates to each other variable.
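
A minimal sketch of the mechanics, assuming hypothetical driving forces from a brainstorming session: the matrix simply pairs every variable with every other variable so the group can discuss and record how each pair interacts. The impact labels are placeholders for whatever rating scheme the group agrees on.

```python
from itertools import combinations

# Hypothetical driving forces identified in brainstorming.
variables = ["food prices", "security presence", "opposition cohesion", "media access"]

# One cell per pair of variables; the group fills in its judgment of how the
# two interact (the technique can also treat A-on-B and B-on-A separately).
cross_impact = {pair: "not yet discussed" for pair in combinations(variables, 2)}

# Record a judgment reached in discussion (placeholder rating).
cross_impact[("food prices", "opposition cohesion")] = "rising prices strengthen cohesion (+)"

for (a, b), judgment in cross_impact.items():
    print(f"{a} <-> {b}: {judgment}")
```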

 

Key Assumptions Check: Requires analysts to explicitly list and question the most important working assumptions underlying their analysis. Any explanation of current events or estimate of future developments requires the interpretation of incomplete, ambiguous, or potentially deceptive evidence. To fill in the gaps, analysts typically make assumptions about such things as another country’s intentions or capabilities, the way governmental processes usually work in that country, the relative strength of political forces, the trustworthiness or accuracy of key sources, the validity of previous analyses on the same subject, or the presence or absence of relevant changes in the context in which the activity is occurring.

 

Indicators: Indicators are observable or potentially observable actions or events that are monitored to detect or evaluate change over time. For example, indicators might be used to measure changes toward an undesirable condition such as political instability, a humanitarian crisis, or an impending attack. They can also point toward a desirable condition such as economic or democratic reform. The special value of indicators is that they create an awareness that prepares an analyst’s mind to recognize the earliest signs of significant change that might otherwise be overlooked. Developing an effective set of indicators is more difficult than it might seem. The Indicator Validator helps analysts assess the diagnosticity of their indicators.
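
The diagnosticity idea behind indicator validation can be sketched as follows. The scenarios and indicators are hypothetical, and this is not the book's Indicator Validator procedure, only an illustration of the underlying test: an indicator expected under every scenario tells the analyst little, while one expected under only a single scenario is highly diagnostic.

```python
# Hypothetical scenarios and the indicators we would expect to observe under each.
expected = {
    "gradual reform":        {"new election law drafted", "opposition press tolerated"},
    "political instability": {"troop movements to capital", "opposition press tolerated"},
    "crackdown":             {"troop movements to capital", "mass arrests of activists"},
}

indicators = set().union(*expected.values())

# An indicator consistent with only one scenario is highly diagnostic; one
# consistent with several scenarios does not help discriminate among them.
for indicator in sorted(indicators):
    consistent_with = [s for s, inds in expected.items() if indicator in inds]
    tag = "highly diagnostic" if len(consistent_with) == 1 else "low diagnosticity"
    print(f"{indicator}: consistent with {consistent_with} -> {tag}")
```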

 

Analysis of Competing Hypotheses: This technique requires analysts to start with a full set of plausible hypotheses rather than with a single most likely hypothesis. Analysts then take each item of evidence, one at a time, and judge its consistency or inconsistency with each hypothesis. The idea is to refute hypotheses rather than confirm them. The most likely hypothesis is the one with the least evidence against it, not the most evidence for it. This process applies a key element of scientific method to intelligence analysis.
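
The refutation logic can be shown in a minimal sketch. The hypotheses, evidence items, and consistency ratings below are hypothetical, and the bare inconsistency count is only a simplified scoring convention; the full technique also weighs the credibility and diagnosticity of each item of evidence.

```python
# Minimal sketch of ACH-style scoring (hypothetical data).
hypotheses = {
    "H1": "deliberate attack",
    "H2": "accident",
    "H3": "deception operation",
}

# Each evidence item is rated against each hypothesis:
# "C" consistent, "I" inconsistent, "N" neutral / not applicable.
evidence = {
    "E1: intercepted order to move units": {"H1": "C", "H2": "I", "H3": "C"},
    "E2: no unusual logistics activity":   {"H1": "I", "H2": "C", "H3": "C"},
    "E3: source reports attack rehearsal": {"H1": "C", "H2": "I", "H3": "N"},
}

# The idea is to refute: count the evidence *against* each hypothesis and
# prefer the hypothesis with the fewest inconsistencies, not the most support.
against = {h: sum(1 for ratings in evidence.values() if ratings[h] == "I")
           for h in hypotheses}

for h, count in sorted(against.items(), key=lambda kv: kv[1]):
    print(f"{h} ({hypotheses[h]}): {count} item(s) of evidence against")
```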

 

Premortem Analysis and Structured Self-Critique:  These two easy-to-use techniques enable a small team of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions. Premortem Analysis uses a form of reframing, in which restating the question or problem from another perspective enables one to see it in a different way and come up with different answers.

 

With Structured Self-Critique, analysts respond to a list of questions about a variety of factors, including sources of uncertainty, analytic processes that were used, critical assumptions, diagnosticity of evidence, information gaps, and the potential for deception. Rigorous use of both of these techniques can help prevent a future need for a postmortem.

 

What If? Analysis: One imagines that an unexpected event has happened and then, with the benefit of “hindsight,” analyzes how it could have happened and considers the potential consequences. This type of exercise creates an awareness that prepares the analyst’s mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency.

 

3.3 COMMON ERRORS IN SELECTING TECHNIQUES

 

The value and accuracy of an analytic product depend in part upon selection of the most appropriate technique or combination of techniques for doing the analysis… Lacking effective guidance, analysts are vulnerable to various influences:

 

  • College or graduate-school recipe: Analysts are inclined to use the tools they learned in college or graduate school whether or not those tools are the best application for the different context of intelligence analysis.
  • Tool rut: Analysts are inclined to use whatever tool they already know or have readily available. Psychologist Abraham Maslow observed that “if the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.”
  • Convenience shopping: The analyst, guided by the evidence that happens to be available, uses a method appropriate for that evidence, rather than seeking out the evidence that is really needed to address the intelligence issue. In other words, the evidence may sometimes drive the technique selection instead of the analytic need driving the evidence collection.
  • Time constraints: Analysts can easily be overwhelmed by their in-boxes and the myriad tasks they have to perform in addition to their analytic workload. The temptation is to avoid techniques that would “take too much time.”

 

3.4 ONE PROJECT, MULTIPLE TECHNIQUES

 

Multiple techniques can also be used to check the accuracy and increase the confidence in an analytic conclusion. Research shows that forecasting accuracy is increased by combining “forecasts derived from methods that differ substantially and draw from different sources of information.”

 

3.5 STRUCTURED TECHNIQUE SELECTION GUIDE

Analysts must be able, with minimal effort, to identify and learn how to use those techniques that best meet their needs and fit their styles.

 

4 Decomposition and Visualization

 

Two common approaches for coping with this limitation of our working memory are decomposition—that is, breaking down the problem or issue into its component parts so that each part can be considered separately—and visualization—placing all the parts on paper or on a computer screen in some organized manner designed to facilitate understanding how the various parts interrelate.

 

Any technique that gets a complex thought process out of the analyst’s head and onto paper or the computer screen can be helpful. The use of even a simple technique such as a checklist can be extremely productive.

 

Analysis is breaking information down into its component parts. Anything that has parts also has a structure that relates these parts to each other. One of the first steps in doing analysis is to determine an appropriate structure for the analytic problem, so that one can then identify the various parts and begin assembling information on them. Because there are many different kinds of analytic problems, there are also many different ways to structure analysis.

—Richards J. Heuer Jr., The Psychology of Intelligence Analysis (1999).

 

Overview of Techniques

 

Getting Started Checklist, Customer Checklist, and Issue Redefinition are three techniques that can be combined to help analysts launch a new project. If an analyst can start off in the right direction and avoid having to change course later, a lot of time can be saved.

 

Chronologies and Timelines are used to organize data on events or actions. They are used whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps.

 

Sorting is a basic technique for organizing data in a manner that often yields new insights. Sorting is effective when information elements can be broken out into categories or subcategories for comparison by using a computer program, such as a spreadsheet.

 

Ranking, Scoring, and Prioritizing provide how-to guidance on three different ranking techniques—Ranked Voting, Paired Comparison, and Weighted Ranking. Combining an idea-generation technique such as Structured Brainstorming with a ranking technique is an effective way for an analyst to start a new project or to provide a foundation for interoffice or interagency collaboration. The idea-generation technique is used to develop lists of driving forces, variables to be considered, indicators, possible scenarios, important players, historical precedents, sources of information, questions to be answered, and so forth. Such lists are even more useful once they are ranked, scored, or prioritized to determine which items are most important, most useful, most likely, or should be at the top of the priority list.

 

Matrices are generic analytic tools for sorting and organizing data in a manner that facilitates comparison and analysis. They are used to analyze the relationships among any two sets of variables or the interrelationships among a single set of variables. A Matrix consists of a grid with as many cells as needed for whatever problem is being analyzed. Some analytic topics or problems that use a matrix occur so frequently that they are described in this book as separate techniques.

 

Network Analysis is used extensively by counterterrorism, counternarcotics, counterproliferation, law enforcement, and military analysts to identify and monitor individuals who may be involved in illegal activity. Social Network Analysis is used to map and analyze relationships among people, groups, organizations, computers, Web sites, and any other information processing entities.

 

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest.

 

Process Maps and Gantt Charts were developed for use in industry and the military, but they are also useful to intelligence analysts. Process Mapping is a technique for identifying and diagramming each step in a complex process; this includes event flow charts, activity flow charts, and commodity flow charts.

 

4.1 GETTING STARTED CHECKLIST

 

The Method

Analysts should answer several questions at the beginning of a new project. The following is our list of suggested starter questions, but there is no single best way to begin. Other lists can be equally effective.

 

  • What has prompted the need for the analysis? For example, was it a news report, a new intelligence report, a new development, a perception of change, or a customer request?
  • What is the key intelligence question that needs to be answered?
  • Why is this issue important, and how can analysis make a meaningful contribution?
  • Has your organization or any other organization ever answered this question or a similar question before, and, if so, what was said? To whom was this analysis delivered, and what has changed since that time?
  • Who are the principal customers? Are these customers’ needs well understood? If not, try to gain a better understanding of their needs and the style of reporting they like.
  • Are there other stakeholders who would have an interest in the answer to this question? Who might see the issue from a different perspective and prefer that a different question be answered? Consider meeting with others who see the question from a different perspective.
  • From your first impressions, what are all the possible answers to this question? For example, what alternative explanations or outcomes should be considered before making an analytic judgment on the issue?
  • Depending on responses to the previous questions, consider rewording the key intelligence question. Consider adding subordinate or supplemental questions.
  • Generate a list of potential sources or streams of reporting to be explored.
  • Reach out and tap the experience and expertise of analysts in other offices or organizations—both within and outside the government—who are knowledgeable on this topic. For example, call a meeting or conduct a virtual meeting to brainstorm relevant evidence and to develop a list of alternative hypotheses, driving forces, key indicators, or important players.

 

4.2 CUSTOMER CHECKLIST

 

The Customer Checklist helps an analyst tailor the product to the needs of the principal customer for the analysis. When used appropriately, it ensures that the product is of maximum possible value to this customer.

 

The Method

Before preparing an outline or drafting a paper, ask the following questions:
  • Who is the key person for whom the product is being developed?
  • Will this product answer the question the customer asked or the question the customer should be asking? If necessary, clarify this before proceeding.
  • What is the most important message to give this customer?
  • How is the customer expected to use this information?
  • How much time does the customer have to digest this product?
  • What format would convey the information most effectively?
  • Is it possible to capture the essence in one or a few key graphics?
  • What classification is most appropriate for this product? Is it necessary to consider publishing the paper at more than one classification level?
  • What is the customer’s level of tolerance for technical language? How much detail would the customer expect? Can the details be provided in appendices or backup papers, graphics, notes, or pages?
  • Will any structured analytic technique be used? If so, should it be flagged in the product?
  • Would the customer expect you to reach out to other experts within or outside the Intelligence Community to tap their expertise in drafting this paper? If this has been done, how has the contribution of other experts been flagged in the product? In a footnote? In a source list?
  • To whom or to what source might the customer turn for other views on this topic? What data or analysis might others provide that could influence how the customer reacts to what is being prepared in this product?

 

 

4.3 ISSUE REDEFINITION

 

 

Many analytic projects start with an issue statement. What is the issue, why is it an issue, and how will it be addressed? Issue Redefinition is a technique for experimenting with different ways to define an issue. This is important, because seemingly small differences in how an issue is defined can have significant effects on the direction of the research.

 

When to Use It

Using Issue Redefinition at the beginning of a project can get you started off on the right foot. It may also be used at any point during the analytic process when a new hypothesis or critical new evidence is introduced. Issue Redefinition is particularly helpful in preventing “mission creep,” which results when analysts unwittingly take the direction of analysis away from the core intelligence question or issue at hand, often as a result of the complexity of the problem or a perceived lack of information.

 

Value Added

Proper issue identification can save a great deal of time and effort by forestalling unnecessary research and analysis on a poorly stated issue. Issues are often poorly presented when they are:

 

  • Solution driven (Where are the weapons of mass destruction in Iraq?)
  • Assumption driven (When China launches rockets into Taiwan, will the Taiwanese government collapse?)
  • Too broad or ambiguous (What is the status of Russia’s air defense system?)
  • Too narrow or misdirected (Who is voting for President Hugo Chávez in the election?)

 

The Method

 

* Rephrase: Redefine the issue without losing the original meaning. Review the results to see if they provide a better foundation upon which to conduct the research and assessment to gain the best answer. Example: Rephrase the original question, “How much of a role does Aung San Suu Kyi play in the ongoing unrest in Burma?” as, “How active is the National League for Democracy, headed by Aung San Suu Kyi, in the antigovernment riots in Burma?”

 

* Ask why? Ask a series of “why” or “how” questions about the issue definition. After receiving the first response, ask “why” to do that or “how” to do it. Keep asking such questions until you are satisfied that the real problem has emerged. This process is especially effective in generating possible alternative answers.

 

* Broaden the focus: Instead of focusing on only one piece of the puzzle, step back and look at several pieces together. What is the issue connected to? Example: The original question, “How corrupt is the Pakistani president?” leads to the question, “How corrupt is the Pakistani government as a whole?”

 

* Narrow the focus: Can you break down the issue further? Take the question and ask about the components that make up the problem. Example: The original question, “Will the European Union ratify a new constitution?” can be broken down to, “How do individual member states view the new European Union constitution?”

 

* Redirect the focus: What outside forces impinge on this issue? Is deception involved? Example: The original question, “What are the terrorist threats against the U.S. homeland?” is revised to, “What opportunities are there to interdict terrorist plans?”

 

* Turn 180 degrees: Turn the issue on its head. Is the issue the one asked or the opposite of it? Example: The original question, “How much of the ground capability of China’s People’s Liberation Army would be involved in an initial assault on Taiwan?” is rephrased as, “How much of the ground capability of China’s People’s Liberation Army would not be involved in the initial Taiwan assault?”

 

Relationship to Other Techniques

 

Issue Redefinition is often used simultaneously with the Getting Started Checklist and the Customer Checklist. The technique is also known as Issue Development, Problem Restatement, and Reframing the Question.

 

4.4 CHRONOLOGIES AND TIMELINES

 

When to Use It

Chronologies and timelines aid in organizing events or actions. Whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps, these techniques can be useful. The events may or may not have a cause-and-effect relationship.

 

Value Added

Chronologies and timelines aid in the identification of patterns and correlations among events. These techniques also allow you to relate seemingly disconnected events to the big picture to highlight or identify significant changes or to assist in the discovery of trends, developing issues, or anomalies. They can serve as a catch-all for raw data when the meaning of the data has not yet been identified. Multiple-level timelines allow analysts to track concurrent events that may have an effect on each other. Although timelines may be developed at the onset of an analytic task to ascertain the context of the activity to be analyzed, timelines and chronologies also may be used in postmortem intelligence studies to break down the intelligence reporting, find the causes for intelligence failures, and highlight significant events after an intelligence surprise.

 

When researching the problem, ensure that the relevant information is listed with the date or order in which it occurred. Make sure the data are properly referenced.
Review the chronology or timeline by asking the following questions.

  • What are the temporal distances between key events? If they are “lengthy,” what caused the delay? Are there missing pieces of data that, if collected, might fill those gaps? (A minimal sketch of computing such gaps appears after this list.)
  • Did the analyst overlook pieces of intelligence information that may have had an impact on, or be related to, the events?
  • Conversely, if events seem to have happened more rapidly than expected, or if not all events appear to be related, is it possible that the analyst has information related to multiple event timelines?
  • Does the timeline have all the critical events that are necessary for the outcome to occur?
  • When did the information become known to the analyst or a key player?
  • What are the intelligence gaps?
  • Are there any points along the timeline when the target is particularly vulnerable to U.S. intelligence collection activities or countermeasures?
  • What events outside this timeline could have influenced the activities?
  • If preparing a timeline, synopsize the data along a line, usually horizontal or vertical. Use the space on both sides of the line to highlight important analytic points. For example, place facts above the line and points of analysis or commentary below the line.
  • Alternatively, contrast the activities of different groups, organizations, or streams of information by placement above or below the line. If multiple actors are involved, you can use multiple lines, showing how and where they converge.
  • Look for relationships and patterns in the data connecting persons, places, organizations, and other activities. Identify gaps or unexplained time periods, and consider the implications of the absence of evidence. Prepare a summary chart detailing key events and key analytic points in an annotated timeline.
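
A minimal sketch of the gap-spotting step above, using hypothetical reporting: events are sorted by date to build the chronology, and the elapsed time between consecutive events is computed so that unusually long intervals stand out as possible collection gaps. The 60-day threshold is an arbitrary placeholder.

```python
from datetime import date

# Hypothetical reported events: (date, description, source reference).
events = [
    (date(2009, 3, 2),  "shipment leaves port",       "report A/17"),
    (date(2009, 1, 14), "contract signed",            "report A/09"),
    (date(2009, 6, 30), "equipment observed at site", "imagery B/44"),
]

events.sort(key=lambda e: e[0])  # build the chronology

# Flag long intervals between consecutive events as possible gaps.
for (d1, what1, _), (d2, what2, _) in zip(events, events[1:]):
    gap = (d2 - d1).days
    flag = "  <-- possible collection gap?" if gap > 60 else ""
    print(f"{d1} -> {d2}: {gap} days between '{what1}' and '{what2}'{flag}")
```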

 

 

4.5 SORTING

 

When to Use It

Sorting is effective when information elements can be broken out into categories or subcategories for comparison with each other, most often by using a computer program, such as a spreadsheet. This technique is particularly effective during the initial data gathering and hypothesis generation phases of analysis, but you may also find sorting useful at other times.

Value Added

Sorting large amounts of data into relevant categories that are compared with each other can provide analysts with insights into trends, similarities, differences, or abnormalities of intelligence interest that otherwise would go unnoticed. When you are dealing with transactions data in particular (for example, communications intercepts or transfers of goods or money), it is very helpful to sort the data first.

 

The Method

Follow these steps:

* Review the categories of information to determine which category or combination of categories might show trends or an abnormality that would provide insight into the problem you are studying. Place the data into a spreadsheet or a database using as many fields (columns) as necessary to differentiate among the data types (dates, times, locations, people, activities, amounts, etc.). List each of the facts, pieces of information, or hypotheses involved in the problem that are relevant to your sorting schema. (Use paper, whiteboard, movable sticky notes, or other means for this.)

* Review the listed facts, information, or hypotheses in the database or spreadsheet to identify key fields that may allow you to uncover possible patterns or groupings. Those patterns or groupings then illustrate the schema categories and can be listed as header categories. For example, if an examination of terrorist activity shows that most attacks occur in hotels and restaurants but that the times of the attacks vary, “Location” is the main category, while “Date” and “Time” are secondary categories (see the sketch after these steps).

  • Group those items according to the sorting schema in the categories that were defined in step 1.
  • Choose a category and sort the data within that category. Look for any insights, trends, or oddities.

Good analysts notice trends; great analysts notice anomalies.

* Review (or ask others to review) the sorted facts, information, or hypotheses to see if there are alternative ways to sort them. List any alternative sorting schema for your problem. One of the most useful applications for this technique is to sort according to multiple schemas and examine results for correlations between data and categories. But remember that correlation is not the same as causation.
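
A minimal sketch of the hotel/restaurant example above, using hypothetical incident records and only the standard library; a spreadsheet or database serves the same purpose in practice.

```python
from collections import Counter

# Hypothetical incident records with the fields chosen in step 1.
incidents = [
    {"date": "2009-04-02", "time": "21:15", "location": "hotel",      "city": "X"},
    {"date": "2009-04-19", "time": "13:40", "location": "restaurant", "city": "Y"},
    {"date": "2009-05-07", "time": "20:55", "location": "hotel",      "city": "X"},
    {"date": "2009-05-23", "time": "12:10", "location": "restaurant", "city": "X"},
]

# Sort on the primary category ("location"), then on the secondary categories.
for row in sorted(incidents, key=lambda r: (r["location"], r["date"], r["time"])):
    print(row)

# A quick frequency count often surfaces the pattern (and the anomalies).
print(Counter(r["location"] for r in incidents))
```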

 

Origins of This Technique

Sorting is a long-established procedure for organizing data. The description here is from Defense Intelligence Agency training materials.

 

 

4.6 RANKING, SCORING, PRIORITIZING

 

When to Use It

 

A ranking technique is appropriate whenever there are too many items to rank easily just by looking at the list; the ranking has significant consequences and must be done as accurately as possible; or it is useful to aggregate the opinions of a group of analysts.

 

Value Added

 

Combining an idea-generation technique with a ranking technique is an excellent way for an analyst to start a new project or to provide a foundation for inter-office or interagency collaboration. An idea-generation technique is often used to develop lists of such things as driving forces, variables to be considered, or important players. Such lists are more useful once they are ranked, scored, or prioritized.

 

Ranked Voting

In a Ranked Voting exercise, members of the group individually rank each item in order according to the member’s preference or what the member regards as the item’s importance.
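
A minimal sketch of one common way to aggregate such rankings, summing each item's rank positions so the lowest total wins. The ballots are hypothetical, and this particular aggregation rule is an assumption rather than a procedure prescribed in the text.

```python
# Each analyst ranks the items from most to least important (1 = top choice).
ballots = {
    "analyst_1": {"A": 1, "B": 2, "C": 3, "D": 4},
    "analyst_2": {"B": 1, "A": 2, "D": 3, "C": 4},
    "analyst_3": {"B": 1, "D": 2, "A": 3, "C": 4},
}

# Sum the rank positions for each item; a lower total means a higher group ranking.
totals = {}
for ranking in ballots.values():
    for item, rank in ranking.items():
        totals[item] = totals.get(item, 0) + rank

for item, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{item}: total rank score {total}")
```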

 

Paired Comparison

Paired Comparison compares each item against every other item, and the analyst can assign a score to show how much more important or more preferable or more probable one item is than the others. This technique provides more than a simple ranking, as it shows the degree of importance or preference for each item. The list of items can then be ordered along a dimension, such as importance or preference, using an interval-type scale.

Follow these steps to use the technique:

  • List the items to be compared. Assign a letter to each item.
  • Create a table with the letters across the top and down the left side as in Figure 4.6a. The results of the comparison of each pair of items are marked in the cells of this table. Note the diagonal line of darker-colored cells. These cells are not used, as each item is never compared with itself. The cells below this diagonal line are not used because they would duplicate a comparison in the cells above the diagonal line. If you are working in a group, distribute a blank copy of this table to each participant.
  • Looking at the cells above the diagonal row of gray cells, compare the item in the row with the one in the column. For each cell, decide which of the two items is more important (or more preferable or more probable). Write the letter of the winner of this comparison in the cell, and score the degree of difference on a scale from 0 (no difference) to 3 (major difference) as in Figure 4.6a.
  • Consolidate the results by adding up the total of all the values for each of the items and put this number in the “Score” column. For example, in Figure 4.6a item B has one 3 in the first row plus one 2, and two 1s in the second row, for a score of 7.
  • Finally, it may be desirable to convert these values into a percentage of the total score. To do this, divide the score for each individual item by the total of all the scores (20 in the example). Item B, with a score of 7, is ranked most important or most preferred. Item B received a score of 35 percent (7 divided by 20) as compared with 25 percent for item D and only 5 percent each for items C and E, which received only one vote each. This example shows how Paired Comparison captures the degree of difference between each ranking. (The arithmetic is reproduced in the sketch after this list.)
  • To aggregate rankings received from a group of analysts, simply add the individual scores for each analyst.
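
A minimal sketch of the scoring arithmetic described above. The pairwise results are hypothetical stand-ins for Figure 4.6a, which is not reproduced here; the point is only how the winners' difference scores are summed and converted to percentages.

```python
from collections import defaultdict

# Hypothetical pairwise results: for each pair, the preferred item and the
# degree of difference on the 0-3 scale described above.
pairwise = {
    ("A", "B"): ("B", 3),
    ("A", "C"): ("A", 2),
    ("A", "D"): ("D", 1),
    ("B", "C"): ("B", 2),
    ("B", "D"): ("B", 2),
    ("C", "D"): ("D", 1),
}

scores = defaultdict(int)
for (left, right), (winner, degree) in pairwise.items():
    scores[winner] += degree
    scores.setdefault(left, 0)   # items that never win still appear with a score of 0
    scores.setdefault(right, 0)

total = sum(scores.values())
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: score {score} ({score / total:.0%} of the total)")
```

To aggregate across a group, run the same tally for each analyst's table and add the resulting scores, as the last step above notes.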

 

Weighted Ranking

In Weighted Ranking, a specified set of criteria is used to rank items. The analyst creates a table with the items to be ranked listed across the top row and the criteria for ranking these items listed down the far left column.

* Create a table with one column for each item. At the head of each column, write the name of an item or assign it a letter to save space.

* Add two more blank columns on the left side of this table. Count the number of selection criteria, and then adjust the table so that it has that number of rows plus three more, one at the top to list the items and two at the bottom to show the raw scores and percentages for each item. In the first column on the left side, starting with the second row, write in all the selection criteria down the left side of the table. There is some value in listing the criteria roughly in order of importance, but that is not critical. Leave the bottom two rows blank for the scores and percentages.

* Now work down the far left hand column assigning weights to the selection criteria based on their relative importance for judging the ranking of the items. Depending upon how many criteria there are, take either 10 points or 100 points and divide these points between the selection criteria based on what is believed to be their relative importance in ranking the items. In other words, ask what percentage of the decision should be based on each of these criteria? Be sure that the weights for all the selection criteria combined add up to either 10 or 100, whichever is selected. Also be sure that all the criteria are phrased in such a way that a higher weight is more desirable.

  • Work across the rows to write the criterion weight in the left side of each cell.
  • Next, work across the matrix one row (selection criterion) at a time to evaluate the relative ability of each of the items to satisfy that selection criterion. Use a ten-point rating scale, where 1 = low and 10 = high, to rate each item separately. (Do not spread the ten points proportionately across all the items as was done to assign weights to the criteria.) Write this rating number after the criterion weight in the cell for each item.

 

* Again, work across the matrix one row at a time to multiply the criterion weight by the item rating for that criterion, and enter this number for each cell as shown in Figure 4.6b.

* Now add the columns for all the items. The result will be a ranking of the items from highest to lowest score. To gain a better understanding of the relative ranking of one item as compared with another, convert these raw scores to percentages. To do this, first add together all the scores in the “Totals” row to get a total number. Then divide the score for each item by this total score to get a percentage ranking for each item. All the percentages together must add up to 100 percent. In Figure 4.6b it is apparent that item B has the number one ranking (with 20.3 percent), while item E has the lowest (with 13.2 percent). (The weighted-sum arithmetic is illustrated in the sketch after these steps.)
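
A minimal sketch of that weighted-sum arithmetic. The criteria, weights, and ratings are hypothetical stand-ins for Figure 4.6b, which is not reproduced here.

```python
# Criterion weights: 100 points divided among the criteria by relative importance.
weights = {"cost": 40, "feasibility": 35, "timeliness": 25}
assert sum(weights.values()) == 100

# Each item is rated 1 (low) to 10 (high) against each criterion.
ratings = {
    "A": {"cost": 6, "feasibility": 7, "timeliness": 4},
    "B": {"cost": 8, "feasibility": 9, "timeliness": 7},
    "C": {"cost": 5, "feasibility": 4, "timeliness": 9},
}

# Multiply each rating by its criterion weight and sum down each item's column.
totals = {item: sum(weights[c] * r[c] for c in weights) for item, r in ratings.items()}

grand_total = sum(totals.values())
for item, score in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: raw score {score}, {score / grand_total:.1%} of the total")
```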

 

4.7 MATRICES


A matrix is an analytic tool for sorting and organizing data in a manner that facilitates comparison and analysis. It consists of a simple grid with as many cells as needed for whatever problem is being analyzed.

 

When to Use It

Matrices are used to analyze the relationships between any two sets of variables or the interrelationships among a single set of variables. Among other things, they enable analysts to:

  • Compare one type of information with another.
  • Compare pieces of information of the same type.
  • Categorize information by type.
  • Identify patterns in the information.
  • Separate elements of a problem.

A matrix is such an easy and flexible tool to use that it should be one of the first tools analysts think of when dealing with a large body of data. One limiting factor in the use of matrices is that information must be organized along only two dimensions.

 

Value Added

Matrices provide a visual representation of a complex set of data. By presenting information visually, a matrix enables analysts to deal effectively with more data than they could manage by juggling various pieces of information in their head. The analytic problem is broken down to component parts so that each part (that is, each cell in the matrix) can be analyzed separately, while ideally maintaining the context of the problem as a whole.

 

The Method

A matrix is a tool that can be used in many different ways and for many different purposes. What matrices have in common is that each has a grid with sufficient columns and rows for you to enter two sets of data that you want to compare. Organize the category headings for each set of data in some logical sequence before entering the headings for one set of data in the top row and the headings for the other set in the far left column. Then enter the data in the appropriate cells.
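
A minimal sketch of that generic grid, with hypothetical row and column headings; each cell simply holds whatever observation or judgment relates the two headings.

```python
# Two sets of category headings to be compared (hypothetical).
rows = ["Group X", "Group Y"]               # e.g., actors
cols = ["funding", "weapons", "recruits"]   # e.g., resources

# The matrix is a grid with one cell per (row, column) pair.
matrix = {r: {c: "" for c in cols} for r in rows}
matrix["Group X"]["funding"] = "diaspora donations (report C/3)"
matrix["Group Y"]["weapons"] = "smuggled via border crossing (report D/8)"

# Print the grid as a simple table.
print("".ljust(10) + "".join(c.ljust(35) for c in cols))
for r in rows:
    print(r.ljust(10) + "".join(matrix[r][c].ljust(35) for c in cols))
```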

 

4.8 NETWORK ANALYSIS

 

Network Analysis is the review, compilation, and interpretation of data to determine the presence of associations among individuals, groups, businesses, or other entities; the meaning of those associations to the people involved; and the degrees and ways in which those associations can be strengthened or weakened. It is the best method available to help analysts understand and identify opportunities to influence the behavior of a set of actors about whom information is sparse. In the fields of law enforcement and national security, information used in Network Analysis usually comes from informants or from physical or technical surveillance.

 

 

Analysis of networks is broken down into three stages, and analysts can stop at the stage that answers their questions.

* Network Charting is the process of and associated techniques for identifying people, groups, things, places, and events of interest (nodes) and drawing connecting lines (links) between them on the basis of various types of association. The product is often referred to as a Link Chart.

* Network Analysis is the process and techniques that take the chart and strive to make sense of the data represented by the chart by grouping associations (sorting) and identifying patterns in and among those groups.

* Social Network Analysis (SNA) is the mathematical measuring of variables related to the distance between nodes and the types of associations in order to derive even more meaning from the chart, especially about the degree and type of influence one node has on another.

When to Use It

Network Analysis is used extensively in law enforcement, counterterrorism analysis, and analysis of transnational issues such as narcotics and weapons proliferation to identify and monitor individuals who may be involved in illegal activity.

 

 

Value Added

Network Analysis has proved to be highly effective in helping analysts identify and understand patterns of organization, authority, communication, travel, financial transactions, or other interactions between people or groups that are not apparent from isolated pieces of information. It often identifies key leaders, information brokers, or sources of funding.

 

Potential Pitfalls

This method is extremely dependent upon having at least one good source of information. It is hard to know when information may be missing, and the boundaries of the network may be fuzzy and constantly changing, in which case it is difficult to determine whom to include. The constantly changing nature of networks over time can cause information to become outdated.

 

The Method

Analysis of networks attempts to answer the question “Who is related to whom and what is the nature of their relationship and role in the network?” The basic network analysis software identifies key nodes and shows the links between them. SNA software measures the frequency of flow between links and explores the significance of key attributes of the nodes. We know of no software that does the intermediate task of grouping nodes into meaningful clusters, though algorithms do exist and are used by individual analysts. In all cases, however, you must interpret what is represented, looking at the chart to see how it reflects organizational structure, modes of operation, and patterns of behavior.
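
As a minimal sketch of this node-and-link bookkeeping, the snippet below uses the open-source networkx library (one freely available option, not software named in the text) and hypothetical contact reporting. Centrality scores are one simple way to surface the key nodes and potential brokers discussed in the steps that follow.

```python
import networkx as nx

# Hypothetical links: each edge is a reported association between two people.
contacts = [
    ("Ali", "Omar"), ("Ali", "Samir"), ("Omar", "Samir"),  # dense cluster (possible cell)
    ("Samir", "Karim"),                                    # Karim bridges the two clusters
    ("Karim", "Nadia"), ("Nadia", "Tariq"), ("Karim", "Tariq"),
]

G = nx.Graph()
G.add_edges_from(contacts)

# Degree centrality: who has the most direct links.
# Betweenness centrality: who sits on the paths between others (brokers/facilitators).
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

for node in G.nodes:
    print(f"{node}: degree {degree[node]:.2f}, betweenness {betweenness[node]:.2f}")
```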

 

Network charting usually involves the following steps.

  • Identify at least one reliable source or stream of data to serve as a beginning point. Identify, combine, or separate nodes within this reporting.
  • List each node in a database, association matrix, or software program.
  • Identify interactions among individuals or groups.
  • List interactions by type in a database, association matrix, or software program.
  • Identify each node and interaction by some criterion that is meaningful to your analysis. These criteria often include frequency of contact, type of contact, type of activity, and source of information.
  • Draw the connections between nodes—connect the dots—on a chart by hand, using a computer drawing tool, or using Network Analysis software.
  • Work out from the central nodes, adding links and nodes until you run out of information from the good sources.
  • Add nodes and links from other sources, constantly checking them against the information you already have. Follow all leads, whether they are people, groups, things, or events, and regardless of source. Make note of the sources.
  • Stop in these cases: when you run out of information, when all of the new links are dead ends, when all of the new links begin to turn in on each other like a spider web, or when you run out of time.
  • Update the chart and supporting documents regularly as new information becomes available, or as you have time.
  • Rearrange the nodes and links so that the links cross over each other as little as possible.
  • Cluster the nodes. Do this by looking for “dense” areas of the chart and relatively “empty” areas. Draw shapes around the dense areas. Use a variety of shapes, colors, and line styles to denote different types of clusters, your relative confidence in the cluster, or any other criterion you deem important.
  • Cluster the clusters, if you can, using the same method.
  • Label each cluster according to the common denominator among the nodes it contains. In doing this you will identify groups, events, activities, and/or key locations. If you have in mind a model for groups or activities, you may be able to identify gaps in the chart by what is or is not present that relates to the model.
  • Look for “cliques”—a group of nodes in which every node is connected to every other node, though not to many nodes outside the group. These groupings often look like stars or pentagons. In the intelligence world, they often turn out to be clandestine cells.
  • Look in the empty spaces for nodes or links that connect two clusters. Highlight these nodes with shapes or colors. These nodes are brokers, facilitators, leaders, advisers, media, or some other key connection that bears watching. They are also points where the network is susceptible to disruption.
  • Chart the flow of activities between nodes and clusters. You may want to use arrows and time stamps. Some software applications will allow you to display dynamically how the chart has changed over time. Analyze this flow. Does it always go in one direction or in multiple directions? Are the same or different nodes involved? How many different flows are there? What are the pathways? By asking these questions, you can often identify activities, including indications of preparation for offensive action and lines of authority. You can also use this knowledge to assess the resiliency of the network. If one node or pathway were removed, would there be alternatives already built in?
  • Continually update and revise as nodes or links change.

 

 

4.9 MIND MAPS AND CONCEPT MAPS

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest. Such a diagram has two basic elements: the ideas that are judged relevant to whatever topic one is thinking about, and the lines that show and briefly describe the connections between these ideas.

Whenever you think about a problem, develop a plan, or consider making even a very simple decision, you are putting a series of thoughts together. That series of thoughts can be represented visually with words or images connected by lines that represent the nature of the relationship between them. Any thinking for any purpose, whether about a personal decision or analysis of an intelligence issue, can be diagrammed in this manner.

Mind Maps and Concept Maps can be used by an individual or a group to help sort out their own thinking and to achieve a shared understanding of key concepts.

After participating in this group process to define the problem, the group should be better able to identify what further research needs to be done and to parcel out additional work among its best-qualified members. The group should also be better able to prepare a report that represents, as fully as possible, its collective wisdom.

The Method

Start a Mind Map or Concept Map with a focal question that defines what is to be included. Then follow these steps:

  • Make a list of concepts that relate in some way to the focal question.
  • Starting with the first dozen or so concepts, sort them into groupings within the diagram space in some logical manner. These groups may be based on things they have in common or on their status as either direct or indirect causes of the matter being analyzed.
  • Begin making links between related concepts, starting with the most general concepts. Use lines with arrows to show the direction of the relationship. The arrows may go in either direction or in both directions.
  • Choose the most appropriate words for describing the nature of each relationship. The lines might be labeled with words such as “causes,” “influences,” “leads to,” “results in,” “is required by,” or “contributes to.” Selecting good linking phrases is often the most difficult step.
  • While building all the links between the concepts and the focal question, look for and enter crosslinks between concepts.
  • Don’t be surprised if, as the map develops, you discover that you are now diagramming a different focal question from the one you started with. This can be a good thing. The purpose of the focal question is not to lock down the topic but to get the process going.
  • Finally, reposition, refine, and expand the map structure as appropriate.

Mind Mapping has only one main or central idea, and all other ideas branch off from it radially in all directions. The central idea is preferably shown as an image rather than in words, and images are used throughout the map. “Around the central word you draw the 5 or 10 main ideas that relate to that word. You then take each of those child words and again draw the 5 or 10 main ideas that relate to each of those words.” A Concept Map has a more flexible form. It can have multiple hubs and clusters. It can also be designed around a central idea, but it does not have to be and often is not designed that way. It does not normally use images. A Concept Map is usually shown as a network, although it too can be shown as a hierarchical structure like Mind Mapping when that is appropriate. Concept Maps can be very complex and are often meant to be viewed on a large-format screen.

 

4.10 PROCESS MAPS AND GANTT CHARTS

Process Mapping is an umbrella term that covers a variety of procedures for identifying and depicting visually each step in a complex procedure. It includes flow charts of various types (Activity Flow Charts, Commodity Flow Charts, Causal Flow Charts), Relationship Maps, and Value Stream Maps commonly used to assess and plan improvements for business and industrial processes. A Gantt Chart is a specific type of Process Map that was developed to facilitate the planning, scheduling, and management of complex industrial projects.

When to Use It

Process Maps, including Gantt Charts, are used by intelligence analysts to track, understand, and monitor the progress of activities of intelligence interest being undertaken by a foreign government, a criminal or terrorist group, or any other nonstate actor. For example, a Process Map can be used to monitor progress in developing a new weapons system, preparations for a major military action, or the execution of any other major plan that involves a sequence of observable steps. It is often used to identify and describe the modus operandi of a criminal or terrorist group, including the preparatory steps that such a group typically takes prior to a major action.

Value Added

The process of constructing a Process Map or a Gantt Chart helps analysts think clearly about what someone else needs to do to complete a complex project.

When a complex plan or process is understood well enough to be diagrammed or charted, analysts can then answer questions such as the following: What are they doing? How far along are they? What do they still need to do? What resources will they need to do it? How much time do we have before they have this capability? Is there any vulnerable point in this process where they can be stopped or slowed down?

The Process Map or Gantt Chart is a visual aid for communicating this information to the customer. If sufficient information can be obtained, the analyst’s understanding of the process will lead to a set of indicators that can be used to monitor the status of an ongoing plan or project.

The Method

There is a substantial difference in appearance between a Process Map and a Gantt Chart. In a Process Map, the steps in the process are diagrammed sequentially, with various symbols representing starting and end points, decisions, and actions connected by arrows; such diagrams can be created with readily available software such as Microsoft Visio. In a Gantt Chart, tasks are listed down the left side, time is shown across the top, and horizontal bars show when each task is expected to start and end.

Example

The Intelligence Community has considerable experience monitoring terrorist groups. This example describes how an analyst would go about creating a Gantt Chart of a generic terrorist attack-planning process (see Figure 4.10). The analyst starts by making a list of all the tasks that terrorists must complete, estimating the schedule for when each task will be started and finished, and determining what resources are needed for each task. Some tasks need to be completed in a sequence, with each task being more-or-less completed before the next activity can begin. These are called sequential, or linear, activities. Other activities are not dependent upon completion of any other tasks. These may be done at any time before or after a particular stage is reached. These are called nondependent, or parallel, tasks.

Note whether each terrorist task to be performed is sequential or parallel. It is this sequencing of dependent and nondependent activities that is critical in determining how long any particular project or process will take. The more activities that can be worked in parallel, the greater the chances of a project being completed on time. The more tasks that must be done sequentially, the greater the chances of a single bottleneck delaying the entire project.
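As a rough illustration of how the mix of sequential and parallel tasks drives the overall timeline, the following is a minimal sketch in Python. The tasks, durations, and dependencies are invented for illustration and are not taken from Figure 4.10.

    # Minimal sketch of how dependencies drive total project duration, using
    # an invented, generic set of attack-planning tasks (durations in weeks).
    tasks = {                       # task: (duration, prerequisites)
        "recruit team":      (4, []),
        "raise funds":       (6, []),              # parallel with recruiting
        "surveil target":    (3, ["recruit team"]),
        "acquire materials": (2, ["raise funds"]),
        "rehearse":          (1, ["surveil target", "acquire materials"]),
    }

    finish = {}
    def earliest_finish(task):
        # Earliest finish = own duration + latest finish among prerequisites.
        if task not in finish:
            duration, prereqs = tasks[task]
            finish[task] = duration + max((earliest_finish(p) for p in prereqs), default=0)
        return finish[task]

    total = max(earliest_finish(t) for t in tasks)
    print(f"Estimated minimum timeline: {total} weeks")   # the bottleneck path sets the pace

The longest chain of dependent tasks determines the minimum timeline; adding parallel tasks does not lengthen it, but adding a task to the dependent chain does.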

Gantt Charts that map a generic process can also be used to track data about a more specific process as it is received. Information about a specific group’s activities could be layered in by using a different color or line type. Layering in the specific data allows an analyst to compare what is expected with the actual data. The chart can then be used to identify and narrow gaps or anomalies in the data and even to identify and challenge assumptions about what is expected or what is happening.

5.0 Idea Generation

New ideas, and the combination of old ideas in new ways, are essential elements of effective intelligence analysis. Some structured techniques are specifically intended for the purpose of eliciting or generating ideas at the very early stage of a project, and they are the topic of this chapter.

 

Structured Brainstorming is not a group of colleagues just sitting around talking about a problem. Rather, it is a group process that follows specific rules and procedures. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, in law enforcement, potential suspects or avenues of investigation. It requires little training, and is one of the most frequently used structured techniques in the Intelligence Community.

The wiki format—including the ability to upload documents and even hand-drawn graphics or photos —allows analysts to capture and track brainstorming ideas and return to them at a later date.

Nominal Group Technique, often abbreviated NGT, serves much the same function as Structured Brainstorming, but it uses a quite different approach. It is the preferred technique when there is a concern that a senior member or outspoken member of the group may dominate the meeting, that junior members may be reluctant to speak up, or that the meeting may lead to heated debate. Nominal Group Technique encourages equal participation by requiring participants to present ideas one at a time in round-robin fashion until all participants feel that they have run out of ideas.

Starbursting is a form of brainstorming that focuses on generating questions rather than answers. To help in defining the parameters of a research project, use Starbursting to identify the questions that need to be answered. Questions start with the words Who, What, When, Where, Why, and How.

Cross-Impact Matrix is a technique that can be used after any form of brainstorming session that identifies a list of variables relevant to a particular analytic project. The results of the brainstorming session are put into a matrix, which is used to guide a group discussion that systematically examines how each variable influences all other variables to which it is judged to be related in a particular problem context.

Morphological Analysis is useful for dealing with complex, nonquantifiable problems for which little data are available and the chances for surprise are significant. It is a generic method for systematically identifying and considering all possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. It helps prevent surprises in intelligence analysis by generating a large number of outcomes for any complex situation, thus reducing the chance that events will play out in a way that the analyst has not previously imagined and has not at least considered.

Quadrant Crunching is an application of Morphological Analysis that uses key assumptions and their opposites as a starting point for systematically generating a large number of alternative outcomes. For example, an analyst might use Quadrant Crunching to identify the many different ways that a terrorist might attack a water supply. The technique forces analysts to rethink an issue from a broad range of perspectives and systematically question all the assumptions that underlie their lead hypothesis.

5.1 STRUCTURED BRAINSTORMING

When to Use It

Structured Brainstorming is one of the most widely used analytic techniques. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, for law enforcement, potential suspects or avenues of investigation.

 

The Method

There are seven general rules to follow, and then a twelve-step process for Structured Brainstorming. Here are the rules:

  • Be specific about the purpose and the topic of the brainstorming session. Announce the topic beforehand, and ask participants to come to the session with some ideas or to forward them to the facilitator before the session.
  • New ideas are always encouraged. Never criticize an idea during the divergent (creative) phase of the process no matter how weird or unconventional or improbable it might sound. Instead, try to figure out how the idea might be applied to the task at hand.
  • Allow only one conversation at a time, and ensure that everyone has an opportunity to speak.
  • Allocate enough time to do the brainstorming correctly. It often takes one hour to set the rules of the game, get the group comfortable, and exhaust the conventional wisdom on the topic. Only then do truly creative ideas begin to emerge.
  • To avoid groupthink and stimulate divergent thinking, include one or more “outsiders” in the group— that is, astute thinkers who do not share the same body of knowledge or perspective as the other group members but do have some familiarity with the topic.
  • Write it down! Track the discussion by using a whiteboard, an easel, or sticky notes (see Figure 5.1).
  • Summarize the key findings at the end of the session. Ask the participants to write down the most important thing they learned on a 3 x 5 card as they depart the session. Then prepare a short summary and distribute the list to the participants (who may add items to the list) and to others interested in the topic (including supervisors and those who could not attend). Capture these findings and disseminate them to attendees and other interested parties either by e-mail or, preferably, a wiki.
Figure 5.1: Picture of Brainstorming

Once the rules are established, the step-by-step process is as follows:
  • Pass out Post-it or “sticky” notes and Sharpie-type pens or markers to all participants.
  • Pose the problem or topic in terms of a “focal question.” Display this question in one sentence for all to see on a large easel or whiteboard.
  • Ask the group to write down responses to the question with a few key words that will fit on a Post-it.
  • When a response is written down, the participant is asked to read it out loud or to give it to the facilitator who will read it out loud. Sharpie-type pens are used so that people can easily see what is written on the Post-it notes later in the exercise.
  • Stick all the Post-its on a wall in the order in which they are called out. Treat all ideas the same. Encourage participants to build on one another’s ideas.
  • Usually there is an initial spurt of ideas followed by pauses as participants contemplate the question. After five or ten minutes there is often a long pause of a minute or so. This slowing down suggests that the group has “emptied the barrel of the obvious” and is now on the verge of coming up with some fresh insights and ideas. Do not talk during this pause even if the silence is uncomfortable.
  • After two or three long pauses, conclude this divergent thinking phase of the brainstorming session.
  • Ask all participants as a group to go up to the wall and rearrange the Post-its in some organized manner. This arrangement might be by affinity groups (groups that have some common characteristic), scenarios, a predetermined priority scale, or a time sequence. Participants are not allowed to talk during this process. Some Post-its may be moved several times, but they will gradually be clustered into logical groupings. Post-its may be copied if necessary to fit one idea into more than one group.
  • When all Post-its have been arranged, ask the group to select a word or phrase that best describes each grouping.
  • Look for Post-its that do not fit neatly into any of the groups. Consider whether such an outlier is useless noise or the germ of an idea that deserves further attention.
  • Assess what the group has accomplished. Have new ideas or concepts been identified, have key issues emerged, or are there areas that need more work or further brainstorming?
  • To identify the potentially most useful ideas, the facilitator or group leader should establish up to five criteria for judging the value or importance of the ideas. If so desired, then use the Ranking, Scoring, and Prioritizing technique, described in chapter 4, for voting on, ranking, or prioritizing the ideas. (A minimal scoring sketch follows this list.)

  • Set the analytic priorities accordingly, and decide on a work plan for the next steps in the analysis.
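Where the group wants to turn its criteria into a rough ranking, a simple weighted score is one option. The sketch below is illustrative only; the criteria, weights, and ratings are invented and do not reproduce the Ranking, Scoring, and Prioritizing procedure described in chapter 4.

    # Minimal sketch: score brainstormed ideas against up to five weighted criteria.
    criteria = {"impact": 0.4, "feasibility": 0.3, "novelty": 0.2, "cost": 0.1}

    # Ratings on a 1-5 scale, assigned by the group (all values invented).
    ideas = {
        "Idea A": {"impact": 5, "feasibility": 2, "novelty": 4, "cost": 3},
        "Idea B": {"impact": 3, "feasibility": 5, "novelty": 2, "cost": 4},
        "Idea C": {"impact": 4, "feasibility": 4, "novelty": 3, "cost": 2},
    }

    def weighted_score(ratings):
        return sum(criteria[c] * ratings[c] for c in criteria)

    # Print the ideas from highest to lowest weighted score.
    for idea, ratings in sorted(ideas.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
        print(f"{idea}: {weighted_score(ratings):.2f}")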

Relationship to Other Techniques

As discussed under “When to Use It,” some form of brainstorming is commonly combined with a wide variety of other techniques.

Structured Brainstorming is also called Divergent/Convergent Thinking.

Origins of This Technique

Brainstorming was a creativity technique used by advertising agencies in the 1940s. It was popularized in a book by advertising manager Alex Osborn, Applied Imagination: Principles and Procedures of Creative Problem Solving. There are many versions of brainstorming. The description here is a combination of information from Randy Pherson, “Structured Brainstorming,” in Handbook of Analytic Tools and Techniques (Reston, Va.: Pherson Associates, LLC, 2008), and training materials from the CIA’s Sherman Kent School for Intelligence Analysis.

5.2 VIRTUAL BRAINSTORMING

Virtual Brainstorming is the same as Structured Brainstorming except that it is done online with participants who are geographically dispersed or unable to meet in person.

The Method

Virtual Brainstorming is usually a two-phase process. It begins with the divergent process of creating as many relevant ideas as possible. The second phase is one of convergence, in which the ideas are sorted into categories, weeded out, prioritized, or combined and molded into a conclusion or plan of action.

5.3 NOMINAL GROUP TECHNIQUE

Nominal Group Technique (NGT) is a process for generating and evaluating ideas. It is a form of brainstorming, but NGT has always had its own identity as a separate technique.

When to Use It

NGT prevents the domination of a discussion by a single person. Use it whenever there is concern that a senior officer or executive or an outspoken member of the group will control the direction of the meeting by speaking before anyone else.

The Method

An NGT session starts with the facilitator asking an open-ended question, such as, “What factors will influence …?” “How can we learn if …?” “In what circumstances might … happen?” “What should be included or not included in this research project?” The facilitator answers any questions about what is expected of participants and then gives participants five to ten minutes to work privately to jot down on note cards their initial ideas in response to the focal question. This part of the process is followed by these steps:

  • The facilitator calls on one person at a time to present one idea. As each idea is presented, the facilitator writes a summary description on a flip chart or whiteboard. This process continues in a round-robin fashion until all ideas have been exhausted.
  • When no new ideas are forthcoming, the facilitator initiates a group discussion to ensure that there is a common understanding of what each idea means. The facilitator asks about each idea, one at a time, in the order presented, but no argument for or against any idea is allowed. It is possible at this time to expand or combine ideas, but no change can be made to any idea without the approval of the original presenter of the idea.
  • Voting to rank or prioritize the ideas as discussed in chapter 4 is optional, depending upon the purpose of the meeting. When voting is done, it is usually by secret ballot, although various voting procedures may be used depending in part on the number of ideas and the number of participants. It usually works best to employ a ratio of one vote for every three ideas presented. For example, if the facilitator lists twelve ideas, each participant is allowed to cast four votes.

Origins of This Technique

Nominal Group Technique was developed by A. L. Delbecq and A. H. Van de Ven and first described in “A Group Process Model for Problem Identification and Program Planning,” Journal of Applied Behavioral Science.

5.4 STARBURSTING

Starbursting is a form of brainstorming that focuses on generating questions rather than eliciting ideas or answers. It uses the six questions commonly asked by journalists: Who? What? When? Where? Why? and How?

When to Use It

Use Starbursting to help define your research project. After deciding on the idea, topic, or issue to be analyzed, brainstorm to identify the questions that need to be answered by the research. Asking the right questions is a common prerequisite to finding the right answer.

Origin of This Technique

Starbursting is one of many techniques developed to stimulate creativity.

5.5 CROSS-IMPACT MATRIX

Cross-Impact Matrix helps analysts deal with complex problems when “everything is related to everything else.” By using this technique, analysts and decision makers can systematically examine how each factor in a particular context influences all other factors to which it appears to be related.

When to Use It

The Cross-Impact Matrix is useful early in a project when a group is still in a learning mode trying to sort out a complex situation.

The Method

Assemble a group of analysts knowledgeable on various aspects of the subject. The group brainstorms a list of variables or events that would likely have some effect on the issue being studied. The project coordinator then creates a matrix and puts the list of variables or events down the left side of the matrix and the same variables or events across the top.

The matrix is then used to consider and record the judged relationship between each variable or event and every other variable or event.
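A minimal sketch of the bookkeeping involved follows. The variables and the influence ratings are invented placeholders; in practice the group fills in each cell during the discussion.

    # Minimal sketch: a Cross-Impact Matrix as a nested dictionary.
    variables = ["economy", "insurgency", "drug trade", "foreign aid"]

    # matrix[a][b] records the judged impact of variable a on variable b,
    # e.g. "+" (enhancing), "-" (inhibiting), "0" (unrelated). All ratings invented.
    matrix = {a: {b: "0" for b in variables if b != a} for a in variables}

    matrix["economy"]["insurgency"] = "-"      # a stronger economy weakens the insurgency
    matrix["drug trade"]["insurgency"] = "+"   # drug money funds the insurgency
    matrix["foreign aid"]["economy"] = "+"

    # Print the matrix as a talking paper for group discussion.
    header = "".join(f"{v:>14}" for v in variables)
    print(f"{'':>14}{header}")
    for a in variables:
        row = "".join(f"{matrix[a].get(b, ' '):>14}" for b in variables)
        print(f"{a:>14}{row}")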

5.6 MORPHOLOGICAL ANALYSIS

Morphological Analysis is a method for systematically structuring and examining all the possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. The basic idea is to identify a set of variables and then look at all the possible combinations of these variables.

For intelligence analysis, it helps prevent surprise by generating a large number of feasible outcomes for any complex situation. This exercise reduces the chance that events will play out in a way that the analyst has not previously imagined and considered.

When to Use It

Morphological Analysis is most useful for dealing with complex, nonquantifiable problems for which little information is available and the chances for surprise are great. It can be used, for example, to identify possible variations of a threat, possible ways a crisis might occur between two countries, possible ways a set of driving forces might interact, or the full range of potential outcomes in any ambiguous situation.

Although Morphological Analysis is typically used for looking ahead, it can also be used in an investigative context to identify the full set of possible explanations for some event.

Value Added

By generating a comprehensive list of possible outcomes, analysts are in a better position to identify and select those outcomes that seem most credible or that most deserve attention. This list helps analysts and decision makers focus on what actions need to be undertaken today to prepare for events that could occur in the future. They can then take the actions necessary to prevent or mitigate the effect of bad outcomes and help foster better outcomes. The technique can also sensitize analysts to low probability/high impact developments, or “nightmare scenarios,” which could have significant adverse implications for policy or the allocation of resources.

The product of Morphological Analysis is often a set of potential noteworthy scenarios, with indicators of each, plus the intelligence collection requirements for each scenario. Another benefit is that morphological analysis leaves a clear audit trail about how the judgments were reached.

The Method

Morphological analysis works through two common principles of creativity techniques: decomposition and forced association. Start by defining a set of key parameters or dimensions of the problem, and then break down each of those dimensions further into relevant forms or states or values that the dimension can assume —as in the example described later in this section. Two dimensions can be visualized as a matrix and three dimensions as a cube. In more complicated cases, multiple linked matrices or cubes may be needed to break the problem down into all its parts.

The principle of forced association then requires that every element be paired with and considered in connection with every other element in the morphological space. How that is done depends upon the complexity of the case. In a simple case, each combination may be viewed as a potential scenario or problem solution and examined from the point of view of its possibility, practicability, effectiveness, or other criteria. In complex cases, there may be thousands of possible combinations and computer assistance is required. With or without computer assistance, it is often possible to quickly eliminate about 90 percent of the combinations as not physically possible, impracticable, or undeserving of attention. This narrowing-down process allows the analyst to concentrate only on those combinations that are within the realm of the possible and most worthy of attention.
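The following minimal Python sketch illustrates the two principles: the problem is decomposed into dimensions and values (all invented here), every combination is generated by forced association, and a simple screening rule then narrows the set to combinations worth attention.

    from itertools import product

    # Minimal sketch of decomposition and forced association. The dimensions and
    # values describe a hypothetical attack-planning problem and are invented.
    dimensions = {
        "actor":  ["lone individual", "small cell", "large group"],
        "weapon": ["explosives", "firearms", "cyber"],
        "target": ["transport hub", "government building", "utility"],
        "timing": ["holiday", "ordinary weekday"],
    }

    combinations = list(product(*dimensions.values()))
    print(f"{len(combinations)} raw combinations")        # 3 * 3 * 3 * 2 = 54

    def plausible(combo):
        actor, weapon, target, timing = combo
        # Screening rule (illustrative only): treat a lone-individual cyber attack
        # as out of scope for this particular problem.
        return not (actor == "lone individual" and weapon == "cyber")

    shortlist = [c for c in combinations if plausible(c)]
    print(f"{len(shortlist)} combinations remain after screening")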

5.7 QUADRANT CRUNCHING

Quadrant Crunching helps analysts avoid surprise by examining multiple possible combinations of selected key variables. It also helps analysts to identify and systematically challenge assumptions, explore the implications of contrary assumptions, and discover “unknown unknowns.” By generating multiple possible outcomes for any situation, Quadrant Crunching reduces the chance that events could play out in a way that has not previously been at least imagined and considered. Training and practice are required before an analyst should use this technique, and an experienced facilitator is recommended.

The technique forces analysts to rethink an issue from many perspectives and systematically question assumptions that underlie their lead hypothesis. As a result, analysts can be more confident that they have considered a broad range of possible permutations for a particularly complex and ambiguous situation. In so doing, analysts are more likely to anticipate most of the ways a situation can develop (or terrorists might launch an attack) and to spot indicators that signal a specific scenario is starting to develop.

The Method

Quadrant Crunching is sometimes described as a Key Assumptions Check on steroids. It is most useful when there is a well-established lead hypothesis that can be articulated clearly.

Quadrant Crunching calls on the analyst to break down the lead hypothesis into its component parts, identifying the key assumptions that underlie the lead hypothesis, or dimensions that focus on Who, What, When, Where, Why, and How. Once the key dimensions of the lead hypothesis are articulated, the analyst generates at least two examples of contrary dimensions.
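A minimal sketch of this step follows. The lead hypothesis, its dimensions, and the contrary values are invented for illustration; in a real exercise the contrary dimensions would be drawn from a Key Assumptions Check.

    from itertools import product

    # Minimal sketch: flip the key assumptions behind a lead hypothesis into
    # contrary dimensions and enumerate the resulting alternatives.
    lead_hypothesis = {
        "how":  "contaminate the water supply at the treatment plant",
        "who":  "a single insider",
        "when": "during normal operations",
    }

    contraries = {
        "how":  ["attack distribution pipes", "disable the plant instead of contaminating it"],
        "who":  ["an outside assault team", "a cyber intruder"],
        "when": ["during a natural disaster", "during a maintenance shutdown"],
    }

    # Each dimension can take its lead value or one of its contrary values.
    options = {d: [lead_hypothesis[d]] + contraries[d] for d in lead_hypothesis}

    for combo in product(*options.values()):
        print(" / ".join(combo))     # each line is a candidate alternative to examine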

 

Relationship to Other Techniques

Quadrant Crunching is a specific application of a generic method called Morphological Analysis (described in this chapter). It draws on the results of the Key Assumptions Check and can contribute to Multiple Scenarios Generation. It can also be used to identify Indicators.

Origins of This Technique

The Quadrant Crunching technique was developed by Randy Pherson and Alan Schwartz to meet a specific analytic need. It was first published in Randy Pherson, Handbook of Analytic Tools and Techniques.

6.0 Scenarios and Indicators

In the complex, evolving, uncertain situations that intelligence analysts and decision makers must deal with, the future is not easily predictable. Some events are intrinsically of low predictability. The best the analyst can do is to identify the driving forces that may determine future outcomes and monitor those forces as they interact to become the future. Scenarios are a principal vehicle for doing this. Scenarios are plausible and provocative stories about how the future might unfold.

 

Scenarios Analysis provides a framework for considering multiple plausible futures. As Peter Schwartz, author of The Art of the Long View, has argued, “The future is plural.”1 Trying to divine or predict a single outcome often is a disservice to senior intelligence officials, decision makers, and other clients. Generating several scenarios (for example, those that are most likely, least likely, and most dangerous) helps focus attention on the key underlying forces and factors most likely to influence how a situation develops. Analysts can also use scenarios to examine assumptions and deliver useful warning messages when high impact/low probability scenarios are included in the exercise.

 

Identification and monitoring of indicators or signposts can provide early warning of the direction in which the future is heading, but these early signs are not obvious. The human mind tends to see what it expects to see and to overlook the unexpected. These indicators take on meaning only in the context of a specific scenario with which they have been identified. The prior identification of a scenario and associated indicators can create an awareness that prepares the mind to recognize early signs of significant change.

 

Change sometimes happens so gradually that analysts don’t notice it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they typically are slow to change their minds in response to new evidence. By going on the record in advance to specify what actions or events would be significant and might change their minds, analysts can avert this type of rationalization.

 

Another benefit of scenarios is that they provide an efficient mechanism for communicating complex ideas. A scenario is a set of complex ideas that can be described with a short label.

 

Overview of Techniques

 

 

Indicators are a classic technique used to seek early warning of some undesirable event. Indicators are often paired with scenarios to identify which of several possible scenarios is developing. They are also used to measure change toward an undesirable condition, such as political instability, or toward a desirable condition, such as economic reform. Use indicators whenever you need to track a specific situation to monitor, detect, or evaluate change over time.

 

Indicators Validator is a new tool that is useful for assessing the diagnostic power of an indicator. An indicator is most diagnostic when it clearly points to the likelihood of only one scenario or hypothesis and suggests that the others are unlikely. Too frequently indicators are of limited value, because they may be consistent with several different outcomes or hypotheses.

 

6.1 SCENARIOS ANALYSIS

 

Identification and analysis of scenarios helps to reduce uncertainties and manage risk. By postulating different scenarios analysts can identify the multiple ways in which a situation might evolve. This process can help decision makers develop plans to exploit whatever opportunities the future may hold or, conversely, to avoid risks. Monitoring of indicators keyed to various scenarios can provide early warnings of the direction in which the future may be heading.

 

When to Use It

Scenarios Analysis is most useful when a situation is complex or when the outcomes are too uncertain to trust a single prediction. When decision makers and analysts first come to grips with a new situation or challenge, there usually is a degree of uncertainty about how events will unfold.

 

Value Added

When analysts are thinking about scenarios, they are rehearsing the future so that decision makers can be prepared for whatever direction that future takes. Instead of trying to estimate the most likely outcome (and being wrong more often than not), scenarios provide a framework for considering multiple plausible futures.

 

Analysts have learned, from past experience, that involving decision makers in a scenarios exercise is an effective way to communicate the results of this technique and to sensitize them to important uncertainties. Most participants find the process of developing scenarios as useful as any written report or formal briefing. Those involved in the process often benefit in several ways. Analysis of scenarios can:

 

  • Suggest indicators to monitor for signs that a particular future is becoming more or less likely.
  • Help analysts and decision makers anticipate what would otherwise be surprising developments by forcing them to challenge assumptions and consider plausible “wild card” scenarios or discontinuous events.
  • Produce an analytic framework for calculating the costs, risks, and opportunities represented by different outcomes.
  • Provide a means of weighing multiple unknown or unknowable factors and presenting a set of plausible outcomes.
  • Bound a problem by identifying plausible combinations of uncertain factors.

 

When decision makers or analysts from different intelligence disciplines or organizational cultures are included on the team, new insights invariably emerge as new information and perspectives are introduced.

 

6.1.1 The Method: Simple Scenarios

Of the three scenario techniques described here, Simple Scenarios is the easiest one to use. It is the only one of the three that can be implemented by an analyst working alone rather than in a group or a team, and it is the only one for which a coach or a facilitator is not needed.

Here are the steps for using this technique:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Make a list of forces, factors, and events that are likely to influence the future.
  • Organize the forces, factors, and events that are related to each other into five to ten affinity groups that are expected to be the driving forces in how the focal issue will evolve.
  • Label each of these drivers and write a brief description of each. For example, one training exercise for this technique is to forecast the future of the fictional country of Caldonia by identifying and describing six drivers, listed below. Generate a matrix, as shown in Figure 6.1.1, with the list of drivers down the left side. The columns of the matrix are used to describe scenarios. Each scenario is assigned a value for each driver: strong or positive (+), weak or negative (–), or blank if neutral or no change. (A minimal sketch of such a matrix follows this list.)

 

  • Government effectiveness: To what extent does the government exert control over all populated regions of the country and effectively deliver services?
  • Economy: Does the economy sustain a positive growth rate?
  • Civil society: Can nongovernmental and local institutions provide appropriate services and security to the population?
  • Insurgency: Does the insurgency pose a viable threat to the government? Is it able to extend its dominion over greater portions of the country?
  • Drug trade: Is there a robust drug-trafficking economy?
  • Foreign influence: Do foreign governments, international financial organizations, or nongovernmental organizations provide military or economic assistance to the government?
  • Generate at least four different scenarios (a best case, a worst case, a mainline case, and at least one other) by assigning different values (+, 0, –) to each driver.
  • This is a good time to reconsider both drivers and scenarios. Is there a better way to conceptualize and describe the drivers? Are there important forces that have not been included? Look across the matrix to see the extent to which each driver discriminates among the scenarios. If a driver has the same value across all scenarios, it is not discriminating and should be deleted. To stimulate thinking about other possible scenarios, consider the key assumptions that were made in deciding on the most likely scenario. What if some of these assumptions turn out to be invalid? If they are invalid, how might that affect the outcome, and are such outcomes included within the available set of scenarios?
  • For each scenario, write a one-page story to describe what that future looks like and/or how it might come about. The story should illustrate the interplay of the drivers.
  • For each scenario, describe the implications for the decision maker.
  • Generate a list of indicators, or “observables,” for each scenario that would help you discover that events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.
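Below is a minimal sketch of the drivers-by-scenarios matrix referenced in the steps above, using the Caldonia drivers; the scenario names and the assigned values are invented for illustration.

    # Minimal sketch of a Simple Scenarios matrix: drivers down the side,
    # scenarios across the top, values assigned per driver.
    drivers = ["government effectiveness", "economy", "civil society",
               "insurgency", "drug trade", "foreign influence"]

    scenarios = {                   # "+" strong/positive, "-" weak/negative, " " neutral
        "best case":  ["+", "+", "+", "-", "-", "+"],
        "mainline":   [" ", "+", " ", " ", "+", " "],
        "worst case": ["-", "-", "-", "+", "+", "-"],
        "wild card":  ["-", " ", "+", "+", "-", " "],
    }

    header = "".join(f"{name:>12}" for name in scenarios)
    print(f"{'driver':>26}{header}")
    for i, driver in enumerate(drivers):
        row = "".join(f"{scenarios[s][i]:>12}" for s in scenarios)
        print(f"{driver:>26}{row}")

    # A driver whose value is identical across all scenarios does not discriminate
    # among them and should be reconsidered or dropped.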

6.1.2 The Method: Alternative Futures Analysis

Alternative Futures Analysis and Multiple Scenarios Generation differ from Simple Scenarios in that they are usually larger projects that rely on a group of experts, often including academics and decision makers. They use a more systematic process, and the assistance of a knowledgeable facilitator is very helpful.

The steps in the Alternative Futures Analysis process are:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • If possible, group these various forces, factors, or events to form two critical drivers that are expected to determine the future outcome. In the example on the future of Cuba (Figure 6.1.2), the two key drivers are Effectiveness of Government and Strength of Civil Society. If there are more than two critical drivers, do not use this technique. Use the Multiple Scenarios Generation technique, which can handle a larger number of scenarios.
  • As in the Cuba example, define the two ends of the spectrum for each driver.
  • Draw a 2 × 2 matrix. Label the two ends of the spectrum for each driver.
  • Note that the square is now divided into four quadrants. Each quadrant represents a scenario generated by a combination of the two drivers. Now give a name to each scenario, and write it in the relevant quadrant.
  • Generate a narrative story of how each hypothetical scenario might come into existence. Include a hypothetical chronology of key dates and events for each of the scenarios.
  • Describe the implications of each scenario should it be what actually develops.
  • Generate a list of indicators, or “observables,” for each scenario that would help determine whether events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.

Figure 6.1.2 Alternative Futures Analysis: Cuba

6.1.3 The Method: Multiple Scenarios Generation

Multiple Scenarios Generation is similar to Alternative Futures Analysis except that with this technique, you are not limited to two critical drivers generating four scenarios. By using multiple 2 × 2 matrices pairing every possible combination of multiple driving forces, you can create a very large number of possible scenarios. This is sometimes desirable to make sure nothing has been overlooked. Once generated, the scenarios can be screened quickly without detailed analysis of each one.
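A minimal sketch of the pairing step follows. The drivers and their endpoint labels are invented (echoing the Cuba example); each generated quadrant is a candidate scenario to be screened.

    from itertools import combinations, product

    # Minimal sketch: pair every combination of driving forces into 2 x 2 matrices.
    drivers = {
        "government effectiveness": ("effective", "ineffective"),
        "strength of civil society": ("strong", "weak"),
        "economy":                   ("growing", "contracting"),
    }

    for (name_a, ends_a), (name_b, ends_b) in combinations(drivers.items(), 2):
        print(f"\nMatrix: {name_a} x {name_b}")
        for end_a, end_b in product(ends_a, ends_b):
            # Each quadrant of each matrix is a candidate scenario.
            print(f"  scenario: {name_a}={end_a}, {name_b}={end_b}")

With three drivers this produces three 2 x 2 matrices and twelve candidate scenarios; the number grows quickly as drivers are added, which is why the screening step matters.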

Once sensitized to these different scenarios, analysts are more likely to pay attention to outlying data that would suggest that events are playing out in a way not previously imagined.

Training and an experienced facilitator are needed to use this technique. Here are the basic steps:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • Define the two ends of the spectrum for each driver.
  • Pair the drivers in a series of 2 × 2 matrices.
  • Develop a story or two for each quadrant of each 2 × 2 matrix.
  • From all the scenarios generated, select those most deserving of attention because they illustrate compelling and challenging futures not yet being considered.
  • Develop indicators for each scenario that could be tracked to determine whether or not the scenario is developing.

 

6.2 INDICATORS

Indicators are observable phenomena that can be periodically reviewed to help track events, spot emerging trends, and warn of unanticipated changes. An indicators list is a pre-established set of observable or potentially observable actions, conditions, facts, or events whose simultaneous occurrence would argue strongly that a phenomenon is present or is very likely to occur. Indicators can be monitored to obtain tactical, operational, or strategic warnings of some future development that, if it were to occur, would have a major impact.

The identification and monitoring of indicators are fundamental tasks of intelligence analysis, as they are the principal means of avoiding surprise. They are often described as forward-looking or predictive indicators. In the law enforcement community indicators are also used to assess whether a target’s activities or behavior is consistent with an established pattern. These are often described as backward-looking or descriptive indicators.

When to Use It

Indicators provide an objective baseline for tracking events, instilling rigor into the analytic process, and enhancing the credibility of the final product. Descriptive indicators are best used to help the analyst assess whether there are sufficient grounds to believe that a specific action is taking place. They provide a systematic way to validate a hypothesis or help substantiate an emerging viewpoint.

In the private sector, indicators are used to track whether a new business strategy is working or whether a low-probability scenario is developing that offers new commercial opportunities.

Value Added

The human mind sometimes sees what it expects to see and can overlook the unexpected. Identification of indicators creates an awareness that prepares the mind to recognize early signs of significant change. Change often happens so gradually that analysts don’t see it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they can be reluctant to change their minds in response to new evidence. By specifying in advance the threshold for what actions or events would be significant and might cause them to change their minds, analysts can seek to avoid this type of rationalization.

Defining explicit criteria for tracking and judging the course of events makes the analytic process more visible and available for scrutiny by others, thus enhancing the credibility of analytic judgments. Including an indicators list in the finished product helps decision makers track future developments and builds a more concrete case for the analytic conclusions.

Preparation of a detailed indicator list by a group of knowledgeable analysts is usually a good learning experience for all participants. It can be a useful medium for an exchange of knowledge between analysts from different organizations or those with different types of expertise—for example, analysts who specialize in a particular country and those who are knowledgeable about a particular field, such as military mobilization, political instability, or economic development.

The indicator list becomes the basis for directing collection efforts and for routing relevant information to all interested parties. It can also serve as the basis for the analyst’s filing system to keep track of these indicators.

When analysts or decision makers are sharply divided over the interpretation of events (for example, how the war in Iraq or Afghanistan is progressing), over the guilt or innocence of a “person of interest,” or over the culpability of a counterintelligence suspect, indicators can help depersonalize the debate by shifting attention away from personal viewpoints to more objective criteria. Emotions often can be defused and substantive disagreements clarified if all parties agree in advance on a set of criteria that would demonstrate that developments are—or are not—moving in a particular direction or that a person’s behavior suggests that he or she is guilty as suspected or is indeed a spy.

Potential Pitfalls

The quality of indicators is critical, as poor indicators lead to analytic failure. For these reasons, analysts must periodically review the validity and relevance of an indicators list.

The Method

The first step in using this technique is to create a list of indicators. (See Figure 6.2b for a sample indicators list.) The second step is to monitor these indicators regularly to detect signs of change. Developing the indicator list can range from a simple process to a sophisticated team effort.

For example, with minimum effort you could jot down a list of things you would expect to see if a particular situation were to develop as feared or foreseen. Or you could join with others to define multiple variables that would influence a situation and then rank the value of each variable based on incoming information about relevant events, activities, or official statements. In both cases, some form of brainstorming, hypothesis generation, or scenario development is often used to identify the indicators.

A good indicator must meet several criteria, including the following:

Observable and collectible. There must be some reasonable expectation that, if present, the indicator will be observed and reported by a reliable source. If an indicator is to monitor change over time, it must be collectable over time.
Valid. An indicator must be clearly relevant to the end state the analyst is trying to predict or assess, and it must be inconsistent with all or at least some of the alternative explanations or outcomes. It must accurately measure the concept or phenomenon at issue.
Reliable. Data collection must be consistent when comparable methods are used. Those observing and collecting data must observe the same things. Reliability requires precise definition of the indicators.
Stable. An indicator must be useful over time to allow comparisons and to track events. Ideally, the indicator should be observable early in the evolution of a development so that analysts and decision makers have time to react accordingly.
Unique. An indicator should measure only one thing and, in combination with other indicators, should point only to the phenomenon being studied. Valuable indicators are those that are not only consistent with a specified scenario or hypothesis but are also inconsistent with alternative scenarios or hypotheses. The Indicators Validator tool, described later in this chapter, can be used to check the diagnosticity of indicators.

Maintaining separate indicator lists for alternative scenarios or hypotheses is particularly useful when making a case that a certain event is unlikely to happen, as in What If? Analysis or High Impact/Low Probability Analysis.

After creating the indicator list or lists, you or the analytic team should regularly review incoming reporting and note any changes in the indicators. To the extent possible, you or the team should decide well in advance which critical indicators, if observed, will serve as early-warning decision points. In other words, if a certain indicator or set of indicators is observed, it will trigger a report advising of some modification in the intelligence appraisal of the situation.

Techniques for increasing the sophistication and credibility of an indicator list include the following:

Establishing a scale for rating each indicator
Providing specific definitions of each indicator
Rating the indicators on a scheduled basis (e.g., monthly, quarterly, or annually)
Assigning a level of confidence to each rating
Providing a narrative description for each point on the rating scale, describing what one would expect to observe at that level
Listing the sources of information used in generating the rating
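One way to keep this bookkeeping consistent is sketched below; the record structure, field names, and sample values are invented for illustration rather than a prescribed format.

    from dataclasses import dataclass, field
    from datetime import date

    # Minimal sketch of the bookkeeping suggested above: each indicator carries a
    # defined rating scale, scheduled ratings with confidence levels, and the
    # sources behind each rating. All field values are invented.
    @dataclass
    class IndicatorRating:
        rated_on: date
        level: int                     # position on the agreed rating scale
        confidence: str                # e.g. "low", "moderate", "high"
        sources: list = field(default_factory=list)

    @dataclass
    class Indicator:
        name: str
        definition: str
        scale: dict                    # narrative description of each scale point
        history: list = field(default_factory=list)

    mobilization = Indicator(
        name="reserve call-ups",
        definition="Publicly observable orders recalling reserve personnel.",
        scale={1: "no call-ups", 3: "selective call-ups", 5: "general mobilization"},
    )
    mobilization.history.append(
        IndicatorRating(date(2010, 6, 1), level=3, confidence="moderate",
                        sources=["press reporting", "attache observation"])
    )
    print(mobilization.name, "->", mobilization.history[-1].level)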

6.3 INDICATORS VALIDATOR

The Indicators Validator is a simple tool for assessing the diagnostic power of indicators.

When to Use It

The Indicators Validator is an essential tool to use when developing indicators for competing hypotheses or alternative scenarios. Once an analyst has developed a set of alternative scenarios or future worlds, the next step is to generate indicators for each scenario (or world) that would appear if that particular world were beginning to emerge. A critical question that is not often asked is whether a given indicator would appear only in the scenario to which it is assigned or also in one or more alternative scenarios. Indicators that could appear in several scenarios are not considered diagnostic, suggesting that they are not particularly useful in determining whether a specific scenario is emerging. The ideal indicator is highly consistent for the world to which it is assigned and highly inconsistent for all other worlds.

Value Added

Employing the Indicators Validator to identify and dismiss nondiagnostic indicators can significantly increase the credibility of an analysis. By applying the tool, analysts can rank order their indicators from most to least diagnostic and decide how far up the list they want to draw the line in selecting the indicators that will be used in the analysis. In some circumstances, analysts might discover that most or all the indicators for a given scenario have been eliminated because they are also consistent with other scenarios, forcing them to brainstorm a new and better set of indicators. If analysts find it difficult to generate independent lists of diagnostic indicators for two scenarios, it may be that the scenarios are not sufficiently dissimilar, suggesting that they should be combined.

The Method

The first step is to populate a matrix similar to that used for Analysis of Competing Hypotheses. This can be done manually or by using the Indicators Validator software. The matrix should list:

Alternative scenarios or worlds (or competing hypotheses) along the top of the matrix (as is done for hypotheses in Analysis of Competing Hypotheses)
Indicators that have already been generated for all the scenarios down the left side of the matrix (as is done with evidence in Analysis of Competing Hypotheses)

In each cell of the matrix, assess whether the indicator for that particular scenario is

 

Highly likely to appear
Likely to appear
Could appear
Unlikely to appear
Highly unlikely to appear

Once this process is complete, re-sort the indicators so that the most discriminating indicators are displayed at the top of the matrix and the least discriminating indicators at the bottom.

The most discriminating indicator is “Highly Likely” to emerge in one scenario and “Highly Unlikely” to emerge in all other scenarios.
The least discriminating indicator is “Highly Likely” to appear in all scenarios.
Most indicators will fall somewhere in between.

The indicators with the most “Highly Unlikely” and “Unlikely” ratings are the most discriminating and should be retained.
Indicators with few or no “Highly Unlikely” or “Unlikely” ratings should be eliminated.
Once nondiscriminating indicators have been eliminated, regroup the indicators under their assigned scenario. If most indicators for a particular scenario have been eliminated, develop new—and more diagnostic—indicators for that scenario.

Recheck the diagnostic value of any new indicators by applying the Indicators Validator to them as well.
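A minimal sketch of this matrix and re-sorting follows. The scenarios, indicators, and ratings are invented, and the numeric diagnosticity score is an illustrative assumption; the technique itself is described only in qualitative terms above.

    # Minimal sketch of an Indicators Validator matrix.
    SCALE = {"highly likely": 2, "likely": 1, "could": 0,
             "unlikely": -1, "highly unlikely": -2}

    scenarios = ["status quo", "gradual reform", "collapse"]

    # indicator -> (home scenario, rating in each scenario, in the order above)
    matrix = {
        "mass protests":       ("collapse",       ["unlikely", "could", "highly likely"]),
        "new economic zones":  ("gradual reform", ["could", "highly likely", "unlikely"]),
        "leadership speeches": ("status quo",     ["highly likely", "highly likely", "likely"]),
    }

    def diagnosticity(home, ratings):
        scores = [SCALE[r] for r in ratings]
        home_score = scores[scenarios.index(home)]
        others = [s for i, s in enumerate(scores) if i != scenarios.index(home)]
        # High when the indicator is likely in its home scenario and unlikely elsewhere.
        return home_score - sum(others) / len(others)

    ranked = sorted(matrix.items(), key=lambda kv: diagnosticity(*kv[1]), reverse=True)
    for name, (home, ratings) in ranked:
        print(f"{name:>22}  home={home:<15} diagnosticity={diagnosticity(home, ratings):+.1f}")

In this invented example, “leadership speeches” scores poorly because it is likely under every scenario; it is exactly the kind of nondiagnostic indicator the technique is designed to flag.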

 

7.0 Hypothesis Generation and Testing

Intelligence analysis will never achieve the accuracy and predictability of a true science, because the information with which analysts must work is typically incomplete, ambiguous, and potentially deceptive. Intelligence analysis can, however, benefit from some of the lessons of science and adapt some of the elements of scientific reasoning.

The scientific process involves observing, categorizing, formulating hypotheses, and then testing those hypotheses. Generating and testing hypotheses is a core function of intelligence analysis. A possible explanation of the past or a judgment about the future is a hypothesis that needs to be tested by collecting and presenting evidence.

The generation and testing of hypotheses is a skill, and its subtleties do not come naturally. It is a form of reasoning that people can learn to use for dealing with high-stakes situations. What does come naturally is drawing on our existing body of knowledge and experience (mental model) to make an intuitive judgment. In most circumstances in our daily lives, this is an efficient approach that works most of the time.

When one is facing a complex choice of options, the reliance on intuitive judgment risks following a practice called “satisficing,” a term coined by Nobel Prize winner Herbert Simon by combining the words satisfy and suffice.1 It means being satisfied with the first answer that seems adequate, as distinct from assessing multiple options to find the optimal or best answer. The “satisficer” who does seek out additional information may look only for information that supports this initial answer rather than looking more broadly at all the possibilities.

 

The truth of a hypothesis can never be proven beyond doubt by citing only evidence that is consistent with the hypothesis, because the same evidence may be and often is consistent with one or more other hypotheses. Science often proceeds by refuting or disconfirming hypotheses. A hypothesis that cannot be refuted should be taken just as seriously as a hypothesis that seems to have a lot of evidence in favor of it. A single item of evidence that is shown to be inconsistent with a hypothesis can be sufficient grounds for rejecting that hypothesis. The most tenable hypothesis is often the one with the least evidence against it.

Analysts often test hypotheses by using a form of reasoning known as abduction, which differs from the two better known forms of reasoning, deduction and induction. Abductive reasoning starts with a set of facts. One then develops hypotheses that, if true, would provide the best explanation for these facts. The most tenable hypothesis is the one that best explains the facts. Because of the uncertainties inherent to intelligence analysis, conclusive proof or refutation of hypotheses is the exception rather than the rule.

The Analysis of Competing Hypotheses (ACH) technique was developed by Richards Heuer specifically for use in intelligence analysis. It is the application to intelligence analysis of Karl Popper’s theory of science.2 Popper was one of the most influential philosophers of science of the twentieth century. He is known for, among other things, his position that scientific reasoning should start with multiple hypotheses and proceed by rejecting or eliminating hypotheses, while tentatively accepting only those hypotheses that cannot be refuted.

This chapter describes techniques that are intended to be used specifically for hypothesis generation.

 

Overview of Techniques

Hypothesis Generation is a category that includes three specific techniques—Simple Hypotheses, Multiple Hypotheses Generator, and Quadrant Hypothesis Generation. Simple Hypotheses is the easiest of the three, but it is not always the best selection. Use the Multiple Hypotheses Generator to identify a comprehensive set of possible hypotheses. Quadrant Hypothesis Generation is used to identify a set of hypotheses when there are just two driving forces that are expected to determine the outcome.

Diagnostic Reasoning applies hypothesis testing to the evaluation of significant new information. Such information is evaluated in the context of all plausible explanations of that information, not just in the context of the analyst’s well-established mental model. The use of Diagnostic Reasoning reduces the risk of surprise, as it ensures that an analyst will have given at least some consideration to alternative conclusions. Diagnostic Reasoning differs from the Analysis of Competing Hypotheses (ACH) technique in that it is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

Analysis of Competing Hypotheses

The requirement to identify and then refute all reasonably possible hypotheses forces an analyst to recognize the full uncertainty inherent in most analytic situations. At the same time, the ACH software helps the analyst sort and manage evidence to identify paths for reducing that uncertainty.

Argument Mapping is a method that can be used to put a single hypothesis to a rigorous logical test. The structured visual representation of the arguments and evidence makes it easier to evaluate any analytic judgment. Argument Mapping is a logical follow-on to an ACH analysis. It is a detailed presentation of the arguments for and against a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The successful application of Argument Mapping to the hypothesis favored by the ACH analysis would increase confidence in the results of both analyses.

Deception Detection is discussed in this chapter because the possibility of deception by a foreign intelligence service or other adversary organization is a distinctive type of hypothesis that analysts must frequently consider. The possibility of deception can be included as a hypothesis in any ACH analysis. Information identified through the Deception Detection technique can then be entered as evidence in the ACH matrix.

7.1 HYPOTHESIS GENERATION

In broad terms, a hypothesis is a potential explanation or conclusion that is to be tested by collecting and presenting evidence. It is a declarative statement that has not been established as true—an “educated guess” based on observation that needs to be supported or refuted by more observation or through experimentation.

A good hypothesis:

Is written as a definite statement, not as a question.
Is based on observations and knowledge.
Is testable and falsifiable.
Predicts the anticipated results clearly.
Contains a dependent and an independent variable. The dependent variable is the phenomenon being explained. The independent variable does the explaining.

When to Use It

Analysts should use some structured procedure to develop multiple hypotheses at the start of a project when:

The importance of the subject matter is such as to require systematic analysis of all alternatives.
Many variables are involved in the analysis.
There is uncertainty about the outcome.
Analysts or decision makers hold competing views.

Value Added

Generating multiple hypotheses at the start of a project can help analysts avoid common analytic pitfalls such as these:

Coming to premature closure.
Being overly influenced by first impressions.
Selecting the first answer that appears “good enough.”
Focusing on a narrow range of alternatives representing marginal, not radical, change.
Opting for what elicits the most agreement or is desired by the boss.
Selecting a hypothesis only because it avoids a previous error or replicates a past success.

7.1.1 The Method: Simple Hypotheses

To use the Simple Hypotheses method, define the problem and determine how the hypotheses are expected to be used at the beginning of the project.

Gather together a diverse group to review the available evidence and explanations for the issue, activity, or behavior that you want to evaluate. In forming this diverse group, consider that you will need different types of expertise for different aspects of the problem, cultural expertise about the geographic area involved, different perspectives from various stakeholders, and different styles of thinking (left brain/right brain, male/female). Then:

Ask each member of the group to write down on a 3 × 5 card up to three alternative explanations or hypotheses. Prompt creative thinking by using the following:

Situational logic: Take into account all the known facts and an understanding of the underlying forces at work at that particular time and place.
Historical analogies: Consider examples of the same type of phenomenon.
Theory: Consider theories based on many examples of how a particular type of situation generally plays out.

Collect the cards and display the results on a whiteboard. Consolidate the list to avoid any duplication.
Employ additional group and individual brainstorming techniques to identify key forces and factors.
Aggregate the hypotheses into affinity groups and label each group.
Use problem restatement and consideration of the opposite to develop new ideas.

Update the list of alternative hypotheses. If the hypotheses will be used in ACH, strive to keep them mutually exclusive—that is, if one hypothesis is true all others must be false.
Have the group clarify each hypothesis by asking the journalist’s classic list of questions: Who, What, When, Where, Why, and How?

Select the most promising hypotheses for further exploration.

7.1.2 The Method: Multiple Hypotheses Generator

The Multiple Hypotheses Generator provides a structured mechanism for generating a wide array of hypotheses. Analysts often can brainstorm a useful set of hypotheses without such a tool, but the Hypotheses Generator may give greater confidence than other techniques that a critical alternative or an outlier has not been overlooked. To use this method:

Define the issue, activity, or behavior that is subject to examination. Do so by using the journalist’s classic list of Who, What, When, Where, Why, and How for explaining this issue, activity, or behavior.
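
Purely as an illustration of how a generator of this kind could be mechanized, the sketch below assumes that plausible answers to each journalist's question are listed and then combined into candidate hypotheses for the group to prune. The issue, the questions used, and the answers are all invented, and this enumeration step is an assumption rather than a description of the book's procedure.

```python
from itertools import product

# Hypothetical candidate answers to the journalist's questions for an invented
# issue; in practice these would come from the analytic group's brainstorming.
components = {
    "Who":  ["a state intelligence service", "a criminal network"],
    "What": ["stole the design documents", "planted altered documents"],
    "Why":  ["for commercial advantage", "to mislead investigators"],
}

# Enumerate every combination of answers as a candidate hypothesis; the group
# then discards combinations that make no sense and keeps the rest for testing.
for combo in product(*components.values()):
    print(" / ".join(f"{question}: {answer}"
                     for question, answer in zip(components, combo)))
```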

7.1.3 The Method: Quadrant Hypothesis Generation

Use the quadrant technique to identify a basic set of hypotheses when there are two easily identified key driving forces that will determine the outcome of an issue. The technique identifies four potential scenarios that represent the extreme conditions for each of the two major drivers. It spans the logical possibilities inherent in the relationship and interaction of the two driving forces, thereby generating options that analysts otherwise may overlook.

These are the steps for Quadrant Hypothesis Generation:

Identify the two main drivers by using techniques such as Structured Brainstorming or by surveying subject matter experts. A discussion to identify the two main drivers can be a useful exercise in itself.
Construct a 2 × 2 matrix using the two drivers.
Think of each driver as a continuum from one extreme to the other. Write the extremes of each of the drivers at the end of the vertical and horizontal axes.

Fill in each quadrant with the details of what the end state would be as shaped by the two drivers.
Develop signposts that show whether events are moving toward one of the hypotheses.
Use the signposts or indicators of change to develop intelligence collection strategies to determine the direction in which events are moving.
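
Once the two drivers and their extremes are named, the four quadrant hypotheses follow mechanically. A minimal sketch, with invented driver names and extremes:

```python
from itertools import product

# Two hypothetical drivers, each treated as a continuum between two extremes.
drivers = {
    "economic conditions": ("deteriorating", "improving"),
    "regime cohesion": ("fragmenting", "consolidating"),
}

# Each pairing of extremes defines one quadrant, i.e., one candidate hypothesis
# to be fleshed out with an end-state description and signposts.
(x_name, x_extremes), (y_name, y_extremes) = drivers.items()
for x, y in product(x_extremes, y_extremes):
    print(f"Quadrant hypothesis: {x_name} {x} AND {y_name} {y}")
```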

7.2 DIAGNOSTIC REASONING

Diagnostic Reasoning applies hypothesis testing to the evaluation of a new development, the assessment of a new item of intelligence, or the reliability of a source. It is different from the Analysis of Competing Hypotheses (ACH) technique in that Diagnostic Reasoning is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

When to Use It

Analysts should use Diagnostic Reasoning instead of making a snap intuitive judgment when assessing the meaning of a new development in their area of interest, or the significance or reliability of a new intelligence report. The use of this technique is especially important when the analyst’s intuitive interpretation of a new piece of evidence is that the new information confirms what the analyst was already thinking.

Value Added

Diagnostic Reasoning helps balance people’s natural tendency to interpret new information as consistent with their existing understanding of what is happening—that is, the analyst’s mental model. It is a common experience to discover that much of the evidence supporting what one believes is the most likely conclusion is really of limited value in confirming one’s existing view, because that same evidence is also consistent with alternative conclusions. One needs to evaluate new information in the context of all possible explanations of that information, not just in the context of a well-established mental model. The use of Diagnostic Reasoning reduces the element of surprise by ensuring that at least some consideration has been given to alternative conclusions.

The Method

Diagnostic Reasoning is a process by which you try to refute alternative judgments rather than confirm what you already believe to be true. Here are the steps to follow:

* When you receive a potentially significant item of information, make a mental note of what it seems to mean (i.e., an explanation of why something happened or what it portends for the future). Make a quick intuitive judgment based on your current mental model.

* Brainstorm, either alone or in a small group, the alternative judgments that another analyst with a different perspective might reasonably deem to have a chance of being accurate. Make a list of these alternatives.

* For each alternative, ask the following question: If this alternative were true or accurate, how likely is it that I would see this new information?

* Make a tentative judgment based on consideration of these alternatives. If the new information is equally likely under each of the alternatives, the information has no diagnostic value and can be ignored. If the information is clearly inconsistent with one or more alternatives, those alternatives might be ruled out. Following this mode of thinking for each of the alternatives, decide which alternatives need further attention and which can be dropped from consideration (a small sketch of this filtering logic follows these steps).

* Proceed further by seeking evidence to refute the remaining alternatives rather than confirm them.
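
The filtering step referenced above can be expressed in a few lines. In this sketch the alternatives and the rough likelihood judgments are invented; the point is only that information equally likely under every alternative has no diagnostic value, while information very unlikely under an alternative argues for ruling that alternative out.

```python
# Rough, subjective judgments of how likely it is that the new item of
# information would be seen if each alternative were true (values invented).
likelihood_given_alternative = {
    "routine military exercise": 0.6,
    "preparation for imminent attack": 0.6,
    "show of force for a domestic audience": 0.6,
}

if len(set(likelihood_given_alternative.values())) == 1:
    print("Equally likely under every alternative: no diagnostic value.")
else:
    for alternative, p in likelihood_given_alternative.items():
        # Very unlikely under an alternative -> grounds to consider ruling it out.
        status = "candidate to rule out" if p < 0.2 else "needs further attention"
        print(f"{alternative}: {status}")
```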

7.3 ANALYSIS OF COMPETING HYPOTHESES

Analysis of Competing Hypotheses (ACH) is a technique that assists analysts in making judgments on issues that require careful weighing of alternative explanations or estimates. ACH involves identifying a set of mutually exclusive alternative explanations or outcomes (presented as hypotheses), assessing the consistency or inconsistency of each item of evidence with each hypothesis, and selecting the hypothesis that best fits the evidence. The idea behind this technique is to refute rather than to confirm each of the hypotheses. The most likely hypothesis is the one with the least evidence against it, not the one with the most evidence for it.

When to Use It

ACH is appropriate for almost any analysis where there are alternative explanations for what has happened, is happening, or is likely to happen. Use it when the judgment or decision is so important that you cannot afford to be wrong. Use it when your gut feelings are not good enough, and when you need a systematic approach to prevent being surprised by an unforeseen outcome. Use it on controversial issues when it is desirable to identify precise areas of disagreement and to leave an audit trail to show what evidence was considered and how different analysts arrived at their judgments.

ACH also is particularly helpful when an analyst must deal with the potential for denial and deception, as it was initially developed for that purpose.

Value Added

There are a number of different ways by which ACH helps analysts produce a better analytic product. These include the following:

* It prompts analysts to start by developing a full set of alternative hypotheses. This process reduces the risk of what is called “satisficing”—going with the first answer that comes to mind that seems to meet the need. It ensures that all reasonable alternatives are considered before the analyst gets locked into a preferred conclusion.

* It requires analysts to try to refute hypotheses rather than support a single hypothesis. The technique helps analysts overcome the tendency to search for or interpret new information in a way that confirms their preconceptions and avoids information and interpretations that contradict prior beliefs. A word of caution, however. ACH works this way only when the analyst approaches an issue with a relatively open mind. An analyst who is already committed to a belief in what the right answer is will often find a way to interpret the evidence as consistent with that belief. In other words, as an antidote to confirmation bias, ACH is similar to a flu shot. Taking the flu shot will usually keep you from getting the flu, but it won’t make you well if you already have the flu.

* It helps analysts to manage and sort evidence in analytically useful ways. It helps maintain a record of relevant evidence and tracks how that evidence relates to each hypothesis. It also enables analysts to sort data by type, date, and diagnosticity of the evidence.

* It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached than would otherwise be possible.

* It provides a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.

* It leaves a clear audit trail as to how the analysis was done.

As a tool for interoffice or interagency collaboration, ACH ensures that all analysts are working from the same database of evidence, arguments, and assumptions and ensures that each member of the team has had an opportunity to express his or her view on how that information relates to the likelihood of each hypothesis. Users of ACH report that:

* The technique helps them gain a better understanding of the differences of opinion with other analysts or between analytic offices.

* Review of the ACH matrix provides a systematic basis for identification and discussion of differences between two or more analysts.

* Reference to the matrix helps depersonalize the argumentation when there are differences of opinion.

The Method

Simultaneous evaluation of multiple, competing hypotheses is difficult to do without some type of analytic aid. To retain three or five or seven hypotheses in working memory and note how each item of information fits into each hypothesis is beyond the capabilities of most people. It takes far greater mental agility than the common practice of seeking evidence to support a single hypothesis that is already believed to be the most likely answer. ACH can be accomplished, however, with the help of the following eight-step process:

* First, identify the hypotheses to be considered. Hypotheses should be mutually exclusive; that is, if one hypothesis is true, all others must be false. The list of hypotheses should include all reasonable possibilities. Include a deception hypothesis if that is appropriate. For each hypothesis, develop a brief scenario or “story” that explains how it might be true.

* Make a list of significant “evidence,” which for ACH means everything that is relevant to evaluating the hypotheses—including evidence, arguments, assumptions, and the absence of things one would expect to see if a hypothesis were true. It is important to include assumptions as well as factual evidence, because the matrix is intended to be an accurate reflection of the analyst’s thinking about the topic. If the analyst’s thinking is driven by assumptions rather than hard facts, this needs to become apparent so that the assumptions can be challenged. A classic example of absence of evidence is the Sherlock Holmes story of the dog that did not bark in the night. The failure of the dog to bark was persuasive evidence that the guilty party was not an outsider but an insider who was known to the dog.

* Analyze the diagnosticity of the evidence, arguments, and assumptions to identify which inputs are most influential in judging the relative likelihood of the hypotheses. Assess each input by working across the matrix. For each hypothesis, ask, “Is this input consistent with the hypothesis, inconsistent with the hypothesis, or is it not relevant?” If it is consistent, place a “C” in the box; if it is inconsistent, place an “I”; if it is not relevant to that hypothesis, leave the box blank. If a specific item of evidence, argument, or assumption is particularly compelling, place “CC” in the box; if it strongly undercuts the hypothesis, place “II.” When you are asking if an input is consistent or inconsistent with a specific hypothesis, a common response is, “It all depends on….” That means the rating for the hypothesis will be based on an assumption—whatever assumption the rating “depends on.” You should write down all such assumptions. After completing the matrix, look for any pattern in those assumptions—that is, the same assumption being made when ranking multiple items of evidence. After sorting the evidence for diagnosticity, note how many of the highly diagnostic inconsistency ratings are based on assumptions. Consider how much confidence you should have in those assumptions and then adjust the confidence in the ACH Inconsistency Scores accordingly. See Figure 7.3a for an example.

* Refine the matrix by reconsidering the hypotheses. Does it make sense to combine two hypotheses into one or to add a new hypothesis that was not considered at the start? If a new hypothesis is added, go back and evaluate all the evidence for this hypothesis. Additional evidence can be added at any time.

* Draw tentative conclusions about the relative likelihood of each hypothesis, basing your conclusions on an analysis of the diagnosticity of each item of evidence. The software calculates an inconsistency score based on the number of “I” or “II” ratings or a weighted inconsistency score that also includes consideration of the weight assigned to each item of evidence. The hypothesis with the lowest inconsistency score is tentatively the most likely hypothesis. The one with the most inconsistencies is the least likely. (A minimal scoring sketch follows these steps.)

* Analyze the sensitivity of your tentative conclusion to a change in the interpretation of a few critical items of evidence. Do this by using the ACH software to sort the evidence by diagnosticity. This identifies the most diagnostic evidence that is driving your conclusion. See Figure 7.3b. Consider the consequences for your analysis if one or more of these critical items of evidence were wrong or deceptive or subject to a different interpretation. If a different interpretation would be sufficient to change your conclusion, go back and do everything that is reasonably possible to double check the accuracy of your interpretation.

* Report the conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one. State which items of evidence were the most diagnostic and how compelling a case they make in distinguishing the relative likelihood of the hypotheses.

* Identify indicators or milestones for future observation. Generate two lists: the first focusing on future events or what might be developed through additional research that would help prove the validity of your analytic judgment, and the second, a list of indicators that would suggest that your judgment is less likely to be correct. Monitor both lists on a regular basis, remaining alert to whether new information strengthens or weakens your case.
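
None of the bookkeeping above requires special software. As a rough illustration only, the sketch below computes a weighted inconsistency score for each hypothesis, counting “I” as one point and “II” as two and multiplying by an evidence weight. The hypotheses, evidence items, ratings, and weights are all invented, and this scoring rule is just one plausible way to implement the calculation described in the steps above.

```python
# Illustrative ACH matrix: ratings are "C"/"CC" (consistent), "I"/"II"
# (inconsistent), or "" (not relevant). Hypotheses, evidence, ratings, and
# weights are all invented.
matrix = {
    "E1 unusual logistics activity reported": {"weight": 2, "H1": "C",  "H2": "I", "H3": "C"},
    "E2 official public denial":              {"weight": 1, "H1": "I",  "H2": "C", "H3": ""},
    "E3 expected mobilization not observed":  {"weight": 2, "H1": "II", "H2": "C", "H3": "I"},
}

INCONSISTENCY_POINTS = {"I": 1, "II": 2}

def weighted_inconsistency(hypothesis):
    """Sum weighted inconsistency points for one hypothesis across all evidence."""
    return sum(row["weight"] * INCONSISTENCY_POINTS.get(row[hypothesis], 0)
               for row in matrix.values())

scores = {h: weighted_inconsistency(h) for h in ("H1", "H2", "H3")}
# The tentatively most likely hypothesis is the one with the LOWEST score.
for hypothesis, score in sorted(scores.items(), key=lambda item: item[1]):
    print(hypothesis, score)
```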

Potential Pitfalls

The inconsistency or weighted inconsistency scores generated by the ACH software for each hypothesis are not the product of a magic formula that tells you which hypothesis to believe in! The ACH software takes you through a systematic analytic process, and the computer makes the calculation, but the judgment that emerges is only as accurate as your selection and evaluation of the evidence to be considered.

Because it is more difficult to refute hypotheses than to find information that confirms a favored hypothesis, the generation and testing of alternative hypotheses will often increase rather than reduce the analyst’s level of uncertainty. Such uncertainty is frustrating, but it is usually an accurate reflection of the true situation. The ACH procedure has the offsetting advantage of focusing your attention on the few items of critical evidence that cause the uncertainty or which, if they were available, would alleviate it.

Assumptions or logical deductions omitted: If the scores in the matrix do not support what you believe is the most likely hypothesis, the matrix may be incomplete. Your thinking may be influenced by assumptions or logical deductions that have not been included in the list of evidence/arguments. If so, these should be included so that the matrix fully reflects everything that influences your judgment on this issue. It is important for all analysts to recognize the role that unstated or unquestioned (and sometimes unrecognized) assumptions play in their analysis. In political or military analysis, for example, conclusions may be driven by assumptions about another country’s capabilities or intentions.

Insufficient attention to less likely hypotheses: If you think the scoring gives undue credibility to one or more of the less likely hypotheses, it may be because you have not assembled the evidence needed to refute them. You may have devoted insufficient attention to obtaining such evidence, or the evidence may simply not be there.

Definitive evidence: There are occasions when intelligence collectors obtain information from a trusted and well-placed inside source. The ACH analysis can assign a “High” weight for Credibility, but this is probably not enough to reflect the conclusiveness of such evidence and the impact it should have on an analyst’s thinking about the hypotheses. In other words, in some circumstances one or two highly authoritative reports from a trusted source in a position to know may support one hypothesis so strongly that they refute all other hypotheses regardless of what other less reliable or less definitive evidence may show.

Unbalanced set of evidence: Evidence and arguments must be representative of the problem as a whole. If there is considerable evidence on a related but peripheral issue and comparatively few items of evidence on the core issue, the inconsistency or weighted inconsistency scores may be misleading.

Diminishing returns: As evidence accumulates, each new item of inconsistent evidence or argument has less impact on the inconsistency scores than does the earlier evidence. When you are evaluating change over time, it is desirable to delete the older evidence periodically or to partition the evidence and analyze the older and newer evidence separately.

Origins of This Technique

Richards Heuer originally developed the ACH technique as a method for dealing with a particularly difficult type of analytic problem at the CIA in the 1980s. It was first described publicly in his book The Psychology of Intelligence Analysis.

7.4 ARGUMENT MAPPING

Argument Mapping is a technique that can be used to test a single hypothesis through logical reasoning. The process starts with a single hypothesis or tentative analytic judgment and then uses a box-and-arrow diagram to lay out visually the argumentation and evidence both for and against the hypothesis or analytic judgment.

When to Use It

When making an intuitive judgment, use Argument Mapping to test your own reasoning. Creating a visual map of your reasoning and the evidence that supports this reasoning helps you better understand the strengths, weaknesses, and gaps in your argument.

Argument Mapping and Analysis of Competing Hypotheses (ACH) are complementary techniques that work well either separately or together. Argument Mapping is a detailed presentation of the argument for a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The ideal is to use both.

Value Added

An Argument Map makes it easier for both analysts and recipients of the analysis to evaluate the soundness of any conclusion. It helps clarify and organize one’s thinking by showing the logical relationships between the various thoughts, both pro and con. An Argument Map also helps the analyst recognize assumptions and identify gaps in the available knowledge.

The Method

An Argument Map starts with a hypothesis—a single-sentence statement, judgment, or claim about which the analyst can, in subsequent statements, present general arguments and detailed evidence, both pro and con. Boxes with arguments are arrayed hierarchically below this statement, and these boxes are connected with arrows. The arrows signify that a statement in one box is a reason to believe, or not to believe, the statement in the box to which the arrow is pointing. Different types of boxes serve different functions in the reasoning process, and boxes use some combination of color-coding, icons, shapes, and labels so that one can quickly distinguish arguments supporting a hypothesis from arguments opposing it.
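
As a rough sketch of the box-and-arrow structure just described, the snippet below represents each box as a node carrying statements that either support or oppose its parent and prints the map as an indented outline. The claim and the arguments are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Box:
    """One box in the map: a statement plus the reasons for and against it."""
    statement: str
    pro: List["Box"] = field(default_factory=list)
    con: List["Box"] = field(default_factory=list)

def render(box, indent=0, label="CLAIM"):
    print("  " * indent + f"[{label}] {box.statement}")
    for child in box.pro:
        render(child, indent + 1, "PRO")
    for child in box.con:
        render(child, indent + 1, "CON")

# Invented hypothesis and arguments, for illustration only.
claim = Box(
    "Country X will test a new missile within six months",
    pro=[Box("New test-stand construction observed at the launch site")],
    con=[Box("No propellant shipments detected",
             pro=[Box("Coverage of the relevant rail line is intermittent")])],
)
render(claim)
```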

7.5 DECEPTION DETECTION

Deception is an action intended by an adversary to influence the perceptions, decisions, or actions of another to the advantage of the deceiver. Deception Detection is a set of checklists that analysts can use to help them determine when to look for deception, discover whether deception actually is present, and figure out what to do to avoid being deceived. “The accurate perception of deception in counterintelligence analysis is extraordinarily difficult. If deception is done well, the analyst should not expect to see any evidence of it. If, on the other hand, it is expected, the analyst often will find evidence of deception even when it is not there.”4

When to Use It

Analysts should be concerned about the possibility of deception when:

  • The potential deceiver has a history of conducting deception.
  • Key information is received at a critical time, that is, when either the recipient or the potential deceiver has a great deal to gain or to lose.
  • Information is received from a source whose bona fides are questionable.
  • Analysis hinges on a single critical piece of information or reporting.
  • Accepting new information would require the analyst to alter a key assumption or key judgment.
  • Accepting the new information would cause the Intelligence Community, the U.S. government, or the client to expend or divert significant resources.
  • The potential deceiver may have a feedback channel that illuminates whether and how the deception information is being processed and to what effect.

Value Added

Most analysts know they cannot assume that everything that arrives in their inbox is valid, but few know how to factor such concerns effectively into their daily work practices. If an analyst accepts the possibility that some of the information received may be deliberately deceptive, this puts a significant cognitive burden on the analyst. All the evidence is open then to some question, and it becomes difficult to draw any valid inferences from the reporting. This fundamental dilemma can paralyze analysis unless practical tools are available to guide the analyst in determining when it is appropriate to worry about deception, how best to detect deception in the reporting, and what to do in the future to guard against being deceived.

The Method

Analysts should routinely consider the possibility that opponents are attempting to mislead them or to hide important information. The possibility of deception cannot be rejected simply because there is no evidence of it; if it is well done, one should not expect to see evidence of it.

Analysts have also found the following “rules of the road” helpful in dealing with deception.

  • Avoid over-reliance on a single source of information.
  • Seek and heed the opinions of those closest to the reporting.
  • Be suspicious of human sources or sub-sources who have not been met with personally or for whom it is unclear how or from whom they obtained the information.
  • Do not rely exclusively on what someone says (verbal intelligence); always look for material evidence (documents, pictures, an address or phone number that can be confirmed, or some other form of concrete, verifiable information).
  • Look for a pattern where on several occasions a source’s reporting initially appears correct but later turns out to be wrong and the source can offer a seemingly plausible, albeit weak, explanation for the discrepancy.
  • Generate and evaluate a full set of plausible hypotheses—including a deception hypothesis, if appropriate—at the outset of a project.
  • Know the limitations as well as the capabilities of the potential deceiver.

DECEPTION DETECTION CHECKLISTS

 

Motive, Opportunity, and Means (MOM):

Motive: What are the goals and motives of the potential deceiver?
Channels: What means are available to the potential deceiver to feed information to us?
Risks: What consequences would the adversary suffer if such a deception were revealed?
Costs: Would the potential deceiver need to sacrifice sensitive information to establish the credibility of the deception channel?
Feedback: Does the potential deceiver have a feedback mechanism to monitor the impact of the deception operation?

 

Past Opposition Practices (POP):

Does the adversary have a history of engaging in deception?
Does the current circumstance fit the pattern of past deceptions?
If not, are there other historical precedents?
If not, are there changed circumstances that would explain using this form of deception at this time?

 

Manipulability of Sources (MOSES):

Is the source vulnerable to control or manipulation by the potential deceiver?
What is the basis for judging the source to be reliable?
Does the source have direct access or only indirect access to the information?
How good is the source’s track record of reporting?

 

Evaluation of Evidence (EVE):

How accurate is the source’s reporting? Has the whole chain of evidence, including translations, been checked?
Does the critical evidence check out? Remember, the sub-source can be more critical than the source.

Does evidence from one source of reporting (e.g., human intelligence) conflict with that coming from another source (e.g., signals intelligence or open source reporting)?
Do other sources of information provide corroborating evidence?
Is any evidence one would expect to see noteworthy by its absence?
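
Because MOM, POP, MOSES, and EVE are fixed lists of questions, they can be kept as simple structured data so that every report is walked through the same prompts and the answers are retained for the record. A minimal sketch, with the question text abbreviated and purely illustrative:

```python
# Abbreviated versions of the questions above, kept as data so each new report
# is run through the same prompts and the answers are retained for the record.
CHECKLISTS = {
    "MOM":   ["Motive?", "Channels available?", "Risks if exposed?",
              "Costs of establishing the channel?", "Feedback mechanism?"],
    "POP":   ["History of deception?", "Fits past pattern?",
              "Other precedents?", "Changed circumstances?"],
    "MOSES": ["Source manipulable?", "Basis for reliability?",
              "Direct or indirect access?", "Track record?"],
    "EVE":   ["Reporting accurate?", "Chain of evidence checked?",
              "Conflicts with other sources?", "Corroboration?",
              "Expected evidence absent?"],
}

answers = {}
for checklist, questions in CHECKLISTS.items():
    for question in questions:
        # Filled in during review; None marks a question not yet considered.
        answers[(checklist, question)] = None

print(f"{sum(v is None for v in answers.values())} checklist questions still to answer")
```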

Relationship to Other Techniques

Analysts can combine Deception Detection with Analysis of Competing Hypotheses to assess the possibility of deception. The analyst explicitly includes deception as one of the hypotheses to be analyzed, and information identified through the MOM, POP, MOSES, and EVE checklists is then included as evidence in the ACH analysis.

 

8.0 Assessment of Cause and Effect

Attempts to explain the past and forecast the future are based on an understanding of cause and effect. Such understanding is difficult, because the kinds of variables and relationships studied by the intelligence analyst are, in most cases, not amenable to the kinds of empirical analysis and theory development that are common in academic research. The best the analyst can do is to make an informed judgment, but such judgments depend upon the analyst’s subject matter expertise and reasoning ability and are vulnerable to various cognitive pitfalls and fallacies of reasoning.

 

One of the most common causes of intelligence failures is mirror imaging, the unconscious assumption that other countries and their leaders will act as we would in similar circumstances. Another is the tendency to attribute the behavior of people, organizations, or governments to the nature of the actor and underestimate the influence of situational factors. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. This is known as the “fundamental attribution error.”

There is also a tendency to assume that the results of an opponent’s actions are what the opponent intended, and we are slow to accept the reality of simple mistakes, accidents, unintended consequences, coincidences, or small causes leading to large effects. Analysts often assume that there is a single cause and stop their search for an explanation when the first seemingly sufficient cause is found. Perceptions of causality are partly determined by where one’s attention is directed; as a result, information that is readily available, salient, or vivid is more likely to be perceived as causal than information that is not. Cognitive limitations and common errors in the perception of cause and effect are discussed in greater detail in Richards Heuer’s Psychology of Intelligence Analysis.

 

The Psychology of Intelligence Analysis describes three principal strategies that intelligence analysts use to make judgments to explain the cause of current events or forecast what might happen in the future:

* Situational logic: Making expert judgments based on the known facts and an understanding of the underlying forces at work at that particular time and place. When an analyst is working with incomplete, ambiguous, and possibly deceptive information, these expert judgments usually depend upon assumptions about capabilities, intent, or the normal workings of things in the country of concern. Key Assumptions Check, which is one of the most commonly used structured techniques, is described in this chapter.

* Comparison with historical situations: Combining an understanding of the facts of a specific situation with knowledge of what happened in similar situations in the past, either in one’s personal experience or historical events. This strategy involves the use of analogies. The Structured Analogies technique described in this chapter adds rigor and increased accuracy to this process.

* Applying theory: Basing judgments on the systematic study of many examples of the same phenomenon. Theories or models often based on empirical academic research are used to explain how and when certain types of events normally happen. Many academic models are too generalized to be applicable to the unique characteristics of most intelligence problems.

Overview of Techniques

Key Assumptions Check is one of the most important and frequently used techniques. Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, that influence how the evidence is interpreted.

Structured Analogies applies analytic rigor to reasoning by analogy. This technique requires that the analyst systematically compares the issue of concern with multiple potential analogies before selecting the one for which the circumstances are most similar to the issue of concern. It seems natural to use analogies when making decisions or forecasts as, by definition, they contain information about what has happened in similar situations in the past. People often recognize patterns and then consciously take actions that were successful in a previous experience or avoid actions that previously were unsuccessful. However, analysts need to avoid the strong tendency to fasten onto the first analogy that comes to mind and supports their prior view about an issue.

Role Playing, as described here, starts with the current situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them. Intelligence analysts frequently endeavor to forecast the behavior of a foreign leader, group, organization, or country. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is of limited value without significant cultural understanding of the country and people involved.

Outside-In Thinking broadens an analyst’s thinking about the forces that can influence a particular issue of concern. This technique requires the analyst to reach beyond his or her specialty area to consider broader social, organizational, economic, environmental, technological, political, and global forces or trends that can affect the topic being analyzed.

Policy Outcomes Forecasting Model is a theory-based procedure for estimating the potential for political change. Formal models play a limited role in political/strategic analysis, since analysts generally are concerned with what they perceive to be unique events, rather than with any need to search for general patterns in events. Conceptual models that tell an analyst how to think about a problem and help the analyst through that thought process can be useful for frequently recurring issues, such as forecasting policy outcomes or analysis of political instability. Models or simulations that use a mathematical algorithm to calculate a conclusion are outside the domain of structured analytic techniques that are the topic of this book.

Prediction Markets are speculative markets created for the purpose of making predictions about future events. Just as betting on a horse race sets the odds on which horse will win, betting that some future occurrence will or will not happen sets the estimated probability of that future occurrence. Although the use of this technique has been successful in the private sector, it may not be a workable method for the Intelligence Community.
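
A worked illustration of the pricing logic: a contract that pays 1 if the event occurs and 0 otherwise, trading at 0.70, implies a crowd estimate of roughly a 70 percent chance. The sketch below, using an invented price, shows the arithmetic.

```python
# Illustrative contract: pays 1.0 if the event occurs, 0.0 otherwise.
payoff_if_event = 1.0
last_trade_price = 0.70  # hypothetical market price

implied_probability = last_trade_price / payoff_if_event
print(f"Market-implied probability: {implied_probability:.0%}")  # 70%
```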

8.1 KEY ASSUMPTIONS CHECK

Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, which influence how the evidence is interpreted.2 The Key Assumptions Check is a systematic effort to make explicit and question the assumptions (the mental model) that guide an analyst’s interpretation of evidence and reasoning about any particular problem. Such assumptions are usually necessary and unavoidable as a means of filling gaps in the incomplete, ambiguous, and sometimes deceptive information with which the analyst must work. They are driven by the analyst’s education, training, and experience, plus the organizational context in which the analyst works.

An organization really begins to learn when its most cherished assumptions are challenged by counterassumptions. Assumptions underpinning existing policies and procedures should therefore be unearthed, and alternative policies and procedures put forward based upon counterassumptions.

—Ian I. Mitroff and Richard O. Mason,

Creating a Dialectical Social Science: Concepts, Methods, and Models

 

When to Use It

Any explanation of current events or estimate of future developments requires the interpretation of evidence. If the available evidence is incomplete or ambiguous, this interpretation is influenced by assumptions about how things normally work in the country of interest. These assumptions should be made explicit early in the analytic process.

If a Key Assumptions Check is not done at the outset of a project, it can still prove extremely valuable if done during the coordination process or before conclusions are presented or delivered.

Value Added

Preparing a written list of one’s working assumptions at the beginning of any project helps the analyst:

  • Identify the specific assumptions that underpin the basic analytic line.
  • Achieve a better understanding of the fundamental dynamics at play.
  • Gain a better perspective and stimulate new thinking about the issue.
  • Discover hidden relationships and links between key factors.
  • Identify any developments that would cause an assumption to be abandoned.
  • Avoid surprise should new information render old assumptions invalid.

A sound understanding of the assumptions underlying an analytic judgment sets the limits for the confidence the analyst ought to have in making a judgment.

The Method

The process of conducting a Key Assumptions Check is relatively straightforward in concept but often challenging to put into practice. One challenge is that participating analysts must be open to the possibility that they could be wrong. It helps to involve in this process several well-regarded analysts who are generally familiar with the topic but have no prior commitment to any set of assumptions about the issue at hand. Keep in mind that many “key assumptions” turn out to be “key uncertainties.”

Here are the steps in conducting a Key Assumptions Check:

* Gather a small group of individuals who are working the issue along with a few “outsiders.” The primary analytic unit already is working from an established mental model, so the “outsiders” are needed to bring other perspectives.

* Ideally, participants should be asked to bring their list of assumptions when they come to the meeting. If this was not done, start the meeting with a silent brainstorming session. Ask each participant to write down several assumptions on 3 × 5 cards.

*  Collect the cards and list the assumptions on a whiteboard for all to see.

*  Elicit additional assumptions. Work from the prevailing analytic line back to the key arguments that support it. Use various devices to help prod participants’ thinking:

  • Ask the standard journalist questions. Who: Are we assuming that we know who all the key players are? What: Are we assuming that we know the goals of the key players? When: Are we assuming that conditions have not changed since our last report or that they will not change in the foreseeable future? Where: Are we assuming that we know where the real action is going to be? Why: Are we assuming that we understand the motives of the key players? How: Are we assuming that we know how they are going to do it?
  • After identifying a full set of assumptions, go back and critically examine each assumption. Ask:
    Why am I confident that this assumption is correct?
    In what circumstances might this assumption be untrue?
    Could it have been true in the past but no longer be true today?
    How much confidence do I have that this assumption is valid?
    If it turns out to be invalid, how much impact would this have on the analysis?
  • Place each assumption in one of three categories:
    Basically solid.
    Correct with some caveats.
    Unsupported or questionable—the “key uncertainties.”

Refine the list, deleting those that do not hold up to scrutiny and adding new assumptions that emerge from the discussion. Above all, emphasize those assumptions that would, if wrong, lead to changing the analytic conclusions.
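
The final sorting step lends itself to a simple record-keeping sketch: each assumption is logged with its category and the impact on the analysis if it proved wrong, and anything unsupported is flagged as a key uncertainty. The assumptions and ratings below are invented.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    category: str         # "solid", "caveated", or "questionable"
    impact_if_wrong: str  # brief note on how the conclusion would change

# Invented assumptions and ratings, for illustration only.
assumptions = [
    Assumption("The regime retains control of its border forces", "solid", "minor rewording"),
    Assumption("Sanctions remain in place through next year", "caveated", "timeline shifts"),
    Assumption("The opposition cannot coordinate nationally", "questionable", "main judgment reverses"),
]

# The "questionable" entries are the key uncertainties to monitor and to carry
# into an ACH matrix as explicit arguments or assumptions.
for a in assumptions:
    if a.category == "questionable":
        print(f"KEY UNCERTAINTY: {a.text} (impact if wrong: {a.impact_if_wrong})")
```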

There is a particularly noteworthy interaction between Key Assumptions Check and Analysis of Competing Hypotheses (ACH). Key assumptions need to be included as “evidence” in an ACH matrix to ensure that the matrix is an accurate reflection of the analyst’s thinking. And analysts frequently identify assumptions during the course of filling out an ACH matrix. This happens when an analyst assesses the consistency or inconsistency of an item of evidence with a hypothesis and concludes that this judgment is dependent upon something else—usually an assumption. Users of ACH should write down and keep track of the assumptions they make when evaluating evidence against the hypotheses.

8.2 STRUCTURED ANALOGIES

The Structured Analogies technique applies increased rigor to analogical reasoning by requiring that the issue of concern be compared systematically with multiple analogies rather than with a single analogy.

When to Use It

When one is making any analogy, it is important to think about more than just the similarities. It is also necessary to consider those conditions, qualities, or circumstances that are dissimilar between the two phenomena. This should be standard practice in all reasoning by analogy and especially in those cases when one cannot afford to be wrong.

We recommend that analysts considering the use of this technique read Richard D. Neustadt and Ernest R. May, “Unreasoning from Analogies,” chapter 4, in Thinking in Time: The Uses of History for Decision Makers (New York: Free Press, 1986). Also recommended is Giovanni Gavetti and Jan W. Rivkin, “How Strategists Really Think: Tapping the Power of Analogy,” Harvard Business Review (April 2005).

Value Added

Reasoning by analogy helps achieve understanding by reducing the unfamiliar to the familiar. In the absence of data required for a full understanding of the current situation, reasoning by analogy may be the only alternative.

The benefit of the Structured Analogies technique is that it avoids the tendency to fasten quickly on a single analogy and then focus only on evidence that supports the similarity of that analogy. Analysts should take into account the time required for this structured approach and may choose to use it only when the cost of being wrong is high.

The following is a step-by-step description of this technique.

*  Describe the issue and the judgment or decision that needs to be made.

*  Identify a group of experts who are familiar with the problem.

* Ask the group of experts to identify as many analogies as possible without focusing too strongly on how similar they are to the current situation. Various universities and international organizations maintain databases to facilitate this type of research. For example, the Massachusetts Institute of Technology (MIT) maintains its Cascon System for Analyzing International Conflict, a database of 85 post–World War II conflicts that are categorized and coded to facilitate their comparison with current conflicts of interest.

* Review the list of potential analogies and agree on which ones should be examined further.

* Develop a tentative list of categories for comparing the analogies to determine which analogy is closest to the issue in question. For example, the MIT conflict database codes each case according to the following broad categories as well as finer subcategories: previous or general relations between sides, great power and allied involvement, external relations generally, military-strategic, international organization (UN, legal, public opinion), ethnic (refugees, minorities), economic/resources, internal politics of the sides, communication and information, actions in disputed area.

* Write up an account of each selected analogy, with equal focus on those aspects of the analogy that are similar and those that are different. The task of writing accounts of all the analogies should be divided up among the experts. Each account can be posted on a wiki where each member of the group can read and comment on them.

* Review the tentative list of categories for comparing the analogous situations to make sure they are still appropriate. Then ask each expert to rate the similarity of each analogy to the issue of concern. The experts should do the rating in private using a scale from 0 to 10, where 0 = not at all similar, 5 = somewhat similar, and 10 = very similar.

* After combining the ratings to calculate an average rating for each analogy, discuss the results and make a forecast for the current issue of concern. This will usually be the same as the outcome of the most similar analogy. Alternatively, identify several possible outcomes, or scenarios, based on the diverse outcomes of analogous situations. Then use the analogous cases to identify drivers or policy actions that might influence the outcome of the current situation.
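
The rating and averaging in the last two steps is simple arithmetic. A sketch, with invented analogies and expert scores on the 0 to 10 scale described above:

```python
# Each expert privately rates how similar each candidate analogy is to the issue
# of concern (0 = not at all similar, 10 = very similar). Values are invented.
ratings = {
    "1990s regional secession crisis":   [7, 8, 6, 7],
    "Post-colonial partition dispute":   [4, 5, 3, 4],
    "Recent frozen-conflict ceasefire":  [6, 6, 7, 5],
}

averages = {analogy: sum(scores) / len(scores) for analogy, scores in ratings.items()}
for analogy, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{analogy}: average similarity {avg:.1f}")
# The forecast is usually based on the outcome of the highest-rated analogy.
```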

8.3 ROLE PLAYING

In Role Playing, analysts assume the roles of the leaders who are the subject of their analysis and act out their responses to developments. This technique is also known as gaming, but we use the name Role Playing here to distinguish it from the more complex forms of military gaming. This technique is about simple Role Playing, when the starting scenario is the current existing situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

When to Use It

Role Playing is often used to improve understanding of what might happen when two or more people, organizations, or countries interact, especially in conflict situations or negotiations. It shows how each side might react to statements or actions from the other side. Many years ago Richards Heuer participated in several Role Playing exercises, including one with analysts of the Soviet Union from throughout the Intelligence Community playing the role of Politburo members deciding on the successor to Soviet leader Leonid Brezhnev.

Role Playing has a desirable byproduct that might be part of the rationale for using this technique. It is a useful mechanism for bringing together people who, although they work on a common problem, may have little opportunity to meet and discuss their perspectives on this problem. A role-playing game may lead to the long-term benefits that come with mutual understanding and ongoing collaboration. To maximize this benefit, the organizer of the game should allow for participants to have informal time together.

Value Added

Role Playing is a good way to see a problem from another person’s perspective, to gain insight into how others think, or to gain insight into how other people might react to U.S. actions.

Role Playing is particularly useful for understanding the potential outcomes of a conflict situation. Parties to a conflict often act and react many times, and they can change as a result of their interactions. There is a body of research showing that experts using unaided judgment perform little better than chance in predicting the outcome of such conflicts. Performance is improved significantly by the use of “simulated interaction” (Role Playing) to act out the conflicts.

Role Playing does not necessarily give a “right” answer, but it typically enables the players to see some things in a new light. Players become more conscious that “where you stand depends on where you sit.”

Potential Pitfalls

One limitation of Role Playing is the difficulty of generalizing from the game to the real world. Just because something happens in a role-playing game does not necessarily mean the future will turn out that way. This observation seems obvious, but it can actually be a problem. Because of the immediacy of the experience and the personal impression made by the simulation, the outcome may have a stronger impact on the participants’ thinking than is warranted by the known facts of the case. As we shall discuss, this response needs to be addressed in the after-action review.

When the technique is used for intelligence analysis, the goal is not an explicit prediction but better understanding of the situation and the possible outcomes. The method does not end with the conclusion of the Role Playing. There must be an after-action review of the key turning points and how the outcome might have been different if different choices had been made at key points in the game.

The Method

Most of the gaming done in the Department of Defense and in the academic world is rather elaborate, so it requires substantial preparatory work.

Whenever possible, a Role Playing game should be conducted off site with cell phones turned off. Being away from the office precludes interruptions and makes it easier for participants to imagine themselves in a different environment with a different set of obligations, interests, ambitions, fears, and historical memories.

The analyst who plans and organizes the game leads a control team. This team monitors time to keep the game on track, serves as the communication channel to pass messages between teams, leads the after-action review, and helps write the after-action report to summarize what happened and lessons learned. The control team also plays any role that becomes necessary but was not foreseen, for example, a United Nations mediator. If necessary to keep the game on track or lead it in a desired direction, the control team may introduce new events, such as a terrorist attack that inflames emotions or a new policy statement on the issue by the U.S. president.

After the game ends or on the following day, it is necessary to conduct an after-action review. If there is agreement that all participants played their roles well, there may be a natural tendency to assume that the outcome of the game is a reasonable forecast of what will eventually happen in real life.

8.4 RED HAT ANALYSIS

Intelligence analysts frequently endeavor to forecast the actions of an adversary or a competitor. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them, but this technique alone is of limited value without significant cultural understanding of the other country and people involved.

 

To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often little more than partially informed speculation. Too frequently, behavior of foreign leaders appears ‘irrational’ or ‘not in their own best interest.’ Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999).

When to Use It

The chances of a Red Hat Analysis being accurate are better when one is trying to foresee the behavior of a specific person who has the authority to make decisions. Authoritarian leaders as well as small, cohesive groups, such as terrorist cells, are obvious candidates for this type of analysis. The chances of making an accurate forecast about an adversary’s or a competitor’s decision are significantly lower when the decision is constrained by a legislature or influenced by conflicting interest groups.

Value Added

There is a great deal of truth to the maxim that “where you stand depends on where you sit.” Red Hat Analysis is a reframing technique that requires the analyst to adopt—and make decisions consonant with— the culture of a foreign leader, cohesive group, criminal, or competitor. This conscious effort to imagine the situation as the target perceives it helps the analyst gain a different and usually more accurate perspective on a problem or issue. Reframing the problem typically changes the analyst’s perspective from that of an analyst observing and forecasting an adversary’s behavior to that of a leader who must make a difficult decision within that operational culture.

This reframing process often introduces new and different stimuli that might not have been factored into a traditional analysis. For example, in a Red Hat exercise, participants might ask themselves these questions: “What are my supporters expecting from me?” “Do I really need to make this decision now?” “What are the consequences of making a wrong decision?” “How will the United States respond?”

Potential Pitfalls

Forecasting human decisions or the outcome of a complex organizational process is difficult in the best of circumstances.

It is even more difficult when dealing with a foreign culture and significant gaps in the available information. Mirror imaging is hard to avoid because, in the absence of a thorough understanding of the foreign situation and culture, your own perceptions appear to be the only reasonable way to look at the problem.

A common error in our perceptions of the behavior of other people, organizations, or governments of all types is likely to be even more common when assessing the behavior of foreign leaders or groups. This is the tendency to attribute the behavior of people, organizations, or governments to the nature of the actor and to underestimate the influence of situational factors. This error is especially easy to make when one assumes that the actor has malevolent intentions but our understanding of the pressures on that actor is limited. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. We seldom see ourselves as a bad person, but we often see malevolent intent in others. This is known to cognitive psychologists as the fundamental attribution error.

The Method

* Gather a group of experts with in-depth knowledge of the target, operating environment, and senior decision maker’s personality, motives, and style of thinking. If at all possible, try to include people who are well grounded in the adversary’s culture, who speak the same language, share the same ethnic background, or have lived extensively in the region.

* Present the experts with a situation or a stimulus and ask them to put themselves in the adversary's or competitor's shoes and simulate how that adversary or competitor would respond.

* Emphasize the need to avoid mirror imaging. The question is not “What would you do if you were in their shoes?” but “How would this person or group in that particular culture and circumstance most likely think, behave, and respond to the stimulus?”

* If trying to foresee the actions of a group or an organization, consider using the Role Playing technique.

* In presenting the results, describe the alternatives that were considered and the rationale for selecting the path the person or group is most likely to take. Consider other less conventional means of presenting the results of your analysis, such as the following:

Describing a hypothetical conversation in which the leader and other players discuss the issue in the first person.
Drafting a document (set of instructions, military orders, policy paper, or directives) that the adversary or competitor would likely generate.

Relationship to Other Techniques

Red Hat Analysis differs from a Red Team Analysis in that it can be done or organized by any analyst who needs to understand or forecast foreign behavior and who has, or can gain access to, the required cultural expertise.

8.5 OUTSIDE-IN THINKING

Outside-In Thinking identifies the broad range of global, political, environmental, technological, economic, or social forces and trends that are outside the analyst's area of expertise but that may profoundly affect the issue of concern. Many analysts tend to think from the inside out, focusing on the factors within their specific area of responsibility with which they are most familiar.

When to Use It

This technique is most useful in the early stages of an analytic process when analysts need to identify all the critical factors that might explain an event or could influence how a particular situation will develop. It should be part of the standard process for any project that analyzes potential future outcomes, for this approach covers the broader environmental context from which surprises and unintended consequences often come.

Outside-In Thinking also is useful if a large database is being assembled and needs to be checked to ensure that no important field in the database architecture has been overlooked. In most cases, important categories of information (or database fields) are easily identifiable early on in a research effort, but invariably one or two additional fields emerge after an analyst or group of analysts is well into a project, forcing them to go back, review all previous files, and recode them for the new field. Typically, the overlooked fields are in the broader environment over which the analysts have little control. By applying Outside-In Thinking, analysts can better visualize the entire set of data fields early on in the research effort.

Value Added

Most analysts focus on familiar factors within their field of specialty, but we live in a complex, interrelated world where events in our little niche of that world are often affected by forces in the broader environment over which we have no control. The goal of Outside-In Thinking is to help analysts see the entire picture, not just the part of the picture with which they are already familiar.

Outside-In Thinking reduces the risk of missing important variables early in the analytic process. It encourages analysts to rethink a problem or an issue while employing a broader conceptual framework.

The Method

  • Generate a generic description of the problem or phenomenon to be studied.
  • Form a group to brainstorm the key forces and factors that could have an impact on the topic but over which the subject can exert little or no influence, such as globalization, the emergence of new technologies, historical precedent, and the growth of the Internet.
  • Employ the mnemonic STEEP +2 to trigger new ideas (Social, Technical, Economic, Environmental, Political plus Military and Psychological); a minimal checklist sketch using these categories follows this list.
  • Move down a level of analysis and list the key factors about which some expertise is available.
  • Assess specifically how each of these forces and factors could have an impact on the problem.
  • Ascertain whether these forces and factors actually do have an impact on the issue at hand, basing your conclusion on the available evidence.
  • Generate new intelligence collection tasking to fill in information gaps.
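
To illustrate the coverage-checking use described under When to Use It, the following minimal Python sketch treats the STEEP +2 categories as a checklist and flags categories for which no factors or database fields have yet been identified. The category names come from the mnemonic above; the example field names are hypothetical placeholders, not prescribed by the technique.

```python
# Minimal sketch: using the STEEP +2 categories from the text as a coverage checklist.
# The example database fields below are hypothetical and purely illustrative.

STEEP_PLUS_2 = [
    "Social", "Technical", "Economic", "Environmental",
    "Political", "Military", "Psychological",
]

# Hypothetical mapping of existing database fields (or brainstormed factors)
# to the external-force category each one covers.
fields_by_category = {
    "Social": ["urbanization_rate"],
    "Technical": ["internet_penetration"],
    "Economic": ["gdp_growth", "remittance_flows"],
    "Political": ["election_date"],
}

# Flag categories with no coverage so the group can brainstorm factors for them
# before the research effort is too far along.
uncovered = [c for c in STEEP_PLUS_2 if not fields_by_category.get(c)]
print("Categories with no factors or fields yet:", ", ".join(uncovered))
```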

Relationship to Other Techniques

Outside-In Thinking is essentially the same as a business analysis technique that goes by different acronyms, such as STEEP, STEEPLED, PEST, or PESTLE. For example, PEST is an acronym for Political, Economic, Social, and Technological, while STEEPLED also includes Legal, Ethical, and Demographic. All require the analysis of external factors that may have either a favorable or unfavorable influence on an organization.

8.6 POLICY OUTCOMES FORECASTING MODEL

The Policy Outcomes Forecasting Model structures the analysis of competing political forces in order to forecast the most likely political outcome and the potential for significant political change. The model was originally designed as a quantitative method using expert-generated data, not as a structured analytic technique. However, like many quantitative models, it can also be used simply as a conceptual model to guide how an expert analyst thinks about a complex issue.

When to Use It

The Policy Outcomes Forecasting Model has been used to analyze the following types of questions:

What policy is Country W likely to adopt toward its neighbor?
Is the U.S. military likely to lose its base in Country X?
How willing is Country Y to compromise in its dispute with Country X?
In what circumstances can the government of Country Z be brought down?

Use this model when you have substantial information available on the relevant actors (individual leaders or organizations), their positions on the issues, the importance of the issues to each actor, and the relative strength of each actor’s ability to support or oppose any specific policy. Judgments about the positions and the strengths and weaknesses of the various political forces can then be used to forecast what policies might be adopted and to assess the potential for political change.

Use of this model is limited to situations in which a single issue will be decided by political bargaining and maneuvering, and in which the potential outcomes can be visualized along a continuous line.

Value Added

Like any model, Policy Outcomes Forecasting provides a systematic framework for generating and organizing information about an issue of concern. Once the basic analysis is done, it can be used to analyze the significance of changes in the position of any of the stakeholders. An analyst may also use the data to answer What If? questions such as the following:

Would a leader strengthen her position if she modified her stand on a contentious issue?
Would the military gain the upper hand if the current civilian leader were to die?
What would be the political consequences if a traditionally apolitical institution—such as the church or the military—became politicized?

An analyst or group of analysts can make an informed judgment about an outcome by explicitly identifying all the stakeholders in the outcome of an issue and then determining how close or far apart they are on the issue, how influential each one is, and how strongly each one feels about it. Assembling all this data in a graphic such as Figure 8.6 helps the analyst manage the complexity, share and discuss the information with other analysts, and present conclusions in an efficient and effective manner.

The Method

Define the problem in terms of a policy or leadership choice issue. The issue must vary along a single dimension so that options can be arrayed from one extreme to another in a way that makes sense within the country in which the decision will be made.

These alternative policies are rated on a scale from 0 to 100, with the position on the scale reflecting the distance or difference between the policies.

These options range between the two extremes—full nationalization of energy investment at the left end of the scale and private investment only at the right end. Note that the position of these policies on the horizontal scale captures the full range of the policy debate and reflects the estimated political distance or difference between each of the policies.

The next step is to identify all the actors, no matter how strong or weak, that will try to influence the policy outcome.

Each actor is then placed on the chart along two dimensions. First, the actor's position on the horizontal scale shows where the actor stands on the issue, and, second, the actor's height above the scale is a measure of the relative amount of clout the actor has and is prepared to use to influence the outcome of the policy decision. To judge the relative height of each actor, identify the strongest actor and arbitrarily assign that actor a strength of 100. Assign proportionately lower values to other actors based on your judgment or gut feeling about how their strength and political clout compare with those of the actor assigned a strength of 100.

This graphic representation of the relevant variables is used as an aid in assessing and communicating to others the current status of the most influential forces on this issue and the potential impact of various changes in this status.
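
The sketch below is one minimal way to capture the layout just described: each actor gets a position on the 0-to-100 policy scale and a strength scaled so that the strongest actor equals 100. The actor names, positions, and raw strength judgments are hypothetical placeholders, not data from the text.

```python
# Minimal sketch of the actor layout described in the Method. Actor names,
# positions, and raw strength judgments are hypothetical placeholders.

actors = [
    # (name, position on the 0-100 policy scale, analyst's raw strength judgment)
    ("Ruling party",         30, 9),
    ("Energy ministry",      20, 6),
    ("Opposition coalition", 70, 7),
    ("Foreign investors",    90, 4),
]

# Scale strengths so the strongest actor is assigned 100 and the others
# receive proportionately lower values, as the text prescribes.
max_raw = max(raw for _, _, raw in actors)
scaled = [(name, pos, round(100 * raw / max_raw)) for name, pos, raw in actors]

# List actors from one end of the policy spectrum to the other; position is the
# horizontal placement and strength is the "height" in a Figure 8.6-style graphic.
for name, pos, strength in sorted(scaled, key=lambda a: a[1]):
    print(f"{name:22s} position={pos:3d}  relative strength={strength:3d}")
```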

Origins of This Technique

The Policy Outcomes Forecasting Model described here is a simplified, nonquantitative version of a policy forecasting model developed by Bruce Bueno de Mesquita and described in his book The War Trap (New Haven: Yale University Press, 1981). It was further refined by Bueno de Mesquita et al., in Forecasting Political Events: The Future of Hong Kong (New Haven: Yale University Press, 1988).

In the 1980s, CIA analysts used this method with the implementing software to analyze scores of policy and political instability issues in more than thirty countries. Analysts used their subject expertise to assign numeric values to the variables. The simplest version of this methodology uses the positions of each actor, the relative strength of each actor, and the relative importance of the issue to each actor to calculate which actor's or group's position would get the most support if each policy position had to compete with every other policy position in a series of "pairwise" contests. In other words, the model finds the policy option around which a coalition will form that can defeat every other possible coalition in every possible contest between any two policy options (the "median voter" model). The model can also test how sensitive the policy forecast is to various changes in the relative strength of the actors or in their positions or in the importance each attaches to the issue.
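
The following sketch illustrates the pairwise "median voter" logic described above in highly simplified form; it is not the implementing software mentioned in the text. The actor positions, strengths, importance weights, and candidate policy options are all hypothetical. Rerunning it after changing an actor's strength, position, or importance gives a crude version of the sensitivity testing the model supports.

```python
# Minimal sketch of the pairwise ("median voter") logic, with hypothetical data.

actors = [
    # (position on the 0-100 scale, relative strength, importance of the issue to the actor)
    (30, 100, 0.9),
    (20,  60, 0.8),
    (70,  70, 1.0),
    (90,  40, 0.5),
]
options = [0, 25, 50, 75, 100]  # candidate policy positions along the scale

def support(option_a, option_b):
    """Total weighted support for option_a in a head-to-head contest with option_b.
    Each actor backs whichever option is closer to its own position, with a
    voting weight of strength * importance."""
    total = 0.0
    for pos, strength, salience in actors:
        if abs(option_a - pos) < abs(option_b - pos):
            total += strength * salience
    return total

def unbeaten(option):
    """True if the option defeats or ties every alternative in pairwise contests."""
    return all(support(option, other) >= support(other, option)
               for other in options if other != option)

winners = [opt for opt in options if unbeaten(opt)]
print("Policy position(s) that defeat or tie every alternative:", winners)
```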

A testing program at that time found that traditional analysis and analyses using the policy forces analysis software were both accurate in hitting the target about 90 percent of the time, but the software hit the bull's-eye twice as often. Also, reports based on the policy forces software gave greater detail on the political dynamics leading to the policy outcome and were less vague in their forecasts than were traditional analyses.

8.7 PREDICTION MARKETS

Prediction Markets are speculative markets created solely for the purpose of allowing participants to make predictions in a particular area. Just as betting on a horse race sets the odds on which horse will win, supply and demand in the prediction market sets the estimated probability of some future occurrence. Two books, The Wisdom of Crowds by James Surowiecki and Infotopia by Cass Sunstein, have popularized the concept of Prediction Markets.

We do not support the use of Prediction Markets for intelligence analysis for reasons that are discussed below. We have included Prediction Markets in this book because it is an established analytic technique and it has been suggested for use in the Intelligence Community.

The following arguments have been made against the use of Prediction Markets for intelligence analysis:

* Prediction Markets can be used only in situations that will have an unambiguous outcome, usually within a predictable time period. Such situations are commonplace in business and industry, though much less so in intelligence analysis.

* Prediction Markets do have a strong record of near-term forecasts, but intelligence analysts and their customers are likely to be uncomfortable with their predictions. No matter what the statistical record of accuracy with this technique might be, consumers of intelligence are unlikely to accept any forecast without understanding the rationale for the forecast and the qualifications of those who voted on it.

* If people in the crowd are offering their unsupported opinions, and not informed judgments, the utility of the prediction is questionable. Prediction Markets are more likely to be useful in dealing with commercial preferences or voting behavior and less accurate, for example, in predicting the next terrorist attack in the United States, a forecast that would require special expertise and knowledge.

* Like other financial markets, such as commodities futures markets, Prediction Markets are subject to liquidity problems and speculative attacks mounted in order to manipulate the results. Financially and politically interested parties may seek to manipulate the vote. The fewer the participants, the more vulnerable a market is.

* Ethical objections have been raised to the use of a Prediction Market for national security issues. The Defense Advanced Research Projects Agency (DARPA) proposed a Policy Analysis Market in 2003. It would have worked in a manner similar to the commodities market, and it would have allowed investors to earn profits by betting on the likelihood of such events as regime changes in the Middle East and the likelihood of terrorist attacks. The DARPA plan was attacked on grounds that “it was unethical and in bad taste to accept wagers on the fate of foreign leaders and a terrorist attack. The project was canceled a day after it was announced.” Although attacks on the DARPA plan in the media may have been overdone, there is a legitimate concern about government-sponsored betting on international events.

Relationship to Other Techniques

The Delphi Method is a more appropriate method for intelligence agencies to use to aggregate outside expert opinion; Delphi also has a broader applicability for other types of intelligence analysis.

9 Challenge Analysis

Challenge analysis encompasses a set of analytic techniques that have also been called contrarian analysis, alternative analysis, competitive analysis, red team analysis, and devil’s advocacy. What all of these have in common is the goal of challenging an established mental model or analytic consensus in order to broaden the range of possible explanations or estimates that are seriously considered. The fact that this same activity has been called by so many different names suggests there has been some conceptual diversity about how and why these techniques are being used and what might be accomplished by their use.

There is a broad recognition in the Intelligence Community that failure to question a consensus judgment, or a long-established mental model, has been a consistent feature of most significant intelligence failures. The postmortem analysis of virtually every major U.S. intelligence failure since Pearl Harbor has identified an analytic mental model (mindset) as a key factor contributing to the failure. The situation changed, but the analyst’s mental model did not keep pace with that change or did not recognize all the ramifications of the change.

This record of analytic failures has generated discussion about the “paradox of expertise.” The experts can be the last to recognize the reality and significance of change. For example, few experts on the Soviet Union foresaw its collapse, and the experts on Germany were the last to accept that Germany was going to be reunified. Going all the way back to the Korean War, experts on China were saying that China would not enter the war—until it did.

A mental model formed through education and experience serves an essential function; it is what enables the analyst to provide on a daily basis reasonably good intuitive assessments or estimates about what is happening or likely to happen.

The problem is that a mental model that has previously provided accurate assessments and estimates for many years can be slow to change. New information received incrementally over time is easily assimilated into one’s existing mental model, so the significance of gradual change over time is easily missed. It is human nature to see the future as a continuation of the past.

There is also another logical rationale for consistently challenging conventional wisdom. Former CIA Director Michael Hayden has stated that “our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we’re at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments.” The director went on to suggest that getting it right seven times out of ten might be a realistic expectation.

This chapter describes three types of challenge analysis techniques: self-critique, critique of others, and solicitation of critique by others.

Self-critique: Two techniques that help analysts challenge their own thinking are Premortem Analysis and Structured Self-Critique. These techniques can counteract the pressures for conformity or consensus that often suppress the expression of dissenting opinions in an analytic team or group. We adapted Premortem Analysis from business and applied it to intelligence analysis.

Critique of others: Analysts can use What If? Analysis or High Impact/Low Probability Analysis to tactfully question the conventional wisdom by making the best case for an alternative explanation or outcome.

Critique by others: Several techniques are available for seeking out critique by others. Devil’s Advocacy is a well-known example of that. The term “Red Team” is used to describe a group that is assigned to take an adversarial perspective. The Delphi Method is a structured process for eliciting opinions from a panel of outside experts.

Reframing Techniques

Three of the techniques in this chapter work by a process called reframing. A frame is any cognitive structure that guides the perception and interpretation of what one sees. A mental model of how things normally work can be thought of as a frame through which an analyst sees and interprets evidence. An individual or a group of people can change their frame of reference, and thus challenge their own thinking about a problem, simply by changing the questions they ask or changing the perspective from which they ask the questions. Analysts can use this reframing technique when they need to generate new ideas, when they want to see old ideas from a new perspective, or any other time when they sense a need for fresh thinking.

It is fairly easy to open the mind to think in different ways. The trick is to restate the question, task, or problem from a different perspective that activates a different set of synapses in the brain. Each of the three applications of reframing described in this chapter does this in a different way. Premortem Analysis asks analysts to imagine themselves at some future point in time, after having just learned that a previous analysis turned out to be completely wrong. The task then is to figure out how and why it might have gone wrong. What If? Analysis asks the analyst to imagine that some unlikely event has occurred, and then to explain how it could happen and the implications of the event. Structured Self-Critique asks a team of analysts to reverse its role from advocate to critic in order to explore potential weaknesses in the previous analysis. This change in role can empower analysts to express concerns about the consensus view that might previously have been suppressed. These techniques are generally more effective in a small group than with a single analyst. Their effectiveness depends in large measure on how fully and enthusiastically participants in the group embrace the imaginative or alternative role they are playing. Just going through the motions is of limited value.

Overview of Techniques

Premortem Analysis reduces the risk of analytic failure by identifying and analyzing a potential failure before it occurs. Imagine yourself several years in the future. You suddenly learn from an unimpeachable source that your estimate was wrong. Then imagine what could have happened to cause the estimate to be wrong. Looking back from the future to explain something that has happened is much easier than looking into the future to forecast what will happen, and this exercise helps identify problems one has not foreseen.

Structured Self-Critique is a procedure that a small team or group uses to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. Looking at the responses to these questions, the team reassesses its overall confidence in its own judgment.

What If? Analysis is an important technique for alerting decision makers to an event that could happen, or is already happening, even if it may seem unlikely at the time. It is a tactful way of suggesting to decision makers the possibility that they may be wrong. What If? Analysis serves a function similar to that of Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency. The analyst imagines that an event has occurred and then considers how the event could have unfolded.

High Impact/Low Probability Analysis is used to sensitize analysts and decision makers to the possibility that a low-probability event might actually happen and stimulate them to think about measures that could be taken to deal with the danger or to exploit the opportunity if it does occur. The analyst assumes the event has occurred, and then figures out how it could have happened and what the consequences might be.

Devil’s Advocacy is a technique in which a person who has been designated the Devil’s Advocate, usually by a responsible authority, makes the best possible case against a proposed analytic judgment, plan, or decision.

Red Team Analysis as described here is any project initiated by management to marshal the specialized substantive, cultural, or analytic skills required to challenge conventional wisdom about how an adversary or competitor thinks about an issue.

Delphi Method is a procedure for obtaining ideas, judgments, or forecasts electronically from a geographically dispersed panel of experts. It is a time-tested, extremely flexible procedure that can be used on any topic or issue for which expert judgment can contribute.

9.1 PREMORTEM ANALYSIS

The goal of a Premortem Analysis is to reduce the risk of surprise and the subsequent need for a postmortem investigation of what went wrong. It is an easy-to-use technique that enables a group of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions.

When to Use It

Premortem Analysis should be used by analysts who can devote a few hours to challenging their own analytic conclusions about the future to see where they might be wrong. It may be used by a single analyst but, like all structured analytic techniques, it is most effective when used in a small group.

In training exercises run by Gary Klein, after the trainees formulated a plan of action, they were asked to imagine that it was several months or years into the future and that their plan had been implemented but had failed. They were then asked to describe how it might have failed, and, despite their original confidence in the plan, they could easily come up with multiple explanations for failure, reasons that were not identified when the plan was first proposed and developed.

Klein reported his trainees showed a “much higher level of candor” when evaluating their own plans after being exposed to the premortem exercise, as compared with other more passive attempts at getting them to self-critique their own plans.

Value Added

Briefly, there are two creative processes at work here. First, the questions are reframed, an exercise that typically elicits responses that are different from the original ones. Asking questions about the same topic, but from a different perspective, opens new pathways in the brain, as we noted in the introduction to this chapter. Second, the Premortem approach legitimizes dissent. For various reasons, many members of small groups suppress dissenting opinions, leading to premature consensus. In a Premortem Analysis, all analysts are asked to make a positive contribution to group goals by identifying weaknesses in the previous analysis.

Research has documented that an important cause of poor group decisions is the desire for consensus. This desire can lead to premature closure and agreement with majority views regardless of whether they are perceived as right or wrong. Attempts to improve group creativity and decision making often focus on ensuring that a wider range of information and opinions are presented to the group and given consideration.

In a candid newspaper column written long before he became CIA Director, Leon Panetta wrote that “an unofficial rule in the bureaucracy says that to ‘get along, go along.’ In other words, even when it is obvious that mistakes are being made, there is a hesitancy to report the failings for fear of retribution or embarrassment. That is true at every level, including advisers to the president. The result is a ‘don’t make waves’ mentality … that is just another fact of life you tolerate in a big bureaucracy.”

The Method

The best time to conduct a Premortem Analysis is shortly after a group has reached a conclusion on an action plan, but before any serious drafting of the report has been done. If the group members are not already familiar with the Premortem technique, the group leader, another group member, or a facilitator steps up and makes a statement along the lines of the following: "Okay, we now think we know the right answer, but we need to double-check this.

To free up our minds to consider other possibilities, let’s imagine that we have made this judgment, our report has gone forward and been accepted, and now, x months or years later, we gain access to a crystal ball. Peering into this ball, we learn that our analysis was wrong, and things turned out very differently from the way we had expected. Now, working from that perspective in the future, let’s put our imaginations to work and brainstorm what could have possibly happened to cause our analysis to be so wrong.”

After all ideas are posted on the board and visible to all, the group discusses what it has learned by this exercise, and what action, if any, the group should take. This generation and initial discussion of ideas can often be accomplished in a single two-hour meeting, which is a small investment of time to undertake a systematic challenge to the group’s thinking.

 

9.2 STRUCTURED SELF-CRITIQUE

Structured Self-Critique is a systematic procedure that a small team or group can use to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. As it reviews responses to these questions, the team reassesses its overall confidence in its own judgment.

When to Use It

You can use Structured Self-Critique productively to look for weaknesses in any analytic explanation of events or estimate of the future. It is specifically recommended for use in the following ways:

  • As the next step after a Premortem Analysis raises unresolved questions about any estimated future outcome or event.
  • As a double check prior to the publication of any major product such as a National Intelligence Estimate.
  • As one approach to resolving conflicting opinions.

The Method

Start by re-emphasizing that all analysts in the group are now wearing a black hat. They are now critics, not advocates, and they will now be judged by their ability to find weaknesses in the previous analysis, not on the basis of their support for the previous analysis. Then work through the following topics or questions:

Sources of uncertainty: Identify the sources and types of uncertainty in order to set reasonable expectations for what the team might expect to achieve. Should one expect to find: (a) a single correct or most likely answer, (b) a most likely answer together with one or more alternatives that must also be considered, or (c) a number of possible explanations or scenarios for future development? To judge the uncertainty, answer these questions:

  • Is the question being analyzed a puzzle or a mystery? Puzzles have answers, and correct answers can be identified if enough pieces of the puzzle are found. A mystery has no single definitive answer; it depends upon the future interaction of many factors, some known and others unknown. Analysts can frame the boundaries of a mystery only “by identifying the critical factors and making an intuitive judgment about how they have interacted in the past and might interact in the future.”
  • How does the team rate the quality and timeliness of its evidence?
  • Are there a greater than usual number of assumptions because of insufficient evidence or the complexity of the situation?
  • Is the team dealing with a relatively stable situation or with a situation that is undergoing, or potentially about to undergo, significant change?

Analytic process: In the initial analysis, did the team identify alternative hypotheses and seek out information on these hypotheses? Did it identify key assumptions? Did it seek a broad range of diverse opinions by including analysts from other offices, agencies, academia, or the private sector in the deliberations? If these steps were not taken, the odds of the team having a faulty or incomplete analysis are increased. Either consider doing some of these things now or lower the team's level of confidence in its judgment.

Critical assumptions: Assuming that the team has already identified key assumptions, the next step is to identify the one or two assumptions that would have the greatest impact on the analytic judgment if they turned out to be wrong. In other words, if the assumption is wrong, the judgment will be wrong. How recent and well-documented is the evidence that supports each such assumption? Brainstorm circumstances that could cause each of these assumptions to be wrong, and assess the impact on the team’s analytic judgment if the assumption is wrong. Would the reversal of any of these assumptions support any alternative hypothesis? If the team has not previously identified key assumptions, it should do a Key Assumptions Check now.

Diagnostic evidence: Identify alternative hypotheses and the several most diagnostic items of evidence that enable the team to reject alternative hypotheses. For each item, brainstorm for any reasonable alternative interpretation of this evidence that could make it consistent with an alternative hypothesis. See Diagnostic Reasoning in chapter 7.

Information gaps: Are there gaps in the available information, or is some of the information so dated that it may no longer be valid? Is the absence of information readily explainable? How should absence of information affect the team’s confidence in its conclusions?

Missing evidence: Is there any evidence that one would expect to see in the regular flow of intelligence or open source reporting if the analytic judgment is correct, but that turns out not to be there?

Anomalous evidence: Is there any anomalous item of evidence that would have been important if it had been believed or if it could have been related to the issue of concern, but that was rejected as unimportant because it was not believed or its significance was not known? If so, try to imagine how this item might be a key clue to an emerging alternative hypothesis.

Changes in the broad environment: Driven by technology and globalization, the world as a whole seems to be experiencing social, technical, economic, environmental, and political changes at a faster rate than ever before in history. Might any of these changes play a role in what is happening or will happen? More broadly, what key forces, factors, or events could occur independently of the issue that is the subject of analysis that could have a significant impact on whether the analysis proves to be right or wrong?

Alternative decision models: If the analysis deals with decision making by a foreign government or nongovernmental organization (NGO), was the group's judgment about foreign behavior based on a rational actor assumption? If so, consider the potential applicability of other decision models, specifically that the action was or will be the result of bargaining between political or bureaucratic forces, the result of standard organizational processes, or the whim of an authoritarian leader. If information for a more thorough analysis is lacking, consider the implications of that for confidence in the team's judgment.

Cultural expertise: If the topic being analyzed involves a foreign or otherwise unfamiliar culture or subculture, does the team have or has it obtained cultural expertise on thought processes in that culture?

Deception: Does another country, NGO, or commercial competitor about which the team is making judgments have a motive, opportunity, or means for engaging in deception to influence U.S. policy or to change your behavior? Does this country, NGO, or competitor have a past history of engaging in denial, deception, or influence operations?

9.3 WHAT IF? ANALYSIS

What If? Analysis imagines that an unexpected event has occurred with potential major impact. Then, with the benefit of “hindsight,” the analyst figures out how this event could have come about and what the consequences might be.

When to Use It

This technique should be in every analyst’s toolkit. It is an important technique for alerting decision makers to an event that could happen, even if it may seem unlikely at the present time. What If? Analysis serves a function similar to Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable the decision maker to plan ahead for that contingency. It is most appropriate when two conditions are present:

A mental model is well ingrained within the analytic or the customer community that a certain event will not happen.

There is a perceived need for others to focus on the possibility that this event could actually happen and to consider the consequences if it does occur.

Value Added

Shifting the focus from asking whether an event will occur to imagining that it has occurred and then explaining how it might have happened opens the mind to think in different ways. What If? Analysis shifts the discussion from, “How likely is it?” to these questions:

  • How could it possibly come about?
  • What would be the impact?
  • Has the possibility of the event happening increased?

The technique also gives decision makers the following additional benefits:

A better sense of what they might be able to do today to prevent an untoward development from occurring, or what they might do today to leverage an opportunity for advancing their interests.

A list of specific indicators to monitor to help determine if the chances of a development actually occurring are increasing.

The What If? technique is a useful tool for exploring unanticipated or unlikely scenarios that are within the realm of possibility and that would have significant consequences should they come to pass.

9.4 HIGH IMPACT/LOW PROBABILITY ANALYSIS

High Impact/Low Probability Analysis provides decision makers with early warning that a seemingly unlikely event with major policy and resource repercussions might actually occur.

When to Use It

High Impact/Low Probability Analysis should be used when one wants to alert decision makers to the possibility that a seemingly long-shot development that would have a major policy or resource impact may be more likely than previously anticipated. Events that would have merited such treatment before they occurred include the reunification of Germany in 1990 and the collapse of the Soviet Union in 1991.

The more nuanced and concrete the analyst’s depiction of the plausible paths to danger, the easier it is for a decision maker to develop a package of policies to protect or advance vital U.S. interests.

Potential Pitfalls

Analysts need to be careful when communicating the likelihood of unlikely events. The word "unlikely" can be interpreted as meaning anywhere from 1 percent to 25 percent probability, while "highly unlikely" may mean anywhere from 1 percent to 10 percent.

The Method

An effective High Impact/Low Probability Analysis involves these steps:

  • Clearly describe the unlikely event.
  • Define the high-impact consequences if this event occurs. Consider both the actual event and the secondary impacts of the event.
  • Identify any recent information or reporting suggesting that the likelihood of the unlikely event occurring may be increasing.
  • Postulate additional triggers that would propel events in this unlikely direction or factors that would greatly accelerate timetables, such as a botched government response, the rise of an energetic challenger, a major terrorist attack, or a surprise electoral outcome that benefits U.S. interests.
  • Develop one or more plausible pathways that would explain how this seemingly unlikely event could unfold. Focus on the specifics of what must happen at each stage of the process for the train of events to play out.
  • Generate a list of indicators that would help analysts and decision makers recognize that events were beginning to unfold in this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

Once the list of indicators has been developed, the analyst must periodically review the list. Such periodic reviews help analysts overcome prevailing mental models that the events being considered are too unlikely to merit serious attention.

Relationship to Other Techniques

High Impact/Low Probability Analysis is sometimes confused with What If? Analysis. Both deal with low-probability or unlikely events. High Impact/Low Probability Analysis is primarily a vehicle for warning decision makers that recent, unanticipated developments suggest that an event previously deemed highly unlikely might actually occur. Based on recent evidence or information, it projects forward to discuss what could occur and the consequences if the event does occur. It challenges the conventional wisdom. What If? Analysis does not require new or anomalous information to serve as a trigger. It reframes the question by assuming that a surprise event has happened.

9.5 DEVIL’S ADVOCACY

Devil’s Advocacy is a process for critiquing a proposed analytic judgment, plan, or decision, usually by a single analyst not previously involved in the deliberations that led to the proposed judgment, plan, or decision.

The origins of devil’s advocacy “lie in a practice of the Roman Catholic Church in the early 16th century. When a person was proposed for beatification or canonization to sainthood, someone was assigned the role of critically examining the life and miracles attributed to that individual; his duty was to especially bring forward facts that were unfavorable to the candidate.”

When to Use It

Devil’s Advocacy is most effective when initiated by a manager as part of a strategy to ensure that alternative solutions are thoroughly considered. The following are examples of well-established uses of Devil’s Advocacy that are widely regarded as good management practices:

* Before making a decision, a policymaker or military commander asks for a Devil’s Advocate analysis of what could go wrong.

* An intelligence organization designates a senior manager as a Devil’s Advocate to oversee the process of reviewing and challenging selected assessments.

* A manager commissions a Devil’s Advocacy analysis when he or she is concerned about seemingly widespread unanimity on a critical issue throughout the Intelligence Community, or when the manager suspects that the mental model of analysts working an issue for a long time has become so deeply ingrained that they are unable to see the significance of recent changes.

Within the Intelligence Community, Devil’s Advocacy is sometimes defined as a form of self-critique… We do not support this approach for the following reasons:

* Calling such a technique Devil’s Advocacy is inconsistent with the historic concept of Devil’s Advocacy that calls for investigation by an independent outsider.

* Research shows that a person playing the role of a Devil's Advocate, without actually believing it, is significantly less effective than a true believer and may even be counterproductive. Apparently, more attention and respect are accorded to someone with the courage to advance their own minority view than to someone who is known to be only playing a role. If group members see the Devil's Advocacy as an analytic exercise they have to put up with, rather than the true belief of one of their members who is courageous enough to speak out, this exercise may actually enhance the majority's original belief—"a smugness that may occur because one assumes one has considered alternatives though, in fact, there has been little serious reflection on other possibilities." What the team learns from the Devil's Advocate presentation may be only how to better defend the team's own entrenched position.

* There are other forms of self-critique, especially Premortem Analysis and Structured Self-Critique as described in this chapter, which may be more effective in prompting even a cohesive, heterogeneous team to question their mental model and to analyze alternative perspectives.

9.6 RED TEAM ANALYSIS

The term “red team” or “red teaming” has several meanings. One definition is that red teaming is “the practice of viewing a problem from an adversary or competitor’s perspective.”16 This is how red teaming is commonly viewed by intelligence analysts.

When to Use It

Management should initiate a Red Team Analysis whenever there is a perceived need to challenge the conventional wisdom on an important issue or whenever the responsible line office is perceived as lacking the level of cultural expertise required to fully understand an adversary’s or competitor’s point of view.

Value Added

Red Team Analysis can help free analysts from their own well-developed mental model—their own sense of rationality, cultural norms, and personal values. When analyzing an adversary, the Red Team approach requires that an analyst change his or her frame of reference from that of an “observer” of the adversary or competitor, to that of an “actor” operating within the adversary’s cultural and political milieu. This reframing or role playing is particularly helpful when an analyst is trying to replicate the mental model of authoritarian leaders, terrorist cells, or non-Western groups that operate under very different codes of behavior or motivations than those to which most Americans are accustomed.

9.7 DELPHI METHOD

Delphi is a method for eliciting ideas, judgments, or forecasts from a group of experts who may be geographically dispersed. It is different from a survey in that there are two or more rounds of questioning.

After the first round of questions, a moderator distributes all the answers and explanations of the answers to all participants, often anonymously. The expert participants are then given an opportunity to modify or clarify their previous responses, if so desired, on the basis of what they have seen in the responses of the other participants. A second round of questions builds on the results of the first round, drills down into greater detail, or moves to a related topic. There is great flexibility in the nature and number of rounds of questions that might be asked.

Over the years, Delphi has been used in a wide variety of ways, and for an equally wide variety of purposes. Although many Delphi projects have focused on developing a consensus of expert judgment, a variant called Policy Delphi is based on the premise that the decision maker is not interested in having a group make a consensus decision, but rather in having the experts identify alternative policy options and present all the supporting evidence for and against each option. That is the rationale for including Delphi in this chapter on challenge analysis. It can be used to identify divergent opinions that may be worth exploring.

One group of Delphi scholars advises that the Delphi technique “can be used for nearly any problem involving forecasting, estimation, or decision making”—as long as the problem is not so complex or so new as to preclude the use of expert judgment. These Delphi advocates report using it for diverse purposes that range from “choosing between options for regional development, to predicting election outcomes, to deciding which applicants should be hired for academic positions, to predicting how many meals to order for a conference luncheon.”

Value Added

One of Sherman Kent’s “Principles of Intelligence Analysis,” which are taught at the CIA’s Sherman Kent School for Intelligence Analysis, is “Systematic Use of Outside Experts as a Check on In-House Blinders.” Consultation with relevant experts in academia, business, and nongovernmental organizations is also encouraged by Intelligence Community Directive No. 205, on Analytic Outreach, dated July 2008.

The Method

In a Delphi project, a moderator (analyst) sends a questionnaire to a panel of experts who may be in different locations. The experts respond to these questions and usually are asked to provide short explanations for their responses. The moderator collates the results from this first questionnaire and sends the collated responses back to all panel members, requesting them to reconsider their responses based on what they see and learn from the other experts’ responses and explanations. Panel members may also be asked to answer another set of questions.
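
A minimal sketch of the moderator's collation step might look like the following. The experts, their answers, and the rationales are hypothetical placeholders; in a real project the responses would come from a questionnaire rather than hard-coded values.

```python
# Minimal sketch of collating first-round Delphi responses for feedback to the panel.
# Expert names, answers, and rationales are hypothetical placeholders.

round_1_responses = {
    "Expert A": {"answer": "Option 1", "rationale": "Recent legislation points this way."},
    "Expert B": {"answer": "Option 2", "rationale": "A key ministry opposes Option 1."},
    "Expert C": {"answer": "Option 1", "rationale": "Leadership statements favor it."},
}

def collate(responses, anonymize=True):
    """Prepare the feedback package sent back to all panel members,
    stripping names if the moderator wants the responses to be anonymous."""
    package = []
    for i, (name, resp) in enumerate(responses.items(), start=1):
        label = f"Panelist {i}" if anonymize else name
        package.append(f"{label}: {resp['answer']} -- {resp['rationale']}")
    return "\n".join(package)

# The moderator distributes this collation, invites revisions, and then issues
# a second-round questionnaire that drills down or moves to a related topic.
print(collate(round_1_responses))
```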

Examples

To show how Delphi can be used for intelligence analysis, we have developed three illustrative applications:

* Evaluation of another country's policy options: The Delphi project manager or moderator identifies several policy options that a foreign country might choose. The moderator then asks a panel of experts on the country to rate the desirability and feasibility of each option, from the other country's point of view, on a five-point scale ranging from "Very Desirable" or "Feasible" to "Very Undesirable" or "Definitely Infeasible." Panel members are also asked to identify and assess any other policy options that ought to be considered and to identify the top two or three arguments or items of evidence that guided their judgments. A collation of all responses is sent back to the panel with a request for members to do one of the following: reconsider their position in view of others' responses, provide further explanation of their judgments, or reaffirm their previous response. In a second round of questioning, it may be desirable to list key arguments and items of evidence and ask the panel to rate them on their validity and their importance, again from the other country's perspective.

* Analysis of Alternative Hypotheses: A panel of outside experts is asked to estimate the probability of each hypothesis in a set of mutually exclusive hypotheses where the probabilities must add up to 100 percent. This could be done as a stand-alone project or to double-check an already completed Analysis of Competing Hypotheses (ACH) analysis (chapter 7). If two analyses using different analysts and different methods arrive at the same conclusion, this is grounds for a significant increase in confidence in the conclusion. If the analyses disagree, that may also be useful to know, as one can then seek to understand the rationale for the different judgments. A minimal sketch of aggregating such panel estimates follows this list.

* Warning analysis or monitoring a situation over time: An analyst asks a panel of experts to estimate the probability of a future event. This might be either a single event for which the analyst is monitoring early warning indicators or a set of scenarios for which the analyst is monitoring milestones to determine the direction in which events seem to be moving.
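
As a minimal sketch of the Analysis of Alternative Hypotheses application above, the following code normalizes each expert's estimates so they sum to 100 percent and then averages them across the panel, producing a figure that could be compared with a separately run ACH analysis. The hypothesis labels and the estimates themselves are hypothetical.

```python
# Minimal sketch: aggregating a Delphi panel's probability estimates for a set of
# mutually exclusive hypotheses. All names and numbers are hypothetical.

hypotheses = ["H1", "H2", "H3"]

# Each expert's estimates should sum to 100 percent; normalize in case they do not.
panel_estimates = {
    "Expert A": {"H1": 60, "H2": 30, "H3": 10},
    "Expert B": {"H1": 45, "H2": 40, "H3": 15},
    "Expert C": {"H1": 70, "H2": 20, "H3": 10},
}

def normalize(estimates):
    total = sum(estimates.values())
    return {h: 100 * p / total for h, p in estimates.items()}

normalized = {name: normalize(est) for name, est in panel_estimates.items()}

# A simple unweighted average across the panel, which can then be compared with
# the conclusion of an independently conducted ACH analysis.
average = {h: sum(normalized[name][h] for name in normalized) / len(normalized)
           for h in hypotheses}
print({h: round(p, 1) for h, p in average.items()})
```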

10 Conflict Management

Challenge analysis frequently leads to the identification and confrontation of opposing views. That is, after all, the purpose of challenge analysis, but two important questions are raised. First, how can confrontation be managed so that it becomes a learning experience rather than a battle between determined adversaries? Second, in an analysis of any topic with a high degree of uncertainty, how can one decide if one view is wrong or if both views have merit and need to be discussed in an analytic report?

The Intelligence Community’s procedure for dealing with differences of opinion has often been to force a consensus, water down the differences, or add a dissenting footnote to an estimate. Efforts are under way to move away from this practice, and we share the hopes of many in the community that this approach will become increasingly rare as members of the Intelligence Community embrace greater interagency collaboration early in the analytic process, rather than mandated coordination at the end of the process after all parties are locked into their positions. One of the principal benefits of using structured analytic techniques for interoffice and interagency collaboration is that these techniques identify differences of opinion early in the analytic process. This gives time for the differences to be at least understood, if not resolved, at the working level before management becomes involved.

If an analysis meets rigorous standards and conflicting views still remain, decision makers are best served by an analytic product that deals directly with the uncertainty rather than minimizing it or suppressing it. The greater the uncertainty, the more appropriate it is to go forward with a product that discusses the most likely assessment or estimate and gives one or more alternative possibilities. Factors to be considered when assessing the amount of uncertainty include the following:

  • An estimate of the future generally has more uncertainty than an assessment of a past or current event.
  • Mysteries, for which there are no knowable answers, are far more uncertain than puzzles, for which an answer does exist if one could only find it.3
  • The more assumptions that are made, the greater the uncertainty. Assumptions about intent or capability, and whether or not they have changed, are especially critical.
  • Analysis of human behavior or decision making is far more uncertain than analysis of technical data.
  • The behavior of a complex dynamic system is more uncertain than that of a simple system. The more variables and stakeholders involved in a system, the more difficult it is to foresee what might happen.

If the decision is to go forward with a discussion of alternative assessments or estimates, the next step might be to produce any of the following:

A comparative analysis of opposing views in a single report. This calls for analysts to identify the sources and reasons for the uncertainty (e.g., assumptions, ambiguities, knowledge gaps), consider the implications of alternative assessments or estimates, determine what it would take to resolve the uncertainty, and suggest indicators for future monitoring that might provide early warning of which alternative is correct.

An analysis of alternative scenarios as described in chapter 6.
A What If? Analysis or High Impact/Low Probability Analysis as described in chapter 9.
A report that is clearly identified as a "second opinion."

Overview of Techniques

Adversarial Collaboration in essence is an agreement between opposing parties on how they will work together in an effort to resolve their differences, to gain a better understanding of how and why they differ, or, as often happens, to collaborate on a joint paper explaining the differences. Six approaches to implementing adversarial collaboration are described.

Structured Debate is a planned debate of opposing points of view on a specific issue in front of a "jury of peers," senior analysts, or managers. As a first step, each side writes up its best possible argument for its position and passes this summation to the opposing side. The next step is an oral debate that focuses on refuting the other side's arguments rather than further supporting one's own arguments. The goal is to elucidate and compare the arguments made against each side's position. If neither argument can be refuted, perhaps both merit some consideration in the analytic report.

10.1 ADVERSARIAL COLLABORATION

Adversarial Collaboration is an agreement between opposing parties about how they will work together to resolve or at least gain a better understanding of their differences. Adversarial Collaboration is a relatively new concept championed by Daniel Kahneman, the psychologist who, along with Amos Tversky, initiated much of the research on cognitive biases described in Richards Heuer's Psychology of Intelligence Analysis… He commented as follows on Adversarial Collaboration:

Adversarial collaboration involves a good-faith effort to conduct debates by carrying out joint research—in some cases there may be a need for an agreed arbiter to lead the project and collect the data. Because there is no expectation of the contestants reaching complete agreement at the end of the exercise, adversarial collaboration will usually lead to an unusual type of joint publication, in which disagreements are laid out as part of a jointly authored paper.

Kahneman’s approach to Adversarial Collaboration involves agreement on empirical tests for resolving a dispute and conducting those tests with the help of an impartial arbiter. A joint report describes the tests, states what has been learned that both sides agree on, and provides interpretations of the test results on which they disagree.

When to Use It

Adversarial Collaboration should be used only if both sides are open to discussion of an issue. If one side is fully locked into its position and has repeatedly rejected the other side’s arguments, this technique is unlikely to be successful. It is then more appropriate to use Structured Debate in which a decision is made by an independent arbiter after listening to both sides.

Value Added

Adversarial Collaboration can help opposing analysts see the merit of another group's perspective. If successful, it will help both parties gain a better understanding of the assumptions and evidence behind their opposing opinions on an issue and explore the best way of dealing with these differences. Can one side be shown to be wrong, or should both positions be reflected in any report on the subject? Can there be agreement on indicators to show the direction in which events seem to be moving?

The Method

Six approaches to Adversarial Collaboration are described here. What they all have in common is the forced requirement to understand and address the other side’s position rather than simply dismiss it. Mutual understanding of the other side’s position is the bridge to productive collaboration. These six techniques are not mutually exclusive; in other words, one might use several of them for any specific project.

Key Assumptions Check:

Analysis of Competing Hypotheses:

Argument Mapping:

Mutual Understanding:

Joint Escalation:

The analysts should be required to prepare a joint statement describing the disagreement and to present it jointly to their superiors. This requires each analyst to understand and address, rather than simply dismiss, the other side’s position. It also ensures that managers have access to multiple perspectives on the conflict, its causes, and the various ways it might be resolved.

The Nosenko Approach: Yuriy Nosenko was a Soviet intelligence officer who defected to the United States in 1964. Whether he was a true defector or a Soviet plant was a subject of intense and emotional controversy within the CIA for more than a decade. In the minds of some, this historic case is still controversial.

The interesting point here is the ground rule that the team was instructed to follow. After reviewing the evidence, each officer identified those items of evidence thought to be of critical importance in making a judgment on Nosenko’s bona fides. Any item that one officer stipulated as critically important had to be addressed by the other two members.

It turned out that fourteen items were stipulated by at least one of the team members and had to be addressed by both of the others. Each officer prepared his own analysis, but they all had to address the same fourteen issues. Their report became known as the “Wise Men” report.

10.2 STRUCTURED DEBATE

A Structured Debate is a planned debate between analysts or analytic teams holding opposing points of view on a specific issue. It is conducted according to a set of rules before an audience, which may be a “jury of peers” or one or more senior analysts or managers.

When to Use It

Structured Debate is called for when there is a significant difference of opinion within or between analytic units or within the policymaking community, or when Adversarial Collaboration has been unsuccessful or is impractical, and it is necessary to make a choice between two opposing opinions or to go forward with a comparative analysis of both. A Structured Debate requires a significant commitment of analytic time and resources.

Value Added

In the method proposed here, each side presents its case in writing, and the written report is read by the other side and the audience prior to the debate. The oral debate then focuses on refuting the other side’s position. Glib and personable speakers can always make their arguments for a position sound persuasive. Effectively refuting the other side’s position is a very different ball game, however. The requirement to refute the other side’s position brings to the debate an important feature of the scientific method, that the most likely hypothesis is actually the one with the least evidence against it as well as good evidence for it.

The Method

Start by defining the conflict to be debated. If possible, frame the conflict in terms of competing and mutually exclusive hypotheses. Ensure that all sides agree with the definition. Then follow these steps:

*  Identify individuals or teams to develop the best case that can be made for each hypothesis.

*  Each side writes up the best case for its point of view. This written argument must be structured with an explicit presentation of key assumptions, key pieces of evidence, and careful articulation of the logic behind the argument.

* The written arguments are exchanged with the opposing side, and the two sides are given time to develop counterarguments to refute the opposing side’s position.

* The debate phase is conducted in the presence of a jury of peers, senior analysts, or managers who will provide guidance after listening to the debate. If desired, there might also be an audience of interested observers.

* The debate starts with each side presenting a brief (maximum five minutes) summary of its argument for its position. The jury and the audience are expected to have read each side’s full argument.

* Each side then presents to the audience its rebuttal of the other side’s written position. The purpose here is to proceed in the oral arguments by systematically refuting alternative hypotheses rather than by presenting more evidence to support one’s own argument. This is the best way to evaluate the strengths of the opposing arguments.

* After each side has presented its rebuttal argument, the other side is given an opportunity to refute the rebuttal.

* The jury asks questions to clarify the debaters’ positions or gain additional insight needed to pass judgment on the debaters’ positions.

* The jury discusses the issue and passes judgment. The winner is the side that makes the best argument refuting the other side’s position, not the side that makes the best argument supporting its own position. The jury may also recommend possible next steps for further research or intelligence collection efforts. If neither side can refute the other’s arguments, it may be that both sides have a valid argument that should be represented in any subsequent analytic report.

Origins of This Technique

The history of debate goes back to the Socratic dialogues in ancient Greece and even before, and many different forms of debate have evolved since then. Richards Heuer formulated the idea of focusing the debate between intelligence analysts on refuting the other side’s argument rather than supporting one’s own argument.

 

11 Decision Support

Managers, commanders, planners, and other decision makers all make choices or tradeoffs among competing goals, values, or preferences. Because of limitations in human short-term memory, we usually cannot keep all the pros and cons of multiple options in mind at the same time. That causes us to focus first on one set of problems or opportunities and then another, a situation that often leads to vacillation or procrastination in making a firm decision. Some decision-support techniques help overcome this cognitive limitation by laying out all the options and interrelationships in graphic form so that analysts can test the results of alternative options while still keeping the problem as a whole in view. Other techniques help decision makers untangle the complexity of a situation or define the opportunities and constraints in the environment in which the choice needs to be made.

 

It is not the analyst’s job to make the choices or decide on the tradeoffs, but intelligence analysts can and should use decision-support techniques to provide timely support to managers, commanders, planners, and decision makers who do make these choices. The Director of National Intelligence’s Vision 2015 foresees intelligence driven by customer needs and a “shifting focus from today’s product-centric model toward a more interactive model that blurs the distinction between producer and consumer.”

Caution is in order, however, whenever one thinks of predicting or even explaining another person’s decision, regardless of whether the person is of similar background or not. People do not always act rationally in their own best interests. Their decisions are influenced by emotions and habits, as well as by what others might think and by values of which others may not be aware.

The same is true of organizations and governments. One of the most common analytic errors is the assumption that an organization or a government will act rationally, that is, in its own best interests. There are three major problems with this assumption:

* Even if the assumption is correct, the analysis may be wrong, because foreign organizations and governments typically see their own best interests quite differently from the way Americans see them.

* Organizations and governments do not always have a clear understanding of their own best interests. Governments in particular typically have a variety of conflicting interests.

* The assumption that organizations and governments commonly act rationally in their own best interests is not always true. All intelligence analysts seeking to understand the behavior of another country should be familiar with Graham Allison’s analysis of U.S. and Soviet decision making during the Cuban missile crisis. It describes three different models for how governments make decisions—bureaucratic bargaining processes and standard organizational procedures as well as the rational actor model.

Decision making and decision analysis are large and diverse fields of study and research. The decision-support techniques described in this chapter are only a small sample of what is available, but they do meet many of the basic requirements for intelligence analysis.

Overview of Techniques

Complexity Manager is a simplified approach to understanding complex systems—the kind of systems in which many variables are related to each other and may be changing over time. Government policy decisions are often aimed at changing a dynamically complex system. It is because of this dynamic complexity that many policies fail to meet their goals or have unforeseen and unintended consequences. Use Complexity Manager to assess the chances for success or failure of a new or proposed policy, identify opportunities for influencing the outcome of any situation, determine what would need to change in order to achieve a specified goal, or identify potential unintended consequences from the pursuit of a policy goal.

Decision Matrix is a simple but powerful device for making tradeoffs between conflicting goals or preferences. An analyst lists the decision options or possible choices, the criteria for judging the options, the weights assigned to each of these criteria, and an evaluation of the extent to which each option satisfies each of the criteria. This process will show the best choice—based on the values the analyst or a decision maker puts into the matrix. By studying the matrix, one can also analyze how the best choice would change if the values assigned to the selection criteria were changed or if the ability of an option to satisfy a specific criterion were changed. It is almost impossible for an analyst to keep track of these factors effectively without such a matrix, as one cannot keep all the pros and cons in working memory at the same time. A Decision Matrix helps the analyst see the whole picture.

Force Field Analysis is a technique that analysts can use to help a decision maker decide how to solve a problem or achieve a goal, or to determine whether it is possible to do so. The analyst identifies and assigns weights to the relative importance of all the factors or forces that are either a help or a hindrance in solving the problem or achieving the goal. After organizing all these factors in two lists, pro and con, with a weighted value for each factor, the analyst or decision maker is in a better position to recommend strategies that would be most effective in either strengthening the impact of the driving forces or reducing the impact of the restraining forces.

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of analysts and decision makers to jump to conclusions before conducting a full analysis of a problem, as often happens in group meetings. The first step is for the analyst or the project team to make lists of Pros and Cons. If the analyst or team is concerned that people are being unduly negative about an idea, he or she looks for ways to “Fix” the Cons, that is, to explain why the Cons are unimportant or even to transform them into Pros. If concerned that people are jumping on the bandwagon too quickly, the analyst tries to “Fault” the Pros by exploring how they could go wrong. The analyst can also do both, Faulting the Pros and Fixing the Cons. Of the various techniques described in this chapter, this one is probably the easiest and quickest to use.

SWOT Analysis is used to develop a plan or strategy for achieving a specified goal. (SWOT is an acronym for Strengths, Weaknesses, Opportunities, and Threats.) In using this technique, the analyst first lists the strengths and weaknesses in the organization’s ability to achieve a goal, and then lists opportunities and threats in the external environment that would either help or hinder the organization from reaching the goal.

11.1 COMPLEXITY MANAGER

Complexity Manager helps analysts and decision makers understand and anticipate changes in complex systems. As used here, the word complexity encompasses any distinctive set of interactions that are more complicated than even experienced intelligence analysts can think through solely in their head.3

When to Use It

As a policy support tool, Complexity Manager can be used to assess the chances for success or failure of a new or proposed program or policy, and opportunities for influencing the outcome of any situation. It also can be used to identify what would have to change in order to achieve a specified goal, as well as unintended consequences from the pursuit of a policy goal.

Value Added

Complexity Manager can often improve an analyst’s understanding of a complex situation without the time delay and cost required to build a computer model and simulation. The steps in the Complexity Manager technique are the same as the initial steps required to build a computer model and simulation. These are identification of the relevant variables or actors, analysis of all the interactions between them, and assignment of rough weights or other values to each variable or interaction.

Scientists who specialize in the modeling and simulation of complex social systems report that “the earliest—and sometimes most significant—insights occur while reducing a problem to its most fundamental players, interactions, and basic rules of behavior,” and that “the frequency and importance of additional insights diminishes exponentially as a model is made increasingly complex.”

Complexity Manager does not itself provide analysts with answers. It enables analysts to find a best possible answer by organizing in a systematic manner the jumble of information about many relevant variables. It enables analysts to get a grip on the whole problem, not just one part of the problem at a time. Analysts can then apply their expertise in making an informed judgment about the problem. This structuring of the analyst’s thought process also provides the foundation for a well-organized report that clearly presents the rationale for each conclusion. This may also lead to some form of visual presentation, such as a Concept Map or Mind Map, or a causal or influence diagram.

The Method

Complexity Manager requires the analyst to proceed through eight specific steps:

  1. Define the problem: State the problem (plan, goal, outcome) to be analyzed, including the time period to be covered by the analysis.
  2. Identify and list relevant variables: Use one of the brainstorming techniques described in chapter 4 to identify the significant variables (factors, conditions, people, etc.) that may affect the situation of interest during the designated time period. Think broadly to include organizational or environmental constraints that are beyond anyone’s ability to control. If the goal is to estimate the status of one or more variables several years in the future, those variables should be at the top of the list. Group the other variables in some logical manner with the most important variables at the top of the list.
  3. Create a Cross-Impact Matrix: Create a matrix in which the number of rows and columns are each equal to the number of variables plus one. Leaving the cell at the top left corner of the matrix blank, enter all the variables in the cells in the row across the top of the matrix and the same variables in the column down the left side. The matrix then has a cell for recording the nature of the relationship between all pairs of variables. This is called a Cross-Impact Matrix—a tool for assessing the two-way interaction between each pair of variables. Depending on the number of variables and the length of their names, it may be convenient to use the variables’ letter designations across the top of the matrix rather than the full names.
  4. Assess the interaction between each pair of variables: Use a diverse team of experts on the relevant topic to analyze the strength and direction of the interaction between each pair of variables, and enter the results in the relevant cells of the matrix. For each pair of variables, ask the question: Does this variable impact the paired variable in a manner that will increase or decrease the impact or influence of that variable?

There are two different ways one can record the nature and strength of impact that one variable has on another. Figure 11.1 uses plus and minus signs to show whether the variable being analyzed has a positive or negative impact on the paired variable. The size of the plus or minus sign signifies the strength of the impact on a three-point scale. The small plus or minus shows a weak impact, the medium size a medium impact, and the large size a strong impact. If the variable being analyzed has no impact on the paired variable, the cell is left empty. If a variable might change in a way that could reverse the direction of its impact, from positive to negative or vice versa, this is shown by using both a plus and a minus sign.

After rating each pair of variables, and before doing further analysis, consider pruning the matrix to eliminate variables that are unlikely to have a significant effect on the outcome. It is possible to measure the relative significance of each variable by adding up the weighted values in each row and column; a small worked sketch of this tallying follows the remaining steps below.

  5. Analyze direct impacts: Write several paragraphs about the impact of each variable, starting with variable A. For each variable, briefly describe the variable for further clarification if necessary. Identify all the variables that impact on that variable with a rating of 2 or 3, and briefly explain the nature, direction, and, if appropriate, the timing of this impact. How strong is it and how certain is it? When might these impacts be observed? Will the impacts be felt only in certain conditions?
  6. Analyze loops and indirect impacts: The matrix shows only the direct impact of one variable on another. When you are analyzing the direct impacts variable by variable, there are several things to look for and make note of. One is feedback loops. For example, if variable A has a positive impact on variable B, and variable B also has a positive impact on variable A, this is a positive feedback loop. Or there may be a three-variable loop, from A to B to C and back to A. The variables in a loop gain strength from each other, and this boost may enhance their ability to influence other variables. Another thing to look for is circumstances where the causal relationship between variables A and B is necessary but not sufficient for something to happen. For example, variable A has the potential to influence variable B, and may even be trying to influence variable B, but it can do so effectively only if variable C is also present. In that case, variable C is an enabling variable and takes on greater significance than it ordinarily would have.

All variables are either static or dynamic. Static variables are expected to remain more or less unchanged during the period covered by the analysis. Dynamic variables are changing or have the potential to change. The analysis should focus on the dynamic variables as these are the sources of surprise in any complex system. Determining how these dynamic variables interact with other variables and with each other is critical to any forecast of future developments. Dynamic variables can be either predictable or unpredictable. Predictable change includes established trends or established policies that are in the process of being implemented. Unpredictable change may be a change in leadership or an unexpected change in policy or available resources.

  7. Draw conclusions: Using data about the individual variables assembled in Steps 5 and 6, draw conclusions about the system as a whole. What is the most likely outcome or what changes might be anticipated during the specified time period? What are the driving forces behind that outcome? What things could happen to cause a different outcome? What desirable or undesirable side effects should be anticipated? If you need help to sort out all the relationships, it may be useful to sketch out by hand a diagram showing all the causal relationships. A Concept Map (chapter 4) may be useful for this purpose. If a diagram is helpful during the analysis, it may also be helpful to the reader or customer to include such a diagram in the report.
  8. Conduct an opportunity analysis: When appropriate, analyze what actions could be taken to influence this system in a manner favorable to the primary customer of the analysis.
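To make the tallying in the pruning step concrete, the following Python sketch builds a small Cross-Impact Matrix and sums the weighted values in each row and column. The variables A through D, the impact values, and the numeric -3 to +3 coding (standing in for the small, medium, and large plus and minus signs described above) are all invented for illustration.

```python
# Sketch of a Cross-Impact Matrix and the pruning tally for Complexity Manager.
# Variables and impact values are illustrative placeholders.
# Impacts are coded -3..+3 in place of the sized plus/minus signs; 0 = no impact.

variables = ["A", "B", "C", "D"]

# impact[row][col] = effect of the row variable on the column variable
impact = {
    "A": {"A": 0, "B": +2, "C": -1, "D": 0},
    "B": {"A": +1, "B": 0, "C": +3, "D": -2},
    "C": {"A": 0, "B": +1, "C": 0, "D": +1},
    "D": {"A": -1, "B": 0, "C": +2, "D": 0},
}

# Relative significance of each variable: sum of absolute weights in its row
# (influence it exerts) and its column (influence it receives).
for v in variables:
    row_total = sum(abs(impact[v][other]) for other in variables)
    col_total = sum(abs(impact[other][v]) for other in variables)
    print(f"{v}: exerts {row_total}, receives {col_total}, total {row_total + col_total}")

# Variables with very low totals are candidates for pruning before the
# direct-impact and feedback-loop analysis in steps 5 and 6.
```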

Origins of This Technique

Complexity Manager was developed by Richards Heuer to fill an important gap in structured techniques that are available to the average analyst. It is a very simplified version of older quantitative modeling techniques, such as system dynamics.

11.2 DECISION MATRIX

Decision Matrix helps analysts identify the course of action that best achieves specified goals or preferences.

When to Use It

The Decision Matrix technique should be used when a decision maker has multiple options to choose from, multiple criteria for judging the desirability of each option, and/or needs to find the decision that maximizes a specific set of goals or preferences. For example, it can be used to help choose among various plans or strategies for improving intelligence analysis, to select one of several IT systems one is considering buying, to determine which of several job applicants is the right choice, or to consider any personal decision, such as what to do after retiring. A Decision Matrix is not applicable to most intelligence analysis, which typically deals with evidence and judgments rather than goals and preferences. It can be used, however, for supporting a decision maker’s consideration of alternative courses of action.
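As a rough illustration of the arithmetic behind a Decision Matrix, the Python sketch below computes weighted scores for a few options. The options, criteria, weights, and ratings are invented for the example; in practice they would come from the analyst or the decision maker.

```python
# Minimal Decision Matrix sketch: weighted scoring of options against criteria.
# All names and numbers below are illustrative only.

criteria_weights = {"cost": 0.4, "capability": 0.4, "ease_of_use": 0.2}

# Each option is rated 1 (poor) to 5 (excellent) on how well it satisfies each criterion.
option_ratings = {
    "System A": {"cost": 4, "capability": 3, "ease_of_use": 5},
    "System B": {"cost": 2, "capability": 5, "ease_of_use": 3},
    "System C": {"cost": 5, "capability": 2, "ease_of_use": 4},
}

def weighted_score(ratings, weights):
    """Sum of (criterion weight x rating) across all criteria."""
    return sum(weights[c] * ratings[c] for c in weights)

scores = {name: weighted_score(r, criteria_weights) for name, r in option_ratings.items()}

for name, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f}")
```

Re-running the same tally with different weights or ratings is the sensitivity check described in the overview above: it shows how the best choice shifts as the values assigned to the selection criteria change.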

11.3 FORCE FIELD ANALYSIS

Force Field Analysis is a simple technique for listing and assessing all the forces for and against a change, problem, or goal. Kurt Lewin, one of the fathers of modern social psychology, believed that all organizations are systems in which the present situation is a dynamic balance between forces driving for change and restraining forces. In order for any change to occur, the driving forces must exceed the restraining forces, and the relative strength of these forces is what this technique measures. This technique is based on Lewin’s theory.

The Method

* Define the problem, goal, or change clearly and concisely.

* Brainstorm to identify the main forces that will influence the issue. Consider such topics as needs, resources, costs, benefits, organizations, relationships, attitudes, traditions, interests, social and cultural trends, rules and regulations, policies, values, popular desires, and leadership to develop the full range of forces promoting and restraining the factors involved.

* Make one list showing the forces or people “driving” the change and a second list showing the forces or people “restraining” the change.

* Assign a value (the intensity score) to each driving or restraining force to indicate its strength. Assign the weakest intensity scores a value of 1 and the strongest a value of 5. The same intensity score can be assigned to more than one force if you consider the factors equal in strength. List the intensity scores in parentheses beside each item.

* Calculate a total score for each list to determine whether the driving or the restraining forces are dominant (a simple tallying sketch follows these steps).

* Examine the two lists to determine if any of the driving forces balance out the restraining forces.

* Devise a manageable course of action to strengthen those forces that lead to the preferred outcome and weaken the forces that would hinder the desired outcome.
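The scoring in these steps amounts to simple addition. The Python sketch below, using invented forces and intensity scores, shows how the two lists might be totaled and compared.

```python
# Minimal Force Field Analysis tally. Forces and intensity scores
# (1 = weakest, 5 = strongest) are illustrative placeholders.

driving_forces = {
    "Leadership support for the change": 4,
    "Available funding": 3,
    "Staff enthusiasm": 2,
}

restraining_forces = {
    "Entrenched procedures": 5,
    "Training costs": 2,
    "Competing priorities": 3,
}

driving_total = sum(driving_forces.values())
restraining_total = sum(restraining_forces.values())

print(f"Driving forces total:     {driving_total}")
print(f"Restraining forces total: {restraining_total}")

if driving_total > restraining_total:
    print("Driving forces are dominant; look for ways to reinforce them.")
else:
    print("Restraining forces are dominant; focus on weakening the strongest of them.")
```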

11.4 PROS-CONS-FAULTS-AND-FIXES

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of a group of analysts and decision makers to jump to a conclusion before full analysis of the problem has been completed.

When to Use It

Making lists of pros and cons for any action is a very common approach to decision making. The “Faults” and “Fixes” are what is new in this strategy. Use this technique to make a quick appraisal of a new idea or a more systematic analysis of a choice between two options.

Value Added

It is unusual for a new idea to meet instant approval. What often happens in meetings is that a new idea is brought up, one or two people immediately explain why they don’t like it or believe it won’t work, and the idea is then dropped. On the other hand, there are occasions when just the opposite happens. A new idea is immediately welcomed, and a commitment to support it is made before the idea is critically evaluated. The Pros-Cons-Faults-and-Fixes technique helps to offset this human tendency toward jumping to conclusions.

The Method

Start by clearly defining the proposed action or choice. Then follow these steps:

* List the Pros in favor of the decision or choice. Think broadly and creatively and list as many benefits, advantages, or other positives as possible.

* List the Cons, or arguments against what is proposed. There are usually more Cons than Pros, as most humans are naturally critical. It is easier to think of arguments against a new idea than to imagine how the new idea might work. This is why it is often difficult to get careful consideration of a new idea.

* Review and consolidate the list. If two Pros are similar or overlapping, consider merging them to eliminate any redundancy. Do the same for any overlapping Cons.

* If the choice is between two clearly defined options, go through the previous steps for the second option. If there are more than two options, a technique such as Decision Matrix may be more appropriate than Pros-Cons-Faults-and-Fixes.

* At this point you must make a choice. If the goal is to challenge an initial judgment that the idea won’t work, take the Cons, one at a time, and see if they can be “Fixed.” That means trying to figure a way to neutralize their adverse influence or even to convert them into Pros. This exercise is intended to counter any unnecessary or biased negativity about the idea. There are at least four ways an argument listed as a Con might be Fixed:

 

  • Propose a modification of the Con that would significantly lower the risk of the Con being a problem.
  • Identify a preventive measure that would significantly reduce the chances of the Con being a problem.
  • Do contingency planning that includes a change of course if certain indicators are observed.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Con is a problem.

* If the goal is to challenge an initial optimistic assumption that the idea will work and should be pursued, take the Pros, one at a time, and see if they can be “Faulted.” That means to try and figure out how the Pro might fail to materialize or have undesirable consequences. This exercise is intended to counter any wishful thinking or unjustified optimism about the idea. There are at least three ways a Pro might be Faulted:

  • Identify a reason why the Pro would not work or why the benefit would not be received.
  • Identify an undesirable side effect that might accompany the benefit.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Pro will work or be beneficial.

A third option is to combine both approaches, to Fault the Pros and Fix the Cons.

11.5 SWOT ANALYSIS

SWOT is commonly used by all types of organizations to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in any project or plan of action. The strengths and weaknesses are internal to the organization, while the opportunities and threats are characteristics of the external environment.

12 Practitioner’s Guide to Collaboration

Analysis in the U.S. Intelligence Community is now in a transitional stage from being predominantly a mental activity done by a solo analyst to becoming a collaborative or group activity.

 

The increasing use of structured analytic techniques is central to this transition. Many things change when the internal thought process of analysts can be externalized in a transparent manner so that ideas can be shared, built on, and easily critiqued by others.

 

This chapter is not intended to describe collaboration as it exists today. It is a visionary attempt to foresee how collaboration might be put into practice in the future when interagency collaboration is the norm and the younger generation of analysts has had even more time to imprint its social networking practices on the Intelligence Community.

 

12.1 SOCIAL NETWORKS AND ANALYTIC TEAMS

There are several ways to categorize teams and groups. When discussing the U.S. Intelligence Community, it seems most useful to deal with three types: the traditional analytic team, the special project team, and the social network.

* Traditional analytic team: This is the typical work team assigned to perform a specific task. It has a leader appointed by a manager or chosen by the team, and all members of the team are collectively accountable for the team’s product. The team may work jointly to develop the entire product or, as is commonly done for National Intelligence Estimates, each team member may be responsible for a specific section of the work.

The core analytic team, with participants usually working at the same agency, drafts a paper and sends it to other members of the government community for comment and coordination.

* Special project team: Such a team is usually formed to provide decision makers with near–real time analytic support during a crisis or an ongoing operation. A crisis support task force or field-deployed interagency intelligence team that supports a military operation exemplifies this type of team.

* Social networks: Experienced analysts have always had their own network of experts in their field or related fields with whom they consult from time to time and whom they may recruit to work with them on a specific analytic project. Social networks are critical to the analytic business. They do the day-to-day monitoring of events, produce routine products as needed, and may recommend the formation of a more formal analytic team to handle a specific project. The social network is the form of group activity that is now changing dramatically with the growing ease of cross-agency secure communications and the availability of social networking software.

The key problem that arises with social networks is the geographic distribution of their members. Even within the Washington, D.C., metropolitan area, distance is a factor that limits the frequency of face-to-face meetings.

Research on effective collaborative practices has shown that geographically distributed teams are most likely to succeed when they satisfy six key imperatives. Participants must

  • Know and trust each other; this usually requires that they meet face-to-face at least once.
  • Feel a personal need to engage the group in order to perform a critical task.
  • Derive mutual benefits from working together.
  • Connect with each other virtually on demand and easily add new members.
  • Perceive incentives for participating in the group, such as saving time, gaining new insights from interaction with other knowledgeable analysts, or increasing the impact of their contribution.
  • Share a common understanding of the problem with agreed lists of common terms and definitions.

12.2 DIVIDING THE WORK

The geographic distribution of the social network can also be managed effectively by dividing the analytic task into two parts—first exploiting the strengths of the social network for divergent or creative analysis to identify ideas and gather information, and, second, forming a small analytic team that employs convergent analysis to meld these ideas into an analytic product.

 

Structured analytic techniques and collaborative software work very well with this two-part approach to analysis. A series of basic techniques used for divergent analysis early in the analytic process works well for a geographically distributed social network communicating via a wiki.

 

A project leader informs a social network of an impending project and provides a tentative project description, target audience, scope, and process to be followed. The leader also gives the name of the wiki to be used and invites interested analysts knowledgeable in that area to participate. Any analyst with access to the secure network also has access to the wiki and is authorized to add information and ideas to it. Any or all of the following techniques, as well as others, may come into play during the divergent analysis phase as specified by the project leader:

  • Issue Redefinition as described in chapter 4.
  • Collaboration in sharing and processing data using other techniques such as timelines, sorting, networking, mapping, and charting as described in chapter 4.
  • Some form of brainstorming, as described in chapter 5, to generate a list of driving forces, variables, players, etc.
  • Ranking or prioritizing this list, as described in chapter 4.
  • Putting this list into a Cross-Impact Matrix, as described in chapter 5, and then discussing and recording in the wiki the relationship, if any, between each pair of driving forces, variables, or players in that matrix.
  • Developing a list of alternative explanations or outcomes (hypotheses) to be considered (chapter 7).
  • Developing a list of items of evidence available to be considered when evaluating these hypotheses (chapter 7).
  • Doing a Key Assumptions Check (chapter 8). This will be less effective when done on a wiki than in a face-to-face meeting, but it would be useful to learn the network’s thinking about key assumptions.

Most of these steps involve making lists, which can be done quite effectively with a wiki. Making such input via a wiki can be even more productive than a face-to-face meeting, because analysts have more time to think about and write up their thoughts and are able to look at their contribution over several days and make additions or changes as new ideas come to them.

The process should be overseen and guided by a project leader. In addition to providing a sound foundation for further analysis, this process enables the project leader to identify the best analysts to be included in the smaller team that conducts the second phase of the project—making analytic judgments and drafting the report. Team members should be selected to maximize the following criteria: level of expertise on the subject, level of interest in the outcome of the analysis, and diversity of opinions and thinking styles among the group.

12.3 COMMON PITFALLS WITH SMALL GROUPS

The use of structured analytic techniques frequently helps analysts avoid many of the common pitfalls of the small-group process.

Much research documents that the desire for consensus is an important cause of poor group decisions. Development of a group consensus is usually perceived as success, but, in reality, it is often indicative of failure. Premature consensus is one of the more common causes of suboptimal group performance. It leads to failure to identify or seriously consider alternatives, failure to examine the negative aspects of the preferred position, and failure to consider the consequences that might follow if the preferred position is wrong.8 This phenomenon is what is commonly called groupthink.

12.4 BENEFITING FROM DIVERSITY

Improvement of group performance requires an understanding of these problems and a conscientious effort to avoid or mitigate them. The literature on small-group performance is virtually unanimous in emphasizing that groups make better decisions when their members bring to the table a diverse set of ideas, opinions, and perspectives. What premature consensus, groupthink, and polarization all have in common is a failure to recognize assumptions and a failure to adequately identify and consider alternative points of view.

Briefly, then, the route to better analysis is to create small groups of analysts who are strongly encouraged by their leader to speak up and express a wide range of ideas, opinions, and perspectives. The use of structured analytic techniques generally ensures that this happens. These techniques guide the dialogue between analysts as they share evidence and alternative perspectives on the meaning and significance of the evidence. Each step in the technique prompts relevant discussion within the team, and such discussion can generate and evaluate substantially more divergent information and new ideas than can a group that does not use such a structured process.

12.5 ADVOCACY VS. OBJECTIVE INQUIRY

The desired diversity of opinion is, of course, a double-edged sword, as it can become a source of conflict which degrades group effectiveness.

In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and reduced team effectiveness. Advocates tend to examine evidence in a biased manner, accepting at face value information that seems to confirm their own point of view and subjecting any contrary evidence to highly critical evaluation. Advocacy is appropriate in a meeting of stakeholders that one is attending for the purpose of representing a specific interest. It is also “an effective method for making decisions in a courtroom when both sides are effectively represented, or in an election when the decision is made by a vote of the people.”

…many CIA and FBI analysts report that their preferred use of ACH is to gain a better understanding of the differences of opinion between them and other analysts or between analytic offices. The process of creating an ACH matrix requires identifying the evidence and arguments being used and determining how these are interpreted as either consistent or inconsistent with the various hypotheses.

Considerable research on virtual teaming shows that leadership effectiveness is a major factor in the success or failure of a virtual team. Although leadership usually is provided by a group’s appointed leader, it can also emerge as a more distributed peer process and is greatly aided by the use of a trained facilitator (see Figure 12.6). When face-to-face contact is limited, leaders, facilitators, and team members must compensate by paying more attention than they might otherwise devote to the following tasks:

  • Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.
  • Defining measurable objectives with milestones and timelines for achieving them.
  • Identifying clear and complementary roles and responsibilities.
  • Building relationships with and between team members and with stakeholders.
  • Agreeing on team norms and expected behaviors.
  • Defining conflict resolution procedures.
  • Developing specific communication protocols and practices.

 

13 Evaluation of Structured Analytic Techniques

13.1 ESTABLISHING FACE VALIDITY

The taxonomy of structured analytic techniques presents each category of structured technique in the context of how it is intended to mitigate or avoid a specific cognitive or group process problem. In other words, each structured analytic technique has face validity because there is a rational reason for expecting it to help mitigate or avoid a recognized problem that can occur when one is doing intelligence analysis. For example, a great deal of research in human cognition during the past sixty years shows the limits of working memory and suggests that one can manage a complex problem most effectively by breaking it down into smaller pieces.

Satisficing is a common analytic shortcut that people use in making everyday decisions when there are multiple possible answers. It saves a lot of time when you are making judgments or decisions of little consequence, but it is ill-advised when making judgments or decisions with significant consequences for national security.

The ACH process does not guarantee a correct judgment, but this anecdotal evidence suggests that ACH does make a significant contribution to better analysis.

13.2 LIMITS OF EMPIRICAL TESTING

Findings from empirical experiments can be generalized to apply to intelligence analysis only if the test conditions match relevant conditions in which intelligence analysis is conducted. There are so many variables that can affect the research results that it is very difficult to control for all or even most of them. These variables include the purpose for which a technique is used, implementation procedures, context of the experiment, nature of the analytic task, differences in analytic experience and skill, and whether the analysis is done by a single analyst or as a group process. All of these variables affect the outcome of any experiment that ostensibly tests the utility of an analytic technique. In a number of readily available examples of research on structured analytic techniques, we identified serious questions about the applicability of the research findings to intelligence analysis.

Different Purpose or Goal

Many structured analytic techniques can be used for several different purposes, and research findings on the effectiveness of these techniques can be generalized and applied to the Intelligence Community only if the technique is used in the same way and for the same purpose as in the actual practice of the Intelligence Community. For example, Philip Tetlock, in his important book Expert Political Judgment, describes two experiments showing that scenario development may not be an effective technique. The experiments compared judgments on a political issue before and after the test subjects prepared scenarios in an effort to gain a better understanding of the issues. The experiments showed that the predictions by both experts and nonexperts were more accurate before generating the scenarios; in other words, the generation of scenarios actually reduced the accuracy of their predictions. Several experienced analysts have separately cited this finding as evidence that scenario development may not be a useful method for intelligence analysis.
However, Tetlock’s conclusions should not be generalized to apply to intelligence analysis, as those experiments tested scenarios as a predictive tool. The Intelligence Community does not use scenarios for prediction.

Different Implementation Procedures

There are specific procedures for implementing many structured techniques. If research on the effectiveness of a specific technique is to be applicable to intelligence analysis, the research should use the same implementing procedure(s) for that technique as those used by the Intelligence Community.

Different Environment

When evaluating the validity of a technique, it is necessary to control for the environment in which the technique is used. If this is not done, the research findings may not always apply to intelligence analysis.

This is by no means intended to suggest that techniques developed for use in other domains should not be used in intelligence analysis. On the contrary, other domains are a productive source of such techniques, but the best way to apply them to intelligence analysis needs to be carefully evaluated.

Misleading Test Scenario

Empirical testing of a structured analytic technique requires developing a realistic test scenario. The test group analyzes this scenario using the structured technique while the control group analyzes the scenario without the benefit of any such technique. The MITRE Corporation conducted an experiment to test the ability of the

Analysis of Competing Hypotheses (ACH) technique to prevent confirmation bias. Confirmation bias is the tendency of people to seek information or assign greater weight to information that confirms what they already believe and to underweight or not seek information that supports an alternative belief.

Typically, intelligence analysts do not begin the process of attacking an intelligence problem by developing a full set of hypotheses. Richards Heuer, who developed the ACH methodology, has always believed that a principal benefit of ACH in mitigating confirmation bias is that it requires analysts to develop a full set of hypotheses before evaluating any of them.

Differences in Analytic Experience and Skill

Structured techniques differ in the skill level and amount of training required to implement them effectively.

When one is evaluating any technique, the level of skill and training required is an important variable. Any empirical testing needs to control for this variable, which suggests that testing of any medium- to high-skill technique should be done with current or former intelligence analysts, including analysts at different skill levels.

An analytic tool is not like a machine that works whenever it is turned on. It is a strategy for achieving a goal. Whether or not one reaches the goal depends in part upon the skill of the person executing the strategy.

Conclusion

Using empirical experiments to evaluate structured techniques is difficult because the outcome of any experiment is influenced by so many variables. Experiments conducted outside the Intelligence Community typically fail to replicate the important conditions that influence the outcome of analysis within the community.

13.3 A NEW APPROACH TO EVALUATION

There is a better way to evaluate structured analytic techniques. In this section we outline a new approach that is embedded in the reality of how analysis is actually done in the Intelligence Community. We then show how this approach might be applied to the analysis of three specific techniques.

Step 1 is to identify what we know, or think we know, about the benefits from using any particular structured technique. This is the face validity as described earlier in this chapter plus whatever analysts believe they have learned from frequent use of a technique. For example, we think we know that ACH provides several benefits that help produce a better intelligence product. A full analysis of ACH would consider each of the following potential benefits:

* It requires analysts to start by developing a full set of alternative hypotheses. This reduces the risk of satisficing.
* It enables analysts to manage and sort evidence in analytically useful ways.
* It requires analysts to try to refute hypotheses rather than to support a single hypothesis. This process reduces confirmation bias and helps to ensure that all alternatives are fully considered.
* It can help a small group of analysts identify new and divergent information as they fill out the matrix, and it depersonalizes the discussion when conflicting opinions are identified.
* It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached.
* It can provide a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.
* It leaves a clear audit trail as to how the analysis was done.

Step 2 is to obtain evidence to test whether or not a technique actually provides the expected benefits. Acquisition of evidence for or against these benefits is not limited to the results of empirical experiments. It includes structured interviews of analysts, managers, and customers; observations of meetings of analysts as they use these techniques; and surveys as well as experiments.

Step 3 is to obtain evidence of whether or not these benefits actually lead to higher quality analysis. Quality of analysis is not limited to accuracy. Other measures of quality include clarity of presentation, transparency in how the conclusion was reached, and construction of an audit trail for subsequent review, all of which are benefits that might be gained, for example, by use of ACH. Evidence of higher quality might come from independent evaluation of quality standards or interviews of customers receiving the reports. Cost effectiveness, including cost in analyst time as well as money, is another criterion of interest. As stated previously in this book, we claim that the use of a structured technique often saves analysts time in the long run. That claim should also be subjected to empirical analysis.

Indicators Validator

The Indicators Validator described in chapter 6 is a new technique developed by Randy Pherson to test the power of a set of indicators to provide early warning of future developments, such as which of several potential scenarios seems to be developing. It uses a matrix similar to an ACH matrix with scenarios listed across the top and indicators down the left side. For each combination of indicator and scenario, the analyst rates on a five-point scale the likelihood that this indicator will or will not be seen if that scenario is developing. This rating measures the diagnostic value of each indicator or its ability to diagnose which scenario is becoming most likely.

It is often found that indicators have little or no value because they are consistent with multiple scenarios. The explanation for this phenomenon is that when analysts are identifying indicators, they typically look for indicators that are consistent with the scenario they are concerned about identifying. They don’t think about the value of an indicator being diminished if it is also consistent with other hypotheses.
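The following Python sketch illustrates one way such a matrix might be scored. The scenarios, indicators, and ratings are invented, and the spread-based measure of diagnostic value is an illustrative stand-in rather than the technique’s published scoring rule; its point is simply that an indicator rated the same under every scenario cannot help distinguish among them.

```python
# Sketch of an Indicators Validator style matrix. Each cell holds the rated
# likelihood (1 = highly unlikely ... 5 = highly likely) of observing the
# indicator if that scenario is developing. All names and values are invented.

scenarios = ["Scenario 1", "Scenario 2", "Scenario 3"]

indicator_ratings = {
    "Troop movements near border":  [5, 3, 1],
    "Increase in hostile rhetoric": [4, 4, 4],  # consistent with all scenarios
    "New trade agreement signed":   [1, 2, 5],
}

# Use the spread between the highest and lowest rating as a rough proxy for
# diagnostic value: a flat row of ratings points to a non-diagnostic indicator.
for indicator, ratings in indicator_ratings.items():
    spread = max(ratings) - min(ratings)
    note = "low diagnostic value" if spread <= 1 else "potentially diagnostic"
    print(f"{indicator}: ratings {ratings}, spread {spread} -> {note}")
```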

The Indicators Validator was developed to meet a perceived need for analysts to better understand the requirements for a good indicator. Ideally, however, the need for this technique and its effectiveness should be tested before all analysts working with indicators are encouraged to use it. Such testing might be done as follows:

* Check the need for the new technique. Select a sample of intelligence reports that include an indicators list and apply the Indicators Validator to each indicator on the list. How often does this test identify indicators that have been put forward despite their having little or no diagnostic value?

* Do a before-and-after comparison. Identify analysts who have developed a set of indicators during the course of their work. Then have them apply the Indicators Validator to their work and see how much difference it makes.

14 Vision of the Future

The Intelligence Community is pursuing several paths in its efforts to improve the quality of intelligence analysis. One of these paths is the increased use of structured analytic techniques, and this book is intended to encourage and support that effort.

 

14.4 IMAGINING THE FUTURE: 2015

Imagine it is now 2015. Our three assumptions have turned out to be accurate, and collaboration in the use of structured analytic techniques is now widespread. What has happened to make this outcome possible, and how has it transformed the way intelligence analysis is done in 2015? This is our vision of what could be happening by that date.

The use of A-Space has been growing for the past five years. Younger analysts in particular have embraced it in addition to Intellipedia as a channel for secure collaboration with their colleagues working on related topics in other offices and agencies. Analysts in different geographic locations arrange to meet as a group from time to time, but most of the ongoing interaction is accomplished via collaborative tools such as A-Space, communities of interest, and Intellipedia.

By 2015, the use of structured analytic techniques has expanded well beyond the United States. The British, Canadian, Australian, and several other foreign intelligence services increasingly incorporate structured techniques into their training programs and their processes for conducting analysis. After the global financial crisis that began in 2008, a number of international financial and business consulting firms adapted several of the core intelligence analysis techniques to their business needs, concluding that they could no longer afford multi-million dollar mistakes that could have been avoided by engaging in more rigorous analysis as part of their business processes.