Review of Psychology of Intelligence Analysis

Psychology of Intelligence Analysis by Richards J. Heuer, Jr.

Introduction

Improving Intelligence Analysis at CIA: Dick Heuer’s Contribution to Intelligence Analysis

By Jack Davis

Jack Davis served with the Directorate of Intelligence (DI), the National Intelligence Council, and the Office of Training during his CIA career.

Dick Heuer’s ideas on how to improve analysis focus on helping analysts compensate for the human mind’s limitations in dealing with complex problems that typically involve ambiguous information, multiple players, and fluid circumstances. Such multi-faceted estimative challenges have proliferated in the turbulent post-Cold War world.

Leading Contributors to Quality of Analysis

Intelligence analysts, in seeking to make sound judgments, are always under challenge from the complexities of the issues they address and from the demands made on them for timeliness and volume of production. Four Agency individuals over the decades stand out for having made major contributions on how to deal with these challenges to the quality of analysis.

My short list of the people who have had the greatest positive impact on CIA analysis consists of Sherman Kent, Robert Gates, Douglas MacEachin, and Richards Heuer.

Sherman Kent

Sherman Kent’s pathbreaking contributions to analysis cannot be done justice in a couple of paragraphs.

Kent’s greatest contribution to the quality of analysis was to define an honorable place for the analyst–the thoughtful individual “applying the instruments of reason and the scientific method”–in an intelligence world then as now dominated by collectors and operators.

In a second (1965) edition of Strategic Intelligence, Kent took account of the coming computer age as well as human and technical collectors in proclaiming the centrality of the analyst:

Whatever the complexities of the puzzles we strive to solve and whatever the sophisticated techniques we may use to collect the pieces and store them, there can never be a time when the thoughtful man can be supplanted as the intelligence device supreme.

More specifically, Kent advocated application of the techniques of “scientific” study of the past to analysis of complex ongoing situations and estimates of likely future events. Just as rigorous “impartial” analysis could cut through the gaps and ambiguities of information on events long past and point to the most probable explanation, he contended, the powers of the critical mind could turn to events that had not yet transpired to determine the most probable developments.

To this end, Kent developed the concept of the analytic pyramid, featuring a wide base of factual information and sides comprised of sound assumptions, which pointed to the most likely future scenario at the apex.

Robert Gates

Bob Gates served as Deputy Director of Central Intelligence (1986-1989) and as DCI (1991-1993). But his greatest impact on the quality of CIA analysis came during his 1982-1986 stint as Deputy Director for Intelligence (DDI).

Gates’s ideas for overcoming what he saw as insular, flabby, and incoherent argumentation featured the importance of distinguishing between what analysts know and what they believe–that is, to make clear what is “fact” (or reliably reported information) and what is the analyst’s opinion (which had to be persuasively supported with evidence). Among his other tenets were the need to seek the views of non-CIA experts, including academic specialists and policy officials, and to present alternate future scenarios.

Using his authority as DDI, he reviewed critically almost all in-depth assessments and current intelligence articles prior to publication. With help from his deputy and two rotating assistants from the ranks of rising junior managers, Gates raised the standards for DDI review dramatically–in essence, from “looks good to me” to “show me your evidence.”

As the many drafts Gates rejected were sent back to managers who had approved them–accompanied by the DDI’s comments about inconsistency, lack of clarity, substantive bias, and poorly supported judgments–the whole chain of review became much more rigorous. Analysts and their managers raised their standards to avoid the pain of DDI rejection. Both career advancement and ego were at stake.

The rapid and sharp increase in attention paid by analysts and managers to the underpinnings for their substantive judgments probably was without precedent in the Agency’s history. The longer-term benefits of the intensified review process were more limited, however, because insufficient attention was given to clarifying tradecraft practices that would promote analytic soundness. More than one participant in the process observed that a lack of guidelines for meeting Gates’s standards led to a large amount of “wheel-spinning.”

Douglas MacEachin

Doug MacEachin, DDI from 1993 to 1996, sought to provide an essential ingredient for ensuring implementation of sound analytic standards: corporate tradecraft standards for analysts. This new tradecraft was aimed in particular at ensuring that sufficient attention would be paid to cognitive challenges in assessing complex issues.

MacEachin’s university major was economics, but he also showed great interest in philosophy. His Agency career–like Gates’–included an extended assignment to a policymaking office. He came away from this experience with new insights on what constitutes “value-added” intelligence usable by policymakers. Subsequently, as CIA’s senior manager on arms control issues, he dealt regularly with a cadre of tough-minded policy officials who let him know in blunt terms what worked as effective policy support and what did not.

MacEachin advocated an approach to structured argumentation called “linchpin analysis,” to which he contributed muscular terms designed to overcome many CIA professionals’ distaste for academic nomenclature. The standard academic term “key variables” became drivers. “Hypotheses” concerning drivers became linchpins–assumptions underlying the argument–and these had to be explicitly spelled out. MacEachin also urged that greater attention be paid to analytical processes for alerting policymakers to changes in circumstances that would increase the likelihood of alternative scenarios.

MacEachin thus worked to put in place systematic and transparent standards for determining whether analysts had met their responsibilities for critical thinking. To spread understanding and application of the standards, he mandated creation of workshops on linchpin analysis for managers and production of a series of notes on analytical tradecraft. He also directed that the DI’s performance on tradecraft standards be tracked and that recognition be given to exemplary assessments. Perhaps most ambitious, he saw to it that instruction on standards for analysis was incorporated into a new training course, “Tradecraft 2000.” Nearly all DI managers and analysts attended this course during 1996-97.

Richards Heuer

Dick Heuer was–and is–much less well known within the CIA than Kent, Gates, and MacEachin. He has not received the wide acclaim that Kent enjoyed as the father of professional analysis, and he has lacked the bureaucratic powers that Gates and MacEachin could wield as DDIs. But his impact on the quality of Agency analysis arguably has been at least as important as theirs.

Heuer’s Central Ideas

Dick Heuer’s writings make three fundamental points about the cognitive challenges intelligence analysts face:

  • The mind is poorly “wired” to deal effectively with both inherent uncertainty (the natural fog surrounding complex, indeterminate intelligence issues) and induced uncertainty (the man-made fog fabricated by denial and deception operations).
  • Even increased awareness of cognitive and other “unmotivated” biases, such as the tendency to see information confirming an already-held judgment more vividly than one sees “disconfirming” information, does little by itself to help analysts deal effectively with uncertainty.
  • Tools and techniques that gear the analyst’s mind to apply higher levels of critical thinking can substantially improve analysis on complex issues on which information is incomplete, ambiguous, and often deliberately distorted. Key examples of such intellectual devices include techniques for structuring information, challenging assumptions, and exploring alternative interpretations.

Given the difficulties inherent in the human processing of complex information, a prudent management system should:

  • Encourage products that (a) clearly delineate their assumptions and chains of inference and (b) specify the degree and source of the uncertainty involved in the conclusions.
  • Emphasize procedures that expose and elaborate alternative points of view–analytic debates, devil’s advocates, interdisciplinary brainstorming, competitive analysis, intra-office peer review of production, and elicitation of outside expertise.

Heuer emphasizes both the value and the dangers of mental models, or mind-sets. In the book’s opening chapter, entitled “Thinking About Thinking,” he notes that:

[Analysts] construct their own version of “reality” on the basis of information provided by the senses, but this sensory input is mediated by complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received.

This process may be visualized as perceiving the world through a lens or screen that channels and focuses and thereby may distort the images that are seen. To achieve the clearest possible image…analysts need more than information…They also need to understand the lenses through which this information passes. These lenses are known by many terms–mental models, mind-sets, biases, or analytic assumptions.

In essence, Heuer sees reliance on mental models to simplify and interpret reality as an unavoidable conceptual mechanism for intelligence analysts–often useful, but at times hazardous. What is required of analysts, in his view, is a commitment to challenge, refine, and challenge again their own working mental models, precisely because these steps are central to sound interpretation of complex and ambiguous issues.

Throughout the book, Heuer is critical of the orthodox prescription of “more and better information” to remedy unsatisfactory analytic performance. He urges that greater attention be paid instead to more intensive exploitation of information already on hand, and that in so doing, analysts continuously challenge and revise their mental models.

Heuer sees mirror-imaging as an example of an unavoidable cognitive trap. No matter how much expertise an analyst applies to interpreting the value systems of foreign entities, when the hard evidence runs out the tendency to project the analyst’s own mind-set takes over. In Chapter 4, Heuer observes:

To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often nothing more than partially informed speculation. Too frequently, foreign behavior appears “irrational” or “not in their own best interest.” Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

Recommendations

Heuer’s advice to Agency leaders, managers, and analysts is pointed: To ensure sustained improvement in assessing complex issues, analysis must be treated as more than a substantive and organizational process. Attention also must be paid to techniques and tools for coping with the inherent limitations on analysts’ mental machinery. He urges that Agency leaders take steps to:

  • Establish an organizational environment that promotes and rewards the kind of critical thinking he advocates–for example, analysis on difficult issues that considers in depth a series of plausible hypotheses rather than allowing the first credible hypothesis to suffice.
  • Expand funding for research on the role such mental processes play in shaping analytical judgments. An Agency that relies on sharp cognitive performance by its analysts must stay abreast of studies on how the mind works–i.e., on how analysts reach judgments.
  • Foster development of tools to assist analysts in assessing information. On tough issues, they need help in improving their mental models and in deriving incisive findings from information they already have; they need such help at least as much as they need more information.

I offer some concluding observations and recommendations, rooted in Heuer’s findings and taking into account the tough tradeoffs facing intelligence professionals:

  •  Commit to a uniform set of tradecraft standards based on the insights in this book. Leaders need to know if analysts have done their cognitive homework before taking corporate responsibility for their judgments. Although every analytical issue can be seen as one of a kind, I suspect that nearly all such topics fit into about a dozen recurring patterns of challenge based largely on variations in substantive uncertainty and policy sensitivity. Corporate standards need to be established for each such category. And the burden should be put on managers to explain why a given analytical assignment requires deviation from the standards. I am convinced that if tradecraft standards are made uniform and transparent, the time saved by curtailing personalistic review of quick-turnaround analysis (e.g., “It reads better to me this way”) could be “re-invested” in doing battle more effectively against cognitive pitfalls. (“Regarding point 3, let’s talk about your assumptions.”)
  •  Pay more honor to “doubt.” Intelligence leaders and policymakers should, in recognition of the cognitive impediments to sound analysis, establish ground rules that enable analysts, after doing their best to clarify an issue, to express doubts more openly. They should be encouraged to list gaps in information and other obstacles to confident judgment. Such conclusions as “We do not know” or “There are several potentially valid ways to assess this issue” should be regarded as badges of sound analysis, not as dereliction of analytic duty.

Find a couple of successors to Dick Heuer. Fund their research. Heed their findings.

PART ONE–OUR MENTAL MACHINERY

Chapter 1
Thinking About Thinking

Of the diverse problems that impede accurate intelligence analysis, those inherent in human mental processes are surely among the most important and most difficult to deal with. Intelligence analysis is fundamentally a mental process, but understanding this process is hindered by the lack of conscious awareness of the workings of our own minds.

A basic finding of cognitive psychology is that people have no conscious experience of most of what happens in the human mind. Many functions associated with perception, memory, and information processing are conducted prior to and independently of any conscious direction. What appears spontaneously in consciousness is the result of thinking, not the process of thinking.

Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.

Thinking analytically is a skill like carpentry or driving a car. It can be taught, it can be learned, and it can improve with practice. But like many other skills, such as riding a bike, it is not learned by sitting in a classroom and being told how to do it. Analysts learn by doing. Most people achieve at least a minimally acceptable level of analytical performance with little conscious effort beyond completing their education. With much effort and hard work, however, analysts can achieve a level of excellence beyond what comes naturally.

Expert guidance may be required to modify long-established analytical habits to achieve an optimal level of analytical excellence. An analytical coaching staff to help young analysts hone their analytical tradecraft would be a valuable supplement to classroom instruction.

One key to successful learning is motivation. Some of CIA’s best analysts developed their skills as a consequence of experiencing analytical failure early in their careers. Failure motivated them to be more self-conscious about how they do analysis and to sharpen their thinking process.

Part I identifies some limitations inherent in human mental processes. Part II discusses analytical tradecraft–simple tools and approaches for overcoming these limitations and thinking more systematically. Chapter 8, “Analysis of Competing Hypotheses,” is arguably the most important single chapter. Part III presents information about cognitive biases–the technical term for predictable mental errors caused by simplified information processing strategies. A final chapter presents a checklist for analysts and recommendations for how managers of intelligence analysis can help create an environment in which analytical excellence flourishes.

Herbert Simon first advanced the concept of “bounded” or limited rationality.

Because of limits in human mental capacity, he argued, the mind cannot cope directly with the complexity of the world. Rather, we construct a simplified mental model of reality and then work with this model. We behave rationally within the confines of our mental model, but this model is not always well adapted to the requirements of the real world. The concept of bounded rationality has come to be recognized widely, though not universally, both as an accurate portrayal of human judgment and choice and as a sensible adjustment to the limitations inherent in how the human mind functions.

Much psychological research on perception, memory, attention span, and reasoning capacity documents the limitations in our “mental machinery” identified by Simon.

Many scholars have applied these psychological insights to the study of international political behavior. A similar psychological perspective underlies some writings on intelligence failure and strategic surprise.

This book differs from those works in two respects. It analyzes problems from the perspective of intelligence analysts rather than policymakers. And it documents the impact of mental processes largely through experiments in cognitive psychology rather than through examples from diplomatic and military history.

A central focus of this book is to illuminate the role of the observer in determining what is observed and how it is interpreted. People construct their own version of “reality” on the basis of information provided by the senses, but this sensory input is mediated by complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received.

In this book, the terms mental model and mind-set are used more or less interchangeably, although a mental model is likely to be better developed and articulated than a mind-set. An analytical assumption is one part of a mental model or mind-set. The biases discussed in this book result from how the mind works and are independent of any substantive mental model or mind-set.

Intelligence analysts must understand themselves before they can understand others. Training is needed to (a) increase self-awareness concerning generic problems in how people perceive and make analytical judgments concerning foreign events, and (b) provide guidance and practice in overcoming these problems.

The disadvantage of a mind-set is that it can color and control our perception to the extent that an experienced specialist may be among the last to see what is really happening when events take a new and unexpected turn. When faced with a major paradigm shift, analysts who know the most about a subject have the most to unlearn.

The advantage of mind-sets is that they help analysts get the production out on time and keep things going effectively between those watershed events that become chapter headings in the history books.

What analysts need is more truly useful information–mostly reliable HUMINT from knowledgeable insiders–to help them make good decisions. Or they need a more accurate mental model and better analytical tools to help them sort through, make sense of, and get the most out of the available ambiguous and conflicting information.

Psychological research also offers to intelligence analysts additional insights that are beyond the scope of this book. Problems are not limited to how analysts perceive and process information. Intelligence analysts often work in small groups and always within the context of a large, bureaucratic organization. Problems are inherent in the processes that occur at all three levels–individual, small group, and organization. This book focuses on problems inherent in analysts’ mental processes, inasmuch as these are probably the most insidious. Analysts can observe and get a feel for these problems in small-group and organizational processes, but it is very difficult, at best, to be self-conscious about the workings of one’s own mind.

Chapter 2

Perception: Why Can’t We See What Is There To Be Seen?

The process of perception links people to their environment and is critical to accurate understanding of the world about us. Accurate intelligence analysis obviously requires accurate perception. Yet research into human perception demonstrates that the process is beset by many pitfalls. Moreover, the circumstances under which intelligence analysis is conducted are precisely the circumstances in which accurate perception tends to be most difficult. This chapter discusses perception in general, then applies this information to illuminate some of the difficulties of intelligence analysis.

We tend to perceive what we expect to perceive.

A corollary of this principle is that it takes more information, and more unambiguous information, to recognize an unexpected phenomenon than an expected one.

Patterns of expectation become so deeply embedded that they continue to influence perceptions even when people are alerted to and try to take account of the existence of data that do not fit their preconceptions. Trying to be objective does not ensure accurate perception.

Patterns of expectations tell analysts, subconsciously, what to look for, what is important, and how to interpret what is seen. These patterns form a mind-set that predisposes analysts to think in certain ways. A mind-set is akin to a screen or lens through which one perceives the world.

There is a tendency to think of a mind-set as something bad, to be avoided. According to this line of argument, one should have an open mind and be influenced only by the facts rather than by preconceived notions! That is an unreachable ideal. There is no such thing as “the facts of the case.” There is only a very selective subset of the overall mass of data to which one has been subjected that one takes as facts and judges to be relevant to the question at issue.

Actually, mind-sets are neither good nor bad; they are unavoidable. People have no conceivable way of coping with the volume of stimuli that impinge upon their senses, or with the volume and complexity of the data they have to analyze, without some kind of simplifying preconceptions about what to expect, what is important, and what is related to what. “There is a grain of truth in the otherwise pernicious maxim that an open mind is an empty mind.” Analysts do not achieve objective analysis by avoiding preconceptions; that would be ignorance or self-delusion. Objectivity is achieved by making basic assumptions and reasoning as explicit as possible so that they can be challenged by others and analysts can, themselves, examine their validity.

One of the most important characteristics of mind-sets is: Mind-sets tend to be quick to form but resistant to change.

Once an observer has formed an image–that is, once he or she has developed a mind-set or expectation concerning the phenomenon being observed–this conditions future perceptions of that phenomenon.

This is the basis for another general principle of perception: New information is assimilated to existing images.

This principle explains why gradual, evolutionary change often goes unnoticed. It also explains the phenomenon that an intelligence analyst assigned to work on a topic or country for the first time may generate accurate insights that have been overlooked by experienced analysts who have worked on the same problem for 10 years. A fresh perspective is sometimes useful; past experience can handicap as well as aid analysis.

This tendency to assimilate new data into pre-existing images is greater “the more ambiguous the information, the more confident the actor is of the validity of his image, and the greater his commitment to the established view.”

One of the more difficult mental feats is to take a familiar body of data and reorganize it visually or mentally to perceive it from a different perspective. Yet this is what intelligence analysts are constantly required to do. In order to understand international interactions, analysts must understand the situation as it appears to each of the opposing forces, and constantly shift back and forth from one perspective to the other as they try to fathom how each side interprets an ongoing series of interactions. Trying to perceive an adversary’s interpretations of international events, as well as US interpretations of those same events, is comparable to seeing both the old and young woman in Figure 3.

A related point concerns the impact of substandard conditions of perception. The basic principle is:

Initial exposure to blurred or ambiguous stimuli interferes with accurate perception even after more and better information becomes available.

What happened in this experiment is what presumably happens in real life; despite ambiguous stimuli, people form some sort of tentative hypothesis about what they see. The longer they are exposed to this blurred image, the greater confidence they develop in this initial and perhaps erroneous impression, so the greater the impact this initial impression has on subsequent perceptions. For a time, as the picture becomes clearer, there is no obvious contradiction; the new data are assimilated into the previous image, and the initial interpretation is maintained until the contradiction becomes so obvious that it forces itself upon our consciousness.

The early but incorrect impression tends to persist because the amount of information necessary to invalidate a hypothesis is considerably greater than the amount of information required to make an initial interpretation. The problem is not that there is any inherent difficulty in grasping new perceptions or new ideas, but that established perceptions are so difficult to change. People form impressions on the basis of very little information, but once formed, they do not reject or change them unless they obtain rather solid evidence. Analysts might seek to limit the adverse impact of this tendency by suspending judgment for as long as possible as new information is being received.
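
This asymmetry can be caricatured in a few lines of code (a purely illustrative sketch; the weighting scheme and numbers are assumptions, not anything from the book). An agent forms an impression from a few early reports, then discounts evidence that contradicts its current belief, so undoing the impression takes several times more disconfirming reports than forming it did:

    # Toy model of belief persistence. Evidence consistent with the current
    # belief is taken at full weight; inconsistent evidence is discounted
    # (assimilated). The discount value is an arbitrary assumption.
    def update(belief, evidence, discount=0.25):
        """belief > 0 favors the initial hypothesis; evidence is +1 or -1."""
        weight = 1.0 if evidence * belief >= 0 else discount
        return belief + weight * evidence

    belief = 0.0
    for e in (+1, +1, +1):                       # three early, ambiguous reports
        belief = update(belief, e)
    print(belief)                                # 3.0 -- the impression forms quickly

    steps = 0
    while belief > 0:                            # now feed only disconfirming reports
        belief = update(belief, -1)
        steps += 1
    print(steps)                                 # 12 -- four times the evidence to undo it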

Implications for Intelligence Analysis

Comprehending the nature of perception has significant implications for understanding the nature and limitations of intelligence analysis. The circumstances under which accurate perception is most difficult are exactly the circumstances under which intelligence analysis is generally conducted–dealing with highly ambiguous situations on the basis of information that is processed incrementally under pressure for early judgment. This is a recipe for inaccurate perception.

Intelligence seeks to illuminate the unknown. Almost by definition, intelligence analysis deals with highly ambiguous situations. As previously noted, the greater the ambiguity of the stimuli, the greater the impact of expectations and pre-existing images on the perception of that stimuli. Thus, despite maximum striving for objectivity, the intelligence analyst’s own preconceptions are likely to exert a greater impact on the analytical product than in other fields where an analyst is working with less ambiguous and less discordant information.

Moreover, the intelligence analyst is among the first to look at new problems at an early stage when the evidence is very fuzzy indeed. The analyst then follows a problem as additional increments of evidence are received and the picture gradually clarifies–as happened with test subjects in the experiment demonstrating that initial exposure to blurred stimuli interferes with accurate perception even after more and better information becomes available. If the results of this experiment can be generalized to apply to intelligence analysts, the experiment suggests that an analyst who starts observing a potential problem situation at an early and unclear stage is at a disadvantage as compared with others, such as policymakers, whose first exposure may come at a later stage when more and better information is available.

The receipt of information in small increments over time also facilitates assimilation of this information into the analyst’s existing views. No one item of information may be sufficient to prompt the analyst to change a previous view. The cumulative message inherent in many pieces of information may be significant but is attenuated when this information is not examined as a whole. The Intelligence Community’s review of its performance before the 1973 Arab-Israeli War noted:

The problem of incremental analysis–especially as it applies to the current intelligence process–was also at work in the period preceding hostilities. Analysts, according to their own accounts, were often proceeding on the basis of the day’s take, hastily comparing it with material received the previous day. They then produced in ‘assembly line fashion’ items which may have reflected perceptive intuition but which [did not] accrue from a systematic consideration of an accumulated body of integrated evidence.

And finally, the intelligence analyst operates in an environment that exerts strong pressures for what psychologists call premature closure. Customer demand for interpretive analysis is greatest within two or three days after an event occurs. The system requires the intelligence analyst to come up with an almost instant diagnosis before sufficient hard information, and the broader background information that may be needed to gain perspective, become available to make possible a well-grounded judgment. This diagnosis can only be based upon the analyst’s preconceptions concerning how and why events normally transpire in a given society.

The problems outlined here have implications for the management as well as the conduct of analysis. Given the difficulties inherent in the human processing of complex information, a prudent management system should:

  • Encourage products that clearly delineate their assumptions and chains of inference and that specify the degree and source of uncertainty involved in the conclusions.
  • Support analyses that periodically re-examine key problems from the ground up in order to avoid the pitfalls of the incremental approach.
  • Emphasize procedures that expose and elaborate alternative points of view.
  • Educate consumers about the limitations as well as the capabilities of intelligence analysis; define a set of realistic expectations as a standard against which to judge analytical performance.

Chapter 3
Memory: How Do We Remember What We Know?

Differences between stronger and weaker analytical performance are attributable in large measure to differences in the organization of data and experience in analysts’ long-term memory. The contents of memory form a continuous input into the analytical process, and anything that influences what information is remembered or retrieved from memory also influences the outcome of analysis.

This chapter discusses the capabilities and limitations of several components of the memory system. Sensory information storage and short-term memory are beset by severe limitations of capacity, while long-term memory, for all practical purposes, has a virtually infinite capacity. With long-term memory, the problems concern getting information into it and retrieving information once it is there, not physical limits on the amount of information that may be stored. Understanding how memory works provides insight into several analytical strengths and weaknesses.

Components of the Memory System

What is commonly called memory is not a single, simple function. It is an extraordinarily complex system of diverse components and processes. There are at least three, and very likely more, distinct memory processes. The most important from the standpoint of this discussion and best documented by scientific research are sensory information storage (SIS), short-term memory (STM), and long-term memory (LTM). Each differs with respect to function, the form of information held, the length of time information is retained, and the amount of information-handling capacity. Memory researchers also posit the existence of an interpretive mechanism and an overall memory monitor or control mechanism that guides interaction among various elements of the memory system.

Sensory Information Storage

Sensory information storage holds sensory images for several tenths of a second after they are received by the sensory organs. The functioning of SIS may be observed if you close your eyes, then open and close them again as rapidly as possible.

Short-Term Memory

Information passes from SIS into short-term memory, where again it is held for only a short period of time–a few seconds or minutes. Whereas SIS holds the complete image, STM stores only the interpretation of the image. If a sentence is spoken, SIS retains the sounds, while STM holds the words formed by these sounds.

Retrieval of information from STM is direct and immediate because the information has never left the conscious mind. Information can be maintained in STM indefinitely by a process of “rehearsal”–repeating it over and over again. But while rehearsing some items to retain them in STM, people cannot simultaneously add new items.

Long-Term Memory

Some information retained in STM is processed into long-term memory. This information on past experiences is filed away in the recesses of the mind and must be retrieved before it can be used. In contrast to the immediate recall of current experience from STM, retrieval of information from LTM is indirect and sometimes laborious.

Loss of detail as sensory stimuli are interpreted and passed from SIS into STM and then into LTM is the basis for the phenomenon of selective perception discussed in the previous chapter. It imposes limits on subsequent stages of analysis, inasmuch as the lost data can never be retrieved. People can never take their mind back to what was actually there in sensory information storage or short-term memory. They can only retrieve their interpretation of what they thought was there as stored in LTM.

There are no practical limits to the amount of information that may be stored in LTM. The limitations of LTM are the difficulty of processing information into it and retrieving information from it.
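
As a rough caricature (a toy sketch only; the capacities and durations below are placeholders, not empirical values), the three stores differ in which constraint binds–duration for SIS, capacity for STM, and retrieval for LTM:

    import time

    class SensoryStore:                          # SIS: raw image, held ~tenths of a second
        def __init__(self, image):
            self.image, self.stamp = image, time.time()
        def read(self):
            return self.image if time.time() - self.stamp < 0.3 else None

    class ShortTermMemory:                       # STM: a few interpreted items at a time
        CAPACITY = 7                             # placeholder for "a few"
        def __init__(self):
            self.items = []
        def hold(self, interpretation):          # stores the interpretation, not the image
            self.items.append(interpretation)
            if len(self.items) > self.CAPACITY:
                self.items.pop(0)                # rehearsal retains items; new ones displace old

    class LongTermMemory:                        # LTM: unbounded store, indirect retrieval
        def __init__(self):
            self.store = {}                      # concept -> set of associated concepts
        def encode(self, concept, associations):
            self.store.setdefault(concept, set()).update(associations)
        def retrieve(self, cue):
            return self.store.get(cue)           # nothing comes back without a usable cue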

Despite much research on memory, little agreement exists on many critical points. What is presented here is probably the lowest common denominator on which most researchers would agree.

Organization of Information in Long-Term Memory

Analysts’ needs are best served by a very simple image of the structure of memory.

Imagine memory as a massive, multidimensional spider web. This image captures what is, for the purposes of this book, perhaps the most important property of information stored in memory–its interconnectedness. One thought leads to another. It is possible to start at any one point in memory and follow a perhaps labyrinthine path to reach any other point. Information is retrieved by tracing through the network of interconnections to the place where it is stored.

Retrievability is influenced by the number of locations in which information is stored and the number and strength of pathways from this information to other concepts that might be activated by incoming information. The more frequently a path is followed, the stronger that path becomes and the more readily available the information located along that path. If one has not thought of a subject for some time, it may be difficult to recall details. After thinking our way back into the appropriate context and finding the general location in our memory, the interconnections become more readily available. We begin to remember names, places, and events that had seemed to be forgotten.

Once people have started thinking about a problem one way, the same mental circuits or pathways get activated and strengthened each time they think about it. This facilitates the retrieval of information. These same pathways, however, also become the mental ruts that make it difficult to reorganize the information mentally so as to see it from a different perspective.

One useful concept of memory organization is what some cognitive psychologists call a “schema.” A schema is any pattern of relationships among data stored in memory. It is any set of nodes and links between them in the spider web of memory that hang together so strongly that they can be retrieved and used more or less as a single unit.

For example, a person may have a schema for a bar that when activated immediately makes available in memory knowledge of the properties of a bar and what distinguishes a bar, say, from a tavern. It brings back memories of specific bars that may in turn stimulate memories of thirst, guilt, or other feelings or circumstances. People also have schemata (plural for schema) for abstract concepts such as a socialist economic system and what distinguishes it from a capitalist or communist system. Schemata for phenomena such as success or failure in making an accurate intelligence estimate will include links to those elements of memory that explain typical causes and implications of success or failure. There must also be schemata for processes that link memories of the various steps involved in long division, regression analysis, or simply making inferences from evidence and writing an intelligence report.
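
The spider-web image lends itself to a small sketch (illustrative; the nodes, strengths, and reinforcement rule are invented): memory as a weighted graph in which retrieval traces interconnections, each successful recall strengthens the path it used, and a schema such as “bar” is simply a tightly linked cluster:

    # Memory as a weighted graph: one thought leads to another, and each
    # successful recall strengthens the path it used.
    from collections import defaultdict

    links = defaultdict(dict)                    # node -> {neighbor: strength}

    def associate(a, b, strength=1.0):
        links[a][b] = links[b][a] = strength

    def recall(start, target, path=None):
        """Follow the strongest untried link first; reinforce the route on success."""
        path = (path or []) + [start]
        if start == target:
            for a, b in zip(path, path[1:]):
                associate(a, b, links[a][b] + 0.5)   # use deepens the rut
            return path
        for nxt in sorted(links[start], key=links[start].get, reverse=True):
            if nxt not in path:
                found = recall(nxt, target, path)
                if found:
                    return found
        return None

    # A "bar" schema: a cluster linked strongly enough to surface as a unit.
    for a, b in [("bar", "tavern"), ("bar", "thirst"), ("bar", "guilt")]:
        associate(a, b, 2.0)
    associate("tavern", "thirst", 0.5)

    print(recall("guilt", "thirst"))   # ['guilt', 'bar', 'tavern', 'thirst'] -- labyrinthine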

Any given point in memory may be connected to many different overlapping schemata. This system is highly complex and not well understood.

It serves the purpose of emphasizing that memory does have structure. It also shows that how knowledge is connected in memory is critically important in determining what information is retrieved in response to any stimulus and how that information is used in reasoning.

Concepts and schemata stored in memory exercise a powerful influence on the formation of perceptions from sensory data.

If information does not fit into what people know, or think they know, they have great difficulty processing it.

The content of schemata in memory is a principal factor distinguishing stronger from weaker analytical ability. This is aptly illustrated by an experiment with chess players. When chess grandmasters and masters and ordinary chess players were given five to 10 seconds to note the position of 20 to 25 chess pieces placed randomly on a chess board, the masters and ordinary players were alike in being able to remember the places of only about six pieces. If the positions of the pieces were taken from an actual game (unknown to the test subjects), however, the grandmasters and masters were usually able to reproduce almost all the positions without error, while the ordinary players were still able to place correctly only a half-dozen pieces.

That the unique ability of the chess masters did not result from a pure feat of memory is indicated by the masters’ inability to perform better than ordinary players in remembering randomly placed positions. Their exceptional performance in remembering positions from actual games stems from their ability to immediately perceive patterns that enable them to process many bits of information together as a single chunk or schema. The chess master has available in long-term memory many schemata that connect individual positions together in coherent patterns. When the position of chess pieces on the board corresponds to a recognized schema, it is very easy for the master to remember not only the positions of the pieces, but the outcomes of previous games in which the pieces were in these positions. Similarly, the unique abilities of the master analyst are attributable to the schemata in long-term memory that enable the analyst to perceive patterns in data that pass undetected by the average observer.

Getting Information Into and Out of Long-Term Memory. It used to be thought that how well a person learned something depended upon how long it was kept in short-term memory or the number of times it was repeated. Research evidence now suggests that neither of these factors plays the critical role. Continuous repetition does not necessarily guarantee that something will be remembered. The key factor in transferring information from short-term to long-term memory is the development of associations between the new information and schemata already available in memory. This, in turn, depends upon two variables: the extent to which the information to be learned relates to an already existing schema, and the level of processing given to the new information.

Depth of processing is the second important variable in determining how well information is retained. Depth of processing refers to the amount of effort and cognitive capacity employed to process information, and the number and strength of associations that are thereby forged between the data to be learned and knowledge already in memory. In experiments to test how well people remember a list of words, test subjects might be asked to perform different tasks that reflect different levels of processing. The following illustrative tasks are listed in order of the depth of mental processing required: say how many letters there are in each word on the list, give a word that rhymes with each word, make a mental image of each word, make up a story that incorporates each word.

It turns out that the greater the depth of processing, the greater the ability to recall words on a list. This result holds true regardless of whether the test subjects are informed in advance that the purpose of the experiment is to test them on their memory. Advising test subjects to expect a test makes almost no difference in their performance, presumably because it only leads them to rehearse the information in short-term memory, which is ineffective as compared with other forms of processing.

There are three ways in which information may be learned or committed to memory: by rote, assimilation, or use of a mnemonic device.

By Rote. Material to be learned is repeated verbally with sufficient frequency that it can later be repeated from memory without use of any memory aids. When information is learned by rote, it forms a separate schema not closely interwoven with previously held knowledge. That is, the mental processing adds little by way of elaboration to the new information, and the new information adds little to the elaboration of existing schemata. Learning by rote is a brute force technique. It seems to be the least efficient way of remembering.

By Assimilation. Information is learned by assimilation when the structure or substance of the information fits into some memory schema already possessed by the learner. The new information is assimilated to or linked to the existing schema and can be retrieved readily by first accessing the existing schema and then reconstructing the new information. Assimilation involves learning by comprehension and is, therefore, a desirable method, but it can only be used to learn information that is somehow related to our previous experience.

By Using A Mnemonic Device. A mnemonic device is any means of organizing or encoding information for the purpose of making it easier to remember. A high school student cramming for a geography test might use the acronym “HOMES” as a device for remembering the first letter of each of the Great Lakes–Huron, Ontario, etc.

Memory and Intelligence Analysis

An analyst’s memory provides continuous input into the analytical process. This input is of two types–additional factual information on historical background and context, and schemata the analyst uses to determine the meaning of newly acquired information. Information from memory may force itself on the analyst’s awareness without any deliberate effort by the analyst to remember; or, recall of the information may require considerable time and strain. In either case, anything that influences what information is remembered or retrieved from memory also influences intelligence analysis.

Judgment is the joint product of the available information and what the analyst brings to the analysis of this information.

Substantive knowledge and analytical experience determine the store of memories and schemata the analyst draws upon to generate and evaluate hypotheses. The key is not a simple ability to recall facts, but the ability to recall patterns that relate facts to each other and to broader concepts–and to employ procedures that facilitate this process.

Stretching the Limits of Working Memory

Limited information is available on what is commonly thought of as “working memory”–the collection of information that an analyst holds in the forefront of the mind as he or she does analysis. The general concept of working memory seems clear from personal introspection.

In writing this chapter, I am very conscious of the constraints on my ability to keep many pieces of information in mind while experimenting with ways to organize this information and seeking words to express my thoughts. To help offset these limits on my working memory, I have accumulated a large number of written notes containing ideas and half-written paragraphs. Only by using such external memory aids am I able to cope with the volume and complexity of the information I want to use.

The recommended technique for coping with this limitation of working memory is called externalizing the problem–getting it out of one’s head and down on paper in some simplified form that shows the main elements of the problem and how they relate to each other.

This means breaking down a problem into its component parts and then preparing a simple “model” that shows how the parts relate to the whole. When working on a small part of the problem, the model keeps one from losing sight of the whole.

A simple model of an analytical problem facilitates the assimilation of new information into long-term memory; it provides a structure to which bits and pieces of information can be related. The model defines the categories for filing information in memory and retrieving it on demand. In other words, it serves as a mnemonic device that provides the hooks on which to hang information so that it can be found when needed.
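
A minimal sketch of such a model (the issue, components, and reports are hypothetical): the components of the question serve as the hooks, and each incoming report is filed under the component it bears on:

    # The components of the question serve as "hooks"; each report is filed
    # under the part of the problem it bears on.
    model = {
        "leadership intentions": [],
        "military readiness": [],
        "economic pressure": [],
    }

    def file_report(report, component):
        if component not in model:
            raise KeyError(f"no hook for {component!r} -- revise the model?")
        model[component].append(report)

    file_report("speech signals harder line", "leadership intentions")
    file_report("reserve units recalled", "military readiness")

    for component, reports in model.items():     # review one part without losing the whole
        print(f"{component}: {len(reports)} report(s)")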

“Hardening of the Categories”. Memory processes tend to work with generalized categories. If people do not have an appropriate category for something, they are unlikely to perceive it, store it in memory, or be able to retrieve it from memory later. If categories are drawn incorrectly, people are likely to perceive and remember things inaccurately. When information about phenomena that are different in important respects nonetheless gets stored in memory under a single concept, errors of analysis may result.

“Hardening of the categories” is a common analytical weakness. Fine distinctions among categories and tolerance for ambiguity contribute to more effective analysis.

Things That Influence What Is Remembered. Factors that influence how information is stored in memory and that affect future retrievability include: being the first-stored information on a given topic, the amount of attention focused on the information, the credibility of the information, and the importance attributed to the information at the moment of storage. By influencing the content of memory, all of these factors also influence the outcome of intelligence analysis.

Memory Rarely Changes Retroactively. Analysts often receive new information that should, logically, cause them to reevaluate the credibility or significance of previous information. Ideally, the earlier information should then become either more salient and readily available in memory, or less so. But it does not work that way. Unfortunately, memories are seldom reassessed or reorganized retroactively in response to new information. For example, information that is dismissed as unimportant or irrelevant because it did not fit an analyst’s expectations does not become more memorable even if the analyst changes his or her thinking to the point where the same information, received today, would be recognized as very significant.

Memory Can Handicap as Well as Help

Understanding how memory works provides some insight into the nature of creativity, openness to new information, and breaking mind-sets. All involve spinning new links in the spider web of memory–links among facts, concepts, and schemata that previously were not connected or only weakly connected.

There is, however, a crucial difference between the chess master and the master intelligence analyst. Although the chess master faces a different opponent in each match, the environment in which each contest takes place remains stable and unchanging: the permissible moves of the diverse pieces are rigidly determined, and the rules cannot be changed without the master’s knowledge. Once the chess master develops an accurate schema, there is no need to change it. The intelligence analyst, however, must cope with a rapidly changing world. Many countries that previously were US adversaries are now our formal or de facto allies. The American and Russian governments and societies are not the same today as they were 20 or even 10 or five years ago. Schemata that were valid yesterday may no longer be functional tomorrow.

Learning new schemata often requires the unlearning of existing ones, and this is exceedingly difficult. It is always easier to learn a new habit than to unlearn an old one.

PART II–TOOLS FOR THINKING

Chapter 4

Strategies for Analytical Judgment: Transcending the Limits of Incomplete Information

When intelligence analysts make thoughtful analytical judgments, how do they do it? In seeking answers to this question, this chapter discusses the strengths and limitations of situational logic, theory, comparison, and simple immersion in the data as strategies for the generation and evaluation of hypotheses. The final section discusses alternative strategies for choosing among hypotheses. One strategy too often used by intelligence analysts is described as “satisficing”–choosing the first hypothesis that appears good enough rather than carefully identifying all possible hypotheses and determining which is most consistent with the evidence.
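
The contrast can be made concrete in a short sketch (illustrative only; this is not Heuer’s Analysis of Competing Hypotheses procedure, and the hypotheses, evidence, and consistency scores are invented). A satisficer accepts the first hypothesis that is not contradicted on balance, while a fuller strategy scores every hypothesis against every item of evidence before choosing:

    # Two strategies for choosing among hypotheses. All data are invented.
    SCORES = {  # (hypothesis, evidence) -> +1 supports, -1 contradicts
        ("exercise", "troop movement"): +1, ("exercise", "hospitals cleared"): -1,
        ("attack",   "troop movement"): +1, ("attack",   "hospitals cleared"): +1,
    }

    def consistency(h, e):
        return SCORES.get((h, e), 0)

    def satisfice(hypotheses, evidence, threshold=0):
        """Accept the first hypothesis not contradicted on balance."""
        for h in hypotheses:
            if sum(consistency(h, e) for e in evidence) >= threshold:
                return h                         # rivals are never examined

    def evaluate_all(hypotheses, evidence):
        """Score every hypothesis against every item; most consistent wins."""
        return max(hypotheses, key=lambda h: sum(consistency(h, e) for e in evidence))

    hyps, ev = ["exercise", "attack"], ["troop movement", "hospitals cleared"]
    print(satisfice(hyps, ev))                   # 'exercise' -- first one good enough
    print(evaluate_all(hyps, ev))                # 'attack'   -- fits the evidence better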

Intelligence analysts should be self-conscious about their reasoning process. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.

Judgment is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.

Judgment is an integral part of all intelligence analysis. While the optimal goal of intelligence collection is complete knowledge, this goal is seldom reached in practice. Almost by definition of the intelligence mission, intelligence issues involve considerable uncertainty.

Analytical strategies are important because they influence the data one attends to. They determine where the analyst shines his or her searchlight, and this inevitably affects the outcome of the analytical process.

Strategies for Generating and Evaluating Hypotheses

The goal is to understand the several kinds of careful, conscientious analysis one would hope and expect to find among a cadre of intelligence analysts dealing with highly complex issues.

Situational Logic

This is the most common operating mode for intelligence analysts. Generation and analysis of hypotheses start with consideration of concrete elements of the current situation, rather than with broad generalizations that encompass many similar cases. The situation is regarded as one-of-a-kind, so that it must be understood in terms of its own unique logic, rather than as one example of a broad class of comparable events.

Starting with the known facts of the current situation and an understanding of the unique forces at work at that particular time and place, the analyst seeks to identify the logical antecedents or consequences of this situation. A scenario is developed that hangs together as a plausible narrative. The analyst may work backwards to explain the origins or causes of the current situation or forward to estimate the future outcome.

Situational logic commonly focuses on tracing cause-effect relationships or, when dealing with purposive behavior, means-ends relationships. The analyst identifies the goals being pursued and explains why the foreign actor(s) believe certain means will achieve those goals.

Particular strengths of situational logic are its wide applicability and ability to integrate a large volume of relevant detail. Any situation, however unique, may be analyzed in this manner.

Situational logic as an analytical strategy also has two principal weaknesses. One is that it is so difficult to understand the mental and bureaucratic processes of foreign leaders and governments. To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often little more than partially informed speculation. Too frequently, foreign behavior appears “irrational” or “not in their own best interest.” Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

The second weakness is that situational logic fails to exploit the theoretical knowledge derived from study of similar phenomena in other countries and other time periods. The subject of national separatist movements illustrates the point. Nationalism is a centuries-old problem, but most Western industrial democracies have been considered well-integrated national communities.

Analyzing many examples of a similar phenomenon, as discussed below, enables one to probe more fundamental causes than those normally considered in logic-of-the-situation analysis. The proximate causes identified by situational logic appear, from the broader perspective of theoretical analysis, to be but symptoms indicating the presence of more fundamental causal factors. A better understanding of these fundamental causes is critical to effective forecasting, especially over the longer range.

Applying Theory

Theory is an academic term not much in vogue in the Intelligence Community, but it is unavoidable in any discussion of analytical judgment. In one popular meaning of the term, “theoretical” is associated with the terms “impractical” and “unrealistic”. Needless to say, it is used here in a quite different sense.

A theory is a generalization based on the study of many examples of some phenomenon. It specifies that when a given set of conditions arises, certain other conditions will follow either with certainty or with some degree of probability. In other words, conclusions are judged to follow from a set of conditions and a finding that these conditions apply in the specific case being analyzed.
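
Put schematically (a hedged illustration; the rule, probability, and case facts are invented placeholders, loosely echoing the development-and-instability example discussed later in this chapter), a theory is an explicit conditional: if the listed conditions are established in the case at hand, the outcome is judged to follow with some probability:

    # A theory as an explicit generalization: IF the conditions hold in the
    # case at hand, THEN the outcome follows with some probability.
    theory = {
        "conditions": {"rapid economic development", "exposure to foreign ideas"},
        "outcome": "political instability",
        "probability": 0.7,                      # illustrative, not a measured value
    }

    def apply_theory(theory, case_facts):
        """The conclusion is judged to follow only if every condition applies."""
        if theory["conditions"] <= case_facts:   # subset test: all conditions present
            return theory["outcome"], theory["probability"]
        missing = theory["conditions"] - case_facts
        return f"no prediction; not established: {missing}", None

    case = {"rapid economic development", "exposure to foreign ideas", "feudal society"}
    print(apply_theory(theory, case))            # ('political instability', 0.7)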

What academics refer to as theory is really only a more explicit version of what intelligence analysts think of as their basic understanding of how individuals, institutions, and political systems normally behave.

Consider, for example, a theory holding that economic development and a massive infusion of foreign ideas in a feudal society lead to political instability. Theoretical propositions of this kind frequently fail to specify the time frame within which developments might be anticipated to occur.

Further elaboration of the theory relating economic development and foreign ideas to political instability in feudal societies would identify early warning indicators that analysts might look for. Such indicators would guide both intelligence collection and analysis of sociopolitical and socioeconomic data and lead to hypotheses concerning when or under what circumstances such an event might occur.

But if theory enables the analyst to transcend the limits of available data, it may also provide the basis for ignoring evidence that is truly indicative of future events.

Figure 4 below illustrates graphically the difference between theory and situational logic. Situational logic looks at the evidence within a single country on multiple interrelated issues, as shown by the column highlighted in gray. This is a typical area studies approach. Theoretical analysis looks at the evidence related to a single issue in multiple countries, as shown by the row highlighted in gray. This is a typical social science approach.

The distinction between theory and situational logic is not as clear as it may seem from this graphic, however. Logic-of-the-situation analysis also draws heavily on theoretical assumptions. How does the analyst select the most significant elements to describe the current situation, or identify the causes or consequences of these elements, without some implicit theory that relates the likelihood of certain outcomes to certain antecedent conditions?

Comparison with Historical Situations

A third approach for going beyond the available information is comparison. An analyst seeks understanding of current events by comparing them with historical precedents in the same country, or with similar events in other countries. Analogy is one form of comparison. When an historical situation is deemed comparable to current circumstances, analysts use their understanding of the historical precedent to fill gaps in their understanding of the current situation. Unknown elements of the present are assumed to be the same as known elements of the historical precedent. Thus, analysts reason that the same forces are at work, that the outcome of the present situation is likely to be similar to the outcome of the historical situation, or that a certain policy is required in order to avoid the same outcome as in the past.

Comparison differs from situational logic in that the present situation is interpreted in the light of a more or less explicit conceptual model that is created by looking at similar situations in other times or places. It differs from theoretical analysis in that this conceptual model is based on a single case or only a few cases, rather than on many similar cases. Comparison may also be used to generate theory, but this is a more narrow kind of theorizing that cannot be validated nearly as well as generalizations inferred from many comparable cases.

Reasoning by comparison is a convenient shortcut, one chosen when neither data nor theory are available for the other analytical strategies, or simply because it is easier and less time-consuming than a more detailed analysis. A careful comparative analysis starts by specifying key elements of the present situation. The analyst then seeks out one or more historical precedents that may shed light on the present. Frequently, however, a historical precedent may be so vivid and powerful that it imposes itself upon a person’s thinking from the outset, conditioning them to perceive the present primarily in terms of its similarity to the past. This is reasoning by analogy. As Robert Jervis noted, “historical analogies often precede, rather than follow, a careful analysis of a situation.”

The tendency to relate contemporary events to earlier events as a guide to understanding is a powerful one. Comparison helps achieve understanding by reducing the unfamiliar to the familiar. In the absence of data required for a full understanding of the current situation, reasoning by comparison may be the only alternative. Anyone taking this approach, however, should be aware of the significant potential for error. This course is an implicit admission of the lack of sufficient information to understand the present situation in its own right, and lack of relevant theory to relate the present situation to many other comparable situations.

In a short book that ought to be familiar to all intelligence analysts, Ernest May traced the impact of historical analogies on US foreign policy. He found that because of reasoning by analogy, US policymakers tend to be one generation behind, determined to avoid the mistakes of the previous generation. They pursue the policies that would have been most appropriate in the historical situation but are not necessarily well adapted to the current one.

Communist aggression after World War II, for example, was seen as analogous to Nazi aggression. This led to a policy of containment, the policy that might have prevented World War II had it been applied in the 1930s.

May argues that policymakers often perceive problems in terms of analogies with the past, but that they ordinarily use history badly:

When resorting to an analogy, they tend to seize upon the first that comes to mind. They do not research more widely. Nor do they pause to analyze the case, test its fitness, or even ask in what ways it might be misleading.

As compared with policymakers, intelligence analysts have more time available to “analyze rather than analogize.” Intelligence analysts tend to be good historians, with a large number of historical precedents available for recall. The greater the number of potential analogues an analyst has at his or her disposal, the greater the likelihood of selecting an appropriate one. The greater the depth of an analyst’s knowledge, the greater the chances the analyst will perceive the differences as well as the similarities between two situations. Even under the best of circumstances, however, inferences based on comparison with a single analogous situation probably are more prone to error than most other forms of inference.

The most productive uses of comparative analysis are to suggest hypotheses and to highlight differences, not to draw conclusions. Comparison can suggest the presence or the influence of variables that are not readily apparent in the current situation, or stimulate the imagination to conceive explanations or possible outcomes that might not otherwise occur to the analyst. In short, comparison can generate hypotheses that then guide the search for additional information to confirm or refute these hypotheses. It should not, however, form the basis for conclusions unless thorough analysis of both situations has confirmed they are indeed comparable.

Data Immersion

Analysts sometimes describe their work procedure as immersing themselves in the data without fitting the data into any preconceived pattern. At some point an apparent pattern (or answer or explanation) emerges spontaneously, and the analyst then goes back to the data to check how well the data support this judgment. According to this view, objectivity requires the analyst to suppress any personal opinions or preconceptions, so as to be guided only by the “facts” of the case.

To think of analysis in this way overlooks the fact that information cannot speak for itself. The significance of information is always a joint function of the nature of the information and the context in which it is interpreted. The context is provided by the analyst in the form of a set of assumptions and expectations concerning human and organizational behavior. These preconceptions are critical determinants of which information is considered relevant and how it is interpreted.

Analysis begins when the analyst consciously inserts himself or herself into the process to select, sort, and organize information. This selection and organization can only be accomplished according to conscious or subconscious assumptions and preconceptions.

In research to determine how physicians make medical diagnoses, the doctors who comprised the test subjects were asked to describe their analytical strategies. Those who stressed thorough collection of data as their principal analytical method were significantly less accurate in their diagnoses than those who described themselves as following other analytical strategies, such as identifying and testing hypotheses.

Relationships Among Strategies

No one strategy is necessarily better than the others. In order to generate all relevant hypotheses and make maximum use of all potentially relevant information, it would be desirable to employ all three strategies at the early hypothesis generation phase of a research project. Unfortunately, analysts commonly lack the inclination or time to do so.

Differences in analytical strategy may cause fundamental differences in perspective between intelligence analysts and some of the policymakers for whom they write. Higher level officials who are not experts on the subject at issue use far more theory and comparison and less situational logic than intelligence analysts. Any policymaker or other senior manager who lacks the knowledge base of the specialist and does not have time for detail must, of necessity, deal with broad generalizations. Many decisions must be made, with much less time to consider each of them than is available to the intelligence analyst. This requires the policymaker to take a more conceptual approach, to think in terms of theories, models, or analogies that summarize large amounts of detail. Whether this represents sophistication or oversimplification depends upon the individual case and, perhaps, whether one agrees or disagrees with the judgments made. In any event, intelligence analysts would do well to take this phenomenon into account when writing for their consumers.

Strategies for Choice Among Hypotheses

A systematic analytical process requires selection among alternative hypotheses, and it is here that analytical practice often diverges significantly from the ideal and from the canons of scientific method. The ideal is to generate a full set of hypotheses, systematically evaluate each hypothesis, and then identify the hypothesis that provides the best fit to the data.

In practice, other strategies are commonly employed. Alexander George has identified a number of less-than-optimal strategies for making decisions in the face of incomplete information and multiple, competing values and goals. While George conceived of these strategies as applicable to how decisionmakers choose among alternative policies, most also apply to how intelligence analysts might decide among alternative analytical hypotheses.

The relevant strategies George identified are:

  • “Satisficing”–selecting the first identified alternative that appears “good enough” rather than examining all alternatives to determine which is “best.”
  • Incrementalism–focusing on a narrow range of alternatives representing marginal change, without considering the need for dramatic change from an existing position.
  • Consensus–opting for the alternative that will elicit the greatest agreement and support. Simply telling the boss what he or she wants to hear is one version of this.
  • Reasoning by analogy–choosing the alternative that appears most likely to avoid some previous error or to duplicate a previous success.
  • Relying on a set of principles or maxims that distinguish a “good” from a “bad” alternative.

“Satisficing”

I would suggest, based on personal experience and discussions with analysts, that most analysis is conducted in a manner very similar to the satisficing mode (selecting the first identified alternative that appears “good enough”). The analyst identifies what appears to be the most likely hypothesis–that is, the tentative estimate, explanation, or description of the situation that appears most accurate.

This approach has three weaknesses: the selective perception that results from focus on a single hypothesis, failure to generate a complete set of competing hypotheses, and a focus on evidence that confirms rather than disconfirms hypotheses. Each of these is discussed below.

Selective Perception. Tentative hypotheses serve a useful function in helping analysts select, organize, and manage information. They narrow the scope of the problem so that the analyst can focus efficiently on data that are most relevant and important. The hypotheses serve as organizing frameworks in working memory and thus facilitate retrieval of information from memory. In short, they are essential elements of the analytical process. But their functional utility also entails some cost, because a hypothesis functions as a perceptual filter. Analysts, like people in general, tend to see what they are looking for and to overlook that which is not specifically included in their search strategy. They tend to limit the processed information to that which is relevant to the current hypothesis. If the hypothesis is incorrect, information may be lost that would suggest a new or modified hypothesis.

This difficulty can be overcome by the simultaneous consideration of multiple hypotheses. That approach has the advantage of focusing attention on those few items of evidence that have the greatest diagnostic value in distinguishing among the competing hypotheses. Most evidence is consistent with several different hypotheses, and this fact is easily overlooked when analysts focus on only one hypothesis at a time–especially if their focus is on seeking to confirm rather than disprove what appears to be the most likely answer.

Failure To Generate Appropriate Hypotheses. If tentative hypotheses determine the criteria for searching for information and judging its relevance, it follows that one may overlook the proper answer if it is not encompassed within the several hypotheses being considered. Research on hypothesis generation suggests that performance on this task is woefully inadequate.

Analysts need to take more time to develop a full set of competing hypotheses, using all three of the previously discussed strategies–theory, situational logic, and comparison.

Failure To Consider Diagnosticity of Evidence. In the absence of a complete set of alternative hypotheses, it is not possible to evaluate the “diagnosticity” of evidence. Unfortunately, many analysts are unfamiliar with the concept of diagnosticity of evidence. It refers to the extent to which any item of evidence helps the analyst determine the relative likelihood of alternative hypotheses.

Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of the various hypotheses. If an item of evidence seems consistent with all the hypotheses, it may have no diagnostic value at all. It is a common experience to discover that most available evidence really is not very helpful, as it can be reconciled with all the hypotheses.
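To make the concept concrete, a reader might sketch it computationally as follows. In this minimal Python fragment, the hypotheses and all probabilities are invented for illustration; each item of evidence is scored by its likelihood ratio, the probability of observing it if one hypothesis is true divided by the probability of observing it if the alternative is true:

    # Diagnosticity as a likelihood ratio (all numbers invented for
    # illustration). For each item of evidence, record how probable that
    # evidence would be under each competing hypothesis.
    evidence = {
        "protests growing":   {"regime falls": 0.80, "regime survives": 0.70},
        "cabinet reshuffled": {"regime falls": 0.60, "regime survives": 0.55},
        "army units defect":  {"regime falls": 0.50, "regime survives": 0.05},
    }

    for item, p in evidence.items():
        ratio = p["regime falls"] / p["regime survives"]
        print(f"{item}: likelihood ratio = {ratio:.2f}")

    # Ratios near 1.0 mark evidence that is consistent with both
    # hypotheses and hence of little diagnostic value; only the third
    # item meaningfully shifts their relative likelihood.

An item of evidence that is nearly as probable under every hypothesis leaves their relative likelihoods where they were, no matter how dramatic the item appears in isolation.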

Failure To Reject Hypotheses

Scientific method is based on the principle of rejecting hypotheses, while tentatively accepting only those hypotheses that cannot be refuted. Intuitive analysis, by comparison, generally concentrates on confirming a hypothesis and commonly accords more weight to evidence supporting a hypothesis than to evidence that weakens it. Ideally, the reverse would be true. While analysts usually cannot apply the statistical procedures of scientific methodology to test their hypotheses, they can and should adopt the conceptual strategy of seeking to refute rather than confirm hypotheses.

There are two aspects to this problem: people do not naturally seek disconfirming evidence, and when such evidence is received it tends to be discounted. If there is any question about the former, consider how often people test their political and religious beliefs by reading newspapers and books representing an opposing viewpoint.

Apart from the psychological pitfalls involved in seeking confirmatory evidence, an important logical point also needs to be considered. The logical reasoning underlying the scientific method of rejecting hypotheses is that “…no confirming instance of a law is a verifying instance, but that any disconfirming instance is a falsifying instance”.

In other words, a hypothesis can never be proved by the enumeration of even a large body of evidence consistent with that hypothesis, because the same body of evidence may also be consistent with other hypotheses. A hypothesis may be disproved, however, by citing a single item of evidence that is incompatible with it.

Thus, the validity of a hypothesis can be tested only by seeking to disprove it rather than to confirm it.

Consider lists of early warning indicators, for example. They are designed to be indicative of an impending attack. Very many of them, however, are also consistent with the hypothesis that military movements are a bluff to exert diplomatic pressure and that no military action will be forthcoming. When analysts seize upon only one of these hypotheses and seek evidence to confirm it, they will often be led astray.
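A small sketch of this logic, again with invented indicators, shows why confirmation alone settles so little. Each observed indicator is recorded with the set of hypotheses it is compatible with; a hypothesis is eliminated only when some observation is incompatible with it:

    # Hypothesis elimination (indicators invented for illustration).
    # A hypothesis survives unless some observation is incompatible with it.
    hypotheses = ["attack", "bluff"]
    observations = {
        "reserves mobilized":    {"attack", "bluff"},
        "armor moved to border": {"attack", "bluff"},
        "diplomats recalled":    {"attack", "bluff"},
    }

    surviving = [h for h in hypotheses
                 if all(h in compatible for compatible in observations.values())]
    print(surviving)  # ['attack', 'bluff'] -- nothing has been eliminated

Every item here is consistent with the attack hypothesis, yet none eliminates the bluff hypothesis; only an observation incompatible with one of the hypotheses would actually narrow the field.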

The evidence available to the intelligence analyst is in one important sense different from the evidence available to laboratory test subjects asked to infer the rule generating a sequence of numbers. The intelligence analyst commonly deals with problems in which the evidence has only a probabilistic relationship to the hypotheses being considered. Thus it is seldom possible to eliminate any hypothesis entirely, because the most one can say is that a given hypothesis is unlikely given the nature of the evidence, not that it is impossible.

This weakens the conclusions that can be drawn from a strategy aimed at eliminating hypotheses, but it does not in any way justify a strategy aimed at confirming them.

Conclusion

There are many detailed assessments of intelligence failures, but few comparable descriptions of intelligence successes. In reviewing the literature on intelligence successes, Frank Stech found many examples of success but only three accounts that provide sufficient methodological details to shed light on the intellectual processes and methods that contributed to the successes.

Chapter 5
Do You Really Need More Information?

The difficulties associated with intelligence analysis are often attributed to the inadequacy of available information. Thus the US Intelligence Community invests heavily in improved intelligence collection systems while managers of analysis lament the comparatively small sums devoted to enhancing analytical resources, improving analytical methods, or gaining better understanding of the cognitive processes involved in making analytical judgments. This chapter questions the often-implicit assumption that lack of information is the principal obstacle to accurate intelligence judgments.

Using experts in a variety of fields as test subjects, experimental psychologists have examined the relationship between the amount of information available to the experts, the accuracy of judgments they make based on this information, and the experts’ confidence in the accuracy of these judgments. The word “information,” as used in this context, refers to the totality of material an analyst has available to work with in making a judgment.

Key findings from this research are:

  • Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.
  • Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.

To interpret the disturbing but not surprising findings from these experiments, it is necessary to consider four different types of information and discuss their relative value in contributing to the accuracy of analytical judgments. It is also helpful to distinguish analysis in which results are driven by the data from analysis that is driven by the conceptual framework employed to interpret the data.

Understanding the complex relationship between amount of information and accuracy of judgment has implications for both the management and conduct of intelligence analysis. Such an understanding suggests analytical procedures and management initiatives that may indeed contribute to more accurate analytical judgments. It also suggests that resources needed to attain a better understanding of the entire analytical process might profitably be diverted from some of the more costly intelligence collection programs.

Modeling Expert Judgment

Another significant question concerns the extent to which analysts possess an accurate understanding of their own mental processes. How good is their insight into how they actually weight evidence in making judgments? For each situation to be analyzed, they have an implicit “mental model” consisting of beliefs and assumptions as to which variables are most important and how they are related to each other. If analysts have good insight into their own mental model, they should be able to identify and describe the variables they have considered most important in making judgments.

There is strong experimental evidence, however, that such self-insight is usually faulty. The expert perceives his or her own judgmental process, including the number of different kinds of information taken into account, as being considerably more complex than is in fact the case. Experts overestimate the importance of factors that have only a minor impact on their judgment and underestimate the extent to which their decisions are based on a few major variables. In short, people’s mental models are simpler than they think, and the analyst is typically unaware not only of which variables should have the greatest influence, but also which variables actually are having the greatest influence.
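This is the logic behind such experiments: an expert’s judgments across many cases can be modeled as a weighted combination of the available cues, and the fitted weights reveal which variables actually drive the judgments. The sketch below uses synthetic data and invented weights, not the actual experimental results:

    import numpy as np

    # Model an expert's judgments as a weighted sum of cues and recover
    # the weights that actually drive them (synthetic data).
    rng = np.random.default_rng(0)
    cues = rng.normal(size=(200, 6))          # 200 cases, 6 cues each

    # The expert believes all six cues matter; in fact two dominate.
    actual_weights = np.array([0.8, 0.6, 0.05, 0.05, 0.02, 0.02])
    judgments = cues @ actual_weights + rng.normal(scale=0.1, size=200)

    fitted, *_ = np.linalg.lstsq(cues, judgments, rcond=None)
    print(np.round(fitted, 2))  # ~[0.8, 0.6, 0.05, 0.05, 0.02, 0.02]

    # The fitted weights expose how few variables carry the judgment,
    # however complex the expert believes the process to be.

The fitted model often predicts better than the expert’s own account of the process, precisely because the account overstates how many variables are really in play.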

When Does New Information Affect Our Judgment?

To evaluate the relevance and significance of these experimental findings in the context of intelligence analysts’ experiences, it is necessary to distinguish four types of additional information that an analyst might receive:

  • Additional detail about variables already included in the analysis: Much raw intelligence reporting falls into this category. One would not expect such supplementary information to affect the overall accuracy of the analyst’s judgment, and it is readily understandable that further detail that is consistent with previous information increases the analyst’s confidence. Analyses for which considerable depth of detail is available to support the conclusions tend to be more persuasive to their authors as well as to their readers.
  • Identification of additional variables: Information on additional variables permits the analyst to take into account other factors that may affect the situation. This is the kind of additional information used in the horserace handicapper experiment. Other experiments have employed some combination of additional variables and additional detail on the same variables. The finding that judgments are based on a few critical variables rather than on the entire spectrum of evidence helps to explain why information on additional variables does not normally improve predictive accuracy. Occasionally, in situations when there are known gaps in an analyst’s understanding, a single report concerning some new and previously unconsidered factor–for example, an authoritative report on a policy decision or planned coup d’etat–will have a major impact on the analyst’s judgment. Such a report would fall into one of the next two categories of new information.
  • Information concerning the value attributed to variables already included in the analysis: An example of such information would be the horserace handicapper learning that a horse he thought would carry 110 pounds will actually carry only 106. Current intelligence reporting tends to deal with this kind of information; for example, an analyst may learn that a dissident group is stronger than had been anticipated. New facts affect the accuracy of judgments when they deal with changes in variables that are critical to the estimates. Analysts’ confidence in judgments based on such information is influenced by their confidence in the accuracy of the information as well as by the amount of information.
  • Information concerning which variables are most important and how they relate to each other: Knowledge and assumptions as to which variables are most important and how they are interrelated comprise the mental model that tells the analyst how to analyze the data received. Explicit investigation of such relationships is one factor that distinguishes systematic research from current intelligence reporting and raw intelligence. In the context of the horserace handicapper experiment, for example, handicappers had to select which variables to include in their analysis. Is weight carried by a horse more, or less, important than several other variables that affect a horse’s performance? Any information that affects this judgment influences how the handicapper analyzes the available data; that is, it affects his mental model.

The accuracy of an analyst’s judgment depends upon both the accuracy of the mental model (the fourth type of information discussed above) and the accuracy of the values attributed to the key variables in the model (the third type of information discussed above). Additional detail on variables already in the analyst’s mental model and information on other variables that do not in fact have a significant influence on the judgment (the first and second types of information) have a negligible impact on accuracy, but form the bulk of the raw material analysts work with. These kinds of information increase confidence because the conclusions seem to be supported by such a large body of data.
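A toy model makes the point. In the sketch below, with invented weights and values, the judgment is the mental model’s weights applied to the values of the key variables; reports that merely restate a value already in the model add volume without moving the judgment, while a revision to a critical value moves it immediately:

    # Judgment = mental model (weights) applied to key variables (values).
    # All numbers are invented for illustration.
    weights = {"opposition strength": 0.7, "regime resolve": -0.5}
    values  = {"opposition strength": 0.6, "regime resolve": 0.8}

    def judge(v):
        return sum(weights[k] * v[k] for k in weights)

    print(round(judge(values), 2))      # baseline judgment: 0.02

    # Ten more reports repeating the same opposition strength add bulk
    # to the file but leave the judgment unchanged (types 1 and 2).
    values["opposition strength"] = sum([0.6] * 10) / 10
    print(round(judge(values), 2))      # still 0.02

    # A report revising a critical value changes the judgment (type 3).
    values["regime resolve"] = 0.2
    print(round(judge(values), 2))      # now 0.32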

This discussion of types of new information is the basis for distinguishing two types of analysis: data-driven analysis and conceptually driven analysis.

Data-Driven Analysis

In this type of analysis, accuracy depends primarily upon the accuracy and completeness of the available data. If one makes the reasonable assumption that the analytical model is correct and the further assumption that the analyst properly applies this model to the data, then the accuracy of the analytical judgment depends entirely upon the accuracy and completeness of the data.

Analyzing the combat readiness of a military division is an example of data-driven analysis. In analyzing combat readiness, the rules and procedures to be followed are relatively well established.

Most elements of the mental model can be made explicit so that other analysts may be taught to understand and follow the same analytical procedures and arrive at the same or similar results. There is broad, though not necessarily universal, agreement on what the appropriate model is. There are relatively objective standards for judging the quality of analysis, inasmuch as the conclusions follow logically from the application of the agreed-upon model to the available data.

Conceptually Driven Analysis

Conceptually driven analysis is at the opposite end of the spectrum from data-driven analysis. The questions to be answered do not have neat boundaries, and there are many unknowns. The number of potentially relevant variables and the diverse and imperfectly understood relationships among these variables involve the analyst in enormous complexity and uncertainty.

In the absence of any agreed-upon analytical schema, analysts are left to their own devices. They interpret information with the aid of mental models that are largely implicit rather than explicit. Assumptions concerning political forces and processes in the subject country may not be apparent even to the analyst. Such models are not representative of an analytical consensus. Other analysts examining the same data may well reach different conclusions, or reach the same conclusions but for different reasons. This analysis is conceptually driven, because the outcome depends at least as much upon the conceptual framework employed to analyze the data as it does upon the data itself.

To illustrate further the distinction between data-driven and conceptually driven analysis, it is useful to consider the function of the analyst responsible for current intelligence, especially current political intelligence as distinct from longer term research. The daily routine is driven by the incoming wire service news, embassy cables, and clandestine-source reporting from overseas that must be interpreted for dissemination to consumers throughout the Intelligence Community. Although current intelligence reporting is driven by incoming information, this is not what is meant by data-driven analysis. On the contrary, the current intelligence analyst’s task is often extremely concept-driven. The analyst must provide immediate interpretation of the latest, often unexpected events. Apart from his or her store of background information, the analyst may have no data other than the initial, usually incomplete report. Under these circumstances, interpretation is based upon an implicit mental model of how and why events normally transpire in the country for which the analyst is responsible. Accuracy of judgment depends almost exclusively upon accuracy of the mental model, for there is little other basis for judgment.

Partly because of the nature of human perception and information-processing, beliefs of all types tend to resist change. This is especially true of the implicit assumptions and supposedly self-evident truths that play an important role in forming mental models. Analysts are often surprised to learn that what are to them self-evident truths are by no means self-evident to others, or that self-evident truth at one point in time may be commonly regarded as uninformed assumption 10 years later.

Information that is consistent with an existing mind-set is perceived and processed easily and reinforces existing beliefs. Because the mind strives instinctively for consistency, information that is inconsistent with an existing mental image tends to be overlooked, perceived in a distorted manner, or rationalized to fit existing assumptions and beliefs.

Mosaic Theory of Analysis

Understanding of the analytic process has been distorted by the mosaic metaphor commonly used to describe it. According to the mosaic theory of intelligence, small pieces of information are collected that, when put together like a mosaic or jigsaw puzzle, eventually enable analysts to perceive a clear picture of reality. The analogy suggests that accurate estimates depend primarily upon having all the pieces, that is, upon accurate and relatively complete information. It is important to collect and store the small pieces of information, as these are the raw material from which the picture is made; one never knows when it will be possible for an astute analyst to fit a piece into the puzzle. Part of the rationale for large technical intelligence collection systems is rooted in this mosaic theory.

Insights from cognitive psychology suggest that intelligence analysts do not work this way and that the most difficult analytical tasks cannot be approached in this manner. Analysts commonly find pieces that appear to fit many different pictures. Instead of a picture emerging from putting all the pieces together, analysts typically form a picture first and then select the pieces to fit. Accurate estimates depend at least as much upon the mental model used in forming the picture as upon the number of pieces of the puzzle that have been collected.

A more apt analogy for the analytical process is medical diagnosis: the physician observes indicators (symptoms), uses specialized knowledge of how the body works to develop hypotheses that might explain them, conducts tests to evaluate those hypotheses, and then renders a diagnosis. While analysis and collection are both important, this medical analogy attributes more value to analysis, and less to collection, than the mosaic metaphor does.

Conclusions

To the leaders and managers of intelligence who seek an improved intelligence product, these findings offer a reminder that this goal can be achieved by improving analysis as well as collection. There appear to be inherent practical limits on how much can be gained by efforts to improve collection. By contrast, an open and fertile field exists for imaginative efforts to improve analysis.

These efforts should focus on improving the mental models employed by analysts to interpret information and the analytical processes used to evaluate it. While this will be difficult to achieve, it is so critical to effective intelligence analysis that even small improvements could have large benefits. Specific recommendations are included in the next three chapters and in Chapter 14, “Improving Intelligence Analysis.”

Chapter 6

Keeping an Open Mind

Minds are like parachutes. They only function when they are open. After reviewing how and why thinking gets channeled into mental ruts, this chapter looks at mental tools to help analysts keep an open mind, question assumptions, see different perspectives, develop new ideas, and recognize when it is time to change their minds.

A new idea is the beginning, not the end, of the creative process. It must jump over many hurdles before being embraced as an organizational product or solution. The organizational climate plays a crucial role in determining whether new ideas bubble to the surface or are suppressed.

                         *******************

Major intelligence failures are usually caused by failures of analysis, not failures of collection. Relevant information is discounted, misinterpreted, ignored, rejected, or overlooked because it fails to fit a prevailing mental model or mind-set. The “signals” are lost in the “noise.”

A mind-set is neither good nor bad. It is unavoidable. It is, in essence, a distillation of all that analysts think they know about a subject. It forms a lens through which they perceive the world, and once formed, it resists change.

Understanding Mental Ruts

Chapter 3 on memory suggested thinking of information in memory as somehow interconnected like a massive, multidimensional spider web. It is possible to connect any point within this web to any other point. When analysts connect the same points frequently, they form a path that makes it easier to take that route in the future. Once they start thinking along certain channels, they tend to continue thinking the same way and the path may become a rut.

Talking about breaking mind-sets, or creativity, or even just openness to new information is really talking about spinning new links and new paths through the web of memory. These are links among facts and concepts, or between schemata for organizing facts or concepts, that were not directly connected or only weakly connected before.

Problem-Solving Exercise

Simple problem-solving exercises, such as the familiar puzzle of connecting nine dots with four straight lines without lifting the pencil, show how people unconsciously impose constraints on their own thinking; intelligence analysis is too often limited by similar, unconscious, self-imposed constraints or “cages of the mind.”

You do not need to be constrained by conventional wisdom. It is often wrong. You do not necessarily need to be constrained by existing policies. They can sometimes be changed if you show a good reason for doing so. You do not necessarily need to be constrained by the specific analytical requirement you were given. The policymaker who originated the requirement may not have thought through his or her needs or the requirement may be somewhat garbled as it passes down through several echelons to you to do the work. You may have a better understanding than the policymaker of what he or she needs, or should have, or what is possible to do. You should not hesitate to go back up the chain of command with a suggestion for doing something a little different than what was asked for.

Mental Tools

People use various physical tools such as a hammer and saw to enhance their capacity to perform various physical tasks. People can also use simple mental tools to enhance their ability to perform mental tasks. These tools help overcome limitations in human mental machinery for perception, memory, and inference.

Questioning Assumptions

It is a truism that analysts need to question their assumptions. Experience tells us that when analytical judgments turn out to be wrong, it usually was not because the information was wrong. It was because an analyst made one or more faulty assumptions that went unchallenged.

Sensitivity Analysis. One approach is to do an informal sensitivity analysis. How sensitive is the ultimate judgment to changes in any of the major variables or driving forces in the analysis? Those linchpin assumptions that drive the analysis are the ones that need to be questioned. Analysts should ask themselves what could happen to make any of these assumptions out of date, and how they can know this has not already happened. They should try to disprove their assumptions rather than confirm them. If an analyst cannot think of anything that would cause a change of mind, his or her mind-set may be so deeply entrenched that the analyst cannot see the conflicting evidence. One advantage of the competing hypotheses approach discussed in Chapter 8 is that it helps identify the linchpin assumptions that swing a conclusion in one direction or another.
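An informal sensitivity analysis of this kind can even be sketched mechanically. In the fragment below, with assumptions and weights invented for illustration, each assumption is flipped in turn; any flip that reverses the overall conclusion marks a linchpin assumption that deserves scrutiny:

    # Informal sensitivity analysis (assumptions and weights invented).
    weights = {
        "regime remains cohesive": 0.5,
        "army stays loyal":        0.4,
        "economy muddles through": 0.2,
    }
    baseline = {name: True for name in weights}

    def conclusion(state):
        score = sum(w for name, w in weights.items() if state[name])
        return "stable" if score > 0.75 else "unstable"

    print("baseline:", conclusion(baseline))
    for name in weights:
        flipped = dict(baseline, **{name: False})
        if conclusion(flipped) != conclusion(baseline):
            print(f"linchpin: {name!r} -- conclusion flips if it fails")

The assumptions that survive such flipping can safely be treated as background; the ones that flip the conclusion are the ones the analyst should be trying hardest to disprove.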

Identify Alternative Models. Analysts should try to identify alternative models, conceptual frameworks, or interpretations of the data by seeking out individuals who disagree with them rather than those who agree. Most people do not do that very often. It is much more comfortable to talk with people in one’s own office who share the same basic mind-set. There are a few things that can be done as a matter of policy, and that have been done in some offices in the past, to help overcome this tendency.

At least one Directorate of Intelligence component, for example, has had a peer review process in which none of the reviewers was from the branch that produced the report. The rationale for this was that an analyst’s immediate colleagues and supervisor(s) are likely to share a common mind-set. Hence these are the individuals least likely to raise fundamental issues challenging the validity of the analysis. To avoid this mind-set problem, each research report was reviewed by a committee of three analysts from other branches handling other countries or issues. None of them had specialized knowledge of the subject. They were, however, highly accomplished analysts. Precisely because they had not been immersed in the issue in question, they were better able to identify hidden assumptions and other alternatives, and to judge whether the analysis adequately supported the conclusions.

Be Wary of Mirror Images. One kind of assumption an analyst should always recognize and question is mirror-imaging–filling gaps in the analyst’s own knowledge by assuming that the other side is likely to act in a certain way because that is how the US would act under similar circumstances. To say, “if I were a Russian intelligence officer …” or “if I were running the Indian Government …” is mirror-imaging. Analysts may have to do that when they do not know how the Russian intelligence officer or the Indian Government is really thinking. But mirror-imaging leads to dangerous assumptions, because people in other cultures do not think the way we do.

Failure to understand that others perceive their national interests differently from the way we perceive those interests is a constant source of problems in intelligence analysis.

Seeing Different Perspectives

Another problem area is looking at familiar data from a different perspective. If you play chess, you know you can see your own options pretty well. It is much more difficult to see all the pieces on the board as your opponent sees them, and to anticipate how your opponent will react to your move. That is the situation analysts are in when they try to see how the US Government’s actions look from another country’s perspective. Analysts constantly have to move back and forth, first seeing the situation from the US perspective and then from the other country’s perspective. This is difficult to do.

Thinking Backwards. One technique for exploring new ground is thinking backwards. As an intellectual exercise, start with an assumption that some event you did not expect has actually occurred. Then, put yourself into the future, looking back to explain how this could have happened. Think what must have happened six months or a year earlier to set the stage for that outcome, what must have happened six months or a year before that to prepare the way, and so on back to the present.

Crystal Ball. The crystal ball approach works in much the same way as thinking backwards. Imagine that a “perfect” intelligence source (such as a crystal ball) has told you a certain assumption is wrong. You must then develop a scenario to explain how this could be true. If you can develop a plausible scenario, this suggests your assumption is open to some question.

Role playing. Role playing is commonly used to overcome constraints and inhibitions that limit the range of one’s thinking. Playing a role changes “where you sit.” It also gives one license to think and act differently. Simply trying to imagine how another leader or country will think and react, which analysts do frequently, is not role playing. One must actually act out the role and become, in a sense, the person whose role is assumed. It is only “living” the role that breaks an analyst’s normal mental set and permits him or her to relate facts and ideas to each other in ways that differ from habitual patterns. An analyst cannot be expected to do this alone; some group interaction is required, with different analysts playing different roles, usually in the context of an organized simulation or game.

Just one notional intelligence report is sufficient to start the action in the game. In my experience, it is possible to have a useful political game in just one day with almost no investment in preparatory work.

Devil’s Advocate. A devil’s advocate is someone who defends a minority point of view. He or she may not necessarily agree with that view, but may choose or be assigned to represent it as strenuously as possible. The goal is to expose conflicting interpretations and show how alternative assumptions and images make the world look different. It often requires time, energy, and commitment to see how the world looks from a different perspective.

Imagine that you are the boss at a US facility overseas and are worried about the possibility of a terrorist attack. A standard staff response would be to review existing measures and judge their adequacy. There might well be pressure–subtle or otherwise–from those responsible for such arrangements to find them satisfactory. An alternative or supplementary approach would be to name an individual or small group as a devil’s advocate assigned to develop actual plans for launching such an attack. The assignment to think like a terrorist liberates the designated person(s) to think unconventionally and be less inhibited about finding weaknesses in the system that might embarrass colleagues, because uncovering any such weaknesses is the assigned task.

Recognizing When To Change Your Mind

As a general rule, people are too slow to change an established view, as opposed to being too willing to change. The human mind is conservative. It resists change. Assumptions that worked well in the past continue to be applied to new situations long after they have become outmoded.

Learning from Surprise. A study of senior managers in industry identified how some successful managers counteract this conservative bent. They do it, according to the study:

By paying attention to their feelings of surprise when a particular fact does not fit their prior understanding, and then by highlighting rather than denying the novelty. Although surprise made them feel uncomfortable, it made them take the cause [of the surprise] seriously and inquire into it….Rather than deny, downplay, or ignore disconfirmation [of their prior view], successful senior managers often treat it as friendly and in a way cherish the discomfort surprise creates. As a result, these managers often perceive novel situations early on and in a frame of mind relatively undistorted by hidebound notions.

Analysts should keep a record of unexpected events and think hard about what they might mean, not disregard them or explain them away. It is important to consider whether these surprises, however small, are consistent with some alternative hypothesis. One unexpected event may be easy to disregard, but a pattern of surprises may be the first clue that your understanding of what is happening requires some adjustment, is at best incomplete, and may be quite wrong.

Strategic Assumptions vs. Tactical Indicators. Abraham Ben-Zvi analyzed five cases of intelligence failure to foresee a surprise attack. He made a useful distinction between estimates based on strategic assumptions and estimates based on tactical indications.

Tactical indicators are specific reports of preparations or intent to initiate hostile action or, in the recent Indian case, reports of preparations for a nuclear test.

Ben-Zvi concluded that tactical indicators should be given increased weight in the decisionmaking process.

At a minimum, the emergence of tactical indicators that contradict our strategic assumption should trigger a higher level of intelligence alert. It may indicate that a bigger surprise is on the way.

Stimulating Creative Thinking

Imagination and creativity play important roles in intelligence analysis as in most other human endeavors. Intelligence judgments require the ability to imagine possible causes and outcomes of a current situation. The possible outcomes are not given; the analyst must think of them by imagining scenarios that explicate how they might come about. Similarly, imagination as well as knowledge is required to reconstruct how a problem appears from the viewpoint of a foreign government. Creativity is required to question things that have long been taken for granted. The fact that apples fall from trees was well known to everyone. Newton’s creative genius was to ask “why?” Intelligence analysts, too, are expected to raise new questions that lead to the identification of previously unrecognized relationships or to possible outcomes that had not previously been foreseen.

A creative analytical product shows a flair for devising imaginative or innovative–but also accurate and effective–ways to fulfill any of the major requirements of analysis: gathering information, analyzing information, documenting evidence, and/or presenting conclusions. Tapping unusual sources of data, asking new questions, applying unusual analytic methods, and developing new types of products or new ways of fitting analysis to the needs of consumers are all examples of creative activity.

The old view that creativity is something one is born with, and that it cannot be taught or developed, is largely untrue. While native talent, per se, is important and may be immutable, it is possible to learn to employ one’s innate talents more productively. With understanding, practice, and conscious effort, analysts can learn to produce more imaginative, innovative, creative work.

There is a large body of literature on creativity and how to stimulate it. At least a half-dozen different methods have been developed for teaching, facilitating, or liberating creative thinking. All the methods for teaching or facilitating creativity are based on the assumption that the process of thinking can be separated from the content of thought. One learns mental strategies that can be applied to any subject.

It is not our purpose here to review commercially available programs for enhancing creativity. Such programmatic approaches can be applied more meaningfully to problems of new product development, advertising, or management than to intelligence analysis. It is relevant, however, to discuss several key principles and techniques that these programs have in common, and that individual intelligence analysts or groups of analysts can apply in their work.

Intelligence analysts must generate ideas concerning potential causes or explanations of events, policies that might be pursued or actions taken by a foreign government, possible outcomes of an existing situation, and variables that will influence which outcome actually comes to pass. Analysts also need help to jog them out of mental ruts, to stimulate their memories and imaginations, and to perceive familiar events from a new perspective.

Deferred Judgment. The principle of deferred judgment is undoubtedly the most important. The idea-generation phase of analysis should be separated from the idea-evaluation phase, with evaluation deferred until all possible ideas have been brought out. This approach runs contrary to the normal procedure of thinking of ideas and evaluating them concurrently. Stimulating the imagination and critical thinking are both important, but they do not mix well. A judgmental attitude dampens the imagination, whether it manifests itself as self-censorship of one’s own ideas or fear of critical evaluation by colleagues or supervisors. Idea generation should be a freewheeling, unconstrained, uncritical process.

New ideas are, by definition, unconventional, and therefore likely to be suppressed, either consciously or unconsciously, unless they are born in a secure and protected environment. Critical judgment should be suspended until after the idea-generation stage of analysis has been completed. A series of ideas should be written down and then evaluated later. This applies to idea searching by individuals as well as brainstorming in a group. Get all the ideas out on the table before evaluating any of them.

Quantity Leads to Quality. A second principle is that quantity of ideas eventually leads to quality. This is based on the assumption that the first ideas that come to mind will be those that are most common or usual. It is necessary to run through these conventional ideas before arriving at original or different ones. People have habitual ways of thinking, ways that they continue to use because they have seemed successful in the past. It may well be that these habitual responses, the ones that come first to mind, are the best responses and that further search is unnecessary. In looking for usable new ideas, however, one should seek to generate as many ideas as possible before evaluating any of them.

No Self-Imposed Constraints. A third principle is that thinking should be allowed–indeed encouraged–to range as freely as possible. It is necessary to free oneself from self-imposed constraints, whether they stem from analytical habit, limited perspective, social norms, emotional blocks, or whatever.

Cross-Fertilization of Ideas. A fourth principle of creative problem-solving is that cross-fertilization of ideas is important and necessary. Ideas should be combined with each other to form more and even better ideas. If creative thinking involves forging new links between previously unrelated or weakly related concepts, then creativity will be stimulated by any activity that brings more concepts into juxtaposition with each other in fresh ways. Interaction with other analysts is one basic mechanism for this. As a general rule, people generate more creative ideas when teamed up with others; they help to build and develop each other’s ideas. Personal interaction stimulates new associations between ideas. It also induces greater effort and helps maintain concentration on the task.

These favorable comments on group processes are not meant to encompass standard committee meetings or coordination processes that force consensus based on the lowest common denominator of agreement. My positive words about group interaction apply primarily to brainstorming sessions aimed at generating new ideas and in which, according to the first principle discussed above, all criticism and evaluation are deferred until after the idea generation stage is completed.

Thinking things out alone also has its advantages: individual thought tends to be more structured and systematic than interaction within a group. Optimal results come from alternating between individual thinking and team effort, using group interaction to generate ideas that supplement individual thought. A diverse group is clearly preferable to a homogeneous one. Some group participants should be analysts who are not close to the problem, inasmuch as their ideas are more likely to reflect different insights.

Idea Evaluation. All creativity techniques are concerned with stimulating the flow of ideas. There are no comparable techniques for determining which ideas are best. The procedures are, therefore, aimed at idea generation rather than idea evaluation. The same procedures do aid in evaluation, however, in the sense that ability to generate more alternatives helps one see more potential consequences, repercussions, and effects that any single idea or action might entail.

Organizational Environment

A new idea is not the end product of the creative process. Rather, it is the beginning of what is sometimes a long and tortuous process of translating an idea into an innovative product. The idea must be developed, evaluated, and communicated to others, and this process is influenced by the organizational setting in which it transpires. The potentially useful new idea must pass over a number of hurdles before it is embraced as an organizational product.

The role of the organizational environment is illustrated by research that Frank Andrews conducted on 115 medical sociology research projects, whose scientist-directors had taken tests of creative ability and completed questionnaires describing their work environment. A panel of judges composed of the leading scientists in the field of medical sociology was asked to evaluate the principal published results from each of the 115 research projects. Judges evaluated the research results on the basis of productivity and innovation.

Productivity was defined as the “extent to which the research represents an addition to knowledge along established lines of research or as extensions of previous theory.”

Innovativeness was defined as “additions to knowledge through new lines of research or the development of new theoretical statements of findings that were not explicit in previous theory.” Innovation, in other words, involved raising new questions and developing new approaches to the acquisition of knowledge, as distinct from working productively within an already established framework. This same definition applies to innovation in intelligence analysis.

Andrews found virtually no relationship between the scientists’ creative ability and the innovativeness of their research. (There was also no relationship between level of intelligence and innovativeness.) Those who scored high on tests of creative ability did not necessarily receive high ratings from the judges evaluating the innovativeness of their work. A possible explanation is that either creative ability or innovation, or both, were not measured accurately, but Andrews argues persuasively for another view. Various social and psychological factors have so great an effect on the steps needed to translate creative ability into an innovative research product that there is no measurable effect traceable to creative ability alone. In order to document this conclusion, Andrews analyzed data from the questionnaires in which the scientists described their work environment.

Andrews found that scientists possessing more creative ability produced more innovative work only under the following favorable conditions:

  • When the scientist perceived himself or herself as responsible for initiating new activities. The opportunity for innovation, and the encouragement of it, are–not surprisingly–important variables.
  • When the scientist had considerable control over decisionmaking concerning his or her research program–in other words, the freedom to set goals, hire research assistants, and expend funds. Under these circumstances, a new idea is less likely to be snuffed out before it can be developed into a creative and useful product.
  • When the scientist felt secure and comfortable in his or her professional role. New ideas are often disruptive, and pursuing them carries the risk of failure. People are more likely to advance new ideas if they feel secure in their positions.
  • When the scientist’s administrative superior “stayed out of the way.” Research is likely to be more innovative when the superior limits himself or herself to support and facilitation rather than direct involvement.
  • When the project was relatively small with respect to the number of people involved, budget, and duration. Small size promotes flexibility, and this in turn is more conducive to creativity.
  • When the scientist engaged in other activities, such as teaching or administration, in addition to the research project. Other work may provide useful stimulation or help one identify opportunities for developing or implementing new ideas. Some time away from the task, or an incubation period, is generally recognized as part of the creative process.

The importance of any one of these factors was not very great, but their impact was cumulative. The presence of all or most of these conditions exerted a strongly favorable influence on the creative process. Conversely, the absence of these conditions made it quite unlikely that even highly creative scientists could develop their new ideas into innovative research results. Under unfavorable conditions, the most creatively inclined scientists produced even less innovative work than their less imaginative colleagues, presumably because they experienced greater frustration with their work environment.

There are, of course, exceptions to the rule. Some creativity occurs even in the face of intense opposition. A hostile environment can be stimulating, enlivening, and challenging. Some people gain satisfaction from viewing themselves as lonely fighters in the wilderness, but when it comes to conflict between a large organization and a creative individual within it, the organization generally wins.

Recognizing the role of organizational environment in stimulating or suppressing creativity points the way to one obvious set of measures to enhance creative organizational performance. Managers of analysis, from first-echelon supervisors to the Director of Central Intelligence, should take steps to strengthen and broaden the perception among analysts that new ideas are welcome. This is not easy; creativity implies criticism of that which already exists. It is, therefore, inherently disruptive of established ideas and organizational practices.

Particularly within his or her own office, an analyst needs to enjoy a sense of security, so that partially developed ideas may be expressed and bounced off others as sounding boards with minimal fear of criticism or ridicule for deviating from established orthodoxy. At its inception, a new idea is frail and vulnerable. It needs to be nurtured, developed, and tested in a protected environment before being exposed to the harsh reality of public criticism. It is the responsibility of an analyst’s immediate supervisor and office colleagues to provide this sheltered environment.

Conclusions

Creativity, in the sense of new and useful ideas, is at least as important in intelligence analysis as in any other human endeavor. Procedures to enhance innovative thinking are not new. Creative thinkers have employed them successfully for centuries. The only new elements–and even they may not be new anymore–are the grounding of these procedures in psychological theory to explain how and why they work, and their formalization in systematic creativity programs.

Another prerequisite to creativity is sufficient strength of character to suggest new ideas to others, possibly at the expense of being rejected or even ridiculed on occasion. “The ideas of creative people often lead them into direct conflict with the trends of their time, and they need the courage to be able to stand alone.”

Chapter 7

Structuring Analytical Problems

This chapter discusses various structures for decomposing and externalizing complex analytical problems when we cannot keep all the relevant factors in the forefront of our consciousness at the same time.

Decomposition means breaking a problem down into its component parts. Externalization means getting the problem out of our heads and into some visible form that we can work with.

There are two basic tools for dealing with complexity in analysis–decomposition and externalization.

Decomposition means breaking a problem down into its component parts. That is, indeed, the essence of analysis. Webster’s Dictionary defines analysis as division of a complex whole into its parts or elements.

The spirit of decision analysis is to divide and conquer: Decompose a complex problem into simpler problems, get one’s thinking straight in these simpler problems, paste these analyses together with a logical glue …

Externalization means getting the decomposed problem out of one’s head and down on paper or on a computer screen in some simplified form that shows the main variables, parameters, or elements of the problem and how they relate to each other.

Putting ideas into visible form ensures that they will last. They will lie around for days goading you into having further thoughts. Lists are effective because they exploit people’s tendency to be a bit compulsive–we want to keep adding to them. They let us get the obvious and habitual answers out of the way, so that we can add to the list by thinking of other ideas beyond those that came first to mind. One specialist in creativity has observed that “for the purpose of moving our minds, pencils can serve as crowbars”–just by writing things down and making lists that stimulate new associations.

Problem Structure

Anything that has parts also has a structure that relates these parts to each other. One of the first steps in doing analysis is to determine an appropriate structure for the analytical problem, so that one can then identify the various parts and begin assembling information on them. Because there are many different kinds of analytical problems, there are also many different ways to structure analysis.

Lists such as Franklin made are one of the simplest structures. An intelligence analyst might make lists of relevant variables, early warning indicators, alternative explanations, possible outcomes, factors a foreign leader will need to take into account when making a decision, or arguments for and against a given explanation or outcome.

Other tools for structuring a problem include outlines, tables, diagrams, trees, and matrices, with many sub-species of each. For example, trees include decision trees and fault trees. Diagrams include causal diagrams, influence diagrams, flow charts, and cognitive maps.
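To make decomposition and externalization concrete, here is a minimal sketch in Python; the question and the factors are hypothetical, chosen only for illustration. A complex question is broken into component parts, and the parts are then written out in visible form as an indented outline.

    # A hypothetical analytical question decomposed into component parts.
    problem = {
        "Will country X devalue its currency?": {
            "Economic pressures": {
                "Level of foreign-exchange reserves": {},
                "Trade balance": {},
            },
            "Political constraints": {
                "Upcoming elections": {},
                "Commitments to international lenders": {},
            },
        }
    }

    def externalize(node, depth=0):
        """Write the decomposed problem out as an indented outline."""
        for part, subparts in node.items():
            print("  " * depth + "- " + part)
            externalize(subparts, depth + 1)

    externalize(problem)

Nothing about this particular structure is binding; the same parts could equally be laid out as a table or a matrix. The value lies in getting the parts, and their relationships, out of one’s head and onto the page.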

Chapter 8
Analysis of Competing Hypotheses

Analysis of competing hypotheses, sometimes abbreviated ACH, is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.

ACH is an eight-step procedure grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It is a surprisingly effective, proven process that helps analysts avoid common analytic pitfalls. Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment.

When working on difficult intelligence issues, analysts are, in effect, choosing among several alternative hypotheses. Which of several possible explanations is the correct one? Which of several possible outcomes is the most likely one? As previously noted, this book uses the term “hypothesis” in its broadest sense as a potential explanation or conclusion that is to be tested by collecting and presenting evidence.

Analysis of competing hypotheses (ACH) requires an analyst to explicitly identify all the reasonable alternatives and have them compete against each other for the analyst’s favor, rather than evaluating their plausibility one at a time.

The way most analysts go about their business is to pick out what they suspect intuitively is the most likely answer, then look at the available information from the point of view of whether or not it supports this answer. If the evidence seems to support the favorite hypothesis, analysts pat themselves on the back (“See, I knew it all along!”) and look no further. If it does not, they either reject the evidence as misleading or develop another hypothesis and go through the same procedure again.

Simultaneous evaluation of multiple, competing hypotheses is very difficult to do. To retain three to five or even seven hypotheses in working memory and note how each item of information fits into each hypothesis is beyond the mental capabilities of most people. It takes far greater mental agility than listing evidence supporting a single hypothesis that was pre-judged as the most likely answer. It can be accomplished, though, with the help of the simple procedures discussed here. The box below contains a step-by-step outline of the ACH process.

Step 1

Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.

Step-by-Step Outline of Analysis of Competing Hypotheses

  1. Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.
  2. Make a list of significant evidence and arguments for and against each hypothesis.
  3. Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments–that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
  4. Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
  5. Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
  6. Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
  7. Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
  8. Identify milestones for future observation that may indicate events are taking a different course than expected.

It is useful to make a clear distinction between the hypothesis generation and hypothesis evaluation stages of analysis. Step 1 of the recommended analytical process is to identify all hypotheses that merit detailed examination. At this early hypothesis generation stage, it is very useful to bring together a group of analysts with different backgrounds and perspectives. Brainstorming in a group stimulates the imagination and may bring out possibilities that individual members of the group had not thought of. Initial discussion in the group should elicit every possibility, no matter how remote, before judging likelihood or feasibility. Only when all the possibilities are on the table should you then focus on judging them and selecting the hypotheses to be examined in greater detail in subsequent analysis.

Early rejection of unproven, but not disproved, hypotheses biases the subsequent analysis, because one does not then look for the evidence that might support them. Unproven hypotheses should be kept alive until they can be disproved.

Step 2

Make a list of significant evidence and arguments for and against each hypothesis.

In assembling the list of relevant evidence and arguments, these terms should be interpreted very broadly. They refer to all the factors that have an impact on your judgments about the hypotheses. Do not limit yourself to concrete evidence in the current intelligence reporting. Also include your own assumptions or logical deductions about another person’s or group’s or country’s intentions, goals, or standard procedures. These assumptions may generate strong preconceptions as to which hypothesis is most likely. Such assumptions often drive your final judgment, so it is important to include them in the list of “evidence.”

First, list the general evidence that applies to all the hypotheses. Then consider each hypothesis individually, listing factors that tend to support or contradict each one. You will commonly find that each hypothesis leads you to ask different questions and, therefore, to seek out somewhat different evidence.

Step 3

Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments–that is, identify which items are most helpful in judging the relative likelihood of alternative hypotheses.

Step 3 is perhaps the most important element of this analytical procedure. It is also the step that differs most from the natural, intuitive approach to analysis, and, therefore, the step you are most likely to overlook or misunderstand.

The procedure for Step 3 is to take the hypotheses from Step 1 and the evidence and arguments from Step 2 and put this information into a matrix format, with the hypotheses across the top and evidence and arguments down the side. This gives an overview of all the significant components of your analytical problem.

Then analyze how each piece of evidence relates to each hypothesis. This differs from the normal procedure, which is to look at one hypothesis at a time in order to consider how well the evidence supports that hypothesis. That will be done later, in Step 5. At this point, in Step 3, take one item of evidence at a time, then consider how consistent that evidence is with each of the hypotheses. Here is how to remember this distinction. In Step 3, you work across the rows of the matrix, examining one item of evidence at a time to see how consistent that item of evidence is with each of the hypotheses. In Step 5, you work down the columns of the matrix, examining one hypothesis at a time, to see how consistent that hypothesis is with all the evidence.

To fill in the matrix, take the first item of evidence and ask whether it is consistent with, inconsistent with, or irrelevant to each hypothesis. Then make a notation accordingly in the appropriate cell under each hypothesis in the matrix. The form of these notations in the matrix is a matter of personal preference. It may be pluses, minuses, and question marks. It may be C, I, and N/A standing for consistent, inconsistent, or not applicable. Or it may be some textual notation. In any event, it will be a simplification, a shorthand representation of the complex reasoning that went on as you thought about how the evidence relates to each hypothesis.

After doing this for the first item of evidence, then go on to the next item of evidence and repeat the process until all cells in the matrix are filled.

The matrix format helps you weigh the diagnosticity of each item of evidence, which is a key difference between analysis of competing hypotheses and traditional analysis.

Evidence is diagnostic when it influences your judgment on the relative likelihood of the various hypotheses identified in Step 1. If an item of evidence seems consistent with all the hypotheses, it may have no diagnostic value. A common experience is to discover that most of the evidence supporting what you believe is the most likely hypothesis really is not very helpful, because that same evidence is also consistent with other hypotheses. When you do identify items that are highly diagnostic, these should drive your judgment.
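As a rough sketch of how Steps 3 and 4 might look in practice, consider the following Python fragment. The hypotheses, evidence, and ratings are invented for illustration; the point is only the mechanics: hypotheses across the top, evidence down the side, and a check for items whose ratings do not differ across the hypotheses and therefore have no diagnostic value.

    # Hypothetical hypotheses (matrix columns) and evidence (matrix rows).
    hypotheses = ["H1: routine exercise", "H2: preparation for attack", "H3: deception"]

    # One consistency rating per hypothesis for each item of evidence:
    # "C" (consistent), "I" (inconsistent), or "N/A" (not applicable).
    matrix = {
        "E1: troop movements near border": ["C", "C", "C"],  # fits everything
        "E2: no mobilization of reserves": ["C", "I", "C"],
        "E3: unusual communications silence": ["I", "C", "C"],
    }

    def is_diagnostic(ratings):
        """An item is diagnostic only if its ratings differ across hypotheses."""
        return len(set(ratings)) > 1

    # Step 4: flag items with no diagnostic value for deletion from the matrix.
    for item, ratings in matrix.items():
        note = "" if is_diagnostic(ratings) else "   <- no diagnostic value"
        print(item, ratings, note)

Here E1, although it may have been what first drew attention to the problem, says nothing about which hypothesis is more likely–exactly the situation described above.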

Step 4

Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.

The exact wording of the hypotheses is obviously critical to the conclusions one can draw from the analysis. By this point, you will have seen how the evidence breaks out under each hypothesis, and it will often be appropriate to reconsider and reword the hypotheses. Are there hypotheses that need to be added, or finer distinctions that need to be made in order to consider all the significant alternatives? If there is little or no evidence that helps distinguish between two hypotheses, should they be combined into one?

Also reconsider the evidence. Is your thinking about which hypotheses are most likely and least likely influenced by factors that are not included in the listing of evidence? If so, put them in. Delete from the matrix items of evidence or assumptions that now seem unimportant or have no diagnostic value. Save these items in a separate list as a record of information that was considered.

Step 5

Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove hypotheses rather than prove them.

In Step 3, you worked across the matrix, focusing on a single item of evidence or argument and examining how it relates to each hypothesis. Now, work down the matrix, looking at each hypothesis as a whole. The matrix format gives an overview of all the evidence for and against all the hypotheses, so that you can examine all the hypotheses together and have them compete against each other for your favor.

In evaluating the relative likelihood of alternative hypotheses, start by looking for evidence or logical deductions that enable you to reject hypotheses, or at least to determine that they are unlikely. A fundamental precept of the scientific method is to proceed by rejecting or eliminating hypotheses, while tentatively accepting only those hypotheses that cannot be refuted. The scientific method obviously cannot be applied in toto to intuitive judgment, but the principle of seeking to disprove hypotheses, rather than confirm them, is useful.

No matter how much information is consistent with a given hypothesis, one cannot prove that hypothesis is true, because the same information may also be consistent with one or more other hypotheses. On the other hand, a single item of evidence that is inconsistent with a hypothesis may be sufficient grounds for rejecting that hypothesis.

People have a natural tendency to concentrate on confirming hypotheses they already believe to be true, and they commonly give more weight to information that supports a hypothesis than to information that weakens it. This is wrong; we should do just the opposite. Step 5 again requires doing the opposite of what comes naturally.

In examining the matrix, look at the minuses, or whatever other notation you used to indicate evidence that may be inconsistent with a hypothesis. The hypothesis with the fewest minuses is probably the most likely one. The hypothesis with the most minuses is probably the least likely one. The fact that a hypothesis is inconsistent with the evidence is certainly a sound basis for rejecting it. The pluses, indicating evidence that is consistent with a hypothesis, are far less significant. It does not follow that the hypothesis with the most pluses is the most likely one, because a long list of evidence that is consistent with almost any reasonable hypothesis can be easily made. What is difficult to find, and is most significant when found, is hard evidence that is clearly inconsistent with a reasonable hypothesis.
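Continuing the hypothetical matrix sketched under Step 3, the tally of minuses described above might look like this in Python. The counts order the hypotheses, but, as the next paragraph stresses, the analyst rather than the tally makes the judgment.

    hypotheses = ["H1: routine exercise", "H2: preparation for attack", "H3: deception"]
    matrix = {
        "E2: no mobilization of reserves": ["C", "I", "C"],
        "E3: unusual communications silence": ["I", "C", "C"],
    }  # E1 was deleted in Step 4 as non-diagnostic

    # Count, for each hypothesis, how many items are inconsistent with it.
    inconsistency = {h: 0 for h in hypotheses}
    for ratings in matrix.values():
        for h, rating in zip(hypotheses, ratings):
            if rating == "I":
                inconsistency[h] += 1

    # The hypothesis with the fewest "I" ratings is tentatively the most likely.
    for h, count in sorted(inconsistency.items(), key=lambda kv: kv[1]):
        print(h, "- inconsistent items:", count)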

The matrix should not dictate the conclusion to you. Rather, it should accurately reflect your judgment of what is important and how these important factors relate to the probability of each hypothesis. You, not the matrix, must make the decision. The matrix serves only as an aid to thinking and analysis, to ensure consideration of all the possible interrelationships between evidence and hypotheses and identification of those few items that really swing your judgment on the issue.

If following this procedure has caused you to consider things you might otherwise have overlooked, or has caused you to revise your earlier estimate of the relative probabilities of the hypotheses, then the procedure has served a useful purpose. When you are done, the matrix serves as a shorthand record of your thinking and as an audit trail showing how you arrived at your conclusion.

This procedure forces you to spend more analytical time than you otherwise would on what you had thought were the less likely hypotheses. This is desirable. The seemingly less likely hypotheses usually involve plowing new ground and, therefore, require more work. What you started out thinking was the most likely hypothesis tends to be based on a continuation of your own past thinking. A principal advantage of the analysis of competing hypotheses is that it forces you to give a fairer shake to all the alternatives.

Step 6

Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.

If there is any concern at all about denial and deception, this is an appropriate place to consider that possibility. Look at the sources of your key evidence. Are any of the sources known to the authorities in the foreign country? Could the information have been manipulated? Put yourself in the shoes of a foreign deception planner to evaluate motive, opportunity, means, costs, and benefits of deception as they might appear to the foreign country.

When analysis turns out to be wrong, it is often because of key assumptions that went unchallenged and proved invalid. It is a truism that analysts should identify and question assumptions, but this is much easier said than done. The problem is to determine which assumptions merit questioning. One advantage of the ACH procedure is that it tells you what needs to be rechecked.

In Step 6 you may decide that additional research is needed to check key judgments. For example, it may be appropriate to go back to check original source materials rather than relying on someone else’s interpretation. In writing your report, it is desirable to identify critical assumptions that went into your interpretation and to note that your conclusion is dependent upon the validity of these assumptions.

Step 7

Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.

If your report is to be used as the basis for decisionmaking, it will be helpful for the decisionmaker to know the relative likelihood of all the alternative possibilities. Analytical judgments are never certain. There is always a good possibility of their being wrong. Decisionmakers need to make decisions on the basis of a full set of alternative possibilities, not just the single most likely alternative. Contingency or fallback plans may be needed in case one of the less likely alternatives turns out to be true.

When one recognizes the importance of proceeding by eliminating rather than confirming hypotheses, it becomes apparent that any written argument for a certain judgment is incomplete unless it also discusses alternative judgments that were considered and why they were rejected. In the past, at least, this was seldom done.

Step 8

Identify milestones for future observation that may indicate events are taking a different course than expected.

Analytical conclusions should always be regarded as tentative. The situation may change, or it may remain unchanged while you receive new information that alters your appraisal. It is always helpful to specify in advance things one should look for or be alert to that, if observed, would suggest a significant change in the probabilities. This is useful for intelligence consumers who are following the situation on a continuing basis. Specifying in advance what would cause you to change your mind will also make it more difficult for you to rationalize such developments, if they occur, as not really requiring any modification of your judgment.

Summary and Conclusion

Three key elements distinguish analysis of competing hypotheses from conventional intuitive analysis.

  • Analysis starts with a full set of alternative possibilities, rather than with a most likely alternative for which the analyst seeks confirmation. This ensures that alternative hypotheses receive equal treatment and a fair shake.
  • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored.
  • Analysis of competing hypotheses involves seeking evidence to refute hypotheses. The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis.

A principal lesson is this. Whenever an intelligence analyst is tempted to write the phrase “there is no evidence that …,” the analyst should ask this question: If this hypothesis is true, can I realistically expect to see evidence of it? In other words, if India were planning nuclear tests while deliberately concealing its intentions, could the analyst realistically expect to see evidence of test planning? The ACH procedure leads the analyst to identify and face these kinds of questions.

There is no guarantee that ACH or any other procedure will produce a correct answer. The result, after all, still depends on fallible intuitive judgment applied to incomplete and ambiguous information. Analysis of competing hypotheses does, however, guarantee an appropriate process of analysis. This procedure leads you through a rational, systematic process that avoids some common analytical pitfalls. It increases the odds of getting the right answer, and it leaves an audit trail showing the evidence used in your analysis and how this evidence was interpreted. If others disagree with your judgment, the matrix can be used to highlight the precise area of disagreement. Subsequent discussion can then focus productively on the ultimate source of the differences.

The ACH procedure has the offsetting advantage of focusing attention on the few items of critical evidence that cause the uncertainty or that, if they were available, would alleviate it. This can guide future collection, research, and analysis to resolve the uncertainty and produce a more accurate judgment.

PART THREE–COGNITIVE BIASES

Chapter 9
What Are Cognitive Biases?

This mini-chapter discusses the nature of cognitive biases in general. The four chapters that follow it describe specific cognitive biases in the evaluation of evidence, perception of cause and effect, estimation of probabilities, and evaluation of intelligence reporting.

Cognitive biases are mental errors caused by our simplified information processing strategies. It is important to distinguish cognitive biases from other forms of bias, such as cultural bias, organizational bias, or bias that results from one’s own self-interest. In other words, a cognitive bias does not result from any emotional or intellectual predisposition toward a certain judgment, but rather from subconscious mental procedures for processing information. A cognitive bias is a mental error that is consistent and predictable.

Cognitive biases are similar to optical illusions in that the error remains compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not produce a more accurate perception. Cognitive biases, therefore, are exceedingly difficult to overcome.

Chapter 10
Biases in Evaluation of Evidence

Evaluation of evidence is a crucial step in analysis, but what evidence people rely on and how they interpret it are influenced by a variety of extraneous factors. Information presented in vivid and concrete detail often has unwarranted impact, and people tend to disregard abstract or statistical information that may have greater evidential value. We seldom take the absence of evidence into account. The human mind is also oversensitive to the consistency of the evidence, and insufficiently sensitive to the reliability of the evidence. Finally, impressions often remain even after the evidence on which they are based has been totally discredited.

The intelligence analyst works in a somewhat unique informational environment. Evidence comes from an unusually diverse set of sources: newspapers and wire services, observations by American Embassy officers, reports from controlled agents and casual informants, information exchanges with foreign governments, photo reconnaissance, and communications intelligence. Each source has its own unique strengths, weaknesses, potential or actual biases, and vulnerability to manipulation and deception. The most salient characteristic of the information environment is its diversity–multiple sources, each with varying degrees of reliability, and each commonly reporting information which by itself is incomplete and sometimes inconsistent or even incompatible with reporting from other sources. Conflicting information of uncertain reliability is endemic to intelligence analysis, as is the need to make rapid judgments on current events even before all the evidence is in.

The Vividness Criterion

The impact of information on the human mind is only imperfectly related to its true value as evidence. Specifically, information that is vivid, concrete, and personal has a greater impact on our thinking than pallid, abstract information that may actually have substantially greater value as evidence. For example:

  • Information that people perceive directly, that they hear with their own ears or see with their own eyes, is likely to have greater impact than information received secondhand that may have greater evidential value.
  • Case histories and anecdotes will have greater impact than more informative but abstract aggregate or statistical data.

Events that people experience personally are more memorable than those they only read about. Concrete words are easier to remember than abstract words, and words of all types are easier to recall than numbers. In short, information having the qualities cited in the preceding paragraph is more likely to attract and hold our attention. It is more likely to be stored and remembered than abstract reasoning or statistical summaries, and therefore can be expected to have a greater immediate effect as well as a continuing impact on our thinking in the future.

Personal observations by intelligence analysts and agents can be as deceptive as secondhand accounts. Most individuals visiting foreign countries become familiar with only a small sample of people representing a narrow segment of the total society. Incomplete and distorted perceptions are a common result.

Similarly, a “man-who” example–a vivid report of a single case–seldom merits the evidential weight intended by the person citing the example, or the weight often accorded to it by the recipient.

The most serious implication of vividness as a criterion that determines the impact of evidence is that certain kinds of very valuable evidence will have little influence simply because they are abstract. Statistical data, in particular, lack the rich and concrete detail to evoke vivid images, and they are often overlooked, ignored, or minimized.

For example, the Surgeon General’s report linking cigarette smoking to cancer should have, logically, caused a decline in per-capita cigarette consumption. No such decline occurred for more than 20 years. The reaction of physicians was particularly informative. All doctors were aware of the statistical evidence and were more exposed than the general population to the health problems caused by smoking. How they reacted to this evidence depended upon their medical specialty. Twenty years after the Surgeon General’s report, radiologists who examine lung x-rays every day had the lowest rate of smoking. Physicians who diagnosed and treated lung cancer victims were also quite unlikely to smoke. Many other types of physicians continued to smoke. The probability that a physician continued to smoke was directly related to the distance of the physician’s specialty from the lungs. In other words, even physicians, who were well qualified to understand and appreciate the statistical data, were more influenced by their vivid personal experiences than by valid statistical data.

Absence of Evidence

A principal characteristic of intelligence analysis is that key information is often lacking. Analytical problems are selected on the basis of their importance and the perceived needs of the consumers, without much regard for availability of information. Analysts have to do the best they can with what they have, somehow taking into account the fact that much relevant information is known to be missing.

Ideally, intelligence analysts should be able to recognize what relevant evidence is lacking and factor this into their calculations. They should also be able to estimate the potential impact of the missing data and to adjust confidence in their judgment accordingly. Unfortunately, this ideal does not appear to be the norm. Experiments suggest that “out of sight, out of mind” is a better description of the impact of gaps in the evidence.

This problem has been demonstrated using fault trees, which are schematic drawings showing all the things that might go wrong with any endeavor. Fault trees are often used to study the fallibility of complex systems such as a nuclear reactor or space capsule. In one experiment, test subjects were shown a fault tree of the reasons why a car might fail to start; when major branches of the tree were omitted, the subjects largely failed to recognize how much relevant information was missing.

Missing data is normal in intelligence problems, but it is probably more difficult to recognize that important information is absent and to incorporate this fact into judgments on intelligence questions than in the more concrete “car won’t start” experiment.

Oversensitivity to Consistency

The internal consistency in a pattern of evidence helps determine our confidence in judgments based on that evidence. In one sense, consistency is clearly an appropriate guideline for evaluating evidence. People formulate alternative explanations or estimates and select the one that encompasses the greatest amount of evidence within a logically consistent scenario. Under some circumstances, however, consistency can be deceptive. Information may be consistent only because it is highly correlated or redundant, in which case many related reports may be no more informative than a single report. Or it may be consistent only because information is drawn from a very small sample or a biased sample.

If the available evidence is consistent, analysts will often overlook the fact that it represents a very small and hence unreliable sample taken from a large and heterogeneous group. This is not simply a matter of necessity–of having to work with the information on hand, however imperfect it may be. Rather, there is an illusion of validity caused by the consistency of the information.

The tendency to place too much reliance on small samples has been dubbed the “law of small numbers.” This is a parody on the law of large numbers, the basic statistical principle that says very large samples will be highly representative of the population from which they are drawn. This is the principle that underlies opinion polling, but most people are not good intuitive statisticians. People do not have much intuitive feel for how large a sample has to be before they can draw valid conclusions from it. The so-called law of small numbers means that, intuitively, we make the mistake of treating small samples as though they were large ones.
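The effect is easy to demonstrate by simulation. In the minimal Python sketch below, the true population rate (60 percent) is assumed purely for illustration; repeated samples of different sizes show how widely the estimates from small samples scatter around the true value.

    import random

    random.seed(1)
    TRUE_RATE = 0.60  # assumed population proportion, for illustration only

    for n in (5, 20, 500):
        # 1,000 sample proportions, each computed from a sample of size n.
        estimates = [
            sum(random.random() < TRUE_RATE for _ in range(n)) / n
            for _ in range(1000)
        ]
        print(f"sample size {n:3d}: estimates range from "
              f"{min(estimates):.2f} to {max(estimates):.2f}")

With samples of five, the observed rate ranges from near zero to 100 percent; with samples of 500, it stays close to the true 60 percent. Intuition treats the small samples as though they were as trustworthy as the large ones.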

Coping with Evidence of Uncertain Accuracy

There are many reasons why information often is less than perfectly accurate: misunderstanding, misperception, or having only part of the story; bias on the part of the ultimate source; distortion in the reporting chain from subsource through source, case officer, reports officer, to analyst; or misunderstanding and misperception by the analyst. Further, much of the evidence analysts bring to bear in conducting analysis is retrieved from memory, but analysts often cannot remember even the source of information they have in memory let alone the degree of certainty they attributed to the accuracy of that information when it was first received.

The human mind has difficulty coping with complicated probabilistic relationships, so people tend to employ simple rules of thumb that reduce the burden of processing such information. In processing information of uncertain accuracy or reliability, analysts tend to make a simple yes or no decision. If they reject the evidence, they tend to reject it fully, so it plays no further role in their mental calculations. If they accept the evidence, they tend to accept it wholly, ignoring the probabilistic nature of the accuracy or reliability judgment. This is called a “best guess” strategy.

A more sophisticated strategy is to make a judgment based on an assumption that the available evidence is perfectly accurate and reliable, then reduce the confidence in this judgment by a factor determined by the assessed validity of the information. For example, available evidence may indicate that an event probably (75 percent) will occur, but the analyst cannot be certain that the evidence on which this judgment is based is wholly accurate or reliable.
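A minimal numerical sketch of this strategy, with assumed figures: a 75-percent judgment resting on evidence assessed as only 80 percent likely to be accurate. The simple product ignores whatever probability the event would retain if the evidence were invalid, so it is a conservative shortcut rather than a full probability calculation.

    p_event_if_evidence_valid = 0.75  # judgment assuming the evidence is accurate
    p_evidence_valid = 0.80           # assessed validity of the evidence

    best_guess = p_event_if_evidence_valid  # "best guess": evidence accepted wholly
    discounted = p_event_if_evidence_valid * p_evidence_valid  # confidence reduced

    print(f"best-guess strategy: {best_guess:.0%}")  # 75%
    print(f"discounted judgment: {discounted:.0%}")  # 60%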

The same processes may also affect our reaction to information that is plausible but known from the beginning to be of questionable authenticity. Ostensibly private statements by foreign officials are often reported through intelligence channels. In many instances it is not clear whether such a private statement by a foreign ambassador, cabinet member, or other official is an actual statement of private views, an indiscretion, part of a deliberate attempt to deceive the US Government, or part of an approved plan to convey a truthful message that the foreign government believes is best transmitted through informal channels.

Knowing that the information comes from an uncontrolled source who may be trying to manipulate us does not necessarily reduce the impact of the information.

Persistence of Impressions Based on Discredited Evidence

Impressions tend to persist even after the evidence that created those impressions has been fully discredited. Psychologists have become interested in this phenomenon because many of their experiments require that the test subjects be deceived. For example, test subjects may be made to believe they were successful or unsuccessful in performing some task, or that they possess certain abilities or personality traits, when this is not in fact the case. Professional ethics require that test subjects be disabused of these false impressions at the end of the experiment, but this has proved surprisingly difficult to achieve.

Test subjects’ erroneous impressions concerning their logical problem-solving abilities persevered even after they were informed that manipulation of good or poor teaching performance had virtually guaranteed their success or failure.

An interesting but speculative explanation is based on the strong tendency to seek causal explanations, as discussed in the next chapter. When evidence is first received, people postulate a set of causal connections that explains this evidence. The stronger the perceived causal linkage, the stronger the impression created by the evidence.

Colloquially, one might say that once information rings a bell, the bell cannot be unrung.

The ambiguity of most real-world situations contributes to the operation of this perseverance phenomenon. Rarely in the real world is evidence so thoroughly discredited as is possible in the experimental laboratory. Imagine, for example, that you are told that a clandestine source who has been providing information for some time is actually under hostile control. Even then, it is seldom clear which, if any, of the source’s individual reports were false, so the impressions based on that reporting are likely to persist.

Chapter 11
Biases in Perception of Cause and Effect

Judgments about cause and effect are necessary to explain the past, understand the present, and estimate the future. These judgments are often biased by factors over which people exercise little conscious control, and this can influence many types of judgments made by intelligence analysts. Because of a need to impose order on our environment, we seek and often believe we find causes for what are actually accidental or random phenomena. People overestimate the extent to which other countries are pursuing a coherent, coordinated, rational plan, and thus also overestimate their own ability to predict future events in those nations. People also tend to assume that causes are similar to their effects, in the sense that important or large effects must have large causes.

When inferring the causes of behavior, too much weight is accorded to personal qualities and dispositions of the actor and not enough to situational determinants of the actor’s behavior. People also overestimate their own importance as both a cause and a target of the behavior of others. Finally, people often perceive relationships that do not in fact exist, because they do not have an intuitive understanding of the kinds and amount of information needed to prove a relationship.

There are several modes of analysis by which one might infer cause and effect. In more formal analysis, inferences are made through procedures that collectively comprise the scientific method. The scientist advances a hypothesis, then tests this hypothesis by the collection and statistical analysis of data on many instances of the phenomenon in question. Even then, causality cannot be proved beyond all possible doubt. The scientist seeks to disprove a hypothesis, not to confirm it. A hypothesis is accepted only when it cannot be rejected.

Collection of data on many comparable cases to test hypotheses about cause and effect is not feasible for most questions of interest to the Intelligence Community, especially questions of broad political or strategic import relating to another country’s intentions. To be sure, it is feasible more often than it is done, and increased use of scientific procedures in political, economic, and strategic research is much to be encouraged. But the fact remains that the dominant approach to intelligence analysis is necessarily quite different. It is the approach of the historian rather than the scientist, and this approach presents obstacles to accurate inferences about causality.

The key ideas here are coherence and narrative. These are the principles that guide the organization of observations into meaningful structures and patterns. The historian commonly observes only a single case, not a pattern of covariation (when two things are related so that change in one is associated with change in the other) in many comparable cases. Moreover, the historian observes simultaneous changes in so many variables that the principle of covariation generally is not helpful in sorting out the complex relationships among them. The narrative story, on the other hand, offers a means of organizing the rich complexity of the historian’s observations. The historian uses imagination to construct a coherent story out of fragments of data.

The intelligence analyst employing the historical mode of analysis is essentially a storyteller.

He or she constructs a plot from the previous events, and this plot then dictates the possible endings of the incomplete story. The plot is formed of the “dominant concepts or leading ideas” that the analyst uses to postulate patterns of relationships among the available data. The analyst is not, of course, preparing a work of fiction. There are constraints on the analyst’s imagination, but imagination is nonetheless involved because there is an almost unlimited variety of ways in which the available data might be organized to tell a meaningful story. The constraints are the available evidence and the principle of coherence. The story must form a logical and coherent whole and be internally consistent as well as consistent with the available evidence.

Recognizing that the historical or narrative mode of analysis involves telling a coherent story helps explain the many disagreements among analysts, inasmuch as coherence is a subjective concept. It assumes some prior beliefs or mental model about what goes with what. More relevant to this discussion, the use of coherence rather than scientific observation as the criterion for judging truth leads to biases that presumably influence all analysts to some degree. Judgments of coherence may be influenced by many extraneous factors, and if analysts tend to favor certain types of explanations as more coherent than others, they will be biased in favor of those explanations.

Bias in Favor of Causal Explanations

One bias attributable to the search for coherence is a tendency to favor causal explanations. Coherence implies order, so people naturally arrange observations into regular patterns and relationships. If no pattern is apparent, our first thought is that we lack understanding, not that we are dealing with random phenomena that have no purpose or reason.

Such considerations suggest that in military and foreign affairs, where the patterns are at best difficult to fathom, there may be many events for which there are no valid causal explanations. This certainly affects the predictability of events and suggests limitations on what might logically be expected of intelligence analysts.

Bias Favoring Perception of Centralized Direction

Very similar to the bias toward causal explanations is a tendency to see the actions of other governments (or groups of any type) as the intentional result of centralized direction and planning. “…most people are slow to perceive accidents, unintended consequences, coincidences, and small causes leading to large effects. Instead, coordinated actions, plans and conspiracies are seen.” Analysts overestimate the extent to which other countries are pursuing coherent, rational, goal-maximizing policies, because this makes for more coherent, logical, rational explanations. This bias also leads analysts and policymakers alike to overestimate the predictability of future events in other countries.

But a focus on such causes implies a disorderly world in which outcomes are determined more by chance than purpose. It is especially difficult to incorporate these random and usually unpredictable elements into a coherent narrative, because evidence is seldom available to document them on a timely basis. It is only in historical perspective, after memoirs are written and government documents released, that the full story becomes available.

This bias has important consequences. Assuming that a foreign government’s actions result from a logical and centrally directed plan leads an analyst to:

  • Have expectations regarding that government’s actions that may not be fulfilled if the behavior is actually the product of shifting or inconsistent values, bureaucratic bargaining, or sheer confusion and blunder.
  • Draw far-reaching but possibly unwarranted inferences from isolated statements or actions by government officials who may be acting on their own rather than on central direction.
  • Overestimate the United States’ ability to influence the other government’s actions.
  • Perceive inconsistent policies as the result of duplicity and Machiavellian maneuvers, rather than as the product of weak leadership, vacillation, or bargaining among diverse bureaucratic or political interests.

Similarity of Cause and Effect

When systematic analysis of covariation is not feasible and several alternative causal explanations seem possible, one rule of thumb people use to make judgments of cause and effect is to consider the similarity between attributes of the cause and attributes of the effect. Properties of the cause are “…inferred on the basis of being correspondent with or similar to properties of the effect.” Heavy things make heavy noises; dainty things move daintily; large animals leave large tracks. When dealing with physical properties, such inferences are generally correct.

The tendency to reason according to similarity of cause and effect is frequently found in conjunction with the previously noted bias toward inferring centralized direction. Together, they explain the persuasiveness of conspiracy theories. Such theories are invoked to explain large effects for which there do not otherwise appear to be correspondingly large causes.

Intelligence analysts are more exposed than most people to hard evidence of real plots, coups, and conspiracies in the international arena. Despite this–or perhaps because of it–most intelligence analysts are not especially prone to what are generally regarded as conspiracy theories. Although analysts may not exhibit this bias in such extreme form, the bias presumably does influence analytical judgments in myriad little ways. In examining causal relationships, analysts generally construct causal explanations that are somehow commensurate with the magnitude of their effects and that attribute events to human purposes or predictable forces rather than to human weakness, confusion, or unintended consequences.

Internal vs. External Causes of Behavior

Much research into how people assess the causes of behavior employs a basic dichotomy between internal determinants and external determinants of human actions. Internal causes of behavior include a person’s attitudes, beliefs, and personality. External causes include incentives and constraints, role requirements, social pressures, or other forces over which the individual has little control. The research examines the circumstances under which people attribute behavior either to stable dispositions of the actor or to characteristics of the situation to which the actor responds.

Differences in judgments about what causes another person’s or government’s behavior affect how people respond to that behavior. How people respond to friendly or unfriendly actions by others may be quite different if they attribute the behavior to the nature of the person or government than if they see the behavior as resulting from situational constraints over which the person or government has little control.

Not enough weight is assigned to external circumstances that may have influenced the other person’s choice of behavior. This pervasive tendency has been demonstrated in many experiments under quite diverse circumstances and has often been observed in diplomatic and military interactions.

Susceptibility to this biased attribution of causality depends upon whether people are examining their own behavior or observing that of others. It is the behavior of others that people tend to attribute to the nature of the actor, whereas they see their own behavior as conditioned almost entirely by the situation in which they find themselves. This difference is explained largely by differences in information available to actors and observers. People know a lot more about themselves.

The actor has a detailed awareness of the history of his or her own actions under similar circumstances. In assessing the causes of our own behavior, we are likely to consider our previous behavior and focus on how it has been influenced by different situations. Thus situational variables become the basis for explaining our own behavior. This contrasts with the observer, who typically lacks this detailed knowledge of the other person’s past behavior. The observer is inclined to focus on how the other person’s behavior compares with the behavior of others under similar circumstances.

This difference in the type and amount of information available to actors and observers applies to governments as well as people. An actor’s personal involvement with the actions being observed enhances the likelihood of bias. “Where the observer is also an actor, he is likely to exaggerate the uniqueness and emphasize the dispositional origins of the responses of others to his own actions.”

The persistent tendency to attribute cause and effect in this manner is not simply the consequence of self-interest or propaganda by the opposing sides. Rather, it is the readily understandable and predictable result of how people normally attribute causality under many different circumstances.

As a general rule, biased attribution of causality helps sow the seeds of mistrust and misunderstanding between people and between governments. We tend to have quite different perceptions of the causes of each other’s behavior.

Overestimating Our Own Importance

Individuals and governments tend to overestimate the extent to which they successfully influence the behavior of others. This is an exception to the previously noted generalization that observers attribute the behavior of others to the nature of the actor. It occurs largely because a person is so familiar with his or her own efforts to influence another, but much less well informed about other factors that may have influenced the other’s decision.

In estimating the influence of US policy on the actions of another government, analysts more often than not will be knowledgeable of US actions and what they are intended to achieve, but in many instances they will be less well informed concerning the internal processes, political pressures, policy conflicts, and other influences on the decision of the target government.

Illusory Correlation

At the start of this chapter, covariation was cited as one basis for inferring causality. It was noted that covariation may either be observed intuitively or measured statistically. This section examines the extent to which the intuitive perception of covariation deviates from the statistical measurement of covariation.

Statistical measurement of covariation is known as correlation. Two events are correlated when the existence of one event implies the existence of the other. Variables are correlated when a change in one variable implies a similar degree of change in another. Correlation alone does not necessarily imply causation. For example, two events might co-occur because they have a common cause, rather than because one causes the other. But when two events or changes do co-occur, and the time sequence is such that one always follows the other, people often infer that the first caused the second. Thus, inaccurate perception of correlation leads to inaccurate perception of cause and effect.

Judgments about correlation are fundamental to all intelligence analysis. For example, assumptions that worsening economic conditions lead to increased political support for an opposition party, that domestic problems may lead to foreign adventurism, that military government leads to unraveling of democratic institutions, or that negotiations are more successful when conducted from a position of strength are all based on intuitive judgments of correlation between these variables. In many cases these assumptions are correct, but they are seldom tested by systematic observation and statistical analysis.

Much intelligence analysis is based on common-sense assumptions about how people and governments normally behave. The problem is that people possess a great facility for invoking contradictory “laws” of behavior to explain, predict, or justify different actions occurring under similar circumstances. “Haste makes waste” and “He who hesitates is lost” are examples of inconsistent explanations and admonitions. They make great sense when used alone and leave us looking foolish when presented together. “Appeasement invites aggression” and “agreement is based upon compromise” are similarly contradictory expressions.

When confronted with such apparent contradictions, the natural defense is that “it all depends on. …” Recognizing the need for such qualifying statements is one of the differences between subconscious information processing and systematic, self-conscious analysis. Knowledgeable analysis might be identified by the ability to fill in the qualification; careful analysis by the frequency with which one remembers to do so.

Of the 86 test subjects involved in several runs of this experiment, not a single one showed any intuitive understanding of the concept of correlation. That is, no one understood that to make a proper judgment about the existence of a relationship, one must have information on all four cells of the table.

Let us now consider a similar question of correlation on a topic of interest to intelligence analysts. What are the characteristics of strategic deception and how can analysts detect it? In studying deception, one of the important questions is: what are the correlates of deception? Historically, when analysts study instances of deception, what else do they see that goes along with it, that is somehow related to deception, and that might be interpreted as an indicator of deception? Are there certain practices relating to deception, or circumstances under which deception is most likely to occur, that permit one to say that, because we have seen x or y or z, this most likely means a deception plan is under way? This would be comparable to a doctor observing certain symptoms and concluding that a given disease may be present. This is essentially a problem of correlation. If one could identify several correlates of deception, this would significantly aid efforts to detect it.

The lesson to be learned is not that analysts should do a statistical analysis of every relationship. They usually will not have the data, time, or interest for that. But analysts should have a general understanding of what it takes to know whether a relationship exists. This understanding is definitely not a part of people’s intuitive knowledge. It does not come naturally. It has to be learned. When dealing with such issues, analysts have to force themselves to think about all four cells of the table and the data that would be required to fill each cell.
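To make the four-cell logic concrete, consider a minimal sketch; the counts below are invented for illustration. Looking only at the salient “deception present, indicator observed” cell suggests a relationship, while comparing rates across all four cells shows there is none.

```python
# Hypothetical 2x2 table: rows = deception present/absent,
# columns = indicator observed/not observed. Counts are invented.
cells = {
    ("deception", "indicator"): 15,
    ("deception", "no indicator"): 5,
    ("no deception", "indicator"): 60,
    ("no deception", "no indicator"): 20,
}

def indicator_rate(row):
    """Share of cases in a row where the indicator was observed."""
    seen = cells[(row, "indicator")]
    return seen / (seen + cells[(row, "no indicator")])

# The indicator appears 75% of the time with or without deception,
# so it has no diagnostic value despite the 15 memorable "hits".
print(indicator_rate("deception"), indicator_rate("no deception"))  # 0.75 0.75
```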

Even if analysts follow these admonitions, there are several factors that distort judgment when one does not follow rigorous scientific procedures in making and recording observations. These are factors that influence a person’s ability to recall examples that fit into the four cells. For example, people remember occurrences more readily than non-occurrences. “History is, by and large, a record of what people did, not what they failed to do.”

Many erroneous theories are perpetuated because they seem plausible and because people record their experience in a way that supports rather than refutes them.

Chapter 12
Biases in Estimating Probabilities

In making rough probability judgments, people commonly depend upon one of several simplified rules of thumb that greatly ease the burden of decision. Using the “availability” rule, people judge the probability of an event by the ease with which they can imagine relevant instances of similar events or the number of such events that they can easily remember. With the “anchoring” strategy, people pick some natural starting point for a first approximation and then adjust this figure based on the results of additional information or analysis. Typically, they do not adjust the initial judgment enough.

Expressions of probability, such as possible and probable, are a common source of ambiguity that makes it easier for a reader to interpret a report as consistent with the reader’s own preconceptions. The probability of a scenario is often miscalculated. Data on “prior probabilities” are commonly ignored unless they illuminate causal relationships.

Availability Rule

One simplified rule of thumb commonly used in making probability estimates is known as the availability rule. In this context, “availability” refers to imaginability or retrievability from memory. Psychologists have shown that two cues people use unconsciously in judging the probability of an event are the ease with which they can imagine relevant instances of the event and the number or frequency of such events that they can easily remember. People are using the availability rule of thumb whenever they estimate frequency or probability on the basis of how easily they can recall or imagine instances of whatever it is they are trying to estimate.

People are frequently led astray when the ease with which things come to mind is influenced by factors unrelated to their probability. The ability to recall instances of an event is influenced by how recently the event occurred, whether we were personally involved, whether there were vivid and memorable details associated with the event, and how important it seemed at the time. These and other factors that influence judgment are unrelated to the true probability of an event.

Intelligence analysts may be less influenced than others by the availability bias, insofar as they are evaluating all available information rather than making quick and easy inferences. On the other hand, policymakers and journalists who lack the time or access to evidence to go into details must necessarily take shortcuts. The obvious shortcut is to use the availability rule of thumb for making inferences about probability.

Many events of concern to intelligence analysts

…are perceived as so unique that past history does not seem relevant to the evaluation of their likelihood. In thinking of such events we often construct scenarios, i.e., stories that lead from the present situation to the target event. The plausibility of the scenarios that come to mind, or the difficulty of producing them, serve as clues to the likelihood of the event. If no reasonable scenario comes to mind, the event is deemed impossible or highly unlikely. If several scenarios come easily to mind, or if one scenario is particularly compelling, the event in question appears probable.

Many extraneous factors influence the imaginability of scenarios for future events, just as they influence the retrievability of events from memory. Curiously, one of these is the act of analysis itself. The act of constructing a detailed scenario for a possible future event makes that event more readily imaginable and, therefore, increases its perceived probability. This is the experience of CIA analysts who have used various tradecraft tools that require, or are especially suited to, the analysis of unlikely but nonetheless possible and important hypotheses.

In sum, the availability rule of thumb is often used to make judgments about likelihood or frequency. People would be hard put to do otherwise, inasmuch as it is such a timesaver in the many instances when more detailed analysis is not warranted or not feasible. Intelligence analysts, however, need to be aware when they are taking shortcuts. They must know the strengths and weaknesses of these procedures…

For intelligence analysts, recognition that they are employing the availability rule should raise a caution flag. Serious analysis of probability requires identification and assessment of the strength and interaction of the many variables that will determine the outcome of a situation.

Anchoring

Another strategy people seem to use intuitively and unconsciously to simplify the task of making judgments is called anchoring. Some natural starting point, perhaps from a previous analysis of the same subject or from some partial calculation, is used as a first approximation to the desired judgment. This starting point is then adjusted, based on the results of additional information or analysis. Typically, however, the starting point serves as an anchor or drag that reduces the amount of adjustment, so the final estimate remains closer to the starting point than it ought to be.

Whenever analysts move into a new analytical area and take over responsibility for updating a series of judgments or estimates made by their predecessors, the previous judgments may have such an anchoring effect. Even when analysts make their own initial judgment, and then attempt to revise this judgment on the basis of new information or further analysis, there is much evidence to suggest that they usually do not change the judgment enough.

Anchoring provides a partial explanation of experiments showing that analysts tend to be overly sure of themselves in setting confidence ranges. A military analyst who estimates future missile or tank production is often unable to give a specific figure as a point estimate.

Reasons for the anchoring phenomenon are not well understood. The initial estimate serves as a hook on which people hang their first impressions or the results of earlier calculations. In recalculating, they take this as a starting point rather than starting over from scratch, but why this should limit the range of subsequent reasoning is not clear.

There is some evidence that awareness of the anchoring problem is not an adequate antidote. This is a common finding in experiments dealing with cognitive biases. The biases persist even after test subjects are informed of them and instructed to try to avoid them or compensate for them.

One technique for avoiding the anchoring bias, to weigh anchor so to speak, may be to ignore one’s own or others’ earlier judgments and rethink a problem from scratch.

In other words, consciously avoid any prior judgment as a starting point. There is no experimental evidence to show that this is possible or that it will work, but it seems worth trying. Alternatively, it is sometimes possible to avoid human error by employing formal statistical procedures.

Expression of Uncertainty

Probabilities may be expressed in two ways. Statistical probabilities are based on empirical evidence concerning relative frequencies. Most intelligence judgments deal with one-of-a-kind situations for which it is impossible to assign a statistical probability. Another approach commonly used in intelligence analysis is to make a “subjective probability” or “personal probability” judgment. Such a judgment is an expression of the analyst’s personal belief that a certain explanation or estimate is correct. It is comparable to a judgment that a horse has a three-to-one chance of winning a race.

When intelligence conclusions are couched in ambiguous terms, a reader’s interpretation of the conclusions will be biased in favor of consistency with what the reader already believes.

The main point is that an intelligence report may have no impact on the reader if it is couched in such ambiguous language that the reader can easily interpret it as consistent with his or her own preconceptions. This ambiguity can be especially troubling when dealing with low-probability, high-impact dangers against which policymakers may wish to make contingency plans.

How can analysts express uncertainty without being unclear about how certain they are? Putting a numerical qualifier in parentheses after the phrase expressing degree of uncertainty is an appropriate means of avoiding misinterpretation. This may be an odds ratio (less than a one-in-four chance) or a percentage range (5 to 20 percent) or (less than 20 percent). Odds ratios are often preferable, as most people have a better intuitive understanding of odds than of percentages.
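A minimal sketch of the practice just recommended, with an invented phrase-to-number pairing (not an official standard): the verbal expression carries the message, and the parenthetical numbers pin it down.

```python
# Illustrative helper: attach a numeric qualifier to a verbal expression
# of uncertainty. The phrase/range pairing is an invented example.
def numeric_qualifier(p_low: float, p_high: float) -> str:
    """Render a probability range plus the equivalent odds phrase."""
    odds_n = round(1 / p_high)  # 0.20 -> "less than one chance in 5"
    return f"({p_low:.0%} to {p_high:.0%}; less than one chance in {odds_n})"

print("We judge an attack to be unlikely " + numeric_qualifier(0.05, 0.20) + ".")
# -> We judge an attack to be unlikely (5% to 20%; less than one chance in 5).
```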

Assessing Probability of a Scenario

Intelligence analysts sometimes present judgments in the form of a scenario–a series of events leading to an anticipated outcome. There is evidence that judgments concerning the probability of a scenario are influenced by the amount and nature of detail in the scenario in a way that is unrelated to the actual likelihood of the scenario.

A scenario consists of several events linked together in a narrative description. To calculate mathematically the probability of a scenario, the proper procedure is to multiply the probabilities of each individual event. Thus, for a scenario with three events, each of which will probably (70 percent certainty) occur, the probability of the scenario is .70 x .70 x .70 or slightly over 34 percent. Adding a fourth probable (70 percent) event to the scenario would reduce its probability to 24 percent.

Most people do not have a good intuitive grasp of probabilistic reasoning. One approach to simplifying such problems is to assume (or think as though) one or more probable events have already occurred. This eliminates some of the uncertainty from the judgment. Another common simplification is to base the judgment on a rough average of the probabilities of the individual events rather than multiplying them.

When the averaging strategy is employed, highly probable events in the scenario tend to offset less probable events. This violates the principle that a chain cannot be stronger than its weakest link. Mathematically, the least probable event in a scenario sets the upper limit on the probability of the scenario as a whole. If the averaging strategy is employed, additional details may be added to the scenario that are so plausible they increase the perceived probability of the scenario, while, mathematically, additional events must necessarily reduce its probability.
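The arithmetic above is easy to verify, and contrasting the correct product rule with the intuitive averaging strategy makes the bias visible. A minimal sketch: the 70-percent figures come from the text; the 0.9/0.9/0.3 chain is an invented example of the weakest-link principle.

```python
from math import prod

events = [0.70, 0.70, 0.70]

correct = prod(events)              # multiply: 0.343, slightly over 34%
flawed = sum(events) / len(events)  # average: 0.70 -- the intuitive error

print(f"{correct:.3f} vs average {flawed:.2f}")

# A fourth 70-percent event lowers the true probability to ~24%,
# while the (incorrect) average stays at 0.70.
print(f"{prod([0.70] * 4):.4f}")    # 0.2401

# Weakest link: with events 0.9, 0.9, 0.3 the scenario as a whole
# can be no more likely than its least probable event (0.3).
print(f"{prod([0.9, 0.9, 0.3]):.3f}")  # 0.243
```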

Base-Rate Fallacy

In assessing a situation, an analyst sometimes has two kinds of evidence available– specific evidence about the individual case at hand, and numerical data that summarize information about many similar cases. This type of numerical information is called a base rate or prior probability. The base-rate fallacy is that the numerical data are commonly ignored unless they illuminate a causal relationship.

Most people do not incorporate the prior probability into their reasoning because it does not seem relevant. It does not seem relevant because there is no causal relationship between the background information on the percentages of jet fighters in the area and the pilot’s observation.
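Bayes’ rule shows what ignoring the base rate costs. The numbers below are illustrative assumptions in the spirit of the fighter example (an 85/15 base rate and an identification that is correct 80 percent of the time), not figures from the text.

```python
# Hypothetical numbers: 85% of fighters in the area belong to country A,
# 15% to country B; the pilot's identification is correct 80% of the time.
base_rate_b = 0.15   # prior probability the fighter is B's
hit_rate = 0.80      # P(pilot says "B" | fighter is B)
false_alarm = 0.20   # P(pilot says "B" | fighter is A)

posterior = (base_rate_b * hit_rate) / (
    base_rate_b * hit_rate + (1 - base_rate_b) * false_alarm
)

# Ignoring the base rate, one would report ~80% confidence.
# Incorporating it, the probability the fighter really is B's is only ~41%.
print(f"{posterior:.2f}")  # 0.41
```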

The so-called planning fallacy, to which I personally plead guilty, is an example of a problem in which base rates are not given in numerical terms but must be abstracted from experience. In planning a research project, I may estimate being able to complete it in four weeks. This estimate is based on relevant case-specific evidence: desired length of report, availability of source materials, difficulty of the subject matter, allowance for both predictable and unforeseeable interruptions, and so on. I also possess a body of experience with similar estimates I have made in the past. Like many others, I almost never complete a research project within the initially estimated time frame! But I am seduced by the immediacy and persuasiveness of the case-specific evidence. All the causally relevant evidence about the project indicates I should be able to complete the work in the time allotted for it. Even though I know from experience that this never happens, I do not learn from this experience. I continue to ignore the non-causal, probabilistic evidence based on many similar projects in the past, and to estimate completion dates that I hardly ever meet. (Preparation of this book took twice as long as I had anticipated. These biases are, indeed, difficult to avoid!)

Chapter 13

Hindsight Biases in Evaluation of Intelligence Reporting

Evaluations of intelligence analysis–analysts’ own evaluations of their judgments as well as others’ evaluations of intelligence products–are distorted by systematic biases. As a result, analysts overestimate the quality of their analytical performance, and others underestimate the value and quality of their efforts. These biases are not simply the product of self-interest and lack of objectivity. They stem from the nature of human mental processes and are difficult and perhaps impossible to overcome.

Hindsight biases influence the evaluation of intelligence reporting in three ways:

  • Analysts normally overestimate the accuracy of their past judgments.
  • Intelligence consumers normally underestimate how much they learned from intelligence reports.
  • Overseers of intelligence production who conduct postmortem analyses of an intelligence failure normally judge that events were more readily foreseeable than was in fact the case.

The analyst, consumer, and overseer evaluating analytical performance all have one thing in common. They are exercising hindsight. They take their current state of knowledge and compare it with what they or others did or could or should have known before the current knowledge was received. This is in sharp contrast with intelligence estimation, which is an exercise in foresight, and it is the difference between these two modes of thought–hindsight and foresight–that seems to be a source of bias.

An analyst’s intelligence judgments are not as good as analysts think they are, or as bad as others seem to believe. Because the biases generally cannot be overcome, they would appear to be facts of life that analysts need to take into account in evaluating their own performance and in determining what evaluations to expect from others. This suggests the need for a more systematic effort to:

  • Define what should be expected from intelligence analysts.
  • Develop an institutionalized procedure for comparing intelligence judgments and estimates with actual outcomes.
  • Measure how well analysts live up to the defined expectations.

The discussion now turns to the experimental evidence demonstrating these biases from the perspective of the analyst, consumer, and overseer of intelligence.

The Analyst’s Perspective

Analysts interested in improving their own performance need to evaluate their past estimates in the light of subsequent developments. To do this, analysts must either remember (or be able to refer to) their past estimates or must reconstruct their past estimates on the basis of what they remember having known about the situation at the time the estimates were made.

Experimental evidence suggests a systematic tendency toward faulty memory of past estimates. That is, when events occur, people tend to overestimate the extent to which they had previously expected them to occur. And conversely, when events do not occur, people tend to underestimate the probability they had previously assigned to their occurrence. In short, events generally seem less surprising than they should on the basis of past estimates. This experimental evidence accords with analysts’ intuitive experience. Analysts rarely appear–or allow themselves to appear–very surprised by the course of events they are following.

The Consumer’s Perspective

When consumers of intelligence reports evaluate the quality of the intelligence product, they ask themselves the question: “How much did I learn from these reports that I did not already know?” In answering this question, there is a consistent tendency for most people to underestimate the contribution made by new information.

People tend to underestimate both how much they learn from new information and the extent to which new information permits them to make correct judgments with greater confidence. To the extent that intelligence consumers manifest these same biases, they will tend to underrate the value to them of intelligence reporting.

The Overseer’s Perspective

An overseer, as the term is used here, is one who investigates intelligence performance by conducting a postmortem examination of a high-profile intelligence failure.

Such investigations are carried out by Congress, the Intelligence Community staff, and CIA or DI management. For those outside the executive branch who do not regularly read the intelligence product, this sort of retrospective evaluation of known intelligence failures is a principal basis for judgments about the quality of intelligence analysis.

A fundamental question posed in any postmortem investigation of intelligence failure is this: Given the information that was available at the time, should analysts have been able to foresee what was going to happen? Unbiased evaluation of intelligence performance depends upon the ability to provide an unbiased answer to this question.

The experiments reported in the following paragraphs tested the hypotheses that knowledge of an outcome increases the perceived inevitability of that outcome, and that people who are informed of the outcome are largely unaware that this information has changed their perceptions in this manner.

An average of all estimated outcomes in six sub-experiments (a total of 2,188 estimates by 547 subjects) indicates that the knowledge or belief that one of four possible outcomes has occurred approximately doubles the perceived probability of that outcome as judged with hindsight as compared with foresight.

The fact that outcome knowledge automatically restructures a person’s judgments about the relevance of available data is probably one reason it is so difficult to reconstruct how our thought processes were or would have been without this outcome knowledge.

These results indicate that overseers conducting postmortem evaluations of what analysts should have been able to foresee, given the available information, will tend to perceive the outcome of that situation as having been more predictable than was, in fact, the case. Because they are unable to reconstruct a state of mind that views the situation only with foresight, not hindsight, overseers will tend to be more critical of intelligence performance than is warranted.

Discussion of Experiments

Experiments that demonstrated these biases and their resistance to corrective action were conducted as part of a research program in decision analysis funded by the Defense Advanced Research Projects Agency. Unfortunately, the experimental subjects were students, not members of the Intelligence Community. There is, nonetheless, reason to believe the results can be generalized to apply to the Intelligence Community. The experiments deal with basic human mental processes, and the results do seem consistent with personal experience in the Intelligence Community. In similar kinds of psychological tests, in which experts, including intelligence analysts, were used as test subjects, the experts showed the same pattern of responses as students.

One would expect the biases to be even greater in foreign affairs professionals whose careers and self-esteem depend upon the presumed accuracy of their judgments.

Can We Overcome These Biases?

Analysts tend to blame biased evaluations of intelligence performance at best on ignorance and at worst on self-interest and lack of objectivity. Both these factors may also be at work, but the experiments suggest the nature of human mental processes is also a principal culprit. This is a more intractable cause than either ignorance or lack of objectivity.

In these experimental situations the biases were highly resistant to efforts to overcome them. Subjects were instructed to make estimates as if they did not already know the answer, but they were unable to do so. One set of test subjects was briefed specifically on the bias, citing the results of previous experiments. This group was instructed to try to compensate for the bias, but it was unable to do so. Despite maximum information and the best of intentions, the bias persisted.

This intractability suggests the bias does indeed have its roots in the nature of our mental processes. Analysts who try to recall a previous estimate after learning the actual outcome of events, consumers who think about how much a report has added to their knowledge, and overseers who judge whether analysts should have been able to avoid an intelligence failure, all have one thing in common. They are engaged in a mental process involving hindsight. They are trying to erase the impact of knowledge, so as to remember, reconstruct, or imagine the uncertainties they had or would have had about a subject prior to receipt of more or less definitive information.

There is one procedure that may help to overcome these biases. It is to pose such questions as the following: Analysts should ask themselves, “If the opposite outcome had occurred, would I have been surprised?” Consumers should ask, “If this report had told me the opposite, would I have believed it?” And overseers should ask, “If the opposite outcome had occurred, would it have been predictable given the information that was available?” These questions may help one recall or reconstruct the uncertainty that existed prior to learning the content of a report or the outcome of a situation.

PART IV—CONCLUSIONS

Chapter 14

Improving Intelligence Analysis

This chapter offers a checklist for analysts–a summary of tips on how to navigate the minefield of problems identified in previous chapters. It also identifies steps that managers of intelligence analysis can take to help create an environment in which analytical excellence can flourish.

Checklist for Analysts

This checklist for analysts summarizes guidelines for maneuvering through the minefields encountered while proceeding through the analytical process. Following the guidelines will help analysts protect themselves from avoidable error and improve their chances of making the right calls. The discussion is organized around six key steps in the analytical process: defining the problem, generating hypotheses, collecting information, evaluating hypotheses, selecting the most likely hypothesis, and the ongoing monitoring of new information.

Defining the Problem

Start out by making certain you are asking–or being asked–the right questions. Do not hesitate to go back up the chain of command with a suggestion for doing something a little different from what was asked for. The policymaker who originated the requirement may not have thought through his or her needs, or the requirement may be somewhat garbled as it passes down through several echelons of management.

Generating Hypotheses

Identify all the plausible hypotheses that need to be considered. Make a list of as many ideas as possible by consulting colleagues and outside experts. Do this in a brainstorming mode, suspending judgment for as long as possible until all the ideas are out on the table.

At this stage, do not screen out reasonable hypotheses only because there is no evidence to support them. This applies in particular to the deception hypothesis. If another country is concealing its intent through denial and deception, you should probably not expect to see evidence of it without completing a very careful analysis of this possibility. The deception hypothesis and other plausible hypotheses for which there may be no immediate evidence should be carried forward to the next stage of analysis until they can be carefully considered and, if appropriate, rejected with good cause.

Collecting Information

Relying only on information that is automatically delivered to you will probably not solve all your analytical problems. To do the job right, it will probably be necessary to look elsewhere and dig for more information. Contact with the collectors, other Directorate of Operations personnel, or first-cut analysts often yields additional information. Also check academic specialists, foreign newspapers, and specialized journals.

Collect information to evaluate all the reasonable hypotheses, not just the one that seems most likely. Exploring alternative hypotheses that have not been seriously considered before often leads an analyst into unexpected and unfamiliar territory. For example, evaluating the possibility of deception requires evaluating another country’s or group’s motives, opportunities, and means for denial and deception. This, in turn, may require understanding the strengths and weaknesses of US human and technical collection capabilities.

It is important to suspend judgment while information is being assembled on each of the hypotheses. It is easy to form impressions about a hypothesis on the basis of very little information, but hard to change an impression once it has taken root. If you find yourself thinking you already know the answer, ask yourself what would cause you to change your mind; then look for that information.

Try to develop alternative hypotheses in order to determine if some alternative–when given a fair chance–might not be as compelling as your own preconceived view. Systematic development of an alternative hypothesis usually increases the perceived likelihood of that hypothesis. “A willingness to play with material from different angles and in the context of unpopular as well as popular hypotheses is an essential ingredient of a good detective, whether the end is the solution of a crime or an intelligence estimate”.

Evaluating Hypotheses

Do not be misled by the fact that so much evidence supports your preconceived idea of which is the most likely hypothesis. That same evidence may be consistent with several different hypotheses. Focus on developing arguments against each hypothesis rather than trying to confirm hypotheses. In other words, pay particular attention to evidence or assumptions that suggest one or more hypotheses are less likely than the others.

Assumptions are fine as long as they are made explicit in your analysis and you analyze the sensitivity of your conclusions to those assumptions. Ask yourself, would different assumptions lead to a different interpretation of the evidence and different conclusions?

Do not assume that every foreign government action is based on a rational decision in pursuit of identified goals. Recognize that government actions are sometimes best explained as a product of bargaining among semi-independent bureaucratic entities, following standard operating procedures under inappropriate circumstances, unintended consequences, failure to follow orders, confusion, accident, or coincidence.

Selecting the Most Likely Hypothesis

Proceed by trying to reject hypotheses rather than confirm them. The most likely hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it.
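A minimal sketch of the selection rule just described; the hypotheses and consistency markings below are invented for illustration. Each piece of evidence is marked consistent (“+”) or inconsistent (“-”) with each hypothesis, and the hypothesis with the fewest inconsistencies wins.

```python
# Invented hypothesis-versus-evidence matrix. Columns are four items of
# evidence; "-" = evidence inconsistent with the hypothesis, "+" = consistent.
matrix = {
    "H1: routine exercise": ["+", "-", "-", "+"],
    "H2: preparation for attack": ["+", "+", "-", "+"],
    "H3: deception operation": ["+", "+", "+", "+"],
}

def inconsistency_count(markings):
    """Amount of evidence arguing *against* a hypothesis."""
    return markings.count("-")

# Select the hypothesis with the least evidence against it,
# not the one with the most evidence for it.
best = min(matrix, key=lambda h: inconsistency_count(matrix[h]))
print(best)  # H3: deception operation
```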

In presenting your conclusions, note all the reasonable hypotheses that were considered.

Ongoing Monitoring

In a rapidly changing, probabilistic world, analytical conclusions are always tentative. The situation may change, or it may remain unchanged while you receive new information that alters your understanding of it. Specify things to look for that, if observed, would suggest a significant change in the probabilities.

Pay particular attention to any feeling of surprise when new information does not fit your prior understanding. Consider whether this surprising information is consistent with an alternative hypothesis. A surprise or two, however small, may be the first clue that your understanding of what is happening requires some adjustment, is at best incomplete, or may be quite wrong.

Management of Analysis

The cognitive problems described in this book have implications for the management as well as the conduct of intelligence analysis. This concluding section looks at what managers of intelligence analysis can do to help create an organizational environment in which analytical excellence flourishes. These measures fall into four general categories: research, training, exposure to alternative mind-sets, and guiding analytical products.

Support for Research

Management should support research to gain a better understanding of the cognitive processes involved in making intelligence judgments. There is a need for better understanding of the thinking skills involved in intelligence analysis, how to test job applicants for these skills, and how to train analysts to improve these skills. Analysts also need a fuller understanding of how cognitive limitations affect intelligence analysis and how to minimize their impact. They need simple tools and techniques to help protect themselves from avoidable error. There is so much research to be done that it is difficult to know where to start.

Training

Most training of intelligence analysts is focused on organizational procedures, writing style, and methodological techniques. Analysts who write clearly are assumed to be thinking clearly. Yet it is quite possible to follow a faulty analytical process and write a clear and persuasive argument in support of an erroneous judgment.

More training time should be devoted to the thinking and reasoning processes involved in making intelligence judgments, and to the tools of the trade that are available to alleviate or compensate for the known cognitive problems encountered in analysis. This book is intended to support such training.

It would be worthwhile to consider how an analytical coaching staff might be formed to mentor new analysts or consult with analysts working particularly difficult issues. One possible model is the SCORE organization that exists in many communities. SCORE stands for Service Corps of Retired Executives. It is a national organization of retired executives who volunteer their time to counsel young entrepreneurs starting their own businesses.

New analysts could be required to read a specified set of books or articles relating to analysis, and to attend a half-day meeting once a month to discuss the reading and other experiences related to their development as analysts. A comparable voluntary program could be conducted for experienced analysts. This would help make analysts more conscious of the procedures they use in doing analysis. In addition to their educational value, the required readings and discussion would give analysts a common experience and vocabulary for communicating with each other, and with management, about the problems of doing analysis.

My suggestions for writings that would qualify for a mandatory reading program include: Robert Jervis’ Perception and Misperception in International Politics (Princeton University Press, 1977); Graham Allison’s Essence of Decision: Explaining the Cuban Missile Crisis (Little, Brown, 1971); Ernest May’s “Lessons” of the Past: The Use and Misuse of History in American Foreign Policy (Oxford University Press, 1973); Ephraim Kam’s Surprise Attack (Harvard University Press, 1988); Richard Betts’ “Analysis, War and Decision: Why Intelligence Failures Are Inevitable,” World Politics, Vol. 31, No. 1 (October 1978); Thomas Kuhn’s The Structure of Scientific Revolutions (University of Chicago Press, 1970); and Robin Hogarth’s Judgement and Choice (John Wiley, 1980). Although these were all written many years ago, they are classics of permanent value. Current analysts will doubtless have other works to recommend. CIA and Intelligence Community postmortem analyses of intelligence failure should also be part of the reading program.

To encourage learning from experience, even in the absence of a high-profile failure, management should require more frequent and systematic retrospective evaluation of analytical performance. One ought not generalize from any single instance of a correct or incorrect judgment, but a series of related judgments that are, or are not, borne out by subsequent events can reveal the accuracy or inaccuracy of the analyst’s mental model. Obtaining systematic feedback on the accuracy of past judgments is frequently difficult or impossible, especially in the political intelligence field. Political judgments are normally couched in imprecise terms and are generally conditional upon other developments. Even in retrospect, there are no objective criteria for evaluating the accuracy of most political intelligence judgments as they are presently written.

In the economic and military fields, however, where estimates are frequently concerned with numerical quantities, systematic feedback on analytical performance is feasible. Retrospective evaluation should be standard procedure in those fields in which estimates are routinely updated at periodic intervals. The goal of learning from retrospective evaluation is achieved, however, only if it is accomplished as part of an objective search for improved understanding, not to identify scapegoats or assess blame. This requirement suggests that retrospective evaluation should be done routinely within the organizational unit that prepared the report, even at the cost of some loss of objectivity.

Exposure to Alternative Mind-Sets

The realities of bureaucratic life produce strong pressures for conformity. Management needs to make conscious efforts to ensure that well-reasoned competing views have the opportunity to surface within the Intelligence Community. Analysts need to enjoy a sense of security, so that partially developed new ideas may be expressed and bounced off others as sounding boards with minimal fear of criticism for deviating from established orthodoxy.

Intelligence analysts have often spent less time living in and absorbing the culture of the countries they are working on than outside experts on those countries. If analysts fail to understand the foreign culture, they will not see issues as the foreign government sees them. Instead, they may be inclined to mirror-image–that is, to assume that the other country’s leaders think like we do. The analyst assumes that the other country will do what we would do if we were in their shoes.

Mirror-imaging is a common source of analytical error.

Pre-publication review of analytical reports offers another opportunity to bring alternative perspectives to bear on an issue. Review procedures should explicitly question the mental model employed by the analyst in searching for and examining evidence. What assumptions has the analyst made that are not discussed in the draft itself, but that underlie the principal judgments? What alternative hypotheses have been considered but rejected, and for what reason? What could cause the analyst to change his or her mind?

Ideally, the review process should include analysts from other areas who are not specialists in the subject matter of the report. Analysts within the same branch or division often share a similar mind-set. Past experience with review by analysts from other divisions or offices indicates that critical thinkers whose expertise is in other areas make a significant contribution. They often see things or ask questions that the author has not seen or asked. Because they are not so absorbed in the substance, they are better able to identify the assumptions and assess the argumentation, internal consistency, logic, and relationship of the evidence to the conclusion. The reviewers also profit from the experience by learning standards for good analysis that are independent of the subject matter of the analysis.

Guiding Analytical Products

On key issues, management should reject most single-outcome analysis–that is, the single-minded focus on what the analyst believes is probably happening or most likely will happen.

One guideline for identifying unlikely events that merit the specific allocation of resources is to ask the following question: Are the chances of this happening, however small, sufficient that if policymakers fully understood the risks, they might want to make contingency plans or take some form of preventive or preemptive action? If the answer is yes, resources should be committed to analyze even what appears to be an unlikely outcome.

Finally, management should educate consumers concerning the limitations as well as the capabilities of intelligence analysis and should define a set of realistic expectations as a standard against which to judge analytical performance.

The Bottom Line

Analysis can be improved! None of the measures discussed in this book will guarantee that accurate conclusions will be drawn from the incomplete and ambiguous information that intelligence analysts typically work with. Occasional intelligence failures must be expected. Collectively, however, the measures discussed here can certainly improve the odds in the analysts’ favor.

Review of Irresistible Revolution: Marxism’s Goal of Conquest & The Unmaking of the American Military

Irresistible Revolution: Marxism’s Goal of Conquest & the Unmaking of the American Military was written by Matthew Lohmeier, a former lieutenant colonel in the United States Space Force, and published in 2021. The book is, roughly, three-fifths about the history of Marxism as an intellectual project and two-fifths about the author’s personal experiences in the U.S. military.

The opening chapter, Transforming American History, provides a brief account of the controversies and figures involved in the 1619 Project and the 1776 Commission in the context of the culture war. A struggle over the meaning of America is presently underway, one at the heart of a social and political polarization that threatens to permanently fracture American civil society. Lohmeier frames this contest by contrasting how civil rights icons Frederick Douglass and Martin Luther King, Jr. recognized the Declaration of Independence and the Constitution as “deep wells of democracy” with the Marxist view, which holds these documents in contempt. The framing draws on George Orwell and the power one gains by dictating official truth, and a broader account of slavery is briefly touched upon: the historical research of Peter W. Wood, president of the National Association of Scholars, shows slavery was a worldwide phenomenon as early as the 14th century – far before the 1619 date cited by the 1619 Project.

In Chapter Two, America’s Founding Philosophy, Lohmeier provides a historiography of American political economy as embodied in the Founders’ ideology. It is instructive in highlighting how, contrary to claims made by those on the left, women and blacks were never conceived of as innately inferior but as inherently equal human beings who were historically unequal due to the conditions of the society. The Declaration of Independence and the Constitution were the means by which historically unequal groups could achieve legal equality. Their Transitional Program, to borrow a Marxian turn of phrase, was delineated not in a strategic set of actions to be taken to achieve a quasi-utopian outcome but in the ability to appeal to the Declaration, the Constitution, and legal literature to legitimize what was already granted to them “by God.”

In Chapter 3, Marxism’s Goal of Conquest, Lohmeier shares knowledge he gained while taking courses within the U.S. military to give context to the European schools of thought from which Marxism developed. He explains how the writings of a wide number of political conspiracists positively valued collectivism, which made their thought fundamentally different from the individual-rights framework of the Constitution. By relating their communalist concepts to historical precedents, such as the Cultural Revolution in China, wherein “forced equality” became a government mandate, it becomes apparent how this collectivist approach, which holds no respect for individual rights, becomes a cudgel to legitimize and legalize all sorts of abuses. Marx’s relationship to Hegel and to the secret social orders organized against the kings and princes in the not-yet-united regions of Germany is described, as is the view that a Universal Revolution – one which inverted the current system of values in the home, economy, and nation – was necessary, and that the most strategic way to achieve it was through corruption and unseen influence.

In Chapter 4, Marx, Marxism and Revolution, Lohmeier provides a more thorough biographical account of Marx as well as a focused analysis of Marx’s writing. From the context provided in the previous chapter, we see how the writings and actions of Marx, those who influenced his thought, and his contemporary comrades in political arms all viewed the individual with disdain. History is class struggle, nothing more, and the bourgeoisie that prevents the dictatorship of the proletariat from being enacted are akin to devils keeping man from reaching heaven. A few examples of how the Communist Manifesto has been used by practitioners such as Lenin and Mao to justify atrocities are cited, but the majority of the chapter is devoted to explicating in detail the rhetorical and political innovations of Marxism. Marxism is a totalitarian legitimization of social destruction and replacement with something that, due to its “collectiveness,” is claimed to be an improvement.

Chapter 5, Marx’s Many Faces, provides several historical accounts of how Communist societies have treated outsiders, examples of communist infiltration into social bodies, and modern examples of this collectivist language making its way into U.S. institutions. One example of the first category is what American servicemembers had to endure following capture in the Korean and Vietnam Wars – prolonged, torturous interrogation combined with efforts to indoctrinate. An example of the middle category – subversion – is William Montgomery Brown’s entrance into the Episcopal church at the behest of the Communist Party and his training to become a bishop. Such efforts, as described in Color, Communism, & Common Sense, were coordinated at the national level with guidance at the international level from the Kremlin in Soviet Russia. An example of the last category is Critical Race Theory, which is shown not only to share many of the rhetorical and political elements of Marxist thought, but also to count among its early developers and current advocates many who openly avow such a worldview.

In Chapter 6, The New American Military Culture, Lohmeier’s description of how Diversity, Equity, and Inclusion (DEI) concepts and practices have been made necessary components of armed-services training, and how policies linked to them have affected combat readiness, is insightful. He focuses on the way normative social values are promoted via mandatory training that teaches an iteration of ‘anti-racism’ functioning to smuggle in Marxist, revolutionary values. He highlights how “Servicemembers are allowed to support the BLM movement. They are not, however, allowed to criticize it.” (Lohmeier 121). As a personal account of the impact of such training on troop morale and retention rates, and of the way political controversy has come to inform hiring, firing, and promotions, the book is insightful. Its impact on morale (teaching as it does that people within a racially diverse unit represent oppressors and oppressed), professionalism (teaching as it does that people from historically oppressed groups should ascend professionally because of that status rather than by traditional metrics of merit), combat readiness (teaching as it does that the U.S. is an immoral country), and other factors is shown to be real and concerning. Accounts are shared, for example, of cadets citing the transformation of the merit-based system they elected to join into a racialized organization as the reason for their decision not to re-enlist; of a lieutenant colonel adopting the language of radical extremism and saying that if elections do not go his way the system should be “burned to the ground”; and of lowered recruitment numbers, among other examples. Citing a 40-page June 25, 2020 policy proposal written by officers commissioned at West Point, he shows how the new demands for “racial inclusion” – influenced by Robin DiAngelo and Ibram X. Kendi – were nothing more than Marxism using racial language, mirroring the Port Huron Statement more than a document written by those supposedly trained to understand American values, i.e., individualism and meritocracy.

Chapter 7 closes the book primarily by comparing the contemporary U.S. context to historical precedents for the type of ideological warfare now running unchecked, from the civil war in Yugoslavia to the actions of the Red Guards following the Communist Party’s capture of China. Lohmeier highlights examples of laws advocated by the Democratic Socialists of America – whose worldview is influenced by Marxism – as well as interpretations of historic events such as the January 6th protests at the U.S. Capitol Building.

As a whole, my primary criticism of Lohmeier’s book concerns its descriptions of the actors and networks in the U.S. that are currently involved in political and ideological activism. In the first chapter, for example, he describes how (1) materials written by an author (Hannah-Jones) who had received a fellowship to study in Cuba, (2) produced by a foundation started by Howard Zinn, and (3) promoted by a group whose roots trace to the United States Social Forum made their way into a suggested reading list for high school students and enlisted personnel. And yet there is no mention of the fact that Zinn was a founding member of the Cuban- and Venezuelan-directed Networks of Artists and Intellectuals in Defense of Humanity, nor of the relationship of Black Lives Matter’s founders, executives, and elders to the World Social Forum and the Cuban and Venezuelan governments. Because of this lack of intelligence-based analysis, the book’s amorphous descriptions of the groups involved as a “potent cultural force” make them seem to be merely creatures of individual choices responding to national issues even though they are not.

Given that the book’s focus is primarily on “Marxism’s Goal of Conquest,” however, this is understandable. Assessed from this vantage point the book is a success – though I do wish that more attention had been given to examples of how DEI/crypto-Marxism has impacted U.S. military culture.

Review of Confronting the Evolving Global Security Landscape: Lessons from the Past and Present

Max G. Manwaring, PhD, is a retired professor of military strategy at the Strategic Studies Institute of the U.S. Army War College (USAWC), where he held the General Douglas MacArthur Chair of Research. His recent publication, Confronting the Evolving Global Security Landscape: Lessons from the Past and Present, presents empirically grounded theories, informed by case studies, that describe the complexity of modern-day threats to the security of nations and the global system as a whole, rather than viewing those threats through dated theoretical frameworks.

How to Understand the Current Moment

The present-day global security situation is characterized by an unconventional spectrum of conflict that no one from the traditional Westphalian school of thought would recognize or be comfortable with. In addition to conventional attrition war conducted by the easily recognized military forces of another nation-state, we see something considerably more complex and ambiguous. Regardless of the politically correct term for war or conflict, all state and nonstate actors involved in any kind of conflict are engaged in one common political act – that is, war. The intent is to control and/or radically change a government and to institutionalize the acceptance of the aggressor’s objectives and values. It is important to remember that these “new” actors and “new” types of battlefields are being ignored or, alternatively, are considered too complicated and ambiguous to deal with. Yet they seriously threaten the security, stability, development, and well-being of all parts of the global community.

Change and Development within the Inter/National Security Concept

Stability is a concept that has sometimes been confused with security. Often these terms are used synonymously and are defined as the protection of territory and people. Threats against stability/security include direct military threats as well as offensive/defensive aggression supported by propaganda, information, moral warfare, and a combination of other types of conflict that might include, but are not limited to, psychological war, financial war, trade war, cyber war, diplomatic war, narco-criminal-war, and guerrilla war. There are no ways or means that cannot be combined with others.

These terms, however, are not the same. Security is better thought of as the foundational element that enables national and international socioeconomic-political development and includes the task of generating “responsible governance”.

In 1996, the secretary general of the United Nations, Boutros Boutros-Ghali, described the most important dialectic at work in the post-Cold War world as that between globalization and fragmentation. As a consequence of his research in the field, he identified two new types of threats:

(1) A new set of players – insurgents, transnational criminal organizations, private armies, militias, and gangs – that are taking on roles once reserved for nation-states.

(2) Indirect and implicit threats to stability and human well-being, such as unmet political, economic, and social expectations.

This broadened concept of security ultimately depends on eradication of the causes, as well as the perpetrators, of instability.

A New Sociology of Security

These new global developments and the emergence of new players and practices in the global security arena dictate a new sociology of security and a redefinition of the major characteristics of contemporary socioeconomic-political conflict. A few of these defining characteristics include the following:

  • The center of gravity is no longer an easily identifiable military force. It is now leadership and public opinion – and the fight takes place among the people, not on a conventional battlefield.
  • The broadened concept of security (responsible sovereignty) ultimately depends on the eradication of causes, as well as perpetrators, of instability.
  • The primary objective of security is no longer the control of sovereign territory and the people in it. The strategic objective is to capture the imagination of the people and the will of their leaders – thereby winning a public opinion trial of relative moral strength.
  • The principal tools of contemporary conflict are now the proactive and coercive use of words, images, symbols, perceptions, ideas, and dreams.
  • War is now total in terms of scope, method, objective, and time.
  • There are the following three rules: (1) only the foolish fight fair; (2) there are no rules; and (3) the only viable morality within the anarchy of the world disorder is national self-interest.

The Peace-Security Paradigm

The fulfilment of a holistic, population-centric, legitimate governance and stability-security equation for national and global security consists of three principal elements, derived from the independent variables that define security (S): (1) the coercive capacity to provide a culturally acceptable level of personal and collective stability (M); (2) the ability to generate socioeconomic development (E); and (3) the political competence and rectitude to develop a type of governance to which a people can relate and lend support (PC). It is heuristically valuable to portray the relationship among these elements as a mathematical formula: S = (M + E) × PC
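A minimal sketch of the equation’s logic, assuming each element is scored on a 0-to-1 scale (an assumption added here for illustration; Manwaring offers the formula heuristically, not as a calibrated metric). The multiplicative PC term is the point: without political competence and rectitude, military and economic capacity yield no security.

```python
def security(m: float, e: float, pc: float) -> float:
    """S = (M + E) x PC: m = coercive/stability capacity,
    e = socioeconomic development, pc = political competence and
    rectitude -- each scored in [0, 1] for this sketch."""
    return (m + e) * pc

print(f"{security(0.8, 0.7, 0.0):.2f}")  # 0.00 -- no legitimacy, no security
print(f"{security(0.8, 0.7, 0.6):.2f}")  # 0.90
```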

This peace-security equation was developed from the SWORD model (a.k.a. the Manwaring Paradigm) and warrants high confidence that the findings are universal and explain much of the reality of the contemporary conflict environment.

Five Components of a Legitimate Government

  • Free, fair, and frequent selection of political leaders
  • The level of participation in or acceptance of the political process
  • The level of perceived government corruption
  • The level of perceived individual or collective well-being
  • The level of regime acceptance by the major social institutions

These key indicators of moral legitimacy are not exhaustive, but they statistically explain a high percentage of the legitimacy phenomenon and provide the basic architecture for the actions necessary to assist governments in their struggle to survive, develop, and prosper. The degree to which a political actor efficiently manages a balanced mix of these five variables enables stability, development, political competence, security, acceptance, and sustainable peace, or the reverse. 

The Quintuple Threat to Security

There are five threats that, when combined, pose a grave danger to the security of a nation’s sovereignty. These are domains that external threats and, potentially, their internal partners will seek to damage. These include:

  • Undermining the ability of the government to perform its legitimizing functions.
  • Significantly changing a government’s foreign, defense or other policies.
  • Isolating religious or racial communities from the rest of the host nation’s society and replacing traditional state authority with alternative governance (e.g., ideological, plutocratic, criminal, or religious).
  • Transforming socially isolated human terrain into “virtual states” within the host state, without a centralized bureaucracy or easily identified military or police forces.
  • Conducting low-cost actions calculated to maximize damage, minimize response, and display carefully staged media events that lead to the erosion of the legitimacy and stability of a targeted state’s political-economic-social system.

Each of the above elements, when combined and sustained over time, leads to the inability of a nation to maintain peace and security.

Linear-Analytic Case Study Elements

Research applying these methods will frequently find that the study of only a few sharply contrasting instances can produce a wealth of new insights and an enhanced understanding of the architecture of successful and unsuccessful strategies – or best/worst practices – in dealing with complex contemporary hybrid conflicts.

From this information, analysts can determine strategic and analytical commonalities and recommendations relevant both to each case examined and to the larger general security phenomenon.

The standard approach to such case studies includes the following three elements: Issue and Context, Findings and Outcome, and Conclusions and Implications.

The case studies which Manwaring then covers include the following:

  • Lessons from Italy (1968 – 1983) and Western Sahara (1975 – Present)
  • Lessons from Somalia (1992 – 1993) and Bosnia (1992 – 1998)
  • Lessons from Argentina (1960 – Present) and Mexico (1999 – Present)
  • Lessons from Vietnam (1959 – 1975) and Algeria (1954 – 1962)
  • Lessons from Malaya (1948 – 1960) and El Salvador (1979 – 1992)
  • Lessons from Venezuela (1998 – Present) and Uruguay (1962 – 2005)
  • The Proxy War against the Soviet 40th Army in Afghanistan

Conclusion

A large number of insights on the nature of global security can be drawn from the aforementioned cases. Perhaps most importantly, in understanding the new nontraditional and greatly enlarged security arena, one must be organizationally and cognitively prepared to deal with proxies as well as other unconventional players operating across the entire spectrum of conflict.

Strong empirical evidence illustrates that the essence of any given contemporary threat situation is the effort to co-opt, persuade, control, and/or compel an adversary’s public opinion and political leadership to accept one’s will. That defines war. This form of ‘protracted struggle’ originates largely within the canon of Marxist thought.

Lenin articulated the contemporary political vision within which many nonstate and state actors operate. He taught that anyone wishing to force radical political-economic-social change and compel an opponent to accede to his or her will must organize, equip, train, and employ a body of small political agitator groups. His intent was straightforward. If these unconventional and clandestine individuals of statecraft succeed in helping to tear apart the fabric on which a targeted enemy rests, the instability and violence they create can serve as the “midwife of a new social order.”

This has a number of implications that both an enlightened electorate and political operatives should consider when deliberating and legislating about the contemporary global security arena. Most importantly, this includes understanding that the root causes of insecurity are economic and social, and that the way to assess risks to security is to look at the individuals and groups engaged in activities defined as threats to the peace-security paradigm.

Notes on Information Collection FM 3-55

Preface

Although this is the first edition of field manual (FM) 3-55, the concepts are not new. Many who read this FM will recognize it as the culmination of decades of refinement. In this manual, the term information collection is introduced as the Army’s replacement for intelligence, surveillance, and reconnaissance (also known as ISR). ISR is a joint term that the Army has revised to meet Army needs.

 

Introduction

A nuanced understanding of the situation is everything. Analyze the intelligence that is gathered, share it, and fight for more. Every patrol should have tasks designed to augment understanding of the area of operations and the enemy. Operate on a “need to share” rather than a “need to know” basis. Disseminate intelligence as soon as possible to all who can benefit from it.

General David H. Petraeus, U.S. Army

Military Review

The Army currently has no unified methodology or overall plan to define or establish how it performs or supports information collection activities at all echelons. This publication clarifies how the Army plans, prepares, and executes information collection activities within or between echelons.

This manual emphasizes three themes. First, information collection activities form a synergistic whole, with emphasis on the synchronization and integration of all components and systems. Second, commanders and staffs have vital responsibilities in information collection planning and execution, with emphasis on the importance of the commander’s role. Finally, the success of information collection is measured by its contributions to the commander’s understanding, visualization, and decisionmaking abilities.

 

With the exception of cyberspace, all operations will be conducted among the people and outcomes will be measured in terms of effects on populations. This increases the complexity of information collection planning, execution, and assessment, requiring a deeper level of situational understanding from commanders.

 

Commanders drive information collection activities through their choice of critical information requirements and through mission command in driving the operations process. Commanders visualize, describe, direct, lead, and assess throughout the operations process with understanding as the start point. Intelligence preparation of the battlefield assists them in developing an in-depth understanding of the enemy and the operational environment. They then visualize the desired end state and a broad concept of how to shape the current conditions into the end state. Commanders describe their visualization through the commander’s intent, planning guidance, and concept of operations in order to bring clarity to an uncertain situation. They also express gaps in relevant information as commander’s critical information requirements. The challenge is for information collection activities to answer those requirements with timely, relevant, and accurate intelligence that enables commanders to make sound decisions.

Chapter 1
Foundations of Information Collection

This chapter presents the basics of information collection. It begins with the definition and purpose of information collection. It then discusses the information collection processes. Lastly, the chapter discusses primary information collection tasks and missions.

DEFINITION

1-1. Knowledge is the precursor to effective action, whether in the informational or physical domain. Knowledge about an operational environment requires aggressive and continuous operations to acquire information. Information collected from multiple sources and analyzed becomes intelligence that provides answers to commander’s critical information requirements.

1-2. Commanders have long relied on intelligence to reduce the inherent uncertainty of war. Achieving success in today’s conflicts demands extraordinary commitment to reducing this uncertainty.

1-3. Information collection is an activity that synchronizes and integrates the planning and employment of sensors and assets as well as the processing, exploitation, and dissemination systems in direct support of current and future operations. This activity implies a function, mission, or action as well as the organization that performs it.

1-4. At the tactical level, reconnaissance, surveillance, security, and intelligence missions or operations are the primary means by which a commander plans, organizes, and executes shaping operations that answer the commander’s critical information requirements and support the decisive operation.

1-5. The intelligence and operations staffs work together to collect, process, and analyze the information the commander requires concerning the enemy, other adversaries, climate, weather, terrain, population, and other civil considerations that affect operations. Intelligence relies on reconnaissance, security, intelligence operations, and surveillance for its data and information. Conversely, without intelligence, commanders and staffs do not know where or when to conduct reconnaissance, security, intelligence operations, or surveillance. The usefulness of the data collected depends upon the processing and exploitation common to these activities.

1-6. Commanders integrate information collection to form an information collection plan that capitalizes on different capabilities. Information collection assets provide data and information. Intelligence is the product resulting from the collection, processing, integration, evaluation, analysis, and interpretation of available information concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations. The term is also applied to the activity which results in the product and to the organizations engaged in such activity.

Intelligence informs commanders and staffs where and when to look. Reconnaissance, security, intelligence operations, and surveillance are the ways—with the means ranging from national and joint collection capabilities to individual Soldier observations and reports. The end is intelligence that supports commander’s decisionmaking. The result—successful execution and assessment of operations—depends upon the effective synchronization and integration of the information collection effort.

1-7. These activities of information collection support the commander’s understanding and visualization of the operation by identifying gaps in information, aligning assets and resources against them, and assessing the collected information and intelligence to inform the commander’s decisions. They also support the staff’s integrating processes during planning and execution. The direct result of the information collection effort is a coordinated plan that supports the operation.

PURPOSE

1-8. Information collection activities provide commanders with detailed, timely, and accurate intelligence, enabling them to visualize threat capabilities and vulnerabilities, and to gain situational understanding. Information collected from multiple sources and analyzed becomes intelligence that provides answers to commander’s critical information requirements as part of an evolving understanding of the area of operations. These activities contribute to the achievement of a timely and accurate common operational picture (COP).

1-9. Effective information collection activities—

  • Provide relevant information and intelligence products to commanders and staffs.
  • Provide combat information to commanders.
  • Contribute to situational awareness and facilitate continuous situational understanding.
  • Generate a significant portion of the COP vertically and horizontally among organizations, commanders, and staffs.
  • Support the commander’s visualization, permitting more effective mission command.
  • Answer the CCIRs.
  • Facilitate and are facilitated by the intelligence preparation of the battlefield (IPB).
  • Support effective, efficient, and accurate targeting.
  • Decrease risk for the unit.

1-10. Commanders and staffs continuously plan, task, and employ collection assets and forces to collect information. They request information and resources through higher echelons as needed. This information and intelligence enable commanders to make informed decisions that are translated into action.

1-11. Information collection planning is crucial to mission success. The four fundamentals in effectively planning, synchronizing, and integrating information collection activities are—

  • The commander drives the information collection effort.
  • Effective information collection synchronization and integration requires full staff participation.
  • Conducting information collection requires a collection capability, either organic or augmented by nonorganic resources.
  • Conducting information collection requires an analytical capability to analyze and produce actionable intelligence.

1-12. Commanders must be involved in the information collection planning process by quickly and clearly articulating their CCIRs to the staff. This enables the staff to facilitate the commander’s visualization and decisionmaking by focusing on the CCIRs.

1-14. Conducting information collection activities requires a collection capability, either organic or augmented by nonorganic resources. Acquiring the required information to answer the requirements encompasses the efforts of reconnaissance, security, surveillance, intelligence operations, and the skills of Soldiers. All the activities that contribute to developing continuous knowledge about the area of operations are considered information collection activities. Planners must understand all collection assets and resources available to them and the procedures to request or task collection from those assets, resources, and organizations.

1-15. Conducting these activities requires an analytical capability to interpret information and produce actionable intelligence. The analyst’s ability to employ critical thinking and use multiple sources during intelligence analysis reduces uncertainty and helps solve problems that could not be resolved via a single source of information. This requires staff sections to understand the capabilities and limitations of assets to collect and report. The staff must also establish reporting guidelines to the collection assets.

INFORMATION COLLECTION PROCESS

1-16. Information collection is the acquisition of information and the provision of this information to processing elements. This process performs the following tasks:

  • Plan requirements and assess collection.
  • Task and direct collection.
  • Execute collection.

PLAN REQUIREMENTS AND ASSESS COLLECTION

1-17. The intelligence staff (in collaboration with the operations officer and the entire staff) receives and validates requirements for collection, prepares the requirements planning tools, recommends collection assets and capabilities to the operations staff, and maintains synchronization as operations progress.

TASK AND DIRECT COLLECTION

1-18. The operations officer (based on recommendations from the staff) tasks, directs, and when necessary re-tasks the information collection assets.

EXECUTE COLLECTION

1-19. Executing collection focuses on requirements tied to the execution of tactical missions (such as reconnaissance, surveillance, security, and intelligence operations) based on the CCIRs. Collection activities acquire information about the adversary and the area of operations and provide that information to intelligence processing and exploitation elements. Typically collection activities begin soon after receipt of mission and continue throughout preparation and execution of the operation. They do not cease at conclusion of the mission but continue as required. This allows the commander to focus combat power, execute current operations, and prepare for future operations simultaneously.

1-20. The subtasks are—

  • Establish technical channels and provide guidance.
  • Collect and report information.
  • Establish a mission intelligence briefing and debriefing program.

Establish Technical Channels and Provide Guidance

1-21. This subtask includes providing and conducting technical channels to refine and focus the intelligence disciplines’ information collection tasks. It coordinates the disciplines’ assets when operating in another unit’s area of operations.

1-23. Technical channels refer to the supervision of intelligence operations and disciplines. Technical channels do not interfere with the ability to task organic intelligence operations assets; rather, they ensure adherence to existing policies or regulations by providing technical guidance for intelligence operations tasks contained within the information collection plan.

1-24. Technical channels also involve translating tasks into the specific parameters used to focus the highly technical intelligence operations collection or the legally sensitive aspects of signals intelligence collection as well as human intelligence military source operations and counterintelligence tasks. Technical channels provide the means to meet the overall commander’s intent for intelligence operations. Technical channels include but are not limited to defining, managing, or guiding the use of specific intelligence assets, identifying critical technical collection criteria (such as technical indicators), and recommending collection techniques or procedures.

Collect and Report Information

1-25. This task involves collecting and reporting information in response to collection tasks. Collection assets collect information and data about the threat, terrain and weather, and civil considerations for a particular area of operations (AO) and area of interest. A successful information collection effort results in the timely collection and reporting of relevant and accurate information, which supports the production of intelligence or combat information.

Collect

1-26. As part of the collection plan, elements of all units obtain information and data concerning the threat, terrain and weather, and civil considerations within the AO. Well-developed procedures and carefully planned flexibility to support emerging targets, changing requirements, and the need to support combat assessment are critical. Once Soldiers collect the information, staffs process it into a form that enables analysts to extract essential information and produce intelligence and targeting data. Collected and processed information is provided to the appropriate units, organizations, or agencies for analysis or action. This analyzed information forms the foundation of running estimates, targeting data, intelligence databases, and intelligence.

Report

1-27. Collection assets must follow standard operating procedures (SOPs) to ensure staffs tag reports with the numbers of the tasks they satisfy. Simultaneously, SOPs ensure assets understand and have a means of reporting important but unanticipated information. A report may convey that collection occurred but that the unit did not observe any activity satisfying the information collection task, which may itself be a significant indicator. As a part of reporting, the staff tracks which specific collection task originates from which intelligence requirement. Such tracking ensures the staff provides the collected information to the original requester and to all who need the information. Correlating reporting to the original requirement and evaluating reports is key to effective information collection. The staff tracks the progress of each requirement and cross-references incoming reports to outstanding requirements.
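
The tagging and cross-referencing described above amounts to a simple ledger. A minimal sketch (hypothetical task and requirement identifiers; FM 3-55 prescribes no such format) of how a staff might trace each incoming report back to its originating requirement and surface requirements that remain unanswered:

    # Hypothetical ledger linking collection tasks to their originating
    # requirements; identifiers and report text are invented for illustration.
    from collections import defaultdict

    task_to_requirement = {
        "TASK-01": "PIR-1",  # tasked against priority intelligence requirement 1
        "TASK-02": "PIR-1",
        "TASK-03": "PIR-2",
    }

    reports_by_requirement = defaultdict(list)

    def file_report(task_id: str, report: str) -> None:
        # Tag the report with its task and credit the originating requirement.
        requirement = task_to_requirement[task_id]
        reports_by_requirement[requirement].append((task_id, report))

    # Negative reporting is still reporting, and may be a significant indicator.
    file_report("TASK-01", "No activity observed at the assigned NAI.")

    # Outstanding requirements are those with no reporting against them yet.
    outstanding = set(task_to_requirement.values()) - set(reports_by_requirement)
    print(sorted(outstanding))  # ['PIR-2']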

PRIMARY INFORMATION COLLECTION TASKS AND MISSIONS

1-29. Information collection encompasses all activities and operations intended to gather data and information that, in turn, are used to create knowledge and support the commander’s requirements, situational understanding, and visualization. Commanders maximally achieve information collection when they carefully employ all the collection tasks and missions together in an operation. This appropriate mix of collection tasks and missions helps satisfy as many different requirements as possible. It also ensures that the operations and intelligence working group does not favor or become too reliant on one particular unit, discipline, or system. The Army has four tasks or missions it primarily conducts as a part of the information collection plan:

  • Reconnaissance.
  • Surveillance.
  • Security operations.
  • Intelligence operations.

RECONNAISSANCE

1-30. Reconnaissance is those operations undertaken to obtain, by visual observation or other detection methods, information about the activities and resources of an enemy or adversary, or to secure data concerning the meteorological, hydrographical or geographical characteristics and the indigenous population of a particular area (FM 3-90). Reconnaissance primarily relies on the human dynamic rather than technical means. Reconnaissance is a focused collection effort. A combined arms operation, reconnaissance is normally tailored to actively collect information against specific targets for a specified time based on mission objectives.

1-31. Units perform reconnaissance using three methods: dismounted, mounted, and aerial (each can be augmented by sensors). Successful and effective units combine these methods. To gain information on the enemy or a particular area, units can use passive surveillance, technical means, and human interaction, or they can fight for information.

1-32. Reconnaissance produces information concerning the AO. Staffs perform reconnaissance before, during, and after other operations to provide information used in the IPB process. Commanders perform reconnaissance to formulate, confirm, or modify a course of action (COA). Reconnaissance provides information that commanders use to make informed decisions to confirm or modify the concept of operations. This information may concern the enemy, the local population, or any other aspect of the AO. Commanders at all echelons incorporate reconnaissance into their operations.

1-33. Reconnaissance identifies terrain characteristics, enemy and friendly obstacles to movement, and the disposition of enemy forces and civilians so that commanders can maneuver forces freely with reduced risk. Reconnaissance prior to unit movements and occupation of assembly areas is critical to protecting the force and preserving combat power. It also keeps U.S. forces free from contact as long as possible so that they can concentrate on the decisive operation.

Reconnaissance Objective

1-34. Commanders orient their reconnaissance by identifying a reconnaissance objective within the AO. The reconnaissance objective is a terrain feature, geographic area, enemy force, or specific civil considerations about which the commander wants to obtain additional information. The reconnaissance objective clarifies the intent of the reconnaissance by specifying the most important result to obtain from the reconnaissance mission. Every reconnaissance mission specifies a reconnaissance objective. Commanders assign reconnaissance objectives based on commander’s critical information requirements, reconnaissance asset capabilities, and reconnaissance asset limitations. The reconnaissance objective can be information about a specific geographical location (such as the cross-country trafficability of a specific area), a specific enemy activity to be confirmed or denied, a specific enemy element to be located or tracked, or specific civil considerations (such as critical infrastructure).

1-35. Commanders may need to provide additional detailed instructions beyond the reconnaissance objective (such as specific tasks to be performed or the priority of tasks). They do this by issuing additional guidance to their reconnaissance units or by specifying these instructions in the tasks to subordinate units in the operation order. For example, if a unit S-2 concludes that the enemy is not in an area and the terrain appears to be trafficable without obstacles, the commander may direct the reconnaissance squadron to conduct a zone reconnaissance mission with guidance to move rapidly and report by exception any terrain obstacles that will significantly slow the movement of subordinate maneuver echelons.

Reconnaissance Fundamentals

1-36. The seven fundamentals of reconnaissance are—

  • Ensure continuous reconnaissance.
  • Do not keep reconnaissance assets in reserve.
  • Orient on the reconnaissance objective.
  • Report information rapidly and accurately.
  • Retain freedom of maneuver.
  • Gain and maintain enemy contact.
  • Develop the situation rapidly.

Ensure Continuous Reconnaissance

1-37. The commander conducts reconnaissance before, during, and after all operations. Before an operation, reconnaissance focuses on filling gaps in information about the enemy, specific civil considerations, and the terrain. During an operation, reconnaissance focuses on providing the commander with updated information that verifies the enemy’s composition, dispositions, and intentions as the battle progresses. This allows commanders to verify which COA the enemy is actually adopting and to determine if the plan is still valid based on actual events in the AO. After an operation, reconnaissance focuses on maintaining contact with the enemy forces to determine their next move and collecting information necessary for planning subsequent operations.

Do Not Keep Reconnaissance Assets in Reserve

1-38. Reconnaissance assets, like artillery assets, are never kept in reserve. When committed, reconnaissance assets use all their resources to accomplish the mission. This does not mean that all assets are committed all the time.

Orient on the Reconnaissance Objective

1-39. The commander uses the reconnaissance objective to focus the unit’s reconnaissance efforts. Commanders of subordinate reconnaissance elements remain focused on achieving this objective, regardless of what their forces encounter during the mission.

Report Information Rapidly and Accurately

1-40. Reconnaissance assets acquire and report accurate and timely information on the enemy, civil considerations, and the terrain over which operations are to be conducted. Information may quickly lose its value. Reconnaissance units report exactly what they see and, if appropriate, what they do not see. Seemingly unimportant information may be extremely important when combined with other information. Negative reports are as important as reports of enemy activity. Reconnaissance assets must report all information, including a lack of enemy activity; failure to report tells the commander nothing. The unit communications plan ensures that unit reconnaissance assets have the proper communication equipment to support the integrated information collection plan.

Retain Freedom of Maneuver

1-41. Reconnaissance assets must retain battlefield mobility to successfully accomplish their missions. If these assets are decisively engaged, reconnaissance stops and a battle for survival begins. Reconnaissance assets must have clear engagement criteria that support the maneuver commander’s intent. Initiative and knowledge of both the terrain and the enemy reduce the likelihood of decisive engagement and help maintain freedom of movement. Prior to initial contact, the reconnaissance unit adopts a combat formation designed to gain contact with the smallest possible friendly element. This provides the unit with the maximum opportunity for maneuver and enables it to avoid decisively engaging the entire unit. The IPB process can identify anticipated areas of likely contact to the commander.

Gain and Maintain Enemy Contact

1-42. Once a unit conducting reconnaissance gains contact with the enemy, it maintains that contact unless the commander directing the reconnaissance orders otherwise or the survival of the unit is at risk. This does not mean that individual scout and reconnaissance teams cannot break contact with the enemy. The commander of the unit conducting reconnaissance is responsible for maintaining contact using all available resources. The methods of maintaining contact can range from surveillance to close combat. Surveillance, combined with stealth, is often sufficient to maintain contact and is the preferred method. Units conducting reconnaissance avoid combat unless it is necessary to gain essential information, in which case the units use maneuver (fire and movement) to maintain contact while avoiding decisive engagement.

Develop the Situation Rapidly

1-43. When a reconnaissance asset encounters an enemy force or an obstacle, it must quickly determine the threat it faces. For an enemy force, it must determine the enemy’s composition, dispositions, activities, and movements, and assess the implications of that information. For an obstacle, the reconnaissance asset must determine the type and extent of the obstacle and whether it is covered by fire. Obstacles can provide information concerning the location of enemy forces, weapons capabilities, and organization of fires. In most cases, the reconnaissance unit developing the situation uses actions on contact.

Reconnaissance Forms

1-44. The four forms of reconnaissance are—

  • Route reconnaissance.
  • Zone reconnaissance.
  • Area reconnaissance.
  • Reconnaissance in force.

Route Reconnaissance

1-45. Route reconnaissance focuses along a specific line of communications (such as a road, railway, or cross-country mobility corridor). It provides new or updated information on route conditions (such as obstacles and bridge classifications) and enemy and civilian activity along the route. A route reconnaissance includes not only the route itself, but also all terrain along the route from which the enemy could influence the friendly force’s movement. The commander normally assigns this mission when planning to use a specific route for friendly movement.

Zone Reconnaissance

1-46. Zone reconnaissance involves a directed effort to obtain detailed information on all routes, obstacles, terrain, enemy forces, or specific civil considerations within a zone defined by boundaries. Obstacles include both existing and reinforcing, as well as areas with chemical, biological, radiological, and nuclear (CBRN) contamination. Commanders assign zone reconnaissance missions when they need additional information on a zone before committing other forces in the zone. Zone reconnaissance missions are appropriate when the enemy situation is vague, existing knowledge of the terrain is limited, or combat operations have altered the terrain. A zone reconnaissance may include several route or area reconnaissance missions assigned to subordinate units.

Area Reconnaissance

1-47. Area reconnaissance focuses on obtaining detailed information about the enemy activity, terrain, or specific civil considerations within a prescribed area. This area may include a town, a neighborhood, a ridgeline, woods, an airhead, or any other feature critical to operations. The area may consist of a single point (such as a bridge or an installation). Areas are normally smaller than zones and not usually contiguous to other friendly areas targeted for reconnaissance. Because the area is smaller, units conduct an area reconnaissance more quickly than a zone reconnaissance.

Reconnaissance in Force

1-48. A reconnaissance in force is an aggressive reconnaissance conducted as an offensive operation with clearly stated reconnaissance objectives. A reconnaissance in force is a deliberate combat operation designed to discover or test the enemy’s strength, dispositions, reactions, or to obtain other information. Battalion-sized task forces or larger organizations usually conduct a reconnaissance in force.

 

Reconnaissance Focus, Reconnaissance Tempo, and Engagement Criteria

1-49. Commanders decide what guidance they will provide to shape the reconnaissance and surveillance effort. In terms of guidance, reconnaissance tempo and engagement criteria most closely apply to organic reconnaissance elements. Reconnaissance focus can also be generally applied to surveillance assets, but in the specific sense of focusing a reconnaissance mission, it more closely applies to reconnaissance.

Reconnaissance Focus

1-50. Reconnaissance focus, combined with one or more reconnaissance objectives, helps to concentrate the efforts of the reconnaissance assets. The commander’s focus for reconnaissance usually falls into three general areas: CCIRs, targeting, and voids in information. The commander’s focus enables reconnaissance units to prioritize taskings and narrow their scope of operations.

1-51. Commanders use a reconnaissance pull when they do not know the enemy situation well or when the situation changes rapidly. Reconnaissance pull fosters planning and decisionmaking by turning changing assumptions into confirmed information. The unit uses initial assumptions and CCIRs to deploy reconnaissance assets as early as possible to collect information for developing COAs. The commander uses reconnaissance assets to confirm or deny initial CCIRs prior to deciding on a COA or maneuver option, thus pulling the unit to the decisive point on the battlefield.

1-52. Commanders use a reconnaissance push once committed to a COA or maneuver option. The commander pushes reconnaissance assets forward, as necessary, to gain greater visibility on specific named area of interest (NAI) to confirm or deny the assumptions on which the COA is based. Staffs use the information gathered during reconnaissance push to finalize the unit’s plan.

Reconnaissance Tempo

1-53. Tempo is the relative speed and rhythm of military operations over time with respect to the enemy. In terms of reconnaissance, tempo not only defines the pace of the operation, but also influences the depth of detail the reconnaissance can yield. Commanders establish time requirements for the reconnaissance force and express those requirements in a statement that describes the degree of completeness, covertness, and potential for engagement they are willing to accept. Commanders use their guidance on reconnaissance tempo to control the momentum of reconnaissance. Reconnaissance tempo is expressed as rapid or deliberate and forceful or stealthy.

1-54. Rapid operations and deliberate operations provide a description of the degree of completeness required by the commander. Rapid operations are fast paced, are focused on key pieces of information, and entail a small number of tasks. They describe reconnaissance that personnel must perform in a time-constrained environment. Deliberate operations are slow, detailed, and broad-based. They require the accomplishment of numerous tasks. The commander must allocate a significant amount of time to conduct a deliberate reconnaissance.

1-55. Forceful and stealthy operations provide a description of the level of covertness that the commander requires. Units conduct forceful operations without significant concern about being observed. Mounted units or combat units serving in a reconnaissance role often conduct forceful operations. In addition, forceful operations are appropriate in stability operations where the threat is not significant in relation to the requirement for information. Units conduct stealthy operations to minimize chance contact and prevent the reconnaissance force from being detected. They often are conducted dismounted and require increased allocation of time for success.

Engagement Criteria

1-56. Engagement criteria establish minimum thresholds for engagement (lethal and nonlethal). They clearly specify which targets the reconnaissance element is expected to engage and which it will hand off to other units or assets. For example, engagement criteria for nonlethal contact may cover tactical questioning of civilians and factional leaders. These criteria allow unit commanders to anticipate bypass criteria and to develop a plan to maintain visual contact with bypassed threats.

SURVEILLANCE

1-57. Surveillance is the systematic observation of aerospace, surface, or subsurface areas, places, persons, or things, by visual, aural, electronic, photographic, or other means (JP 3-0). Surveillance involves observing an area to collect information.

1-58. In the observation of a given area, the focus and tempo of the collection effort primarily comes from the commander’s intent and guidance. Surveillance involves observing the threat and local populace in a NAI or targeted area of interest (TAI). Surveillance may be conducted as a stand-alone mission, or as part of a reconnaissance mission (particularly area reconnaissance). Elements conducting surveillance must maximize assets, maintain continuous surveillance on all NAIs and TAIs, and report all information rapidly and accurately.

1-59. Surveillance tasks can be performed by a variety of assets (ground, air, sea, and space), means (Soldier and systems), and mediums (throughout the electromagnetic spectrum).

1-60. Generally, surveillance is considered a “task” when performed as part of a reconnaissance mission. However, many Army, joint, and national systems are designed specifically to conduct only surveillance. These are surveillance missions. Army military intelligence organizations typically conduct surveillance missions. Reconnaissance units can conduct surveillance tasks as part of reconnaissance, security, or other missions. The commonality of reconnaissance and surveillance is observation and reporting.

1-61. Surveillance is distinct from reconnaissance. Surveillance consists of tiered and layered technical assets collecting information; it is often passive and may be continuous. The purpose of reconnaissance, by contrast, is to collect information actively, not to initiate combat. Reconnaissance involves many tactics, techniques, and procedures throughout the course of a mission, and an extended period of surveillance may be one of these. Commanders complement surveillance with frequent reconnaissance. Surveillance, in turn, increases the efficiency of reconnaissance by focusing those missions while reducing the risk to Soldiers.

1-62. Both reconnaissance and surveillance involve detection, location, tracking, and identification of entities in an assigned area and gaining environmental data, but they are not executed in the same way. During reconnaissance, collection assets are given the mission to find specific information by systematically checking different locations within the area. During surveillance, collection assets watch the same area, waiting for information to emerge when an entity or its signature appears.

Surveillance Characteristics

1-64. Effective surveillance—

  • Maintains continuous observation of all assigned NAIs and TAIs.
  • Provides early warning.
  • Detects, tracks, and assesses key targets.
  • Provides mixed, redundant, and overlapping coverage.

 

Maintains Continuous Surveillance of All Assigned Named Areas of Interest and Targeted Areas of Interest

1-65. Once the surveillance of a NAI or TAI commences, units maintain it until they complete the mission or the higher commander terminates the mission. Commanders designate the receiver of the information and the means of communication.

 

Provides Early Warning

1-66. Surveillance aims to provide early warning of an enemy or threat action. Together with IPB, commanders use information collection to ascertain the enemy or threat course of action and timing. They then orient assets to observe these locations for indicators of threat actions. Reporting must be timely and complete.

Detects, Tracks, and Assesses Key Targets

1-67. Surveillance support to targeting includes detecting, tracking, and assessing key targets in a timely, accurate manner. Clear and concise tasks must be given so the surveillance systems can detect a given target. Target tracking is inherent to detection; mobile targets must be tracked to maintain a current target location. Tracking targets—such as moving, elusive, low-contrast targets (to include individuals)—requires a heavy commitment of limited information collection assets and resources. Assessing key targets pertains to the results of attacks on targets. This helps commanders and staffs determine if their targeting objectives were met.

Provides Mixed, Redundant, and Overlapping Coverage

1-68. Commanders integrate the capabilities of limited assets to provide mixed, redundant, and overlapping coverage of critical locations identified during planning. The intelligence and operations staff work together to achieve balance. Commanders and staff continuously assess surveillance results to determine any changes in critical locations requiring this level of coverage.

Surveillance Types

1-69. The types of surveillance are zone, area, point, and network. Note: Forms of reconnaissance, as opposed to types of surveillance, are associated with maneuver units and missions.

Zone Surveillance

1-70. Zone surveillance is the temporary or continuous observation of an extended geographic zone defined by boundaries. It can be associated with but is not limited to a TAI or a NAI. Zone surveillance covers the widest geographical area of any type of surveillance. Multiple assets, including airborne surveillance assets and radar with wide coverage capabilities, are typically employed in zone surveillance.

Area Surveillance

1-71. Area surveillance is the temporary or continuous observation of a specific prescribed geographic area. It can be associated with, but is not limited to, a TAI or NAI. This area may include a town, a neighborhood, ridgeline, wood line, border crossing, farm, plantation, cluster or group of buildings, or other manmade or geographic feature. Unlike area reconnaissance, it does not include individual structures (such as a bridge or single building). Ground-mounted surveillance systems are particularly useful in area surveillance.

Point Surveillance

1-72. Point surveillance is the temporary or continuous observation of a place (such as a structure), person, or object. This can be associated with, but is not limited to, a TAI or a NAI. It is the most limited in geographic scope of all forms of surveillance. Point surveillance may involve tracking people. When surveillance involves tracking people, the “point” is that person or persons, regardless of movement and location. Tracking people normally requires a heavier commitment of assets and close coordination for handoff to ensure continuous observation.

Network Surveillance

1-73. Network surveillance is the observation of organizational, social, communications, cyberspace, or infrastructure connections and relationships. Network surveillance can also seek detailed information on connections and relationships among individuals, groups, and organizations, and the role and importance of aspects of physical or virtual infrastructure (such as bridges, marketplaces, and roads) in people’s lives. It can be associated with but is not limited to a TAI or a NAI.

SECURITY OPERATIONS

1-74. Security operations are shaping operations that can take place during all operations. Reconnaissance is a part of every security operation. Other collection assets provide the commander with early warning and information on the strength and disposition of enemy forces. The availability of information collection assets enables greater flexibility in the employment of the security force.

1-75. Security operations aim to protect the force from surprise and reduce the unknowns in any situation. A commander undertakes these operations to provide early and accurate warning of enemy operations, to provide the force being protected with time and maneuver space to react to the enemy, and to develop the situation to allow the commander to effectively use the protected force. Commanders may conduct security operations to the front, flanks, and rear of their forces.

The main difference between security operations and reconnaissance is that security operations orient on the force or facility being protected, while reconnaissance is enemy, populace, and terrain oriented.

1-76. The five forms of security operations commanders may employ are screen, guard, cover, area security, and local security.

1-77. Successful security operations depend upon properly applying the following five fundamentals:

  • Provide early and accurate warning.
  • Provide reaction time and maneuver space.
  • Orient on the force or facility to be secured.
  • Perform continuous reconnaissance.
  • Maintain enemy contact.

1-78. To properly apply the fundamental of “perform continuous reconnaissance,” the security force aggressively and continuously seeks the enemy, interacts with the populace, and reconnoiters key terrain. It conducts active area or zone reconnaissance to detect enemy movement or enemy preparations for action and to learn as much as possible about the terrain. The ultimate goal is to detect the enemy’s COA and assist the main body in countering it.

INTELLIGENCE OPERATIONS

1-79. Intelligence operations align intelligence assets and resources against requirements to collect information and intelligence that inform the commander’s decisions. Conducting intelligence operations requires an organic collection and analysis capability. Those units without resources must rely on augmentation from within the intelligence enterprise for intelligence. Although the focus is normally on tactical intelligence, the Army draws on both strategic and operational intelligence resources. Each intelligence discipline provides the commander specific technical capabilities and sensors. Because of the unique capabilities and characteristics of intelligence operations, these capabilities and sensors require specific guidance through technical channels. The Army’s intelligence disciplines that contribute to intelligence operations are—

  • Counterintelligence.
  • Human intelligence.
  • Geospatial intelligence.
  • Measurement and signature intelligence.
  • Signals intelligence.
  • Technical intelligence.

Counterintelligence

1-80. Counterintelligence counters or neutralizes intelligence collection efforts by foreign intelligence and security services and international terrorist organizations. It does this through collection, counterintelligence investigations, operations, analysis, production, and functional and technical services. Counterintelligence includes all actions taken to detect, identify, track, exploit, and neutralize the multidiscipline intelligence activities of friends, competitors, opponents, adversaries, and enemies. It is the key intelligence community contributor to protect U.S. interests and equities. Counterintelligence helps identify essential elements of friendly information (EEFI) by identifying vulnerabilities to threat collection and actions taken to counter collection and operations against U.S. forces.

Human Intelligence

1-81. Human intelligence is a category of intelligence derived from information collected and provided by human sources (JP 2-0). This information is collected by a trained human intelligence collector, from people and their associated documents and media sources. Units use the collected information to identify threat elements, intentions, composition, strength, dispositions, tactics, equipment, personnel, and capabilities.

 

Geospatial Intelligence

1-82. Title 10, U.S. Code establishes geospatial intelligence. Geospatial intelligence is the exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the Earth.

Measurement and Signature Intelligence

1-83. Measurement and signature intelligence is technically derived intelligence that detects, locates, tracks, identifies, or describes the specific characteristics of fixed and dynamic target objects and sources. It also includes the additional advanced processing and exploitation of data derived from imagery intelligence and signals intelligence collection.

Signals Intelligence

1-84. Signals intelligence is produced by exploiting foreign communications systems and noncommunications emitters. Signals intelligence provides unique intelligence and analysis information in a timely manner.

Technical Intelligence

1-85. Technical intelligence is intelligence derived from the collection and analysis of threat and foreign military equipment and associated materiel.

Chapter 2
Commander and Staff Responsibilities

This chapter examines the roles, knowledge, and guidance of the commander in information collection activities. The commander’s involvement facilitates an effective information collection plan that is synchronized and integrated within the overall operation. This chapter then discusses the role of the staff. Lastly, this chapter discusses contributions from working groups.

THE ROLE OF THE COMMANDER

2-1. Commanders understand, visualize, describe, direct, lead, and assess operations. Understanding is fundamental to the commander’s ability to establish the situation’s context. Understanding involves analyzing and understanding the operational or mission variables in a given operational environment. It is derived from applying judgment to the common operational picture through the filter of the commander’s knowledge and experience.

2-2. Numerous factors determine the commander’s depth of understanding. Information from information collection and the resulting intelligence products prove indispensable in assisting the commander in understanding the area of operations (AO). Formulating commander’s critical information requirements (CCIRs) and keeping them current also contribute to this understanding. Maintaining understanding is a dynamic ability; a commander’s situational understanding changes as an operation progresses.

2-3. The commander must be involved in information collection planning. The commander directs information collection activities by—

  • Asking the right questions to focus the efforts of the staff.
  • Knowing the enemy. Personal involvement and knowledge have no substitutes.
  • Stating the commander’s intent clearly and decisively, and designating CCIRs.
  • Understanding the information collection assets and resources in order to exploit their full effectiveness.

2-4. Commanders prioritize collection activities primarily through providing their guidance and commander’s intent early in the planning process. Commanders must—

  • Personally identify and update CCIRs.
  • Ensure CCIRs are tied directly to the scheme of maneuver and decision points.
  • Limit CCIRs to only their most critical needs (because of limited collection assets).
  • Aggressively seek higher echelons’ collection of, and answers to, the information requirements.
  • Ensure CCIRs include the latest time information is of value (LTIOV) or the event by which the information is required.
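
As a sketch of the last point above (hypothetical record layout; the manual prescribes the LTIOV concept, not this structure), a tracked CCIR might carry the decision it supports and the time after which collecting against it is wasted effort:

    # Hypothetical CCIR record illustrating LTIOV; the fields and example
    # values are assumptions for illustration only.
    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class CCIR:
        question: str        # the commander's critical information requirement
        decision_point: str  # the decision the answer supports
        ltiov: datetime      # latest time information is of value

        def still_of_value(self, now: datetime) -> bool:
            # Collection against this CCIR is useful only up to the LTIOV.
            return now <= self.ltiov

    ccir = CCIR(
        question="Will the enemy commit its reserve along the northern route?",
        decision_point="Commit the brigade reserve",
        ltiov=datetime(2025, 6, 1, 18, 0),
    )
    print(ccir.still_of_value(datetime(2025, 6, 1, 12, 0)))  # True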

2-5. The commander may also identify essential elements of friendly information (EEFI). The EEFI are not part of the CCIRs; rather they establish friendly information to protect, not enemy information to obtain.

2-6. Commanders ensure that both intelligence preparation of the battlefield (IPB) and information collection planning are integrated staff efforts. Every staff member plays an important role in both tasks.

2-7. Information collection planning and assessment must be continuous. Commanders ensure they properly assign information collection tasks based on the unit’s abilities to collect. Therefore, commanders match their information requirements so as not to exceed the information collection and analytical ability of their unit.

2-8. Commanders assess operations. Commanders ensure collection activities provide the information needed. Timely reporting to the right analytical element at the right echelon is critical to information collection activities. Commanders continuously assess operations throughout the planning, preparation, and execution phases. The commander’s involvement and interaction enable the operations and intelligence officers to more effectively assess and update collection activities. The commander’s own assessment of the current situation and progress of the operation provides insight on what new information is needed and what is no longer required. The commander communicates this to the staff to assist them in updating CCIRs.

COMMANDER’S NEEDS

2-9. Staffs synchronize and integrate information collection activities with the warfighting functions based on the higher commander’s guidance and decisions. Commanders’ knowledge of collection activities enables them to focus the staff and subordinate commanders in planning, preparing, executing, and assessing information collection activities for the operation.

2-10. Commanders must understand the overall concept of operations from higher headquarters to determine specified and implied tasks and information requirements. There are a finite number of assets and resources for information collection activities. Commanders communicate this as guidance for planners and the staff.

2-11. Extended areas of operations, the necessity to conduct missions and develop information and intelligence over large areas, and extended time spans can surpass the organic capabilities of a unit. Commanders must be able to deal effectively with many agencies and organizations in the area of operations to help enable the unit to perform information collection activities. One of the essential aspects of this is terminology. When dealing with non-U.S. Army personnel and organizations, commanders ensure those involved understand the terms used and provide or request clarification as needed.

COMMANDER’S GUIDANCE

2-12. Commanders play a central role in planning primarily by providing guidance. This should include specific guidance for collection assets and required information. Commanders consider risks and provide guidance to the staff on an acceptable level of risk for information collection planning. The commander issues formal guidance at three specific points in the process:

  • Initial guidance following receipt of mission.
  • Commander’s planning guidance following mission analysis to guide course of action (COA) development.
  • Final planning guidance after the COA decision but before the final warning order (WARNO).

INITIAL GUIDANCE

2-13. After a unit receives a mission, the commander issues initial guidance. (FM 5-0 provides detailed information on the initial guidance.) The initial guidance accomplishes several things. It—

  • Begins the visualization process by identifying the tactical problem (the first step to problem solving).
  • Defines the area of operations. This presents a common operational picture for the commander and staff in seeing the terrain, including the populace.
  • Develops the initial commander’s intent, specifically key tasks (including tasks for reconnaissance), decisive point, and end state.
  • Includes any guidance for specific staff sections.

2-14. For information collection planning, the initial guidance includes—

  • Initial timeline for information collection planning.
  • Initial information collection focus.
  • Initial information requirements.
  • Authorized movement.
  • Collection and product development timeline.

2-15. The initial WARNO can alert information collection assets to begin collection activities at this time. If this is the case, the initial WARNO includes—

  • Named areas of interest (NAIs) to be covered.
  • Collection tasks and specific information requirements to be collected.
  • Precise guidance on infiltration method, reporting criteria and timelines, fire support, and the casualty evacuation plan.

COMMANDER’S PLANNING GUIDANCE

2-16. The commander issues the commander’s planning guidance during the mission analysis step of the military decisionmaking process (MDMP), following the approval of the restated mission and the mission analysis brief. Part of the commander’s planning guidance is directly related to collection activities—the initial CCIRs and information collection guidance. The guidance for planning should contain sufficient information for the operations officer to complete a draft information collection plan. As a minimum, the commander’s planning guidance includes—

  • Current CCIRs.
  • Focus and tempo.
  • Engagement criteria.
  • Acceptable risk to assets.

2-17. The commander issues the initial commander’s intent with the commander’s planning guidance. The staff verifies that the draft information collection plan is synchronized with the commander’s intent, assesses any ongoing information collection activities, and recommends changes to support the commander’s intent, CCIRs, and concept of operations.

FINAL PLANNING GUIDANCE

2-18. After the decision briefing, the commander determines a COA the unit follows and issues final planning guidance. Final planning guidance includes—

  • Any new CCIRs, including the LTIOV.

ROLE OF THE STAFF

2-19. The staff must function as a single, cohesive unit—a professional team. Effective staff members know their respective responsibilities and duties. They are also familiar with the responsibilities and duties of other staff members.

2-21. The G-2 (S-2) must work in concert with the entire staff to identify collection requirements and implement the information collection plan. The intelligence staff determines collection requirements (based upon inputs from the commander and other staff sections), develops the information collection matrix with input from the staff representatives, and continues to work with the staff planners to develop the information collection plan. The G-2 (S-2) also identifies those intelligence assets and resources—human intelligence, geospatial intelligence, measurement and signature intelligence, or signals intelligence—which can provide answers to the CCIRs.

2-22. The G-2X (S-2X) (hereafter referred to as the 2X) is the doctrinal term used to refer to the counterintelligence and human intelligence operations manager who works directly for the G-2 (S-2). The term also refers to the staff section led by the 2X.

2-24. The other members of the staff support the operations process. Through the conduct of the planning process, staffs develop requirements that are considered for inclusion as CCIRs and in the information collection plan. Staffs also monitor the situation and the progress of the operation toward the commander’s desired goal. Staffs also prepare running estimates. A running estimate is the continuous assessment of the current situation used to determine if the current operation is proceeding according to the commander’s intent and if planned future operations are supportable (FM 5-0). Staffs continuously assess how new information might affect the conduct of operations. They update running estimates and determine if adjustments to the operation are required. Through this process, the staffs ensure that the information collection plan remains updated as the situation changes, requirements are answered, and new requirements are developed.

OPERATIONS AND INTELLIGENCE WORKING GROUP

2-29. At division and higher echelons, there are dedicated cells responsible for information collection planning. At battalion and brigade, there are no designated cells for information collection planning; the operations and intelligence staffs provide this function. Depending on the availability of personnel, the commander may choose to designate an ad hoc group referred to as an operations and intelligence working group. Because the primary staff officers’ responsibilities cannot be delegated, the chief of staff or executive officer should direct and manage the efforts of this working group to achieve a fully synchronized and integrated information collection plan.

2-30. Unit standard operating procedures and battle rhythms determine how frequently an operations and intelligence working group meets. This working group should be closely aligned with both the current operations and future operations (or plans) cells to ensure requirements planning tools are properly integrated into the overall operation plan. These planning tools should also be nested in the concepts for plans.

2-32. The working group aims to bring together the staff sections to validate requirements and deconflict the missions and taskings of organic and attached collection assets. Input is required from each member of the working group. The outputs of the working group include the following:

  • An understanding of how the enemy is going to fight.
  • A refined list of requirements.
  • Confirmation of the final disposition of all collection assets.
  • Review of friendly force information requirements, priority intelligence requirements (PIRs), and EEFI.
  • Validation of outputs of other working groups (for example, fusion and targeting working groups).
  • Review and establishment of critical NAIs and targeted areas of interest (TAIs).

2-33. The working group meeting is a critical event. Staffs must integrate it effectively into the unit battle rhythm to ensure the collection effort provides focus to operations rather than disrupting them. Preparation and focus are essential to a successful working group. All representatives, at a minimum, must come to the meeting prepared to discuss available assets, capabilities, limitations, and requirements related to their functions. Planning the working group’s battle rhythm is paramount to conducting effective information collection operations. Staffs schedule the working group cycle to complement the higher headquarters’ battle rhythm and its subsequent requirements and timelines.

2-34. The G-3 (S-3) (or representative) comes prepared to provide the following:

  • The current friendly situation.
  • Current CCIRs.
  • The availability of collection assets.
  • Requirements from higher headquarters (including recent fragmentary orders or taskings).
  • Changes to the commander’s intent.
  • Changes to the task organization.
  • Planned operations.

FUSION WORKING GROUP

2-37. Typically, brigade and above form a fusion working group. This working group aims to refine and fuse the intelligence between the command and its subordinate units. The output of this working group provides the intelligence staff with refinements to the situation template and the event template. The working group also refines existing PIRs and recommends new PIRs to the operations and intelligence working group. Additionally, the working group reviews requirements to ensure currency.

TARGETING WORKING GROUP

2-38. The purpose of the targeting working group is to synchronize the unit’s targeting assets and priorities. For the staff, supporting the planning for the decide, detect, deliver, and assess (known as D3A) activities of the targeting process requires continuous updating of IPB products (such as situation templates and COA matrixes). The targeting working group considers targeting-related collection and exploitation requirements. It also recommends additional requirements to the operations and intelligence working group. Staffs articulate these requirements as early in the targeting process as possible to support target development and other assessments.

2-39. Information collection support to target development takes the decide, detect, deliver, and assess methodology and applies it to the development of targets. Units using other targeting techniques—like find, fix, finish, exploit, analyze, and disseminate (known as F3EAD) or find, fix, track, target, engage, and assess (known as F2T2EA)—require no adaptation to the information collection support to targeting process. Nominations for requests to current and future tasking orders, as well as refinements to the high-value target lists, are outputs of this working group.

2-40. The results of these working groups form the basis of the requests for information collection as well as products used by the intelligence staff in the creation of requirements planning tools. The operations staff integrates these tools in the creation of the information collection plan.

Chapter 3

Planning Requirements for and Assessing Information Collection

This chapter describes planning requirements for and assessing information collection activities. It discusses considerations for commanders for information collection planning. Then it discusses the support information collection provides to personnel recovery. It then covers the military decisionmaking process and information collection planning. Lastly, this chapter discusses assessing information collection activities.

THE OPERATIONS PROCESS AND INFORMATION COLLECTION

3-1. Commanders direct information collection activities by approving commander’s critical information requirements (CCIRs) and through driving the operations process. The success of information collection is measured by its contribution to the commander’s understanding, visualization, and decisionmaking. The operations process and information collection activities are mutually dependent. Commanders provide the guidance and focus that drive both by issuing their commander’s intent and approving CCIRs. The activities of information collection occur during all parts of the operation, providing continuous information to the operations process.

3-2. Throughout the operations process, commanders and staffs use integrating processes to synchronize the warfighting functions to accomplish missions. Information collection activities, as well as intelligence preparation of the battlefield (IPB), are among these integrating processes. Synchronization is the arrangement of action in time, space, and purpose to produce maximum relative combat power at a decisive place and time. This collaborative effort by the staff, with the commander’s involvement, is essential for synchronizing information collection with the overall operation. Planning, preparing, executing, and assessing information collection activities is a continuous cycle whose time frame depends on the echelon, the assets engaged, and the type of operation.

3-3. Conducting information collection activities consists of various staff functions: planning; collection; processing and exploitation; analysis and production; dissemination and integration; and evaluation and feedback. These functions focus on the commander’s requirements. Their purpose is to place all collection assets and resources into a single plan in order to capitalize on their different capabilities. The plan synchronizes and coordinates collection activities within the overall scheme of maneuver.

INFORMATION COLLECTION PLANNING CONSIDERATIONS

3-4. The information collection plan synchronizes the activities of the information collection assets to provide the commander with the intelligence required to confirm the course of action selection and meet targeting requirements. The intelligence staff, in coordination with the operations staff, ensures all available collection assets provide the required information. They also recommend adjustments to asset locations, if required.

3-5. To be effective, the information collection plan must be based on the initial threat assessment and modified as the intelligence running estimate changes. Other staff sections’ running estimates may contain requirements for inclusion into the information collection plan. Additionally, the plan must be synchronized with the scheme of maneuver and updated as that scheme of maneuver changes. Properly synchronized information collection planning begins with the development and updating of IPB (threat characteristics, enemy templates, enemy course of action statements, and, most importantly, an enemy event template or matrix). Properly synchronized information collection planning ends with well-defined CCIRs and collection strategies based on the situation and commander’s intent.

THE MDMP AND INFORMATION COLLECTION PLANNING

3-7. Information collection planning is embedded in the military decisionmaking process (MDMP) and depends extensively on all staff members thoroughly completing the IPB process. Information collection planning starts with receipt of the mission (which could be a warning order). Information collection directly supports the development of intelligence and operations products used throughout the decisionmaking process. At each step in the MDMP, the staff must prepare certain products used in the plan and prepare phases of the operations process, as described below.

3-8. Information collection activities are continuous, collaborative, and interactive. Several of the outputs from the various MDMP steps require the collaboration of the staff, especially the intelligence and operations staffs. The information collection plan cannot be developed without constant coordination among the entire staff. At every step in the MDMP, the intelligence staff must rely on input from the entire staff and cooperation with the operations staff to develop information collection products that support the commander’s intent and maximize collection efficiency for each course of action under consideration.

RECEIPT OF MISSION

3-9. Before receipt of the mission, the intelligence staff generates intelligence knowledge in anticipation of the mission. In addition to the knowledge already available, the intelligence staff uses intelligence reach and requests for additional information to higher headquarters to fill in the information gaps in the initial intelligence estimate.

3-10. When a mission is received, the commander and staff shift their efforts to describing the operational environment using mission variables and begin preparations for the MDMP. Commanders provide their initial guidance to the staff. The staff uses it to generate the initial information collection tasks to units and transmits it as part of the first warning order. In their guidance, commanders state the critical information required for the area of operations.

3-11. During the receipt of mission step, the staff gathers tools needed for the MDMP, begins the intelligence estimate, updates running estimates, and performs an initial assessment of the time available to subordinate units for planning, preparation, and execution. Since information collection assets are required early, the staff needs sufficient preparation time to begin sending information that the commander needs.

3-12. The information collection outputs from this step are—

  • The commander’s initial information collection guidance.
  • Intelligence reach tasks.
  • Requests for information to higher headquarters.
  • Directions for accessing on-going or existing information collection activities or joint ISR.
  • The first warning order (WARNO) with initial information collection tasks.

MISSION ANALYSIS

3-13. When mission analysis begins, the staff should have the higher headquarters plan or order and all available products. The staff adds their updated running estimates to the process. The initial information collection tasks issued with the first WARNO may yield information to be analyzed and evaluated for relevance to mission analysis. The commander provides initial guidance that the staff uses to capture the commander’s intent and develop the restated mission.

Analyze the Higher Headquarters Order

3-14. During mission analysis, the staff analyzes the higher headquarters order to extract information collection tasks and constraints such as limits of reconnaissance. The order also contains details on the availability of information collection assets from higher echelons and any allocation of those assets to the unit.

Perform Intelligence Preparation of the Battlefield

3-15. IPB is one of the most important prerequisites to information collection planning. During IPB, staffs develop several key products that aid information collection planning. Those products include—

  • Threat characteristics.
  • Terrain overlays.
  • The weather effects matrix.
  • Enemy situational templates and course of action statements.
  • The enemy event template and matrix.
  • The high-payoff target list.
  • An updated intelligence estimate including identified information gaps.

3-16. These products aid the staff in identifying—

  • Information gaps that can be answered by existing collection activities, intelligence reach, and requests for information to higher echelons. The remaining information gaps are used to develop requirements for information collection.
  • Threat considerations that may affect planning.
  • Terrain effects that may benefit, constrain, or limit the capabilities of collection assets.
  • Weather effects that may benefit, constrain, or negatively influence the capabilities of collection assets.
  • Civil considerations that might affect information collection planning.

3-17. The most useful product for information collection planning for the intelligence officer is the threat event template. Once developed, the threat event template is a key product in the development of the information collection plan. Likely threat locations, avenues of approach, infiltration routes, support areas, and areas of activity become named areas of interest (NAIs) or targeted areas of interest (TAIs) on which collection assets focus their collection efforts. Indicators, coupled with specific information requirements and essential elements of friendly information (EEFI), provide collection assets with the required information that units identify and report. FM 2-01.3 contains additional information on the IPB process and products.

3-18. As the staff completes mission analysis, the intelligence staff completes development of initial collection requirements. These collection requirements form the basis of the initial information collection plan, requests for collection support, and requests for information to higher and lateral units. When the mission analysis is complete, staffs have identified intelligence gaps, and planners have an initial plan on how to fill those gaps. Additionally, the operations officer and the remainder of the staff thoroughly understand the unit missions, tasks, and purposes.

Determine Specified, Implied, and Essential Tasks

3-19. The staff also identifies specified, implied, and essential information collection tasks. Specified tasks are directed toward subordinate units, systems, sensors, and Soldiers. Implied tasks determine how a system or sensor is initialized for collection. Essential information collection tasks are derived from specified and implied tasks. They are the focus of the information collection effort.

Review Available Assets

3-20. The staff must review all available collection assets, effectively creating an inventory of capabilities to be applied against collection requirements. Building the inventory of assets and resources begins with annex A of the higher headquarters order. The staff takes those assets attached or under operational control of the unit and adds those resources available from higher echelons and those belonging to adjacent units that might be of assistance. The higher headquarters order should specify temporary or permanent operating locations and the air tasking order details for aerial assets.

3-21. While reviewing the available collection assets, the staff evaluates the collection assets according to their capability and availability. First, the staff measures the capabilities of the collection assets. They must know and address the practical capabilities and limitations of all unit organic assets.

Determine Constraints

3-34. When determining constraints, the staff considers legal, political, operational, and rules of engagement factors that might constrain reconnaissance, security, intelligence operations, and surveillance. The staff must consider planning constraints such as limits of reconnaissance, earliest time information is of value, and not-earlier-than times. In some cases, the commander may impose constraints on the use of certain collection assets. In other cases, system-specific constraints—such as the weather, crew rest, or maintenance cycle limitations—may impose limits the staff must consider.

Identify Critical Facts and Assumptions

3-35. When staffs identify critical facts and assumptions, they identify critical facts and assumptions pertinent to information collection planning that they will use later in course of action (COA) development. For example, a critical fact might be that imagery requests may take 72 to 96 hours to fulfill or that the human intelligence effort requires significant time before a good source network is fully developed.

3-36. Assumptions developed for planning include the availability and responsiveness of organic assets and resources from higher echelons. For example, the staff might assume a certain percentage (representing hours) of unmanned aircraft system support is available on a daily basis, weather and maintenance permitting.

Perform Risk Assessment

3-37. When performing a risk assessment, the staff considers the asset’s effectiveness versus the protection requirements and risk to the asset. For example, placing a sensor forward enough on the battlefield that it can return valuable data and information may put the asset at high risk of being compromised, captured, or destroyed. The calculus of payoff versus loss will always be determined by mission variables and the commander’s decision.
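
Doctrine assigns this payoff-versus-loss judgment to the commander rather than to any formula. Purely as an illustration of the tradeoff, the comparison for a candidate asset placement could be framed as an expected-value check; every name, weight, and threshold below is invented:

```python
# Toy illustration only: doctrine leaves this judgment to the commander,
# not to a formula. All names, weights, and thresholds are hypothetical.

def placement_worthwhile(info_value: float,
                         p_collect: float,
                         asset_value: float,
                         p_loss: float,
                         commander_risk_tolerance: float = 1.0) -> bool:
    """Compare expected collection payoff against expected asset loss.

    info_value  -- staff estimate of the value of the information sought
    p_collect   -- probability the placement actually collects it
    asset_value -- cost of losing the asset (replacement, compromise)
    p_loss      -- probability the asset is compromised, captured, or destroyed
    """
    expected_payoff = info_value * p_collect
    expected_loss = asset_value * p_loss
    # The commander's risk tolerance scales how much loss is acceptable
    # per unit of payoff; >1.0 accepts more risk, <1.0 accepts less.
    return expected_payoff >= expected_loss / commander_risk_tolerance

# A forward sensor placement: high payoff, but high risk of compromise.
print(placement_worthwhile(info_value=9.0, p_collect=0.7,
                           asset_value=6.0, p_loss=0.5,
                           commander_risk_tolerance=0.8))
```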

3-38. In some cases, friendly forces may reveal a collection capability by taking certain actions. If it is important to keep a collection capability concealed, then the staff carefully considers every lethal or nonlethal action based on current intelligence.

Determine Initial CCIRs and EEFI

3-39. Determining initial CCIRs and EEFI is the most important prerequisite for information collection planning. The staff refines the list of requirements they derive from the initial analysis of information available and from intelligence gaps identified during IPB. They base this list on higher headquarters tasks, commander’s guidance, staff assessments, and subordinate and adjacent unit requests for information.

3-40. The staff then nominates these requirements to the commander to be CCIRs and EEFI. Commanders alone decide what information is critical based on their experience, the mission, the higher commander’s intent, and input from the staff. The CCIRs are the primary focus for information collection activities.

Develop the Initial Information Collection Plan

3-41. The initial information collection plan is crucial to begin or adjust the collection effort to help answer the requirements necessary for developing effective plans. The initial information collection plan sets information collection in motion. Staffs may issue it as part of a WARNO, a fragmentary order, or an operation order. As more information becomes available, staffs incorporate it into a complete information collection plan in the operation order.

3-42. At this point in the MDMP, the initial information collection plan has to be generic because the staffs have yet to develop friendly COAs. The basis for the plan is the commander’s initial information collection guidance, the primary information gaps identified by the staff during mission analysis, and the enemy situational template developed during IPB. (Chapter 4 contains additional information on tasking and directing collection assets.)

3-43. The intelligence staff creates the requirements management tools for the information collection plan. The operations staff is responsible for the information collection plan. During this step, the operations and intelligence staff work closely to ensure they fully synchronize and integrate information collection activities into the overall plan.

3-44. The operations officer considers several factors when developing the initial information collection plan, including—

  • Requirements for collection assets in follow-on missions.
  • The time available to develop and refine the initial information collection plan.
  • The risk the commander is willing to accept if information collection missions are begun before the information collection plan is fully integrated into the scheme of maneuver.
  • Insertion and extraction methods for reconnaissance, security, surveillance, and intelligence units.
  • The communications plan for transmission of reports from assets to tactical operations centers.
  • The inclusion of collection asset locations and movements into the fire support plan.
  • The reconnaissance handover with higher or subordinate echelons.
  • The sustainment support.
  • Legal support requirements.

Develop Requests for Information and Requests for Collection or Support

3-45. Submitting a request for information to the next higher or lateral echelon is a method for obtaining information not available with organic information collection assets. Units enter requests for information into a request for information management system where all units can see them. Hence, analysts several echelons above the actual requester become aware of the request and may be able to answer it.

3-46. When the unit cannot satisfy a collection requirement with its own assets, the intelligence staff composes and submits a request for information to the next higher echelon (or lateral units) for integration within its own information collection plan. At each echelon, the requirement is validated and a determination made as to whether or not that echelon can satisfy the requirement. If that echelon cannot satisfy the requirement, it is passed to the next higher echelon.
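
The validate-or-pass-upward flow described in paragraph 3-46 resembles a simple escalation loop. The sketch below is a hypothetical model only; the echelon chain and capability checks are invented and stand in for each echelon validating the requirement against its own assets and collection plan:

```python
# Hypothetical sketch of the RFI escalation described in paragraph 3-46.
# The echelon chain and capability checks are invented for illustration.

ECHELONS = ["battalion", "brigade", "division", "corps", "theater"]

def can_satisfy(echelon: str, requirement: str) -> bool:
    # Stand-in for each echelon validating the requirement against
    # its own assets and information collection plan.
    capabilities = {
        "battalion": {"local patrol reporting"},
        "brigade": {"local patrol reporting", "UAS coverage of an NAI"},
        "division": {"local patrol reporting", "UAS coverage of an NAI",
                     "aerial imagery"},
    }
    return requirement in capabilities.get(echelon, set())

def submit_rfi(requirement: str, origin: str) -> str:
    """Pass an RFI up the chain until some echelon can satisfy it."""
    start = ECHELONS.index(origin)
    for echelon in ECHELONS[start:]:
        if can_satisfy(echelon, requirement):
            return f"{echelon} integrates '{requirement}' into its collection plan"
    return f"'{requirement}' forwarded beyond {ECHELONS[-1]} (national resources)"

print(submit_rfi("aerial imagery", origin="battalion"))
```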

Develop and Synchronize Production Requirements

3-48. Intelligence staffs develop and synchronize production requirements to provide timely and relevant intelligence analysis and products to commanders, staff, and subordinate forces. Staffs use the unit’s battle rhythm as a basis for determining the daily, weekly, and monthly analytical products. The intelligence staff then designs an analytical and production effort to answer the CCIRs and meet the commander’s need for situational understanding and the staff’s need for situational awareness.

3-49. Intelligence production includes analyzing information and intelligence. It also includes presenting intelligence products, assessments, conclusions, or projections regarding the area of operations and threat forces in a format that aids the commander in achieving situational understanding. Staffs devote the remainder of the analytical effort to processing, analyzing, and disseminating data and information.

3-50. Commanders and staffs measure the success of the analytical and production effort by the products provided and their ability to answer or satisfy the CCIRs, intelligence requirements, and information requirements. For the purposes of the intelligence warfighting function, an intelligence requirement is a type of information requirement developed by subordinate commanders and staffs (including subordinate staffs) that requires dedicated collection.

COURSE OF ACTION DEVELOPMENT

3-51. Using the continually updated IPB products and the enemy situation template, the intelligence staff must integrate information collection considerations into the development of friendly COAs. In many cases, the information collection considerations for each COA are similar, varying with the characteristics of each friendly COA.

3-52. The operations and intelligence staffs must collaborate on information collection considerations to support each COA developed. The staff works to integrate its available resources into an integrated plan. Intelligence and operations staffs focus on the relationship of collection assets to other friendly forces, the terrain and weather, and the enemy.

3-53. The development of NAIs and TAIs based upon suspected enemy locations drives the employment of collection assets. The staff considers how to use asset mix, asset redundancy, and asset cueing to offset the limitations of the various collection assets.

3-54. During COA development, the staff refines and tailors the initial CCIRs for each COA. Technically, these are initial requirements for each course of action. Later in the MDMP, once a COA is approved, the commander approves the final CCIR, and the staff publishes it.

COURSE OF ACTION ANALYSIS (WAR-GAMING)

3-55. The intelligence staff records the results of COA analysis and uses that information to develop the requirements planning tools. The entire staff uses the action-reaction-counteraction process to move logically through the war-gaming process. These events have a bearing on the assets recommended for tasking to the operations staff.

ORDERS PRODUCTION

3-56. Orders production is putting the plan into effect and directing units to conduct specific information collection tasks. The staff prepares the order by turning the selected COA into a clear, concise concept of operations and supporting information. The order provides all the information subordinate commands need to plan and execute their operations. However, this is not the first time subordinate commanders and their staffs have seen this data. Within the parallel and collaborative planning process, planners at all echelons have been involved in the orders process.

ASSESS INFORMATION COLLECTION ACTIVITIES

3-57. Assessment guides every operations process activity. Assessment is the continuous monitoring and evaluation of the current situation, particularly the enemy, and progress of an operation. Assessing information collection activities enables the operations and intelligence staffs to monitor and evaluate the current situation and progress of the operation. The desired result is to ensure all collection tasks are completely satisfied in a timely manner.

3-58. Staffs begin assessing information collection task execution with monitoring and reporting by collection assets as they execute their missions. Staffs track reporting to determine how well the information collection assets satisfy their collection tasks. The desired result is relevant information delivered to the commander before the latest time information is of value.

Chapter 4
Tasking and Directing Information Collection

Commanders direct information collection activities by approving requirements and through mission command in driving the operations process. This chapter describes the tasking and directing of information collection assets. It discusses how the staff finalizes the information collection plan and develops the information collection overlay. It then discusses the development of the information collection scheme of support. Lastly, it discusses re-tasking assets.

TASK AND DIRECT INFORMATION COLLECTION

4-1. The operations staff integrates collection assets through a deliberate and coordinated effort across all warfighting functions. Tasking and directing information collection is vital to control limited collection assets. During tasking and directing, the staff recommends redundancy, mix, and cueing, as appropriate. The process of planning information collection activities begins once requirements are established, validated, and prioritized. Staffs accomplish information collection tasking by issuing warning orders, fragmentary orders, and operation orders. They accomplish directing information collection assets by continuously monitoring the operation. Staffs conduct re-tasking to refine, update, or create new requirements.

FINALIZE THE INFORMATION COLLECTION PLAN

4-2. To finalize the information collection plan, the staff must complete several important activities and review several considerations to achieve a fully synchronized, efficient, and effective plan. The information collection plan also applies to the rapid decisionmaking and synchronization process. Updating information collection activities during the execution and assessment phases of the operations process is crucial to the successful execution and subsequent adjustments of the information collection plan. The information collection plan is implemented through execution of asset tasking. The tasking process provides the selected collection assets with specific, prioritized requirements. When collection tasks or requests are passed to units, the staff provides specific details that clearly define the collection requirements. These requirements identify—

  • What to collect—specific information requirements and essential elements of friendly information.
  • Where to collect it—named areas of interest and targeted areas of interest.
  • When and how long to collect.
  • Why to collect—answer commander’s critical information requirements.
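
The four elements listed above amount to a small record that travels with every tasking. A minimal sketch of how they might be captured as a single structure follows; all field names and values are invented for illustration, not a fielded message format:

```python
# Minimal sketch of the specifics a collection tasking carries
# (what, where, when, why). Field names and values are illustrative only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CollectionTask:
    requirement: str      # what: specific information requirement or EEFI
    locations: list[str]  # where: NAIs or TAIs to be covered
    start: datetime       # when: beginning of collection
    ltiov: datetime       # when: latest time information is of value
    ccir: str             # why: the CCIR this tasking helps answer

task = CollectionTask(
    requirement="Report any armored movement along Route GOLD",
    locations=["NAI 3", "NAI 4"],
    start=datetime(2025, 6, 1, 6, 0),
    ltiov=datetime(2025, 6, 1, 18, 0),
    ccir="PIR 1: Will the enemy reinforce Objective IRON?",
)
print(task.requirement, "->", ", ".join(task.locations))
```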

4-3. The information collection plan is an execution order and should be published in the five-paragraph operation order (OPORD) format as a warning order (WARNO), an OPORD, or a fragmentary order (FRAGO). Staffs use the information collection plan for tasking, directing, and managing collection assets (both assigned and attached assets) to collect against the requirements. The operations officer tasks and directs information collection activities. The intelligence staff assists in the development of the information collection plan by providing the requirements planning tools. (Refer to TC 2-01 and ATTP 2-01 on how the requirements planning tools are developed.) Staffs—

  • Integrate the information collection plan into the scheme of maneuver.
  • Publish annex L (information collection) to the OPORD that tasks assets to begin the collection effort.
  • Ensure that the information collection plan addresses all of the commander’s requirements, that assigned and attached assets have been evaluated and recommended for information collection tasks within their capabilities, and that collection tasks outside the capabilities of assigned and attached assets have been prepared as requests for information to appropriate higher or lateral headquarters.
  • Publish any FRAGOs and WARNOs associated with information collection.

DEVELOP THE INFORMATION COLLECTION OVERLAY

4-6. The staff may issue an information collection overlay depicting the information collection plan in graphic form as an appendix or annex L to the OPORD.

DEVELOP THE INFORMATION COLLECTION SCHEME OF SUPPORT

4-8. The information collection scheme of support includes the planning and execution of operations and resources to support the Soldiers and units who perform information collection. This support includes fires, movement, protection, and sustainment (logistics, personnel services, health services support, and other sustainment related functions). The staff prepares the initial scheme of support. The operations officer approves the plan and tasks units.

PROVIDE SUPPORT TO SITE EXPLOITATION

4-10. Site exploitation is systematically searching for and collecting information, material, and persons from a designated location and analyzing them to answer information requirements, facilitate subsequent operations, or support criminal prosecution.

4-11. Site exploitation consists of a related series of activities to exploit personnel, documents, electronic data, and material captured, while neutralizing any threat posed by the items or contents. Units conduct site exploitation using one of two techniques: hasty and deliberate. Commanders choose the technique based on the time available and the unit’s collection capabilities.

MONITOR OPERATIONS

4-12. Staffs track the progress of the operation against the requirements and the information collection plan. The operation seldom progresses on the timelines assumed during planning and staff war-gaming. The staff watches for changes in tempo that require changes in reporting times, such as latest time information is of value (LTIOV). The intelligence and operations staffs coordinate any changes with all parties concerned, including commanders and appropriate staff sections.

CORRELATE REPORTS TO REQUIREMENTS

4-13. Correlating information reporting to the original requirement and evaluating reports is key to effective requirements management. This quality control effort helps the staff ensure timely satisfaction of requirements. Requirements management includes dissemination of reporting and related information to original requesters and other users.

4-14. To correlate reports, the staff tracks which specific collection task originates from which requirement to ensure the original requester and all who need the collected information actually receive it. For efficiency and timeliness, the staff ensures production tasks are linked to requirements. This allows the staff to determine which requirements have been satisfied and which require additional collection.
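
One way to picture the bookkeeping described above is an index from each collection task back to its originating requirement, so every inbound report reaches the original requester and other users. The identifiers and data model below are invented for illustration:

```python
# Hypothetical sketch of correlating reports to requirements (4-13, 4-14).
# Identifiers and structures are invented for illustration.

# Each collection task is traced back to the requirement that generated it.
task_to_requirement = {"TASK-01": "PIR 1", "TASK-02": "PIR 1", "TASK-03": "PIR 2"}

# Requesters (and other users) registered against each requirement.
requirement_subscribers = {"PIR 1": ["G-2", "fires cell"], "PIR 2": ["G-2"]}

satisfied: set[str] = set()

def route_report(task_id: str, report: str) -> None:
    """Send a report to everyone who needs it and track satisfaction."""
    requirement = task_to_requirement.get(task_id)
    if requirement is None:
        print(f"{task_id}: no originating requirement on file; screen manually")
        return
    for user in requirement_subscribers[requirement]:
        print(f"disseminate to {user}: [{requirement}] {report}")
    satisfied.add(requirement)  # in practice, satisfaction needs evaluation

route_report("TASK-02", "Two tanks observed at NAI 3, 0640 local")
print("requirements still open:", set(task_to_requirement.values()) - satisfied)
```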

4-15. The staff addresses the following potential challenges:

  • Large volumes of information that could inundate the intelligence analysis section. The intelligence staff may have trouble finding the time to correlate each report to a requirement.
  • Reports that partially satisfy a number of collection tasks. Other reports may have nothing to do with the collection task.
  • Reported information that fails to refer to the original task that drove collection.
  • Circular reporting and spam or unnecessary message traffic that causes consternation and wastes valuable time.

SCREEN REPORTS

4-16. The staff screens reports to determine whether the collection task has been satisfied. In addition, the staff screens each report for the following criteria:

  • Relevance. Does the information actually address the assigned collection task? If not, can the staff use this information to satisfy other requirements?
  • Completeness. Is essential information missing? (Refer to the original collection task.)
  • Timeliness. Did the asset report by the LTIOV established in the original task?
  • Opportunities for cueing. Can this asset or another asset take advantage of new information to increase the effectiveness and efficiency of the overall information collection effort? If the report suggests an opportunity to cue other assets, intelligence and operations staffs immediately cue them and record any new requirements in the information collection plan.
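
As a hedged illustration, the four screening questions above could be applied as a simple checklist run against every inbound report; the report fields, identifiers, and returned actions are all assumptions, not a fielded format:

```python
# Illustrative screening checklist for paragraph 4-16.
# Report fields and follow-up actions are assumptions, not doctrine.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Report:
    task_id: str
    answers_task: bool        # relevance: addresses the assigned collection task
    missing_essentials: bool  # completeness: essential information absent
    received: datetime
    ltiov: datetime           # timeliness: latest time information is of value
    suggests_cueing: bool     # opportunity to cue another asset

def screen(report: Report) -> list[str]:
    """Return the follow-up actions the screening criteria call for."""
    actions = []
    if not report.answers_task:
        actions.append("check whether the report satisfies another requirement")
    if report.missing_essentials:
        actions.append("request the missing essential information")
    if report.received > report.ltiov:
        actions.append("flag late reporting against the original task")
    if report.suggests_cueing:
        actions.append("cue other assets and record new requirements in the plan")
    return actions or ["task satisfied; close out the collection task"]

r = Report("TASK-03", answers_task=True, missing_essentials=False,
           received=datetime(2025, 6, 1, 12, 0),
           ltiov=datetime(2025, 6, 1, 18, 0), suggests_cueing=True)
print(screen(r))
```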

4-17. Information collection assets sometimes submit reports that simply state nothing significant to report. These reports may convey that collection occurred, but no activity satisfying the information collection task was observed, which may itself be a significant indicator. Nothing significant to report is by no means a reliable indicator of the absence of activity.

PROVIDE FEEDBACK

4-18. The staff provides feedback to all collection assets on their mission effectiveness and to analytic sections on their production. Normally the mission command element of that unit provides this feedback. Feedback reinforces whether collection or production satisfies the original task or request and provides guidance if it does not. Feedback is essential to maintaining information collection effectiveness and alerting leaders of deficiencies to be corrected.

4-19. As the operation continues, the intelligence and operations staffs track the status of each collection task, analyze reporting, and ultimately satisfy requirements. They pay particular attention to assets not producing required results, which may trigger adjustments to the information collection plan. During execution, the staff assesses the value of the information from collection assets as well as develops and refines requirements to satisfy information gaps.

4-20. When reporting satisfies a requirement, the staff relieves the collection assets of further responsibility to collect against information collection tasks related to the satisfied requirement. The operations officer, in coordination with the intelligence staff, provides additional tasks to satisfy emerging requirements. The operations staff notifies—

  • Collection assets and their leadership of partially satisfied requirements that require continued collection, of collection tasks that remain outstanding, and of what remains to be done.

4-21. By monitoring operations, correlating reports to requirements, screening reports, and providing feedback, the staff ensures the most effective employment of collection assets.

UPDATE THE INFORMATION COLLECTION PLAN

4-22. Evaluation of reporting, production, and dissemination identifies updates for the information collection plan. As the current tactical situation changes, staffs adjust the overall information collection plan to synchronize collection tasks, optimizing collection and exploitation capabilities. They constantly update requirements to ensure that information gathering efforts are synchronized with current operations while also supporting future operations planning. As collected information answers requirements, the staff updates the information collection plan.

4-23.  The steps in updating the information collection plan are—

  • Cue assets to other collection requirements.
  • Eliminate satisfied requirements.
  • Develop and add new requirements.
  • Re-task assets.
  • Transition to the next operation.
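
Taken together, these steps read like a maintenance pass over the plan. The sketch below schematically combines three of them (eliminating satisfied requirements, adding new ones, and re-tasking); cueing and transition depend on the tactical situation, and every structure and name is invented for illustration:

```python
# Schematic sketch of part of the update cycle in paragraph 4-23.
# The plan structure and names are invented for illustration.

def update_collection_plan(plan: dict, satisfied: set, new_requirements: list,
                           idle_assets: list) -> dict:
    """One maintenance pass over an information collection plan."""
    # Eliminate satisfied (and no-longer-relevant) requirements.
    plan = {req: asset for req, asset in plan.items() if req not in satisfied}
    # Develop and add new requirements, initially without an asset.
    for req in new_requirements:
        plan.setdefault(req, None)
    # Re-task: give unassigned requirements a new task and purpose.
    unassigned = [req for req, asset in plan.items() if asset is None]
    for req, asset in zip(unassigned, idle_assets):
        plan[req] = asset
    return plan

plan = {"PIR 1": "UAS section", "PIR 2": "scout platoon"}
plan = update_collection_plan(plan, satisfied={"PIR 1"},
                              new_requirements=["PIR 3"],
                              idle_assets=["UAS section"])
print(plan)  # {'PIR 2': 'scout platoon', 'PIR 3': 'UAS section'}
```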

4-24. The steps in updating information collection taskings are collaborative efforts by the intelligence and operations staffs. Some steps predominantly engage the intelligence staff, others the operations staff. Some steps may require coordination with other staff sections, and others may engage the entire operations and intelligence working group.

Keep Information Collection Activities Synchronized with Operations

4-25. As execution of the commander’s plan progresses, the staff refines the decision point timeline estimates used to determine when the information is required.

Cue Assets to Other Collection Requirements

4-26. The intelligence and operations staffs track the status of collection assets, cueing and teaming assets together as appropriate to minimize the chance of casualties. For example, if a Soldier reports the absence of normal activity in a normally active market area, the staff could recommend redirecting an unmanned aircraft system or other surveillance means to monitor the area for a potential threat.

Eliminate Satisfied Requirements

4-27. During its evaluation of the information collection plan, the staff identifies requirements that were satisfied. The staff eliminates satisfied requirements and requirements that are no longer relevant, even if unsatisfied. When a requirement is satisfied or no longer relevant, the intelligence staff eliminates it from the information collection plan and updates any other logs or records.

RE-TASK ASSETS

4-28. The staff may issue orders to re-task assets, normally in consultation with the intelligence officer and other staff sections. Re-tasking assigns an information collection asset a new task and purpose.

DEVELOP AND ADD NEW REQUIREMENTS

4-29. As the operation progresses and the situation develops, commanders generate new requirements. The intelligence staff begins updating the requirements planning tools and prioritizes new requirements against remaining requirements. The intelligence staff consolidates the new requirements with the existing requirements, reprioritizes the requirements, evaluates resources based upon the consolidated listing and priorities, and makes appropriate recommendations to the commander and operations officer.
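
As an illustration, the consolidate-and-reprioritize step in this paragraph can be pictured as a merge followed by a sort; the requirement names and priority values below are invented:

```python
# Minimal sketch of consolidating and reprioritizing requirements (4-29).
# Priority numbers are illustrative; lower means more important.

existing = {"PIR 2": 2, "PIR 4": 4}
new = {"PIR 5": 1, "PIR 2": 3}  # PIR 2 re-nominated with a new priority

# Consolidate: merge, letting the latest nomination update the priority.
consolidated = {**existing, **new}

# Reprioritize: order the combined list for the commander's approval.
ranked = sorted(consolidated.items(), key=lambda item: item[1])
for requirement, priority in ranked:
    print(priority, requirement)
```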

TRANSITIONS

4-30. A transition occurs when the commander decides to change focus from one type of military operation to another. Updating information collection tasking may result in a change of focus for several collection assets. As with any other unit, collection assets may require rest and refit—or lead time for employment— to transition from one mission or operation to another effectively.

Appendix A

Information Collection Assets

This appendix discusses information collection assets available to Army commanders for the planning and execution of collection activities. It discusses those assets by level, phase, and echelon. Lastly, it discusses network-enabled information collection.

BACKGROUND

A-1. An information collection capability is any human or automated sensor, asset, or processing, exploitation, and dissemination system that can be directed to collect information that enables better decisionmaking, expands understanding of the operational environment, and supports warfighting functions in decisive action. Factors such as a unit’s primary mission, the typical size of its area of operations (AO), the number of personnel, and communications and network limitations significantly impact which sensors, platforms, and systems are fielded.

MONITOR THE TACTICAL PLAN

A-3. Staffs ensure the collection activities remain focused on the commander’s critical information requirements (CCIRs). They continuously update staff products and incorporate those products into the running estimates and common operational picture (COP). Lastly, they quickly identify and report threats and decisive points in the AO.

STRATEGIC

A-5. National and theater-level collection assets provide tactical forces updates before and during deployment. Theater-level shaping operations require actionable intelligence including adversary centers of gravity and decision points, as well as the prediction of adversary anti-access measures. Space-based resources are key to supporting situational awareness during deployment and entry phases because they—

  • Monitor protection indicators.
  • Provide warning of ballistic missile launches threatening aerial and sea ports of debarkation and other threats to arriving forces.
  • Provide the communications links to forces en route.
  • Provide meteorological information that could affect operations.

OPERATIONAL

A-6. The intelligence staff requests collection support with theater, joint, and national assets. Respective collection managers employ organic means to cover the seams and gaps between units. These means provide the deploying tactical force the most complete portrayal possible of the enemy and potential adversaries, the populace, and the environmental situation upon entry. The operational-level intelligence assets operate from a regional focus center. This regional focus center (located in the crisis area) assumes primary analytical overwatch for the alerted tactical maneuver elements.

NETWORK-ENABLED INFORMATION COLLECTION

A-42. The networking of all joint force elements creates capabilities for unparalleled information sharing and collaboration and a greater unity of effort via synchronization and integration of force elements at the lowest echelons. The Distributed Common Ground System (Army) (DCGS-A) provides network-centric, enterprise-level intelligence, weather, geospatial engineering, and space operations capabilities to maneuver, maneuver support, and sustainment organizations at all echelons, from battalion to joint task force. DCGS-A is being implemented to integrate intelligence tasking, collection, processing, and dissemination capabilities across the Army and joint community. The purpose of DCGS-A is to unite the different systems across the global information network. DCGS-A is the Army’s primary system for—

  • Receipt and processing of select information collection asset data.
  • Control of select Army sensor systems.
  • Fusion of sensor data and information.
  • Direction and distribution of relevant threat, terrain, weather, and civil considerations products and information.
  • Facilitation of friendly information and reporting.

Appendix B

The Information Collection Annex to the Operation Order

This appendix provides instructions for preparing Annex L (Information Collection) in Army plans and orders. It provides a format for the annex that can be modified to meet the requirements of the base order and operations, and an example information collection plan. Refer to ATTP 5-0.1 for additional guidance on formatting and procedures.

ANNEX L (INFORMATION COLLECTION)

B-1. The information collection annex clearly describes how information collection activities support the offensive, defensive, and stability or defense support of civil authorities operations throughout the conduct of the operations described in the base order. See figure B-1. It synchronizes activities in time, space, and purpose to achieve objectives and accomplish the commander’s intent for reconnaissance, surveillance, and intelligence operations (including military intelligence disciplines).

Appendix C

Joint Intelligence, Surveillance, and Reconnaissance

The Army conducts operations as part of a joint force. This appendix examines joint intelligence, surveillance, and reconnaissance activities as part of unified action. It discusses the joint intelligence, surveillance, and reconnaissance doctrine, resources, planning systems, considerations, and organizations.

UNIFIED ACTION

C-1. Unified action is the synchronization, coordination, and/or integration of the activities of governmental and nongovernmental entities with military operations to achieve unity of effort (JP 1). It involves the application of all instruments of national power, including actions of other government agencies and multinational military and nonmilitary organizations. Combatant as well as subordinate commanders use unified action to integrate and synchronize their operations directly with the activities and operations of other military forces and nonmilitary organizations in their area of operations.

C-2. Army forces operating in an operational area are exposed to many non-U.S. Army participants. Multinational formations, host-nation forces, other government agencies, contractors, and nongovernmental organizations are all found in the operational area. Each participant has distinct characteristics, vocabulary, and culture, and all can contribute to situational understanding. Commanders, Soldiers, and all who seek to gather information have much to gain by being able to work with and leverage the capabilities of these entities. The Army expands the joint intelligence, surveillance, and reconnaissance (ISR) doctrine (contained in JP 2-01) by defining information collection as an activity that focuses on answering the commander’s critical information requirements (see paragraph 1-3).

CONCEPTS OF JOINT INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE

C-3. Joint ISR is an intelligence function, and its collection systems are intelligence assets and resources under the control of the J-2. This is different from Army information collection. Joint ISR does not include reconnaissance and surveillance units. Joint usage of reconnaissance and surveillance refers to the missions conducted by predominantly airborne assets. Two key concepts impact how the Army conducts joint ISR in the joint operations area: integration and interdependence.

INTEGRATION

C-4. The Army uses integration to extend the principle of combined arms to operations conducted by two or more Service components. The combination of diverse joint force capabilities generates combat power more potent than the sum of its parts. This integration does not require joint command at all echelons; however, it does require joint interoperability at all echelons.

INTERDEPENDENCE

C-5. The Army uses interdependence to govern joint operations and impact joint ISR activities. This interdependence is the purposeful reliance by one Service’s forces on another Service’s capabilities to maximize the complementary and reinforcing effects of both. Army forces operate as part of an interdependent joint force. Areas of interdependence that directly enhance Army information collection activities include—

Joint command and control. Integrated capabilities that—

  • Gain information superiority through improved, fully synchronized, integrated ISR, knowledge management, and information management.
  • Share a common operational picture.
  • Improve the ability of joint force and Service component commanders to conduct operations.

Joint intelligence. Integrated processes that—

  • Reduce unnecessary redundancies in collection asset tasking through integrated ISR.
  • Increase processing and analytic capability.
  • Facilitate collaborative analysis.
  • Provide global intelligence production and dissemination.
  • Provide intelligence products that enhance situational understanding by describing and assessing the operational environment.

C-6. Other Services also rely on Army forces to complement their capabilities, including intelligence support, detainee and prisoner of war operations, and others.

JOINT INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE DOCTRINE

C-7. JP 2-01 governs joint ISR doctrine. The joint force headquarters in the theater of operations govern operational policies and procedures specific to that theater. Army personnel serving in joint commands must be knowledgeable of joint doctrine for ISR. Army personnel involved in joint operations must understand the joint operation planning process. The joint operation planning process focuses on the interaction between an organization’s commander and staff and the commanders and staffs of the next higher and lower commands. The joint operation planning process continues throughout an operation.

JOINT INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE PLANNING SYSTEMS

C-13. Two joint ISR planning systems—the collection management mission application and the Planning Tool for Resource, Integration, Synchronization, and Management (PRISM)—help facilitate access to joint resources. In joint collection management operations, the collection manager, in coordination with the operations directorate, forwards collection requirements to the component commander exercising tactical control over the theater reconnaissance and surveillance assets. A mission tasking order goes to the unit selected to be responsible for the collection operation. At the selected unit, the mission manager makes the final choice of the specific platforms, equipment, and personnel required for the collection operations based on operational considerations such as maintenance, schedules, training, and experience. The collection management mission application, used by the Air Force, is a web-centric information systems architecture that incorporates existing programs sponsored by several commands, Services, and agencies. It also provides tools for recording, gathering, organizing, and tracking intelligence collection requirements for all disciplines. PRISM, a subsystem of the collection management mission application, is a web-based management and synchronization tool used to maximize the efficiency and effectiveness of theater operations. PRISM creates a collaborative environment for resource managers, collection managers, exploitation managers, and customers.

JOINT INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE CONCEPT OF OPERATIONS

C-16. The counterpart to the joint ISR plan is the joint ISR concept of operations, which is developed in conjunction with operational planning. The joint ISR concept of operations is based on the collection strategy and ISR execution planning, and is developed jointly by the joint force J-2 and J-3. The joint ISR concept of operations addresses how all available ISR assets and associated tasking, processing, exploitation, and dissemination infrastructure, to include multinational or coalition and commercial assets, are used to answer the joint force’s intelligence requirements. It identifies asset shortfalls relative to the joint force’s validated priority intelligence requirements (PIRs). It requires periodic evaluation of the capabilities and contributions of all available ISR assets in order to maximize their efficient utilization, and to ensure the timely release of allocated ISR resources when no longer needed by the joint force. JP 2-01 chapter 2 discusses the concept of operations in detail.

NATIONAL INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE RESOURCES AND GUIDELINES

C-17. Within the context of the National Intelligence Priorities Framework, the concept of ISR operations may be used to justify requests for additional national ISR resources. National collection resources are leveraged against national priorities. Intelligence officers must remember that these assets are scarce and have a multitude of high-priority requirements.

NATIONAL INTELLIGENCE SUPPORT TEAMS

C-18. National intelligence support teams (NISTs) are formed at the request of a deployed joint or combined task force commander. NISTs are composed of intelligence and communications experts from the Defense Intelligence Agency, Central Intelligence Agency, National Geospatial-Intelligence Agency, National Security Agency, and other agencies as required to support the specific needs of the joint force commander. The Defense Intelligence Agency is the executive agent for all NIST operations. Once on station, the NIST supplies a steady stream of agency intelligence on local conditions and potential threats. The needs of the mission dictate the size and composition of NISTs.

C-19. Depending on the situation, NIST personnel are most often sent to support corps- or division-level organizations. However, during Operation Iraqi Freedom and Operation Enduring Freedom, national agencies placed personnel at the brigade combat team level in some cases.

PLANNING AND REQUESTS FOR INFORMATION SYSTEMS

C-20. Several national databases and Intelink Web sites contain information applicable to the intelligence preparation of the battlefield process and national ISR planning. Commanders and their staffs should review and evaluate those sites to determine the availability of current data, information, and intelligence products that might answer intelligence or information requirements.

REQUIREMENTS MANAGEMENT SYSTEM

C-21. The requirements management system provides the national and Department of Defense imagery communities with a uniform automated collection management system. The requirements management system manages intelligence requirements for the national and Department of Defense user community in support of the United States’ imagery and geospatial information system.

The requirements management system determines satisfaction of imagery requests, can modify imagery requests based on input from other sources of intelligence, and provides analytical tools for users to exploit.

C-22. The generated messages of the requirements management system are dispatched for approval and subsequent collection and exploitation tasking. The system is central to current and future integrated imagery and geospatial information management architectures supporting national, military, and civil customers.

C-23. Nominations management services provide the coordination necessary to accept user requirements for new information. These services aggregate, assign, and prioritize these user requirements. Nominations management services also track requirement satisfaction from the users.

NATIONAL SIGNALS INTELLIGENCE REQUIREMENTS PROCESS

C-24. The national signals intelligence requirements process (NSRP) is an integrated and responsive system of the policies, procedures, and technology used by the intelligence community to manage requests for national-level signals intelligence products and services. The NSRP replaced the previous system called the national signals intelligence requirement system.

C-25. The NSRP establishes an end-to-end cryptologic mission management tracking system based on information needs. Collectors of signals intelligence satisfy tactical through national-level consumer information needs based on NSRP guidance. The NSRP improves the consumer’s ability to communicate with the collector by adding focus and creating a mechanism for accountability and feedback.

GUIDELINES FOR ACCESSING NATIONAL RESOURCES FOR INFORMATION

C-29. Depending upon local procedures and systems available, the Army intelligence officer may use various means to submit a request for information. The guidelines below assist in accessing national-level resources to answer the request for information—

  • Know the PIRs and identify gaps that exist in the intelligence databases and products.
  • Know what collection assets are available from supporting and supported forces.
  • Understand the timelines for preplanned and dynamic collection requests for particular assets.
  • Identify collection assets and dissemination systems that may help answer the commander’s PIRs.
  • Ensure liaison and coordination elements are aware of PIRs and the timelines for satisfying them. Ensure PIRs are tied to specific operational decisions.
  • During planning, identify collection requirements and any trained analyst augmentation required to support post-strike battle damage assessment or other analysis requirements.
  • Plan for cueing to exploit collection platforms.

JOINT INTELLIGENCE, SURVEILLANCE, AND RECONNAISSANCE CONSIDERATIONS

C-30. Communication and cooperation with other agencies and organizations in the joint operations area can enhance ISR collection efforts, creating sources of information with insights not otherwise available. Commanders must understand the respective roles and capabilities of the civilian organizations in the joint operations area to coordinate most effectively. Civilian organizations have different organizational cultures and norms. Some organizations may work willingly with Army forces while others may not. Some organizations are particularly sensitive about being perceived as involved in intelligence operations with the military. Some considerations in obtaining the valuable information these organizations may have access to are—

  • Building a relationship—this takes time, effort, and a willingness to schedule time to meet with individuals.
  • Patience—it is best not to expect results quickly and to avoid the appearance of tasking other agencies to provide information.
  • Reciprocity—U.S. forces often can provide assistance or support that facilitates cooperation.
  • Mutual interests—other organizations may have the same interests as U.S. forces (such as increased security).
  • Trust—it should be mutual. At a minimum, organizations must trust that U.S. forces will not abuse the relationship and that the information is provided in good faith.

C-31. Commanders cannot task civilian organizations to collect information. However, U.S. government intelligence and law enforcement agencies normally collect or have access to information as part of their operations. These organizations may benefit from mutual sharing of information and can be an excellent resource.

INTERGOVERNMENTAL AND NONGOVERNMENTAL ORGANIZATIONS

C-39. In addition to working with U.S. government agencies, unified action involves synchronizing joint or multinational military operations with activities of other government agencies, intergovernmental organizations, nongovernmental organizations, and contractors. These organizations may have significant access, specialized knowledge, or insight and understanding of the local situation because of the nature of what they do. These organizations vary widely in their purposes, interests, and ability or willingness to cooperate with the information-gathering activities of U.S. forces. It is often preferable to simply cultivate a relationship that enables the exchange of information without revealing specific requirements.

Notes on Intelligence Essentials for Everyone

By Lisa Krizan

JOINT MILITARY INTELLIGENCE COLLEGE, WASHINGTON, DC
June 1999

INTELLIGENCE ESSENTIALS FOR EVERYONE

Preface

The “importance of understanding” has become almost an obsession with significant portions of American business. There remain, however, many companies that attempt to operate as they traditionally have — placing great faith in the owner’s or manager’s judgment as to what is required to remain competitive.

In this paper, the author has articulated clearly the fundamentals of sound intelligence practice and has identified some guidelines that can lead toward creation of a solid intelligence infrastructure. These signposts apply both to government intelligence and to business. Good intelligence should always be based on validated requirements, but it may be derived from a wide variety of sources, not all of which are reliable.

Understanding the needs of the consumer and the sources available enables an analyst to choose the correct methodology to arrive at useful answers. The author has laid out in clear, concise language a logical approach to creating an infrastructure for government and business. Every system will have flaws, but this discussion should help the reader minimize those weaknesses. It is an important contribution to the education of government and business intelligence professionals.

James A. Williams, LTG, U.S. Army (Ret.)
Former Director, Defense Intelligence Agency

Foreword

Decades of government intelligence experience and reflection on that experience are captured in this primer. Ms. Krizan combines her own findings on best practices in the intelligence profession with the discoveries and ruminations of other practitioners, including several Joint Military Intelligence College instructors and students who preceded her. Many of the selections she refers to are from documents that are out of print or have wrongly been consigned to the dustbin.

This primer reviews and reassesses Intelligence Community best practices with special emphasis on how they may be adopted by the private sector. The government convention of referring to intelligence users as “customers” suggests by itself the demonstrable similarities between government intelligence and business information support functions.

The genesis for this study was the author’s discovery of a need to codify for the Intelligence Community certain basic principles missing from the formal training of intelligence analysts. At the same time, she learned of requests from the private sector for the same type of codified, government best practices for adaptation to the business world. As no formal mechanism existed for an exchange of these insights between the public and private sectors, Ms. Krizan developed this paper as an adjunct to her Master’s thesis, Benchmarking the Intelligence Process for the Private Sector. Her thesis explores the rationale and mechanisms for benchmarking the intelligence process in government, and for sharing the resultant findings with the private sector.

Dr. Russell G. Swenson, Editor and Director, Office of Applied Research

PROLOGUE: INTELLIGENCE SHARING IN A NEW LIGHT

Education is the cheapest defense of a nation.

— Edmund Burke, 18th-century British philosopher

National Intelligence Meets Business Intelligence

This intelligence primer reflects the author’s examination of dozens of unclassified government documents on the practice of intelligence over a period of nearly seven years. For the national security Intelligence Community (IC), it represents a concise distillation and clarification of the national intelligence function. To the private sector, it offers an unprecedented translation into lay terms of national intelligence principles and their application within and potentially outside of government.1 Whereas “intelligence sharing” has traditionally been a government-to-government transaction, the environment is now receptive to government-private sector interaction.

The widespread trend toward incorporating government intelligence methodology into commerce and education was a primary impetus for publishing this document. As economic competition accelerates around the world, private businesses are initiating their own “business intelligence” (BI) or “competitive intelligence” services to advise their decisionmakers. Educators in business and academia are following suit, inserting BI concepts into professional training and college curricula.

Whereas businesses in the past have concentrated on knowing the market and making the best product, they are shifting their focus to include knowing, and staying ahead of, competitors. This emphasis on competitiveness requires the sophisticated production and use of carefully analyzed information tailored to specific users; in other words, intelligence. But the use of intelligence as a strategic planning tool, common in government, is a skill that few companies have perfected.

The Society of Competitive Intelligence Professionals (SCIP), headquartered in the Washington, DC area, is an international organization founded in 1986 to “assist members in enhancing their firms’ competitiveness through a greater… understanding of competitor behaviors and future strategies as well as the market dynamics in which they do business.” SCIP’s code of conduct specifically promotes ethical and legal BI practices. The main focus of “collection” is on exploiting online and open-source information services, and the theme of “analysis” is to go beyond mere numerical and factual information, to interpretation of events for strategic decisionmaking.

Large corporations are creating their own intelligence units, and a few are successful at performing analysis in support of strategic decisionmaking. Others are hiring BI contractors, or “out-sourcing” this function. However, the majority of businesses having some familiarity with BI are not able to conduct rigorous research and analysis for value-added reporting. According to University of Pittsburgh professor of Business Administration John Prescott, no theoretical framework exists for BI. He believes that most studies done lack the rigor that would come with following sound research-design principles. By his estimate, only one percent of companies have a research-design capability exploitable for BI applications.7 At the same time, companies are increasingly opting to establish their own intelligence units rather than purchasing services from BI specialists. The implication of this trend is that BI professionals should be skilled in both intelligence and in a business discipline of value to the company.

The private sector can therefore benefit from IC expertise in disciplines complementary to active intelligence production, namely defensive measures. The whole concept of openness regarding intelligence practices may hinge upon the counter-balancing effect of self-defense, particularly as practiced through information systems security (INFOSEC) and operations security (OPSEC).9 Because the IC seeks to be a world leader in INFOSEC and OPSEC as well as intelligence production, defensive measures are an appropriate topic for dialogue between the public and private sectors.

The U.S. government INFOSEC Manual sums up the relationship between offense and defense in a comprehensive intelligence strategy in this way:

In today’s information age environment, control of information and information technology is vital. As the nation daily becomes more dependent on networked information systems to conduct essential business, including military operations, government functions, and national and international economic enterprises, information infrastructures are assuming increased strategic importance. This has, in turn, given rise to the concept of information warfare (INFOWAR) — a new form of warfare directed toward attacking (offensive) or defending (defensive) such infrastructures.10

Giving citizens the tools they need to survive INFOWAR is one of the IC’s explicit missions. This intelligence primer can assist that mission by offering a conceptual and practical “common operating environment” for business and government alike.

Assessing and Exchanging Best Practices

In documenting the essentials of intelligence, this primer is an example of benchmarking, a widely used process for achieving quality in organizations.

Benchmarking normally assesses best professional practices, developed and refined through experience, for carrying out an organization’s core tasks. An additional aim of benchmarking is to establish reciprocal relationships among best-in-class parties for the exchange of mutually beneficial information. Because the IC is the de facto functional leader in the intelligence profession, and is publicly funded, it is obligated to lead both the government and the private sector toward a greater understanding of the intelligence discipline.

In the mid-1990s, as national intelligence agencies began to participate in international benchmarking forums, individuals from the private sector began to request practical information on the intelligence process from IC representatives. The requestors were often participants in the growing BI movement and apparently sought to adapt IC methods to their own purposes. Their circumspect counterparts in the government were not prepared to respond to these requests, preferring instead to limit benchmarking relationships to common business topics, such as resource management.

Demand in the private sector for intelligence skills can be met through the application of validated intelligence practices presented in this document. Conversely, the business-oriented perspective on intelligence can be highly useful to government intelligence professionals. As a BI practitioner explains, every activity in the intelligence process must be related to a requirement; otherwise it is irrelevant. Government personnel would benefit from this practical reminder in every training course and every work center. In the private sector, straying from this principle means wasting money and losing a competitive edge.

Curriculum exchanges between private sector educators and the IC are encouraged by legislation and by Congressional Commission recommendations,17 yet little such formal exchange has taken place.

  17. For example, the 1991 National Security Education Act (P.L. 102-183), the 1993 Government Performance and Results Act (P.L. 103-62), and the Congressional Report of the Commission on the Roles and Capabilities of the U.S. Intelligence Community, Preparing for the 21st Century: An Appraisal of U.S. Intelligence (Washington, DC: GPO, 1 March 1996), 87.

Whereas government practitioners are the acknowledged subject-matter experts in intelligence methodology, the private sector offers a wealth of expertise in particular areas such as business management, technology, the global marketplace, and skills training. Each has valuable knowledge to share with the other, and experience gaps to fill. On the basis of these unique needs and capabilities, the public and private sectors can forge a new partnership in understanding their common responsibilities, and this primer may make a modest contribution toward the exchange of ideas.

The following chapters outline validated steps to operating an intelligence service for both the government and the private sector. In either setting, this document should prove useful as a basic curriculum for students, an on-the-job working aid for practitioners, and a reference tool for experienced professionals, especially those teaching or mentoring others. Although the primer does not exhaustively describe procedures for quality intelligence production or defensive measures, it does offer the business community fundamental concepts that can transfer readily from national intelligence to commercial applications, including competitive analysis, strategic planning and the protection of proprietary information. Universities may incorporate these ideas into their business, political science, and intelligence studies curricula to encourage and prepare students to become intelligence practitioners in commerce or government.

PART I INTELLIGENCE PROCESS

[I]ntelligence is more than information. It is knowledge that has been specially prepared for a customer’s unique circumstances. The word knowledge highlights the need for human involvement. Intelligence collection systems produce… data, not intelligence; only the human mind can provide that special touch that makes sense of data for different customers’ requirements. The special processing that partially defines intelligence is the continual collection, verification, and analysis of information that allows us to understand the problem or situation in actionable terms and then tailor a product in the context of the customer’s circumstances. If any of these essential attributes is missing, then the product remains information rather than intelligence.18

Captain William S. Brei, Getting Intelligence Right: The Power of Logical Procedure, Occasional Paper Number Two (Washington, DC: Joint Military Intelligence College, January 1996), 4.

According to government convention, the author will use the term “customer” to refer to the intended recipient of an intelligence product — either a fellow intelligence service member, or a policy official or decisionmaker. The process of converting raw information into actionable intelligence can serve government and business equally well in their respective domains.

The Intelligence Process in Government and Business

Production of intelligence follows a cyclical process, a series of repeated and interrelated steps that add value to original inputs and create a substantially transformed product. That transformation is what distinguishes intelligence from a simple cyclical activity.

In government and private sector alike, analysis is the catalyst that converts information into intelligence for planners and decisionmakers.

Although the intelligence process is complex and dynamic, several component functions may be distinguished from the whole. In this primer, the components are identified as Intelligence Needs, Collection Activities, Processing of Collected Information, and Analysis and Production.

These labels should not be interpreted to mean that intelligence is a unidimensional and unidirectional process. “[I]n fact, the [process] is multidimensional, multidirectional, and — most importantly — interactive and iterative.”

The purpose of this process is for the intelligence service to provide decisionmakers with tools, or “products,” that assist them in identifying key decision factors.
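
To make the iterative, feedback-driven character of the process concrete, here is a minimal Python sketch. It is an illustration only, not a description of any actual intelligence system: the stage functions are stubs, and the point is the control flow, in which customer feedback can send the work back through the cycle.

    STAGES = ["needs definition", "collection", "processing", "analysis", "production"]

    def perform_stage(stage, requirement, prior_output):
        # Stub: each stage would add value to the output of the stage before it.
        return {"stage": stage, "requirement": requirement, "based_on": prior_output}

    def customer_accepts(product):
        # Stub for customer feedback; a real service would evaluate the product.
        return True

    def run_cycle(requirement, max_passes=3):
        """Iterate the component functions until the customer is satisfied."""
        product = None
        for _ in range(max_passes):
            for stage in STAGES:
                product = perform_stage(stage, requirement, product)
            if customer_accepts(product):
                return product   # requirement met
            # Otherwise feedback refines the requirement and the cycle repeats.
        return product           # best effort within resource limits

    final_product = run_cycle("assess competitor's production capacity")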

A nation’s power or a firm’s success results from a combination of factors, so intelligence producers and customers should examine potential adversaries and competitive situations from as many relevant viewpoints as possible. A competitor’s economic resources, political alignments, the number, education and health of its people, and apparent objectives are all important in determining the ability of a country or a business to exert influence on others. The eight subject categories of intelligence are exhaustive, but they are not mutually exclusive. Although dividing intelligence into subject areas is useful for analyzing information and administering production, it should not become a rigid formula.

Operational support intelligence incorporates all types of intelligence by use, but is produced in a tailored, focused, and timely manner for planners and operators of the supported activity.

How government and business leaders define their needs for these types of intelligence affects the intelligence service’s organization and operating procedures. Managers of this intricate process, whether in government or business, need to decide whether to make one intelligence unit responsible for all the component parts of the process or to create several specialized organizations for particular sub-processes.

Functional Organization of Intelligence

The national Intelligence Community comprises Executive Branch agencies that produce classified and unclassified studies on selected foreign developments as a prelude to decisions and actions by the president, military leaders, and other senior authorities.

Private sector organizations use open-source information to produce intelligence in a fashion similar to national authorities. By mimicking the government process of translating customer needs into production requirements, and particularly by performing rigorous analysis on gathered information, private organizations can produce assessments that aid their leaders in planning and carrying out decisions to increase their competitiveness in the global economy. This primer will point out why private entities may desire to transfer into their domain some well-honed proficiencies developed in the national Intelligence Community. At the same time, the Intelligence Community self-examination conducted in these pages may allow government managers to reflect on any unique capabilities worthy of further development and protection.

PART II
CONVERTING CUSTOMER NEEDS INTO INTELLIGENCE REQUIREMENTS

The articulation of the requirement is the most important part of the process, and it seldom is as simple as it might seem. There should be a dialogue concerning the requirement, rather than a simple assertion of need. Perhaps the customer knows precisely what is needed and what the product should look like. Perhaps… not. Interaction is required: discussion between ultimate user and principal producer. This is often difficult due to time, distance, and bureaucratic impediments, not to mention disparities of rank, personality, perspectives, and functions.

Defining the Intelligence Problem

Customer demands, or “needs,” particularly if they are complex and time-sensitive, require interpretation or analysis by the intelligence service before being expressed as intelligence requirements that drive the production process. This dialogue between intelligence producer and customer may begin with a simple set of questions and, if appropriate, progress to a more sophisticated analysis of the intelligence problem being addressed.

The “Five Ws” — Who, What, When, Where, and Why — are a good starting point for translating intelligence needs into requirements. A sixth related question, How, may also be considered. In both government and business, these questions form the basic framework for decisionmakers and intelligence practitioners to follow in formulating intelligence requirements and devising a strategy to satisfy them.
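
As one way of operationalizing this framework, the hypothetical Python record below captures the Five Ws, plus How, for a single requirement; the field values are invented examples, not a prescribed format.

    from dataclasses import dataclass

    @dataclass
    class Requirement:
        """One intelligence requirement expressed through the Five Ws plus How."""
        who: str    # actors or organizations of interest
        what: str   # activity, capability, or event in question
        when: str   # timeframe of interest and deadline for the answer
        where: str  # geographic area or market of interest
        why: str    # the decision the answer will support
        how: str    # mechanisms or methods of interest (the optional sixth question)

    req = Requirement(
        who="Competitor X",
        what="planned entry into the regional handheld-device market",
        when="next two quarters; answer needed within 30 days",
        where="North American retail channels",
        why="board decision on accelerating our own product launch",
        how="acquisition, licensing, or in-house development",
    )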

This ability to establish a baseline and set in motion a collection and production strategy is crucial to conducting a successful intelligence effort. Too often, both producers and customers waste valuable time and effort struggling to characterize for themselves a given situation, or perhaps worse, they hastily embark upon an action plan without determining its appropriateness to the problem. Employing a structured approach as outlined in the Taxonomy of Problem Types can help the players avoid these inefficiencies and take the first step toward generating clear intelligence requirements by defining both the intelligence problem and the requisite components to its solution.

Intelligence Problem Definition

A Government Scenario

The Severely Random problem type is one frequently encountered by the military in planning an operational strategy. This is the realm of wargaming. The initial intelligence problem is to identify all possible outcomes in an unbounded situation, so that commanders can generate plans for every contingency. The role of valid data is relatively minor, while the role of judgment is great, as history and current statistics may shed little light on how the adversary will behave in a hypothetical situation, and the progress and outcome of an operation against that adversary cannot be predicted with absolute accuracy. Therefore, the analytical task is to define and prepare for all potential outcomes. The analytical method is role playing and wargaming: placing oneself mentally in the imagined situation, and experiencing it in advance, even to the point of acting it out in a realistic setting. After experiencing the various scenarios, the players subjectively evaluate the outcomes of the games, assessing which ones may be plausible or expected to occur in the real world. The probability of error in judgment here is inherently high, as no one can be certain that the future will occur exactly as events unfolded in the game. However, repeated exercises can help to establish a measure of confidence, for practice in living out these scenarios may enable the players to more quickly identify and execute desired behaviors, and avoid mistakes in a similar real situation.

A Business Scenario

The Indeterminate problem type is one facing the entrepreneur in the modern telecommunications market. Predicting the future for a given proposed new technology or product is an extremely imprecise task fraught with potentially dire, or rewarding, consequences. The role of valid data is extremely minor here, whereas analytical judgments about the buying public’s future — and changing — needs and desires are crucial. Defining the key factors influencing the future market is the analytical task, to be approached via the analytical method of setting up models and scenarios: the if/then/else process. Experts in the proposed technology or market are then employed to analyze these possibilities. Their output is a synthesized assessment of how the future will look under various conditions with regard to the proposed new product. The probability of error in judgment is extremely high, as the decision is based entirely on mental models rather than experience; after all, neither the new product nor the future environment exists yet. Continual reassessment of the changing factors influencing the future can help the analysts adjust their conclusions and better advise decisionmakers on whether, and how, to proceed with the new product.

Generating Intelligence Requirements

Once they have agreed upon the nature of the intelligence problem at hand, the intelligence service and the customer together can next generate intelligence requirements to drive the production process. The intelligence requirement translates customer needs into an intelligence action plan. A good working relationship between the two parties at this stage will determine whether the intelligence produced in subsequent stages actually meets customer needs.

As a discipline, intelligence seeks to remain an independent, objective advisor to the decisionmaker. The realm of intelligence is that of “fact,” considered judgment, and probability, but not prescription. It does not tell the customer what to do to meet an agenda, but rather, identifies the factors at play, and how various actions may affect outcomes. Intelligence tends to be packaged in standard formats and, because of its methodical approach, may not be delivered within the user’s ideal timeframe. For all these reasons, the customer may not see intelligence as a useful service.

Understanding each other’s views on intelligence is the first step toward improving the relationship between customer and producer. The next step is communication. Free interaction among the players will foster agreement on intelligence priorities and result in products that decisionmakers recognize as meaningful to their agendas, yet balanced by rigorous analysis.

Types of Intelligence Requirements

Having thus developed an understanding of customer needs, the intelligence service may proactively and continuously generate intelligence collection and production requirements to maintain customer-focused operations. Examples of such internally generated specifications include analyst-driven, events-driven, and scheduled requirements.

Further distinctions among intelligence requirements include timeliness and scope, or level, of intended use. Timeliness of requirements is established to meet standing (long-term) and ad hoc (short-term) needs. When the customer and intelligence service agree to define certain topics as long-term intelligence issues, they generate a standing requirement to ensure that a regular production effort can, and will, be maintained against that target. The customer will initiate an ad hoc requirement upon realizing a sudden short-term need for a specific type of intelligence, and will specify the target of interest, the coverage timeframe, and the type of output desired.

The scope or level of intended use of the intelligence may be characterized as strategic or tactical. Strategic intelligence is geared to a policymaker dealing with big-picture issues affecting the mission and future of an organization: the U.S. President, corporate executives, high-level diplomats, or military commanders of major commands or fleets. Tactical intelligence serves players and decisionmakers “on the ground” engaged in current operations: trade negotiators, marketing and sales representatives, deployed military units, or product developers.
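
These two distinctions, timeliness (standing versus ad hoc) and scope (strategic versus tactical), can be treated as independent axes when logging requirements. The Python sketch below is hypothetical; the enum values and sample tasking are invented.

    from enum import Enum

    class Timeliness(Enum):
        STANDING = "long-term topic with a regular production effort maintained"
        AD_HOC = "short-term need with a specified target and coverage timeframe"

    class Scope(Enum):
        STRATEGIC = "big-picture issues for senior leadership"
        TACTICAL = "support to players engaged in current operations"

    # A hypothetical ad hoc, tactical tasking:
    tasking = {
        "target": "competitor pricing moves in one sales region",
        "timeliness": Timeliness.AD_HOC,
        "scope": Scope.TACTICAL,
        "coverage": "next 30 days",
        "output": "two-page desk note",
    }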

Ensuring that Requirements Meet Customer Needs

Even when they follow this method of formulating intelligence requirements together, decisionmakers and their intelligence units in the public and private sectors may still have an incomplete grasp of how to define their needs and capabilities — until they have evaluated the resultant products.

Customer feedback, production planning and tasking, as well as any internal product evaluation, all become part of the process of defining needs and creating intelligence requirements.

Whether in business or government, six fundamental values or attributes underlie the core principles from which all the essential intelligence functions are derived. The corollary is that intelligence customers’ needs may be defined and engaged by intelligence professionals using these same values.

Interpretation of these values turns a customer’s need into a collection and production requirement that the intelligence service understands in the context of its own functions. However, illustrating the complexity of the intelligence process, once this is done, the next step is not necessarily collection.

Rather, the next stage is analysis. Perhaps the requirement is simply and readily answered — by an existing product, by ready extrapolation from files or data bases, or by a simple phone call or short desk note based on an analyst’s or manager’s knowledge.

Consumers do not drive collection per se; analysts do — or should.

PART III COLLECTION

The collection function rests on research — on matching validated intelligence objectives to available sources of information, with the results to be transformed into usable intelligence. Just as within needs-definition, analysis is an integral function of collection.

Collection Requirements

The collection requirement specifies exactly how the intelligence service will go about acquiring the intelligence information the customer needs. Any one, or any of several, players in the intelligence system may be involved in formulating collection requirements: the intelligence analyst, a dedicated staff officer, or a specialized collection unit.

In large intelligence services, collection requirements may be managed by a group of specialists acting as liaisons between customers and collectors (people who actually obtain the needed information, either directly or by use of technical means). Within that requirements staff, individual requirements officers may be dedicated to a particular set of customers, a type of collection resource, or a specific intelligence issue. This use of collection requirements officers is prevalent in the government. Smaller services, especially in the private sector, may assign collection requirements management to one person or team within a multidisciplinary intelligence unit that serves a particular customer or that is arrayed against a particular topic area.

The requirements management function entails much more than simple administrative duties. It requires analytic skill to evaluate how well the customer has expressed the intelligence need; whether, how, and when the intelligence unit is able to obtain the required information through its available collection sources; and in what form to deliver the collected information to the intelligence analyst.

Collection Planning and Operations

One method for selecting a collection strategy is to first prepare a list of expected target evidence.

The collection requirements officer and the intelligence analyst for the target may collaborate in identifying the most revealing evidence of target activity, which may include physical features of terrain or objects, human behavior, or natural and man-made phenomena. The issue that can be resolved through this analysis is “What am I looking for, and how will I know it if I see it?”

Increasingly sophisticated identification of evidence types may reveal what collectible data are essential for drawing key conclusions, and therefore should be given priority; whether the evidence is distinguishable from innocuous information; and whether the intelligence service has the skills, time, money and authorization to collect the data needed to exploit a particular target. Furthermore, the collection must yield information in a format that is either usable in raw form by the intelligence analyst, or that can be converted practicably into usable form.
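
A minimal, invented example of such an expected-evidence list follows, with the recognition question made explicit for each indicator and the prioritization logic of the preceding paragraph applied; nothing in it comes from the source text.

    # Hypothetical expected-evidence list for a single target. Each entry
    # answers "What am I looking for, and how will I know it if I see it?"
    expected_evidence = [
        {"indicator": "unusual shipments to the competitor's pilot plant",
         "recognized_by": "freight records, supplier interviews",
         "distinguishable_from_innocuous": True,
         "essential_for_key_conclusion": True},
        {"indicator": "job postings for specialized process engineers",
         "recognized_by": "open-source hiring sites",
         "distinguishable_from_innocuous": False,  # fits several explanations
         "essential_for_key_conclusion": False},
    ]

    # Collect first against essential indicators that are unambiguous.
    plan = sorted(expected_evidence,
                  key=lambda e: (not e["essential_for_key_conclusion"],
                                 not e["distinguishable_from_innocuous"]))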

Finally, upon defining the collection requirement and selecting a collection strategy, the intelligence unit should implement that strategy by tasking personnel and resources to exploit selected sources, perform the collection, reformat the results if necessary to make them usable in the next stages, and forward the information to the intelligence production unit. This aspect of the collection phase may be called collection operations management. As with requirements management, it is often done by specialists, particularly in the large intelligence service. In smaller operations, the same person or team may perform some or all of the collection-related functions.

In comparison to the large, compartmentalized service, the smaller unit will likely experience greater overall efficiency of operations and fewer bureaucratic barriers to customer service. The same few people may act as requirements officers, operations managers and intelligence analysts/producers, decreasing the likelihood of communication and scheduling problems among them. This approach may be less expensive in terms of infrastructure and logistics than a functionally divided operation.

Careful selection and assignment of personnel who thrive in a multidisciplinary environment will be vital to the unit’s success, to help ward off potential worker stress and overload. An additional pitfall that the small unit should strive to avoid is the tendency to be self-limiting: overreliance on the same customer contacts, collection sources and methods, analytic approaches, and production formulas can lead to stagnation and irrelevance. The small intelligence unit should be careful to invest in new initiatives that keep pace with changing times and customer needs.

Collection Sources

The range of sources available to all intelligence analysts, including those outside of government, is of course much broader than the set of restricted, special sources available only for government use.

Four general categories serve to identify the types of information sources available to the intelligence analyst: people, objects, emanations, and records.

Strictly speaking, the information offered by these sources may not be called intelligence if the information has not yet been converted into a value-added product. In the government or private sector, collection may be performed by the reporting analyst or by a specialist in one or more of the collection disciplines.

The collection phase of the intelligence process thus involves several steps: translation of the intelligence need into a collection requirement, definition of a collection strategy, selection of collection sources, and information collection. The resultant collected information must often undergo a further conversion before it can yield intelligence in the analysis stage.

PART IV PROCESSING COLLECTED INFORMATION

From Raw Data to Intelligence Information

No matter what the setting or type of collection, gathered information must be packaged meaningfully before it can be used in the production of intelligence. Processing methods will vary depending on the form of the collected information and its intended use, but they include everything done to make the results of collection efforts usable by intelligence producers. Typically, “processing” applies to the techniques used by government intelligence services to transform raw data from special-source technical collection into intelligence information.

While collectors collect “raw” information, certain [collection] disciplines involve a sort of pre-analysis in order to make the information “readable” to the average all-source analyst.

Another term for processing, collation, encompasses many of the different operations that must be performed on collected information or data before further analysis and intelligence production can occur. More than merely physically manipulating information, collation organizes the information into a usable form, adding meaning where it was not evident in the original. Collation includes gathering, arranging, and annotating related information; drawing tentative conclusions about the relationship of “facts” to each other and their significance; evaluating the accuracy and reliability of each item; grouping items into logical categories; critically examining the information source; and assessing the meaning and usefulness of the content for further analysis. Collation reveals information gaps, guides further collection and analysis, and provides a framework for selecting and organizing additional information.

Examples of collation include filing documents, condensing information by categories or relationships, and employing electronic database programs to store, sort, and arrange large quantities of information or data in preconceived or self-generating patterns. Regardless of its form or setting, an effective collation method will have the following attributes:

  1. Be impersonal. It should not depend on the memory of one analyst; another person knowledgeable in the subject should be able to carry out the operation.
  2. Not become the “master” of the analyst or an end in itself.
  3. Be free of bias in integrating the information.
  4. Be receptive to new data without extensive alteration of the collating criterion.
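
A minimal Python sketch of a collation step consistent with these attributes: the grouping criterion is explicit rather than held in one analyst’s memory, and unfilled categories surface as gaps that guide further collection. The items and categories are invented.

    from collections import defaultdict

    def collate(items, categories):
        """Group items by an explicit, impersonal criterion and flag gaps."""
        shoebox = defaultdict(list)
        for item in items:
            shoebox[item.get("topic", "uncategorized")].append(item)
        gaps = [c for c in categories if c not in shoebox]
        return dict(shoebox), gaps

    items = [
        {"text": "Plant expansion permit filed", "source": "county records", "topic": "capacity"},
        {"text": "CFO hints at a 'new direction'", "source": "earnings call", "topic": "strategy"},
    ]
    grouped, gaps = collate(items, ["capacity", "strategy", "pricing"])
    print(gaps)  # ['pricing']: an information gap revealed by collation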

Evaluating and Selecting Evidence

To prepare collected information for further use, one must evaluate its relevance and value to the specific problem at hand. An examination of the information’s source and applicability to the intelligence issue can determine whether that information will be further employed in the intelligence production process. Three aspects to consider in evaluating the relevance of information sources are reliability, proximity, and appropriateness.

Reliability of a source is determined through an evaluation of its past performance; if the source proved accurate in the past, then a reasonable estimate of its likely accuracy in a given case can be made.

Proximity refers to the source’s closeness to the information. The direct observer or participant in an event may gather and present evidence directly, but in the absence of such firsthand information, the analyst must rely on sources with varying degrees of proximity to the situation. A primary source passes direct knowledge of an event on to the analyst. A secondary source provides information twice removed from the original event; one observer informs another, who then relays the account to the analyst. Such regression of source proximity may continue indefinitely, and naturally, the more numerous the steps between the information and the source, the greater the opportunity for error or distortion.

Appropriateness of the source rests upon whether the source speaks from a position of authority on the specific issue in question. As no one person or institution is an expert on all matters, the source’s individual capabilities and shortcomings affect the level of validity or reliability assigned to the information it provides regarding a given topic.

Parallel aspects apply to the content of the information itself. Plausibility refers to whether the information is true under any circumstances or only under certain conditions, either known or possible. Expectability is assessed in the context of the analyst’s prior knowledge of the subject. Support for information exists when another piece of evidence corroborates it — either the same information from a different source, or different information that points to the same conclusion.
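
These criteria lend themselves to a rough screening discipline. The sketch below is a toy model with invented weights, not a validated formula; it only shows how reliability, proximity, appropriateness, and corroboration might combine into a single score for triage.

    def screen_evidence(reliability, proximity_steps, appropriate, corroborated):
        """Toy 0-to-1 screening score for one item of collected information.

        reliability     -- 0-to-1 estimate from the source's past performance
        proximity_steps -- 0 for a direct observer; each relay adds distortion risk
        appropriate     -- True if the source has authority on this specific issue
        corroborated    -- True if independent evidence supports the item
        """
        score = reliability * (0.8 ** proximity_steps)  # decay per relay step
        if not appropriate:
            score *= 0.5                                # discount out-of-lane sources
        if corroborated:
            score = min(1.0, score + 0.2)               # support raises confidence
        return round(score, 2)

    print(screen_evidence(0.9, proximity_steps=2, appropriate=True, corroborated=True))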

PART V ANALYSIS

Analysis is the breaking down of a large problem into a number of smaller problems and performing mental operations on the data in order to arrive at a conclusion or a generalization. It involves close examination of related items of information to determine the extent to which they confirm, supplement, or contradict each other and thus to establish probabilities and relationships.

Analysis is not merely reorganizing data and information into a new format. At the very least, analysis should fully describe the phenomenon under study, accounting for as many relevant variables as possible. At the next higher level of analysis, a thorough explanation of the phenomenon is obtained, through interpreting the significance and effects of its elements on the whole. Ideally, analysis can reach successfully beyond the descriptive and explanatory levels to synthesis and effective persuasion, often referred to as estimation.

The purpose of intelligence analysis is to reveal to a specific decisionmaker the underlying significance of selected target information. Frequently intelligence analysis involves estimating the likelihood of one possible outcome, given the many possibilities in a particular scenario.

The mnemonic “Four Fs Minus One” may serve as a reminder of how to apply this criterion. Whenever the intelligence information allows, and the customer’s validated needs demand it, the intelligence analyst will extend the thought process as far along the Food Chain as possible, to the third “F” but not beyond to the fourth.

Types of Reasoning

Objectivity is the intelligence analyst’s primary asset in creating intelligence that meets the Four Fs Minus One criterion. More than simply a conscientious attitude, objectivity is “a professional ethic that celebrates tough-mindedness and clarity in applying rules of evidence, inference, and judgment.”

Four basic types of reasoning apply to intelligence analysis: induction, deduction, abduction and the scientific method.

Induction. The induction process is one of discovering relationships among the phenomena under study.

In the words of Clauser and Weir:

Induction is the intellectual process of drawing generalizations on the basis of observations or other evidence. Induction takes place when one learns from experience. For example, induction is the process by which a person learns to associate the color red with heat and heat with pain, and to generalize these associations to new situations.

Induction occurs when one is able to postulate causal relationships. Intelligence estimates are largely the result of inductive processes, and, of course, induction takes place in the formulation of every hypothesis. Unlike other types of intellectual activities such as deductive logic and mathematics, there are no established rules for induction.

Deduction. “Deduction is the process of reasoning from general rules to particular cases. Deduction may also involve drawing out or analyzing premises to form a conclusion.”

Deduction works best in closed systems such as mathematics, formal logic, or certain kinds of games in which all the rules are clearly spelled out.

However, intelligence analysis rarely deals with closed systems, so premises assumed to be true may in fact be false, and lead to false conclusions.

Abduction. Abduction is the process of generating a novel hypothesis to explain given evidence that does not readily suggest a familiar explanation. This process differs from induction in that it adds to the set of hypotheses available to the analyst. In inductive reasoning, the hypothesized relationship among pieces of evidence is considered to be already existing, needing only to be perceived and articulated by the analyst.

In abduction, the analyst creatively generates an hypothesis, then sets about examining whether the available evidence unequivocally leads to the new conclusion. The latter step, testing the evidence, is a deductive inference.

Examples of abductive reasoning in intelligence analysis include situations in which the analyst has a nagging suspicion that something of intelligence value has happened or is about to happen, but has no immediate explanation for this conclusion. The government intelligence analyst may conclude that an obscure rebel faction in a target country is about to stage a political coup, although no overt preparations for the takeover are evident. The business analyst may determine that a competitor company is on the brink of a dramatic shift from its traditional product line into a new market, even though its balance sheet and status in the industry are secure. In each case, the analyst, trusting this sense that the time is right for a significant event, will set out to gather and evaluate evidence in light of the new, improbable, yet tantalizing hypothesis.

Scientific Method. The scientific method combines deductive and inductive reasoning: Induction is used to develop the hypothesis, and deduction is used to test it. In science, the analyst obtains data through direct observation of the subject and formulates an hypothesis to explain conclusions suggested by the evidence.

Methods of Analysis

Opportunity Analysis. Opportunity analysis identifies for policy officials opportunities or vulnerabilities that the customer’s organization can exploit to advance a policy, as well as dangers that could undermine a policy.60 It identifies institutions, interest groups, and key leaders in a target country or organization that support the intelligence customer’s objective; the means of enhancing supportive elements; challenges to positive elements (which could be diminished or eliminated); logistic, financial, and other vulnerabilities of adversaries; and activities that could be employed to rally resources and support to the objective.

Jack Davis notes that in the conduct of opportunity analysis,

[T]he analyst should start with the assumption that every policy concern can be transformed into a legitimate intelligence concern. What follows from this is that analysts and their managers should learn to think like a policymaker in order to identify the issues on which they can provide utility, but they should always [behave like intelligence producers]. … The first step in producing effective opportunity analysis is to redefine an intelligence issue in the policymaker’s terms. This requires close attention to the policymaker’s role as “action officer” – reflecting a preoccupation with getting things started or stopped among adversaries and allies…. It also requires that analysts recognize a policy official’s propensity to take risk for gain….[P]olicymakers often see, say, a one-in-five chance of turning a situation around as a sound investment of [organizational] prestige and their professional energies….[A]nalysts have to search for appropriate ways to help the policymaker inch the odds upward – not by distorting their bottom line when required to make a predictive judgment, or by cheerleading, but by pointing to opportunities as well as obstacles. Indeed, on politically sensitive issues, analysts would be well advised to utilize a matrix that first lists and then assesses both the promising and discouraging signs they, as objective observers, see for… policy goals…. [P]roperly executed opportunity analysis stresses information and possibilities rather than [explicit] predictions.

Jack Davis, The Challenge of Opportunity Analysis (Washington, DC: Center for the Study of Intelligence, July 1992).

Linchpin Analysis. Linchpin analysis is one way of showing intelligence managers and policy officials alike that all the bases have been touched. Linchpin analysis, a colorful term for structured forecasting, is an anchoring tool that seeks to reduce the hazard of self-inflicted intelligence error as well as policymaker misinterpretation.

Analogy. Analogies depend on the real or presumed similarities between two things. For example, analysts might reason that because two aircraft have many features in common, they may have been designed to perform similar missions. The strength of any such analogy depends upon the strength of the connection between a given condition and a specified result.

In addition, the analyst must consider the characteristics that are dissimilar between the phenomena under study. The dissimilarities may be so great that they render the few similarities irrelevant.

One of the most widely used tools in intelligence analysis is the analogy. Analogies serve as the basis for most hypotheses, and rightly or wrongly, underlie many generalizations about what the other side will do and how they will go about doing it.

Thus, drawing well-considered generalizations is the key to using analogy effectively. When postulating human behavior, the analyst may effectively use analogy by applying it to a specific person acting in a situation similar to one in which his actions are well documented…

Customer Focus

As with the previous stages of the intelligence process, effective analysis depends upon a good working relationship between the intelligence customer and producer.

The government intelligence analyst is generally considered a legitimate and necessary policymaking resource, and even fairly junior employees may be accepted as national experts by virtue of the knowledge and analytic talent they offer to high-level customers. Conversely, in the private sector, the intelligence analyst’s corporate rank is generally far below that of a company vice-president or CEO. The individual analyst may have little access to the ultimate customer, and the intelligence service as a whole may receive little favor from a senior echelon that makes little distinction between so-called intelligence and the myriad other decisionmaking inputs. When private sector practitioners apply validated methods of analysis geared to meet specific customer needs, they can win the same kind of customer appreciation and support as that enjoyed by government practitioners.

Statistical Tools

Additional decisionmaking tools derived from parametric or non-parametric statistical techniques, such as Bayesian analysis, are sometimes used in intelligence.
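
Since the text names Bayesian analysis without elaborating, here is one minimal illustration; the probabilities are invented. Bayes’ rule revises the estimated probability of a hypothesis as each new piece of evidence arrives.

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|E) for hypothesis H after observing evidence E."""
        numerator = p_e_given_h * prior
        return numerator / (numerator + p_e_given_not_h * (1.0 - prior))

    # Hypothetical: H = "competitor launches the new product this year".
    p = 0.30                         # prior from baseline analysis
    p = bayes_update(p, 0.80, 0.20)  # evidence: specialized engineers hired
    p = bayes_update(p, 0.60, 0.40)  # evidence: supplier contracts signed
    print(round(p, 2))               # roughly 0.72 after both updates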

Analytic Mindset

Customer needs and collected information and data are not the only factors that influence the analytic process; the analyst brings his or her own unique thought patterns as well. This personal approach to problem-solving is “the distillation of the intelligence analyst’s cumulative factual and conceptual knowledge into a framework for making estimative judgments on a complex subject.”

Categories of Misperception and Bias

Evoked-Set Reasoning: That information and concern which dominates one’s thinking based on prior experience. One tends to uncritically relate new information to past or current dominant concerns.

Prematurely Formed Views: These spring from a desire for simplicity and stability, and lead to premature closure in the consideration of a problem.

Presumption that Support for One Hypothesis Disconfirms Others: Evidence that is consistent with one’s preexisting beliefs is allowed to disconfirm other views. Rapid closure in the consideration of an issue is a problem.

Inappropriate Analogies: Perception that an event is analogous to past events, based on inadequate consideration of concepts or facts, or irrelevant criteria. Bias of “Representativeness.”

Superficial Lessons From History: Uncritical analysis of concepts or events, superficial causality, over-generalization of obvious factors, inappropriate extrapolation from past success or failure.

Presumption of Unitary Action by Organizations: Perception that behavior of others is more planned, centralized, and coordinated than it really is. Dismisses accident and chaos. Ignores misperceptions of others. Fundamental attribution error, possibly caused by cultural bias.

Organizational Parochialism: Selective focus or rigid adherence to prior judgments based on organizational norms or loyalties. Can result from functional specialization. Group-think or stereotypical thinking.

Excessive Secrecy (Compartmentation): Over-narrow reliance on selected evidence. Based on concern for operational security. Narrows consideration of alternative views. Can result from or cause organizational parochialism.

Ethnocentrism: Projection of one’s own culture, ideological beliefs, doctrine, or expectations on others. Exaggeration of the causal significance of one’s own action. Can lead to mirror-imaging and wishful thinking. Parochialism.

Lack of Empathy: Undeveloped capacity to understand others’ perception of their world, their conception of their role in that world, and their definition of their interests. Difference in cognitive contexts.

Mirror-Imaging: Perceiving others as one perceives oneself. Basis is ethnocentrism. Facilitated by closed systems and parochialism.

Ignorance: Lack of knowledge. Can result from previously limited priorities or lack of curiosity, perhaps based on ethnocentrism, parochialism, denial of reality, or the rational-actor hypothesis (see next entry).

Rational-Actor Hypothesis: Assumption that others will act in a “rational” manner, based on one’s own rational reference. Results from ethnocentrism, mirror-imaging, or ignorance.

Denial of Rationality: Attribution of irrationality to others who are perceived to act outside the bounds of one’s own standards of behavior or decisionmaking. Opposite of rational-actor hypothesis. Can result from ignorance, mirror-imaging, parochialism, or ethnocentrism.

Proportionality Bias: Expectation that the adversary will expend efforts proportionate to the ends he seeks. Inference about the intentions of others from costs and consequences of actions they initiate.

Willful Disregard of New Evidence: Rejection of information that conflicts with already-held beliefs. Results from prior policy commitments, and/or excessive pursuit of consistency.

Image and Self-Image: Perception of what has been, is, will be, or should be (image as subset of belief system). Both inward-directed (self-image) and outward-directed (image). Both often influenced by self-absorption and ethnocentrism.

Defensive Avoidance: Refusal to perceive and understand extremely threatening stimuli. Need to avoid painful choices. Leads to wishful thinking.

Overconfidence in Subjective Estimates: Optimistic bias in assessment. Can result from premature or rapid closure of consideration, or ignorance.

Wishful Thinking (Pollyanna Complex): Hyper-credulity. Excessive optimism born of smugness and overconfidence.

Best-Case Analysis: Optimistic assessment based on cognitive predisposition and general beliefs of how others are likely to behave, or in support of personal or organizational interests or policy preferences.

Conservatism in Probability Estimation: In a desire to avoid risk, tendency to avoid estimating extremely high or extremely low probabilities. Routine thinking. Inclination to judge new phenomena in light of past experience, to miss essentially novel situational elements, or to fail to reexamine established tenets. Tendency to seek confirmation of prior-held beliefs.

Worst-Case Analysis (Cassandra Complex): Excessive skepticism. Reflects pessimism and extreme caution, based on predilection (cognitive predisposition), adverse past experience, or on support of personal or organizational interests or policy preferences.

Because the biases and misperceptions outlined above can influence analysis, they may also affect the resultant analytic products. As explained in the following Part, analysis does not cease when intelligence production begins; indeed, the two are interdependent. The foregoing overview of analytic pitfalls should remind intelligence managers and analysts that intelligence products must remain as free as possible from such errors of omission and commission, yet still be tailored to the specific needs of customers.

PART VI PRODUCTION

The previously described steps of the intelligence process are necessary precursors to production, but it is only in this final step that the value of the whole process is realized. Production results in the creation of intelligence, that is, value-added actionable information tailored to a specific customer. In practical terms, production refers to the creation, in any medium, of interim or finished briefings or reports for other analysts, decisionmakers, or policy officials.

In government parlance, the term “finished” intelligence is reserved for products issued by analysts responsible for synthesizing all available sources of intelligence, resulting in a comprehensive assessment of an issue or situation, for use by senior analysts or decisionmakers.

Analysts within the single-source intelligence agencies consider any information or intelligence not issued by their own organization to be “collateral.”

Similar designations for finished intelligence products may apply in the business world. Particularly in large corporations with multidisciplinary intelligence units, or in business intelligence consulting firms, some production personnel may specialize in the creation of intelligence from a single source, while others specialize in finished reporting. For example, there may be specialists in library and on-line research, “HUMINT” experts who conduct interviews and attend conferences and trade shows, or scientists who perform experiments on products or materials. The reports generated by such personnel may be considered finished intelligence by their intended customers within subdivisions of the larger company. The marketing, product development, or public relations department of a corporation may consume single-source intelligence products designed to meet their individual needs. Such a large corporation may also have an intelligence synthesis unit that merges the reports from the specialized units into finished intelligence for use in strategic planning by senior decisionmakers. Similarly, in the intelligence consulting firm, each of the specialized production units may contribute their reports to a centralized finished intelligence unit which generates a synthesized product for the client.

Emphasizing the Customer’s Bottom Line

The intelligence report or presentation must focus on the results of the analysis and make evident their significance through sound arguments geared to the customer’s interests. In short, intelligence producers must BLUF their way through the presentation — that is, they must keep the “Bottom Line Up Front.”

It is often difficult for… intelligence [producers] to avoid the temptation to succumb to the Agatha Christie Syndrome. Like the great mystery writer, we want to keep our readers in suspense until we can deliver that “punch line.” After we have worked hard on this analysis… we want the reader to know all the wonderful facts and analytical methods that have gone into our conclusions…. Most readers really will not care about all those bells and whistles that went into the analysis. They want the bottom line, and that is what intelligence professionals are paid to deliver.

James S. Major, The Style Guide: Research and Writing at the Joint Military Intelligence College

Some customers are “big picture” thinkers, seeking a general overview of the issue and guidance on the implications for their own position and responsibilities. An appropriate intelligence product for such a customer will be clear, concise, conclusive, and free of jargon or distracting detail. Conversely, some customers are detail-oriented, seeing themselves as the ultimate experts on the subject area. This type of customer needs highly detailed and specialized intelligence to supplement and amplify known information.

Anatomy of an Intelligence Product

Whether it is produced within the government, or in the business setting, the basic nature of the intelligence product remains the same. The analyst creates a product to document ongoing research, give the customer an update on a current issue or situation, or provide an estimate of expected target activity. In general terms, the product’s function is to cover one or more subject areas, or to be used by the customer for a particular application.

Content

Determination of product content is done in close cooperation with the customer, sometimes at the initiative of one or the other, often in a cycle of give-and-take of ideas. Formal intelligence requirements, agreed upon by both producer and customer in advance, do drive the production process, but the converse is also true. The intelligence unit’s own self-concept and procedures influence its choice of which topics to cover, and which aspects to emphasize. As a result, the customer comes to expect a certain type of product from that unit, and adjusts requirement statements accordingly. In addition, the intelligence process may bring to light aspects of the target that neither the producer nor customer anticipated. When the parties involved have a close working relationship, either one may receive inspiration from interim products, and take the lead in pursuing new ways to exploit the target.

Often, this dialogue centers on the pursuit of new sources associated with known lucrative sources.

The basic orientation of the intelligence product toward a particular subject or application is also determined by the producer-customer relationship. Frequently, the intelligence service will organize the production process and its output to mirror the customer organization. Government production by the single-source intelligence agencies is largely organized geographically or topically, to meet the needs of all-source country, region, or topic analysts in the finished-intelligence producing agencies, such as DIA or the National Counterintelligence Center.

In terms of intended use by the customer, both business and government producers may generate intelligence to be applied in the current, estimative, operational, research, science and technology, or warning context. Serendipity plays a role here, because the collected and analyzed information may meet any or all of these criteria.

Features

Three key features of the intelligence product are timeliness, scope, and periodicity. Timeliness includes not only the amount of time required to deliver the product, but also the usefulness of the product to the customer at a given moment. Scope involves the level of detail or comprehensiveness of the material contained in the product. Periodicity describes the schedule of product initiation and generation.

In intelligence production, the adage “timing is everything” is particularly apt. When a customer requests specific support, and when actionable information is discovered through collection and analysis, the resultant intelligence product is irrelevant unless the customer receives it in time to take action — by adapting to or influencing the target entity. Timeliness therefore encompasses the short-term or long-term duration of the production process, and the degree to which the intelligence itself proves opportune for the customer.

It is important to remember that many users of intelligence have neither the time nor the patience to read through a voluminous study, however excellent it may be, and would much prefer to have the essential elements of the analysis set down in a few succinct paragraphs.

Analysts may proactively generate products to meet known needs of specific customers, or they may respond to spontaneous customer requests for tailored intelligence. Furthermore, “analysts, as experts in their fields, are expected to initiate studies that address questions yet unformulated by [customers].” By selecting from available source material, and determining when to issue an intelligence product, analysts have the potential to influence how their customers use intelligence to make policy decisions.

Packaging

Government intelligence products are typically packaged as highly structured written and oral presentations, including electrical messages, hardcopy reports, and briefings.

The format of the intelligence product, regardless of the medium used to convey it, affects how well it is received by the customer. Even in a multimedia presentation, the personal touch can make a positive difference. Therefore, the degree of formality, and the mix of textual and graphical material should match the customer’s preferences.

Many customers prefer written analyses, often in the form of concise executive summaries or point papers; some will ask for an in-depth study after consuming the initial or periodic assessment.

Producers should be aware of the potential pitfalls of relying on the executive summary to reach key customers. If the product does not appeal to the executive’s staff members who read it first, it may never reach the intended recipient.

Customer

In addition to understanding the customer’s intelligence requirements, the producer may benefit from an awareness of the relationship between the customer organization and the intelligence service itself.

The intelligence producer selects the product content and format to suit a specific individual or customer set. However, the producer should beware of selecting material or phraseology that is too esoteric or personal for a potential wide audience. Intelligence products are official publications that become official records for use by all authorized personnel within the producer and customer organizations. They should focus on the primary customer’s needs, yet address the interests of other legitimate players. Sometimes, when the producer is struggling with how to meet the needs of both internal and external customers, the solution is to create two different types of products, one for each type of customer. Internal products contain details about the sources and methods used to generate the intelligence, while external products emphasize actionable target information. Similarly, the producer adjusts the product content and tone to the customer’s level of expertise.

Finally, the number of designated recipients is often determined by the sensitivity of the intelligence issue covered in the product. If the intelligence is highly sensitive… then only the few involved persons will receive the report. A routine report may be broadly distributed to a large customer set. Thus, the choice of distribution method is more a marketing decision than a mechanical exercise. Successful delivery of a truly useful intelligence product to a receptive customer is the result of communication and cooperation among all the players.

Customer Feedback and Production Evaluation

The production phase of the intelligence process does not end with delivering the product to the customer. Rather, it continues in the same manner in which it began: with dialogue between producer and customer.

If the product is really to be useful for policy-making and command, dissemination involves feedback, which is part of the marketing function…. Ideally, the “marketer” who delivers the product is the same individual who accepts and helps to refine the initial requirement.

Intelligence producers need feedback from end-users. If producers do not learn what is useful and not useful to customers, they cannot create genuine intelligence. Internal review procedures that focus on the format and style of intelligence products are not sufficient for producers to judge their performance; they must hear from customers on the intelligence value of their work.

Feedback procedures between producers and customers should include key questions, such as: Is the product usable? Is it timely? Was it in fact used? Did the product meet expectations? If not, why not? What next? The answers to these questions will lead to refined production, greater use of intelligence by decisionmakers, and further feedback sessions. Thus, production of intelligence actually generates more requirements in this iterative process. Producers and managers may use the framework developed by Brei and summarized below as an initial checklist for evaluating their own work, and as a basis for formal customer surveys to obtain constructive feedback.
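
These key questions can be captured as a simple structured checklist so that feedback sessions are repeatable and comparable across products. The following is a minimal sketch in Python, offered purely as an illustration: the field names, the needs_review rule, and the example values are assumptions, not part of any official survey instrument.

    # Hypothetical feedback checklist; field names and structure are
    # illustrative assumptions, not an official survey instrument.
    from dataclasses import dataclass, field

    @dataclass
    class ProductFeedback:
        product_id: str
        customer: str
        usable: bool             # Is the product usable?
        timely: bool             # Is it timely?
        actually_used: bool      # Was it in fact used?
        met_expectations: bool   # Did the product meet expectations?
        shortfalls: str = ""     # If not, why not?
        follow_up: list[str] = field(default_factory=list)  # What next?

        def needs_review(self) -> bool:
            # Flag any product that failed a key question, triggering a
            # follow-up producer-customer dialogue.
            return not (self.usable and self.timely
                        and self.actually_used and self.met_expectations)

    # Example: a usable, timely report that nonetheless went unused.
    fb = ProductFeedback("RPT-014", "Marketing", usable=True, timely=True,
                         actually_used=False, met_expectations=False,
                         shortfalls="Bottom line buried in detail",
                         follow_up=["Reissue as an executive summary"])
    print(fb.needs_review())  # True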

Producers also need performance feedback from their own managers. Useful aspects of such an internal evaluation may include whether the output met the conditions set down by customers and producers in formal intelligence requirements, whether the intelligence was indeed used by customers, and whether the product resulted from a high standard of analytic quality.

To establish a formal internal review process for monitoring the quality of analysis in intelligence products, managers could select experienced analysts to serve on a rotating basis as “mindset coaches” who review assessments for issues of mindset, uncertainty, and policy utility, or could pair with another production division and swap personnel for this activity. As a rule, the less the critical reader knows about the substance of the paper, the more he or she will concentrate on the quality of the argumentation.

Managers make key decisions that mirror the intelligence process and make production possible. In conjunction with customers, managers determine what customer set the intelligence unit will serve; what sources it will exploit; what types of intelligence it will produce; and what methods of collection, processing, analysis, production, customer feedback, and self-evaluation it will use.

PART VII MANAGING THE INTELLIGENCE PROCESS

The Role of Management

Another discipline integral to the intelligence profession — but worthy of special consideration in this context — is that of management. The effective administration and direction of intelligence activities can be regarded as the epitome of intelligence professionalism. Just as an untutored civilian cannot be expected competently to command [a military unit], so an untrained or inexperienced layperson cannot be expected effectively to direct [an intelligence operation]. But mastery of professional intelligence skills does not, in itself, ensure that a person is able to direct intelligence functions competently; expertise in administrative techniques and behavioral skills is also essential to managerial effectiveness. Some facility in these areas can be acquired through experience, but a professional level of competence requires familiarity with the principles and theories of management and leadership.

George Allen, “The Professionalization of Intelligence,” in Dearth and Goodden, 1995, 37.

Supervisors and managers have a particular responsibility for ensuring the professional development of their subordinates. When all the members of the intelligence unit are competent, the effectiveness of the group increases. Enabling subordinates also frees managers to plan and administer the intelligence operation thoroughly, instead of redoing the work of production personnel.

Organizing the Intelligence Service

In the national Intelligence Community, federal laws form the basis for a centrally coordinated but functionally organized system.

The unifying principle across government intelligence missions is the basic charter to monitor and manage threats to national interests and to the intelligence service itself. In both the national Intelligence Community and the business community, managers may make a distinction between self-protective intelligence activities and competitive intelligence activities.

Threat analysis in the business environment depends on the open exchange of information between companies, as it is widely recognized that no one benefits from other companies encountering unnecessary risk or danger to their personnel.

On the other hand, at the corporate level, competitive business intelligence relies on the protection or discovery of important corporate data. In the public security environment, diplomatic security and force protection for a government’s own citizens, and for personnel in multilateral operations, are in the best interests of all. Conversely, foreign capabilities assessment operates in the context of a zero-sum game among countries, with potential winners and losers of the tactical advantage.

When the very survival of a corporation or country is at stake vis-a-vis other players in their respective environments, a global or strategic model applies. At this level, strategic warning intelligence takes center stage in the government security setting, and its counterpart — strategic scenario planning — achieves value in the private sector.

The alternative to taking global or strategic intelligence action is to allow threats to emerge and to bring company or government officials to the realm of crisis management. There, fundamental government or business interests are at stake, and the outcome is more likely to be left to the vagaries of impulse and chance than to the considered judgment and actions of corporate or government leaders.

Managing Analysis and Production

Intelligence managers in government and industry need to decide how to organize the production process just as they need to determine the structure of the intelligence service as a whole. Typical methods of assigning analysts are by target function, geographical region, technical subject, or policy issue. The intelligence service may task analysts to concentrate on one type of source information, or to merge all available sources to produce “finished” intelligence or estimates.

Some industries will need analysts to specialize in certain technical subject areas or complex issues, while large corporations may assign intelligence analysts to each of several departments such as Research or Product Development. Small independent intelligence services may require personnel to perform all the functions of the intelligence process from needs assessment to production and performance evaluation. In that case, analysts might be assigned to a particular customer account rather than a specific topic area.

Furthermore, managers can take the initiative in transforming intelligence into a proactive service. Managers who are isolated from the intelligence customer tend to monitor the quantity of reports produced and level of polish in intelligence products, but not the utility of the intelligence itself.

But policy officials will seek information and judgment from the source that provides it at the lowest personal cost, including the mass media, no matter how much money the intelligence organization is spending to fund analysis on their behalf. Thus, managers need to learn to ask for and accept opportunity analysis included in intelligence products, not remove it as inappropriate during the review process.

Evaluating the Intelligence Process

Beyond organizing and monitoring intelligence production, an additional management responsibility is to evaluate the intelligence service’s overall mission performance. From the manager’s perspective, intelligence products are not the only output from the intelligence process; therefore, products are not the only focus of rigorous self-evaluation.

In the course of its operations, the intelligence unit expends and generates significant amounts of financial and political capital. Careful examination of this commodity flow may yield key insights into process improvement. Internal review procedures may thus include measures of how well the intelligence service and its components organize their work, use funds, allocate materiel and human resources, and coordinate with parent and customer organizations, all from the self-interested perspective of the intelligence service itself.

To assist them in this effort, managers may evaluate the sub-processes and interim products of the Needs Definition, Collection, Processing, Analysis, and Production phases of the intelligence process in terms of Brei’s Intelligence Values: Accuracy, Objectivity, Usability, Relevance, Readiness, and Timeliness.
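
One way to organize such an evaluation is as a phase-by-value matrix, rating each phase of the intelligence process against each of Brei’s values. The sketch below is a minimal illustration: the 1-to-5 scale and the sample ratings are assumptions; only the phase and value names come from the text.

    # Illustrative evaluation matrix: phases of the intelligence process
    # rated against Brei's Intelligence Values. The 1-5 scale and the
    # sample ratings are invented for illustration.
    PHASES = ["Needs Definition", "Collection", "Processing",
              "Analysis", "Production"]
    VALUES = ["Accuracy", "Objectivity", "Usability",
              "Relevance", "Readiness", "Timeliness"]

    def weakest_cells(ratings, threshold=3):
        # Return (phase, value) pairs rated below the threshold.
        return [(p, v) for p in PHASES for v in VALUES
                if ratings.get((p, v), 0) < threshold]

    # Example usage with hypothetical ratings:
    ratings = {(p, v): 4 for p in PHASES for v in VALUES}
    ratings[("Collection", "Timeliness")] = 2  # hypothetical weak spot
    print(weakest_cells(ratings))  # [('Collection', 'Timeliness')]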

Seeing the members of the intelligence service as customers of management, and of each other, can enable managers to create a work culture in which each person’s needs and talents are respected and incorporated into the organization’s mission.

PART VIII
PORTRAIT OF AN INTELLIGENCE ANALYST

The efficacy of the intelligence process described in the foregoing chapters depends upon personnel who are both able and willing to do the specialized work required. Through the findings of several government studies, this section presents the ideal characteristics of the central figure in value-added production — the intelligence analyst. According to these studies, the successful intelligence analyst brings to the discipline certain requisite knowledges and abilities, or has a high aptitude for acquiring them through specialized training; is able to perform the specific tasks associated with the job; and exhibits personality traits compatible with intelligence analysis work.

Cognitive Attributes

An individual’s analytic skill results from a combination of innate qualities, acquired experience, and relevant education. Psychologists call these mental faculties cognitive attributes, and further divide them into two types: abilities (behavioral traits, being able to perform a task) and knowledges (learned information about a specific subject).

According to a recent formal job analysis of selected intelligence analysts conducted by the NSA Office of Human Resources Services, important cognitive abilities for intelligence analysis include written expression, reading comprehension, inductive reasoning, deductive reasoning, pattern recognition, oral comprehension, and information ordering.

Unlike the abilities categories, areas of knowledge for government intelligence specialists do not necessarily apply to their private sector counterparts. The formal job study of government intelligence analysts revealed that knowledge of military-related and technical subjects, not surprisingly, was prevalent among the individuals in the research group.

Performance Factors

Seven intelligence analysis performance categories:

Data Collection – Research and gather data from all available sources.

Data Monitoring – Review flow of scheduled incoming data.

Data Organizing – Organize, format, and maintain data for analysis and technical report generation.

Data Analysis – Analyze gathered data to identify patterns, relationships, or anomalies.

Data Interpretation/Communication – Assign meaning to analyzed data and communicate it to appropriate parties.

Computer Utilization – Use computer applications to assist in analysis.

Coordination – Coordinate with internal and external organizations.

This concise inventory echoes the intelligence process and illustrates the complexity of the intelligence analyst’s job. It also serves as a blueprint for managers as they design intelligence organizations and individual personnel assignments. In particular, the analyst’s job description should reflect these expected behaviors for purposes of recruitment, selection, placement, training, and performance evaluation.
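
Because the inventory doubles as a blueprint for job design, it lends itself to a simple structured representation. The sketch below restates the seven categories as a reusable mapping; the job_description helper and the weighting scheme are hypothetical additions for illustration only.

    # The seven performance categories as a structure for drafting job
    # descriptions; category names and task summaries restate the
    # inventory above, while the weights are hypothetical placeholders.
    PERFORMANCE_CATEGORIES = {
        "Data Collection": "Research and gather data from all available sources.",
        "Data Monitoring": "Review flow of scheduled incoming data.",
        "Data Organizing": "Organize, format, and maintain data for analysis.",
        "Data Analysis": "Identify patterns, relationships, or anomalies.",
        "Data Interpretation/Communication": "Assign meaning to analyzed data "
                                             "and communicate it.",
        "Computer Utilization": "Use computer applications to assist analysis.",
        "Coordination": "Coordinate with internal and external organizations.",
    }

    def job_description(position, weights):
        # Render a position description with per-category emphasis.
        lines = [f"Position: {position}"]
        for cat, task in PERFORMANCE_CATEGORIES.items():
            lines.append(f"  {cat} ({weights.get(cat, 0.0):.0%}): {task}")
        return "\n".join(lines)

    print(job_description("All-Source Analyst",
                          {"Data Analysis": 0.4,
                           "Data Interpretation/Communication": 0.3}))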

Research at the Joint Military Intelligence College (JMIC) demonstrates that intelligence professionals exhibit a pattern of personality traits that sets them apart from the U.S. population as a whole. In this regard, intelligence professionals are no different from many others, for every profession has its own distinct pattern of personality traits. A significant percentage (21 percent) of those who choose to pursue employment in national security intelligence tend to express the following behavior preferences: orientation to the inner world of ideas rather than the outer world of things and people, tendency to gather factual information through the senses rather than inspiration, proclivity to make decisions on the basis of logic rather than emotion, and an eagerness to seek closure proactively instead of leaving possibilities open. In contrast, researchers found that people who exhibit the opposite set of personality traits are almost non-existent among intelligence professionals.

The most frequently occurring type among the respondents to the JMIC survey exhibits the traits I, S, T, and J.

Because people tend to be satisfied and productive in their work if their own personalities match the corresponding behaviors suitable to their jobs, this research tying personality traits to the intelligence profession can help individuals consider their general suitability for certain types of intelligence work.

PART IX DEFENSIVE MEASURES FOR INTELLIGENCE

[A]s information becomes more and more a factor of production, of tangible worth for its own sake, the value of the special knowledge that is the essence of intelligence will command a higher price in the global information age marketplace than will the generally available knowledge. Therein lies the most ancient and, at the same time, the most modern challenge to the future of intelligence — protecting it.

— Goodden, in Dearth and Goodden, 415.

Beyond Intelligence Process: Protecting the Means and the Results

An intelligence organization’s openness about its validated intelligence methods is of course tempered by self-defense considerations.

In light of the tendency to overlook OPSEC and INFOSEC implementation, the remainder of this section develops an instructional overview of the basic information that government and business personnel should know to protect their activities from unauthorized exploitation. Indeed, for the health of U.S. commerce and national security activities, everyone needs user-friendly information on how to protect proprietary information.

Operations Security

OPSEC is essential to the intelligence function in both the national security and business environments. OPSEC denies adversaries information about one’s own operational capabilities and intentions by identifying, controlling, and protecting indicators associated with the planning and conduct of those operations and other related activities.

An adversary is not necessarily a belligerent enemy: In OPSEC terms, an adversary is any entity that acts against one’s own interest or actively opposes one’s own goals.

Countermeasure options to prevent an adversary from exploiting these factors include: eliminating the indicators altogether, concealing indicator activities, disguising indicator activities, and staging deceptive (false) activities.

Information Systems Security

Information Systems Security (INFOSEC) refers to the protection of information that an organization uses and produces in the course of its operations. Today, this means protecting complex electronic networks. Government and business depend upon computerized systems for everything from banking, communications, and data processing to physical security and travel reservations. To the casual observer, INFOSEC may seem the domain of a few technical specialists, or the exclusive concern of the military or intelligence agencies. But INFOSEC is the responsibility of everyone who has ever used a telephone, computer, or automatic bank teller machine.

Each intelligence organization and activity must tailor its INFOSEC measures to its particular technologies and operational practices, weighing the costs of such measures against their value in safeguarding the mission.

INFOSEC for Everyone

The national authorities for INFOSEC described above evolved out of the need to protect the fundamental role of information in a democratic society and market economy. They help strike the balance between free exchange of information and privacy, and between free enterprise and regulation. Government-sponsored information policy and technology set the standards upon which nearly every facet of public and private life is based. Citizens receive basic services through government-created or -regulated information infrastructure, including automatic payroll deposit to employee bank accounts, cellular telephone service, electronic commerce via the Internet, air and rail traffic control, emergency medical services, and electrical power supply.

Such services contribute not only to the citizen’s quality of life, but to the very functioning of the nation. Information is the lifeblood of society and the economy. Imagine the chaos that would reign if the government, the military, banks, businesses, schools, and hospitals could not communicate reliably. Life would come to a halt.

Government expertise in information technology and policy has made it the authority on protecting intelligence operations in particular. The private sector may also benefit from this expertise by applying INFOSEC measures in business intelligence.

EPILOGUE

This primer has reviewed government intelligence production practices in building-block fashion. It has also explored the defensive measures comprising information security and operations security, which are integral to all the building blocks, and are equally applicable to private businesses and government organizations. Finally, the primer has drawn a cognitive, behavioral and personality profile of the central figure in intelligence production — the intelligence analyst. In the spirit of benchmarking, this document invites a reciprocal examination of best practices that may have been developed by private businesses, and of principles that may have been derived from other academic studies of intelligence-related processes.

Although this effort reflects a government initiative, in fact the government Intelligence Community may receive the greater share of rewards from benchmarking its own process. Potential benefits to the Community include an improved public image, increased self-awareness, more efficient recruitment through more informed self-selection by candidates for employment, as well as any resultant acquisition of specialized information from subject matter experts in the business and academic communities.

Notes on Open-Source Intelligence ATP 2-22.9

Preface

ATP 2-22.9 establishes a common understanding, foundational concepts, and methods of use for Army open-source intelligence (OSINT). ATP 2-22.9 highlights the characterization of OSINT as an intelligence discipline, its interrelationship with other intelligence disciplines, and its applicability to unified land operations.

This Army techniques publication—

  • Provides fundamental principles and terminology for Army units that conduct OSINT exploitation.
  • Discusses tactics, techniques, and procedures (TTP) for Army units that conduct OSINT exploitation.
  • Provides a catalyst for renewing and emphasizing Army awareness of the value of publicly available information and open sources.
  • Establishes a common understanding of OSINT.
  • Develops systematic approaches to plan, prepare, collect, and produce intelligence from publicly available information from open sources.

Introduction

Since before the advent of the satellite and other advanced technological means of gathering information, military professionals have planned, prepared, collected, and produced intelligence from publicly available information and open sources to gain knowledge and understanding of foreign lands, peoples, potential threats, and armies.

Open sources possess much of the information needed to understand the physical and human factors of the operational environment of unified land operations. Publicly available information can address the physical and human factors of a given operational environment, satisfying information and intelligence requirements and providing increased situational awareness that complements the application of technical or classified resources.

The world is being reinvented by open sources. Publicly available information can be used by a variety of individuals to advance a broad spectrum of objectives. Open-source intelligence (OSINT) is significant and relevant because it serves as an economy of force, provides an additional leverage capability, and cues technical or classified assets to refine and validate both information and intelligence.

As an intelligence discipline, OSINT is judged by its contribution to the intelligence warfighting function in support of other warfighting functions and unified land operations.

Chapter 1

Open-Source Intelligence (OSINT) Fundamentals

DEFINITION AND TERMS

1-1. Open-source intelligence is the intelligence discipline that pertains to intelligence produced from publicly available information that is collected, exploited, and disseminated in a timely manner to an appropriate audience for the purpose of addressing a specific intelligence and information requirement (FM 2-0). OSINT also applies to the intelligence produced by that discipline.

1-2. OSINT is also intelligence developed from the overt collection and analysis of publicly available and open-source information not under the direct control of the U.S. Government. OSINT is derived from the systematic collection, processing, and analysis of publicly available, relevant information in response to intelligence requirements. Two important related terms are open source and publicly available information:

  • Open source is any person or group that provides information without the expectation of privacy––the information, the relationship, or both are not protected against public disclosure. Open-source information can be publicly available, but not all publicly available information is open source. Open sources refer to publicly available information media and are not limited to physical persons.
  • Publicly available information is data, facts, instructions, or other material published or broadcast for general public consumption; available on request to a member of the general public; lawfully seen or heard by any casual observer; or made available at a meeting open to the general public.

1-3. OSINT collection is normally accomplished through monitoring, data-mining, and research. Open-source production supports all-source intelligence and the continuing activities of the intelligence process (generate intelligence knowledge, analyze, assess, and disseminate), as prescribed in FM 2-0. Like other intelligence disciplines, OSINT is developed based on the commander’s intelligence requirements.

CHARACTERISTICS

1-4. The following characteristics address the role of OSINT in unified land operations:

  • Provides the foundation. Open-source information provides the majority of the necessary background information on any area of operations (AO). This foundation is obtained through open-source media components that provide worldview awareness of international events and perceptions of non-U.S. societies. This foundation is an essential part of the continuing activity of generate intelligence knowledge.

  • Answers requirements. The availability, depth, and range of publicly available information enables organizations to satisfy intelligence and information requirements without the use or support of specialized human or technical means of collection.
  • Enhances collection. Open-source research supports surveillance and reconnaissance activities by answering intelligence and information requirements. It also provides information (such as biographies, cultural information, geospatial information, and technical data) that enhances the use of more technical means of collection.
  • Enhances production. As part of a multidiscipline intelligence effort, the use and integration of publicly available information and open sources ensure commanders have the benefit of all sources of available information to make informative decisions.

THE INTELLIGENCE WARFIGHTING FUNCTION

1-5.  The intelligence warfighting function is composed of four distinct Army tactical tasks (ARTs):

  • Intelligence support to force generation (ART 2.1).
  • Support to situational understanding (ART 2.2).
  • Perform intelligence, surveillance, and reconnaissance (ART 2.3).
  • Support to targeting and information superiority (ART 2.4).

1-6. The intelligence warfighting function is the related tasks and systems that facilitate understanding of the operational environment, enemy, terrain, weather, and civil considerations (FM 1-02). As a continuous process, the intelligence warfighting function involves analyzing information from all sources and conducting operations to develop the situation. OSINT supports each of these ARTs.

Publicly available information is used to—

  • Support situational understanding of the threat and operational environment by obtaining information about threat characteristics, terrain, weather, and civil considerations.
  • Generate intelligence knowledge before receipt of mission to provide relevant knowledge of the operational environment.
  • Rapidly provide succinct answers to satisfy the commander’s intelligence requirements during intelligence overwatch, developing a baseline of knowledge and understanding concerning potential threat actions or intentions within specific operational environments.
  • Generate intelligence knowledge as the basis for Army integrating functions such as intelligence preparation of the battlefield (IPB). IPB is designed to support the staff estimate and the military decision-making process (MDMP); most intelligence requirements are generated as a result of the IPB process and its interrelation with the MDMP.
  • Support situation development—a process for analyzing information and producing current intelligence concerning portions of the mission variables of enemy, terrain and weather, and civil considerations within the AO before and during operations (see FM 2-0). Situation development—
    • Assists the G-2/S-2 in determining threat intentions and objectives.
    • Confirms or denies courses of action (COAs).
    • Provides an estimate of threat combat effectiveness.
  • Support information collection. Planning requirements and assessing collection analyzes information requirements and intelligence gaps and assists in determining which asset or combination of assets will be used to satisfy the requirements.

THE INTELLIGENCE PROCESS

1-9. The intelligence process consists of four steps (plan, prepare, collect, and produce) and four continuing activities (analyze, generate intelligence knowledge, assess, and disseminate). Just as the activities of the operations process (plan, prepare, execute, and assess) overlap and recur as the mission demands, so do the steps of the intelligence process. The continuing activities occur continuously throughout the intelligence process and are guided by the commander’s input.

1-10. The four continuing activities plus the commander’s input drive, shape, and develop the intelligence process. The intelligence process provides a common model for intelligence professionals to use to guide their thoughts, discussions, plans, and assessments. The intelligence process results in knowledge and products about the threat, terrain and weather, and civil considerations.

1-11. OSINT enhances and supports the intelligence process and enables the operations process, as described in FM 2-0. The intelligence process enables the systematic execution of Army OSINT exploitation, as well as the integration with various organizations (such as joint, interagency, intergovernmental, and multinational).

THE PLANNING REQUIREMENTS AND ASSESSING COLLECTION PROCESS

1-12. Information collection informs decisionmaking for the commander and enables the application of combat power and assessment of its effects. Information collection is an activity that synchronizes and integrates the planning and employment of sensors and assets, as well as the processing, exploitation, and dissemination systems, in direct support of current and future operations (FM 3-55). This is an integrated intelligence and operations function. For Army forces, this activity is a combined arms operation that focuses on priority intelligence requirements (PIRs) while answering the commander’s critical information requirements (CCIRs).

1-13. Information collected from multiple sources and analyzed becomes intelligence that provides answers to commanders’ information requirements concerning the enemy and other adversaries, climate, weather, terrain, and population. Developing these requirements is the function of information collection; the key terms are defined below (a brief illustrative sketch follows the list):

  • A commander’s critical information requirement is an information requirement identified by the commander as being critical to facilitating timely decisionmaking. The two key elements are friendly force information requirements and priority intelligence requirements (JP 3-0).
  • A priority intelligence requirement is an intelligence requirement, stated as a priority for intelligence support, which the commander and staff need to understand the adversary or the operational environment (JP 2-0).
  • A friendly force information requirement is information the commander and staff need to understand the status of friendly force and supporting capabilities (JP 3-0).
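
As an illustration of how these three requirement types relate, the following minimal Python sketch models a CCIR as the combination of PIRs and FFIRs. The class and field names, the priority scheme, and the example requirements are assumptions for illustration, not doctrinal data structures.

    # Minimal sketch of the requirement types defined above; names and
    # example content are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class IntelRequirement:
        text: str
        priority: int  # lower number = higher priority

    @dataclass
    class CCIR:
        # A CCIR's two key elements are PIRs (adversary/environment)
        # and FFIRs (friendly forces), per JP 3-0 as cited above.
        pirs: list[IntelRequirement]
        ffirs: list[IntelRequirement]

        def top_pir(self) -> IntelRequirement:
            return min(self.pirs, key=lambda r: r.priority)

    # Example usage with hypothetical requirements:
    ccir = CCIR(
        pirs=[IntelRequirement("Will the threat defend the river crossing?", 1),
              IntelRequirement("Status of threat air defenses?", 2)],
        ffirs=[IntelRequirement("Fuel status of the lead battalion?", 1)],
    )
    print(ccir.top_pir().text)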

1-14. The planning requirements and assessing collection process involves six continuous, nondiscrete activities. These activities and subordinate steps are not necessarily sequential and often overlap. The planning requirements and assessing collection process supports the staff planning and operations processes throughout unified land operations.

THE MILITARY DECISIONMAKING PROCESS

1-15. Upon receipt of the mission, commanders and staffs begin the MDMP. The military decisionmaking process is an iterative planning methodology that integrates the activities of the commander, staff, subordinate headquarters, and other partners to understand the situation and mission; develop and compare courses of action; decide on a course of action that best accomplishes the mission; and produce an operation plan or order for execution (FM 5-0).

1-16. During the second step of the MDMP, mission analysis, commanders and staffs analyze the relationships among the mission variables—mission, enemy, terrain and weather, troops and support available, time available, civil considerations (METT-TC)—seeking to gain a greater understanding of the—

  • Operational environment, including enemies and civil considerations.
  • Desired end state of the higher headquarters.
  • Mission and how it is nested with those of the higher headquarters.
  • Forces and resources available to accomplish the mission and associated tasks.

1-17. Within the MDMP, OSINT assists in enabling the planning staff to update estimates and initial assessments by using publicly available information and open sources. Major intelligence contributions to mission analysis occur because of IPB.

INTELLIGENCE PREPARATION OF THE BATTLEFIELD

1-18. Intelligence preparation of the battlefield is a systematic process of analyzing and visualizing the portions of the mission variables of threat, terrain, weather, and civil considerations in a specific area of interest and for a specific mission. By applying intelligence preparation of the battlefield, commanders gain the information necessary to selectively apply and maximize operational effectiveness at critical points in time and space (FM 2-01.3).

1-19. IPB was originally designed to support the MDMP and troop leading procedures, but it can also be incorporated into other problem-solving models such as design and red teaming. OSINT plays a significant and integral part during IPB in satisfying intelligence and information requirements identified during the MDMP in support of unified land operations.

1-20. IPB is used primarily by commanders and staffs as a guide to evaluate specific datasets in order to gain an understanding of a defined operational environment. Prior to operations, an examination of national, multinational partner, joint, and higher echelon databases is required to determine whether the information requested is already available. As operations commence, new intelligence and information requirements are identified as a result of battlefield changes. Publicly available information and open sources, when produced and properly integrated in support of the all-source intelligence effort, can be used to satisfy intelligence and information requirements.

Chapter 2

Planning and Preparation of the OSINT Mission

Directly or indirectly, publicly available information and open sources form the foundation for all intelligence when conducting operations. This foundation comes from open-source media components that provide worldview awareness of international events and perceptions of non-U.S. societies. This awareness prompts commanders to visualize a plan. Planning occurs when intelligence and information requirements are identified and means are developed as to how they will be satisfied.

SECTION I – PLANNING OSINT ACTIVITIES

2-1. The plan step of the intelligence process consists of the activities that identify pertinent information requirements and develop the means for satisfying those requirements and meeting the commander’s desired end state. As an aspect of intelligence readiness, planning for OSINT exploitation begins before a unit receives an official order or tasking as part of the generate intelligence knowledge continuing activity of the intelligence process.

2-2. The focus of OSINT research prior to deployment is determined and directed by the commander’s guidance. Sustained and proactive open-source research using basic and advanced Internet search techniques plays a critical role in understanding AOs by building the foundational knowledge required for unit readiness and effective planning. Research during planning for possible missions provides insight into how nontraditional military forces, foreign military forces, and transnational threats have operated in similar AOs. Prior to deployment, organizations with dedicated OSINT missions can also be resourced to satisfy intelligence and information requirements.

2-3. After a unit receives a mission, the focus of OSINT research is further refined based on the AO in which the unit operates. OSINT supports the continuous assessment of unified land operations during planning. Effective research and planning ensure commanders receive timely, relevant, and accurate intelligence and information to accomplish assigned missions and tasks. The MDMP and IPB driven by the intelligence process frame the planning of OSINT exploitation. OSINT is integrated into planning through the four steps of the IPB process:

  • Define the operational environment.
  • Describe environmental effects on operations.
  • Evaluate the threat.
  • Determine threat COAs.

DEFINE THE OPERATIONAL ENVIRONMENT

2-4. When assessing the conditions, circumstances, and influences in the AO and area of interest, the intelligence staff examines all characteristics of the operational environment. There are preexisting publicly available inputs that can be used to identify significant variables when analyzing the terrain, weather, threat, and civil considerations. At the end of step one of the IPB process, publicly available information and open sources can be used to support the development of the AO assessment and area of interest assessment.

DESCRIBE ENVIRONMENTAL EFFECTS ON OPERATIONS

2-5. When analyzing the environmental effects on threat and friendly operations, publicly available information and open sources can be used to describe the—

  • Physical environment (terrestrial, air, maritime, space, and information domains).
  • Civil considerations.

2-6. Combine the evaluation of the effects of terrain, weather, and civil considerations into a product that best suits the commander’s requirements. At the end of the second step of IPB, publicly available information and open sources can be used to better inform the commander of possible threat COAs and products and assessments to support the remainder of the IPB process.

EVALUATE THE THREAT

2-7. Step three of the IPB process is to evaluate each of the significant threats in the AO. If the staff fails to determine all the threat factions involved or their capabilities or equipment, or to understand their doctrine and tactics, techniques, and procedures (TTP), as well as their history, the staff will lack the intelligence needed for planning. At the end of step three of IPB, publicly available information and open sources can provide the majority of the information required to identify threat characteristics, as well as provide possible information needed to update threat models.

DETERMINE THREAT COURSES OF ACTION

2-8. Step four of the IPB process is to identify, develop, and determine likely threat COAs that can influence accomplishment of the friendly mission. The end state of step four is to replicate the set of COAs available to the threat commander and to identify those areas and activities that, when observed, will reveal which COA the threat commander has chosen. At the end of step four of IPB, publicly available information and open sources can be used to identify indicators of the COA adopted by the threat commander.

SECTION II – PREPARATION OF OSINT ACTIVITIES

2-9. The reliance on classified databases has often left Soldiers uninformed and ill-prepared to capitalize on the huge reservoir of unclassified information available from publicly available information and open sources.

OSINT EXPLOITATION

2-10. When preparing to conduct OSINT exploitation, the areas primarily focused on are—

  • Public speaking forums.
  • Public documents.
  • Public broadcasts.
  • Internet Web sites.

PUBLIC SPEAKING FORUMS

2-11. Acquiring information at public speaking forums requires close coordination to ensure that any overt acquisition is integrated and synchronized with the information collection plan and does not violate laws prohibiting the unauthorized collecting of information for intelligence purposes.

2-13.  The operation order (OPORD), TTP, or unit standard operating procedures (SOPs) should describe how the unit that is tasked with the public speaking forum mission requests, allocates, and manages funds to purchase digital camera and audio recording equipment along with the computer hardware and software to play and store video-related data.

PUBLIC DOCUMENTS

2-14. Organizations within an AO conduct document collection missions. Once collected, documents are analyzed and the information is disseminated throughout the intelligence community. Before executing any OSINT exploitation related to collecting public documents, it is important to—

  • Coordinate document collection, processing, and analysis activities across echelons.
  • Identify the procedure to deploy, maintain, recover, and transfer hardcopy, analog, and digital media processing and communications equipment.
  • Identify academic and commercial-off-the-shelf (COTS) information services that are already available for open-source acquisition, processing, and production.

2-15. The OPORD, TTP, or unit SOPs should describe how the unit requests, allocates, and manages funds for—

  • Document collection and processing services.
  • Purchasing books, dictionaries, images, maps, newspapers, periodicals, recorded audio and video items, computer hardware, digital cameras, and scanning equipment.
  • The cost of subscribing to newspapers, periodicals, and other readable materials.

2-16. For more detailed information on public documents and document exploitation, see TC 2-91.8.

PUBLIC BROADCASTS

2-17. The Director of National Intelligence Open Source Center (DNI OSC) collects, processes, and reports international and regional broadcasts. This enables deployed organizations to collect and process information from local broadcasts that are of command interest. Before exploiting OSINT related to public broadcasts, it is important to—

  • Coordinate broadcast collection, processing, and production activities with those of the OSC.
  • Identify the procedure to deploy, maintain, recover, and transfer radio and television digital media storage devices and content processing and communications systems.
  • Identify Internet collection and processing resources to collect on television or radio station-specific Web casts.

INTERNET WEB SITES

2-19. Information collected, processed, and produced from Internet Web sites supports unified land operations. Before exploiting OSINT related to Internet Web sites—

  • Coordinate Internet collection, processing, and analysis activities across echelons.
  • Identify the procedure to deploy, maintain, recover, and transfer computers and associated communications and data storage systems.
  • Coordinate with G-6/S-6 for access to the INTELINK-U network or approved commercial Internet service providers that support open-source acquisition, processing, storage, and dissemination requirements.
  • Coordinate with G-6/S-6 to develop a list of authorized U.S. and non-U.S. Internet Web sites for official government use and open-source research, including non-U.S. Internet Web sites restricted to selected authorized personnel engaged in OSINT exploitation.
  • Identify academic and COTS information services that are already available for open-source information acquisition, processing, and production.

PREPARATION CONSIDERATIONS

2-21. Preparing for OSINT exploitation also includes—

  • Establishing an OSINT architecture.
  • Prioritizing tasks and requests.
  • Task-organizing assets.
  • Deploying assets.
  • Assessing completed operations.

ESTABLISHING AN OSINT ARCHITECTURE

2-22. OSINT contributes to establishing an intelligence architecture, specifically ART 2.2.2, Establish Intelligence Architecture. Establishing an intelligence architecture comprises complex and technical issues that include sensors, data flow, hardware, software, communications, communications security materials, network classification, technicians, database access, liaison officers, training, and funding. A well-defined and -designed intelligence architecture can offset or mitigate structural, organizational, or personnel limitations. This architecture provides the best possible understanding of the threat, terrain and weather, and civil considerations. An established OSINT architecture incorporates data flow, hardware, software, communications security components, and databases that include—

  • Conducting intelligence reach. Intelligence reach is a process by which intelligence organizations proactively and rapidly access information from, receive support from, and conduct direct collaboration and information sharing with other units and agencies, both within and outside the area of operations, unconstrained by geographic proximity, echelon, or command (FM 2-0).
  • Developing and maintaining automated intelligence networks. This task entails providing information systems that connect assets, units, echelons, agencies, and multinational partners for intelligence, collaborative analysis and production, dissemination, and intelligence reach. It uses existing automated information systems, and, when necessary, creates operationally specific networks.
  • Establishing and maintaining access. This task entails establishing, providing, and maintaining access to classified and unclassified programs, databases, networks, systems, and other Web-based collaborative environments for Army forces, joint forces, national agencies, and multinational organizations.
  • Creating and maintaining databases. This task entails creating and maintaining unclassified and classified databases. Its purpose is to establish interoperable and collaborative environments for Army forces, joint forces, national agencies, and multinational organizations. This task facilitates intelligence analysis, reporting, production, dissemination, sustainment, and intelligence reach.


Operational and Technical Open-Source Databases

2-23. OSINT exploitation requires access to databases and Internet capabilities to facilitate processing, storage, retrieval, and exchange of publicly available information. These databases are resident on local area networks (LANs), the World Wide Web (WWW), and the Deep Web (see appendix C for additional information). To support unified land operations, OSINT personnel use evaluated and analyzed publicly available information and open sources to populate information databases such as—

    • Operational information databases, which support the correlation of orders, requests, collection statuses, processing resources, and graphics.
    • Technical information databases, which support collection operations and consist of unprocessed text, audio files, video files, translations, and transcripts.


Open-Source Collection Acquisition Requirement-Management System

2-24. The primary open-source requirements management operational information and technical information database is the Open-Source Collection Acquisition Requirement-Management System (OSCAR-MS). OSCAR-MS is a Web-based service sponsored by the Office of the Assistant Deputy Director of National Intelligence for Open Source (ADDNI/OS) to provide the National Open Source Enterprise (NOSE) with an application for managing open-source collection requirements. OSCAR-MS links OSINT providers and consumers within the intelligence community down to the brigade combat team (BCT) level. Personnel at the BCT level access OSCAR-MS via the SECRET Internet Protocol Router Network (SIPRNET) in order to submit requests for information to the Department of the Army Intelligence Information Services (DA IIS) request for information portal. The goal of OSCAR-MS is to automate and streamline ad hoc open-source collection requirements by—

    • Providing useful metrics to understand OSINT requirements.
    • Allowing the digital indexing and tagging of submitted and completed open-source products to be searchable in the Library of National Intelligence.
    • Providing for local control of administrative data such as unit account management, local data tables, and local formats.
    • Allowing simple and flexible formats that employ database auto-population.
    • Using complete English instead of acronyms, computer codes, and other non-intuitive shortcuts.
    • Allowing linkages between requirements, products, and evaluations.
    • Enabling integration of open-source users for collaboration between agencies.
    • Reducing requirement duplication through customers directly contributing to existing requirements.
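
The sketch below illustrates two of these goals—digital tagging of products for search and reducing duplication by contributing to an existing requirement—as a hypothetical local data model. It is illustrative only and does not represent the actual OSCAR-MS interface, schema, or data formats.

```python
# Hypothetical sketch of two OSCAR-MS goals listed above: tagging open-source
# requirements for search, and reducing duplication by matching a new request
# against existing requirements. Names and fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    description: str
    tags: set[str] = field(default_factory=set)
    contributors: list[str] = field(default_factory=list)

def find_duplicate(new_tags: set[str], existing: list[Requirement],
                   overlap: float = 0.5) -> Requirement | None:
    """Return an existing requirement whose tags substantially overlap the new
    request, so the customer can contribute to it rather than duplicate it."""
    for req in existing:
        if req.tags and len(new_tags & req.tags) / len(new_tags | req.tags) >= overlap:
            return req
    return None

reqs = [Requirement("OS-0001", "Port activity reporting", {"port", "shipping"})]
match = find_duplicate({"shipping", "port", "customs"}, reqs)
if match:
    match.contributors.append("BCT-1")  # contribute instead of duplicating
```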

PRIORITIZING TASKS AND REQUESTS

2-26. The G-2/S-2 and G-3/S-3 staffs use the commander’s guidance and priority intelligence requirements to complete the information collection plan. The plan is used to assign tasks to subordinate units or submit requests to supporting intelligence organizations to achieve the desired information collection objectives. Embodied in the information collection plan, these tasks describe how the unit—

  • Requests collection and production support from joint, interagency, intergovernmental, and multinational organizations.
  • Task-organizes and deploys organic, attached, and contracted collection, processing, and production assets.
  • Conducts remote, split-based, or distributed collection, processing, and production.
  • Requests and manages U.S. and non-U.S. linguists based on priority for support, mission-specific skills, knowledge requirements (such as language, dialect, and skill level), clearance level, and category.

2-27. When developing information collection tasks for subordinate units, the G-2/S-2 and G-3/S-3 staffs use the task and purpose construct for developing task statements to account for—

  • Who is to execute the task?
  • What is the task?
  • When will the task begin?
  • Where will the task occur?

DEPLOYING ASSETS

2-29. Deployment of assets that collect publicly available information—

  • Supports the scheme of maneuver.
  • Supports the commander’s intent.
  • Complies with unit SOPs.

2-30.  The deployment of assets generally requires a secure position, with network connectivity to the Internet, in proximity to supporting sustainment, protection, and communications resources.

ASSESSING COMPLETED OPERATIONS

2-31. Typical guidelines used to assess operations are—

  • Monitoring operations.
  • Correlating and screening reports.
  • Disseminating and providing a feedback mechanism.

SECTION III – PLANNING AND PREPARATION CONSIDERATIONS

2-33. Planning and preparation considerations when planning for OSINT exploitation include—

  • Open-source reliability.
  • Open-source information content credibility.
  • Compliance.
  • Operations security (OPSEC).
  • Classification.
  • Coordination.
  • Deception and bias.
  • Copyright and intellectual property.
  • Linguist requirements.
  • Machine foreign language translation (MFLT) systems.

OPEN-SOURCE RELIABILITY

2-34.  The types of sources used to evaluate information are—

  • Primary sources.
  • Secondary sources.

2-35.  A primary source refers to a document or physical object that was written or created during the time under study. These sources are present during an experience or time period and offer an inside view of a particular event. Primary sources—

  • Are generally categorized by content.
  • Are either public or private.
  • Are also referred to as original sources or evidence.
  • Are usually fragmentary, ambiguous, and difficult to analyze. The information contained in primary sources is also subject to obsolete meanings of familiar words.

2-36.  Some types of primary sources include—

    • Original documents (excerpts or translations) such as diaries, constitutions, research journals, speeches, manuscripts, letters, oral interviews, news film footage, autobiographies, and official records.
    • Creative works such as poetry, drama, novels, music, and art.
    • Relics or artifacts such as pottery, furniture, clothing, and buildings.
    • Personal narratives and memoirs.
    • Persons with direct knowledge.

2-37.  A secondary source interprets, analyzes, cites, and builds upon primary sources. Secondary sources may contain pictures, quotes, or graphics from primary sources. Some types of secondary sources include publications such as—

  • Journals that interpret findings.
  • Magazine articles.


Note. Primary and secondary sources are often difficult to distinguish, as both are subjective in nature. Primary sources are not necessarily more authoritative or better than secondary sources. For any source, primary or secondary, it is important for OSINT personnel to evaluate the report for deception and bias.

2-38. Open-source reliability ratings range from A (reliable) to F (cannot be judged) as shown in table 2-1. A first-time source used in the creation of OSINT is given a source rating of F. An F rating does not mean the source is unreliable, but OSINT personnel have no previous experience with the source upon which to base a determination.

OPEN-SOURCE INFORMATION CONTENT CREDIBILITY

2-39. Similar to open-source reliability, credibility ratings range from one (confirmed) to eight (cannot be judged) as shown in table 2-2. Information received from a first-time source is given a rating of eight; as with the reliability scale, this does not mean the information is not credible, only that OSINT personnel have no means to verify it.
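
Because tables 2-1 and 2-2 are not reproduced here, the sketch below assumes the standard intermediate labels for the two scales; only the endpoint labels (A, F, one, and eight) and the first-time-source rule come from the text above.

```python
# Illustrative lookup for the source-reliability (A to F) and information-
# credibility (1 to 8) scales in paragraphs 2-38 and 2-39. Endpoint labels
# come from the text; intermediate labels are assumptions based on standard
# military rating scales.

RELIABILITY = {
    "A": "Reliable",
    "B": "Usually reliable",      # assumed intermediate value
    "C": "Fairly reliable",       # assumed intermediate value
    "D": "Not usually reliable",  # assumed intermediate value
    "E": "Unreliable",            # assumed intermediate value
    "F": "Cannot be judged",
}

CREDIBILITY = {
    1: "Confirmed",
    2: "Probably true",    # assumed intermediate value
    3: "Possibly true",    # assumed intermediate value
    4: "Doubtfully true",  # assumed intermediate value
    5: "Improbable",       # assumed intermediate value
    6: "Misinformation",   # assumed intermediate value
    7: "Deception",        # assumed intermediate value
    8: "Cannot be judged",
}

def first_time_source_rating() -> tuple[str, int]:
    """A first-time source defaults to F/8: unjudged, not unreliable.
    Reliability and credibility are assessed independently (paragraph 4-10)."""
    return "F", 8
```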

COMPLIANCE

2-40. In accordance with EO 12333, DOD 5240.1-R, and AR 381-10, procedure 2, Army intelligence activities may collect publicly available information on U.S. persons only when it is necessary to fulfill an assigned function.

CLASSIFICATION

2-42. AR 380-5 states that intelligence producers “must be wary of applying so much security that they are unable to provide a useful product to consumers.” This is an appropriate warning for OSINT personnel, for whom excessive concern for OPSEC can undermine the ability to disseminate inherently unclassified information. Examples of unclassified information being over-classified include—

  • Reported information found in a foreign newspaper.
  • A message from a foreign official attending an international conference.

2-43. AR 380-5 directs that Army personnel will not apply classification or other security markings to an article or portion of an article that has appeared in a newspaper, magazine, or other public medium. Final analysis of OSINT may require additional restrictions and be deemed controlled unclassified information or sensitive but unclassified information.

COORDINATION

2-44. During planning, the G-2/S-2 and G-3/S-3 staffs must ensure that OSINT missions and tasks are synchronized with the scheme of maneuver. Acquiring open-source information may compromise the operations of other intelligence disciplines or tactical units. Open-source acquisition that is not synchronized may also result in the tasking of multiple assets and the improper utilization of forces and equipment, adversely affecting the ability of nonintelligence organizations, such as civil affairs, military police, and public affairs, to accomplish assigned missions and tasks. Conversely, overt contact with an open source by nonintelligence organizations can compromise OSINT missions and tasks and lead to the loss of intelligence.

DECEPTION AND BIAS

2-45. Deception and bias are concerns in OSINT exploitation. OSINT exploitation does not normally acquire information by direct observation of activities and conditions within the AO; it relies mainly on secondary sources to acquire and disseminate information. Secondary sources, such as government press offices, commercial news organizations, and nongovernmental organization spokespersons, can intentionally or unintentionally add, delete, modify, or otherwise filter the information made available to the general public. These sources may also convey one message in English, intended to sway U.S. or international perspectives, and a different non-English message for local consumption. It is important to know the background of open sources and the purpose of the public information in order to distinguish objective, factual information; identify bias; or highlight deception efforts directed against the reader and the overall operation.

COPYRIGHT AND INTELLECTUAL PROPERTY

2-46. Copyright is a form of protection, for published and unpublished works, provided by Title 17, United States Code (USC), to authors of “original works of authorship,” including literary, dramatic, musical, and artistic works. Intellectual property is considered any creation of the mind and includes, but is not limited to—

  • Musical works and compositions.
  • Artistic displays.
  • Words or phrases.
  • Symbols and designs.

LINGUIST REQUIREMENTS

2-49. The ability to gather and analyze foreign-language materials is critical in OSINT exploitation. The effective employment of linguists, both civilian and military, facilitates this activity. The most critical foreign language skills and proficiencies are—

  • Transcription. Both listening and writing proficiency in the source language are essential for an accurate transcript. A transcript is extremely important when English language skills of the OSINT personnel are inadequate for authoritative or direct translation from audio or video into English text.
  • Translation. Bilingual competence is a prerequisite for translations. Linguists must be able to—
    • Read and comprehend the source language.
    • Write comprehensibly in English.
    • Choose the equivalent expression in English that fully conveys and best matches the meaning intended in the source language.
  • Interpretation. Bilingual competence is a prerequisite for interpretation. Linguists must be able to—
    • Hear and comprehend the source language.
    • Speak comprehensibly in English.
    • Choose the equivalent expression in English that fully conveys and best matches the meaning intended in the source language.

SECTION IV – MANNING THE OSINT SECTION

2-66. The OSINT section within the intelligence staff section can consist of both intelligence and nonintelligence personnel with the technical competence, creativity, forethought, cultural knowledge, and social awareness to exploit open sources effectively. Commanders generally designate OSINT personnel to satisfy requirements, missions, and tasks, task-organizing organic assets (intelligence personnel, nonintelligence personnel, U.S. and non-U.S. contractor personnel, or linguists) in support of unified land operations.

OSINT SECTION DUTIES

2-67. The duties of the OSINT section are to—

  • Monitor operations. This ensures responsiveness to the current situation and anticipates future acquisition, processing, reporting, and synchronization requirements.
  • Correlate reports. Reports (written, verbal, or graphic) should be correlated with classified reporting to validate OSINT.
  • Screen reports. Information is screened in accordance with the CCIRs and commander’s guidance to ensure that pertinent and relevant information is not overlooked and the information is reduced to a workable size. Screening should encompass the elements of timeliness, completeness, and relevance to satisfy intelligence requirements.
  • Disseminate intelligence and information. Satisfied OSINT requirements are disseminated to customers in the form of useable products and reports.
  • Cue. Effective cueing by OSINT of more technical information collection assets, such as human intelligence (HUMINT) and counterintelligence (CI), improves the overall information collection effort. Cueing keeps organizations abreast of emerging unclassified information and opportunities, and it enables a multidiscipline approach in which information is confirmed or denied by another information source, collection organization, or production activity.
  • Provide feedback. An established feedback mechanism is required to keep the supported commander or customer informed of the status of intelligence and information requirements.

OSINT SECTION AT THE BRIGADE COMBAT TEAM LEVEL

2-68. Each combatant command may have a task-organized OSINT cell or section, varying in scope and personnel. At the tactical level, it is commonplace for commanders to create OSINT cells from organic intelligence personnel to satisfy intelligence requirements.

2-70. As displayed in figure 2-1, personnel comprising the OSINT section at the BCT level include—

  • Section leader.
  • Requirements manager.
  • Situation development analyst.
  • Target development analyst.

SECTION LEADER

2-71. The section leader—

  • Is the primary liaison and coordinator with the BCT S-2.
  • Provides supervisory and managerial oversight.
  • Sets the priority of tasks.
  • Monitors ongoing intelligence support required by the BCT S-2.
  • Ensures that all OSINT products are included in the planning for current and future operations.


REQUIREMENTS MANAGER

2-72. The requirements manager—

  • Ensures that situation development and target development support the overall efforts of the section.
  • Verifies the availability of collection assets.
  • Performs quality control for situation development and target development products.
  • Supervises the receipt, analysis, and dissemination of OSINT products.

SITUATION DEVELOPMENT ANALYST

2-73. The situation development analyst—

  • Monitors publicly available information and open sources in order to ensure the most accurate common operational picture.
  • Analyzes information and produces current intelligence about the operational environment, enemy, terrain, and civil considerations before and during operations.
  • Refines information received on threat intentions, objectives, combat effectiveness, and potential missions.
  • Confirms or denies threat COAs based on publicly available indicators.
  • Provides information to better understand the local population in areas that include, but are not limited to—
    • Tribal affiliations.
    • Political beliefs.
    • Religious tenets.
    • Key leaders.
    • Support groups.
    • Income sources.

TARGET DEVELOPMENT ANALYST

2-74. The target development analyst—

  • Identifies the components, elements, and characteristics of specific targets, both lethal and nonlethal.
  • Identifies civil and other non-target considerations within the AO.
  • Provides publicly available information on threat capabilities and limitations.

TASK ORGANIZATION CONSIDERATIONS

2-75. When task-organizing the OSINT section to satisfy intelligence and information requirements, units must consider—

  • Mission command.
  • Acquisition.
  • Collecting and processing.
  • Computer systems.

MISSION COMMAND

2-76. Dedicated mission command personnel are needed in order to provide management and oversight of OSINT exploitation to ensure continued synchronization with maneuver elements, tasks, and requests.

ACQUISITION

2-77. Due to the volume of publicly available information, acquisition through established information collection activities and systems is necessary to ensure that open-source information that could provide essential mission-related information is not lost or misplaced. Publicly available information acquired from open sources should be reported in accordance with established unit SOPs.

COLLECTING AND PROCESSING

2-78. OSINT properly integrated into overall collection plans during operations is used to satisfy CCIRs. In order to access the full array of domestic and foreign publicly available information, processing the materials often requires OSINT support to personnel operating in document exploitation (DOCEX).

Chapter 3

Collecting OSINT

Due to the unclassified nature of publicly available information, those engaging in OSINT collection activities can begin researching background information on their assigned area of responsibility, generating intelligence knowledge, long before the issuance of an official military deployment order. IPB, an integrating process for Army forces, is the mechanism for identifying intelligence and information requirements that can be satisfied using publicly available information and open sources.

COLLECTING PUBLICLY AVAILABLE INFORMATION

3-1. Publicly available information and open-source research, applied as an economy-of-force measure, are an effective means of assimilating authoritative and detailed information on the mission variables (METT-TC) and operational variables (political, military, economic, social, information, infrastructure, physical environment, and time [PMESII-PT]). The unanswered intelligence and information requirements compiled at the conclusion of the MDMP and IPB are addressed through the commander’s input. Commander’s input—

  • Is expressed in the terms of describe, visualize, and direct.
  • Is the cornerstone of guidance used by OSINT personnel.
  • Validates intelligence and information requirements.

3-2. Commander’s input is expressed as CCIRs and categorized as friendly force information requirements (FFIRs) and PIRs. Using continuous research and processing methods, coupled with the commander’s input and intelligence and information requirements, OSINT personnel collect publicly available information for exploitation. The collect step of the intelligence process involves collecting, processing, and reporting information in response to information collection tasks. Collected information is the foundation of intelligence databases, intelligence production, and situational awareness.

3-3. OSINT is integrated into planning through the continuous process of IPB. Personnel engaging in OSINT exploitation must initiate collection and requests for information to satisfy CCIRs to the level of detail required. Collecting open-source information comprises four steps, as shown in figure 3-1 on page 3-2:

  • Identify information and intelligence requirements.
  • Categorize intelligence requirements by type.
  • Identify source to collect the information.
  • Determine collection technique.

IDENTIFY INFORMATION AND INTELLIGENCE REQUIREMENTS

3-4. Intelligence and information gaps are identified during the IPB process. These gaps should be developed and framed around the mission and operational variables in order to ensure the commander receives the information needed to support all lines of operations or lines of effort. As information and intelligence are received, OSINT personnel update IPB products and inform the commander of any relevant changes. OSINT needs clearly stated information and intelligence requirements to focus acquisition and production effectively; these requirements should be incorporated into collection plans in order to be satisfied.

3-5. Intelligence requirements can extend beyond the scope of OSINT, resulting in information and intelligence gaps that must be satisfied using other appropriate collection methods.

CATEGORIZE INTELLIGENCE REQUIREMENTS BY TYPE

3-6. IPB is used to categorize intelligence and information requirements by type based on mission analysis and friendly COAs. OSINT personnel provide input during this step. Two important related terms that work in concert with OSINT are private information and publicly available information:

  • Private information comprises data, facts, instructions, or other material intended for or restricted to a particular person, group, or organization. Intelligence requirements that require private information are not assigned to OSINT sections. There are two subcategories of private information:
    • Controlled unclassified information requires the application of controls and protective measures for a variety of reasons (for example, sensitive but unclassified or for official use only).
    • Classified information requires protection against unauthorized disclosure and is marked to indicate its classified status when produced or disseminated.
  • Publicly available information comprises data, facts, instructions, or other material published or broadcast for general public consumption; available on request to a member of the general public; lawfully seen or heard by any casual observer; or made available at a meeting open to the general public.

IDENTIFY SOURCE TO COLLECT INFORMATION

3-7. Identifying the source is part of planning requirements and assessing collection plans. The two types of sources used to collect information are confidential sources and open sources:

  • Confidential sources comprise any persons, groups, or systems that provide information with the expectation that the information, relationship, or both are protected against public disclosure. Information and intelligence requirements that require confidential sources are not assigned to OSINT sections.
  • Open sources comprise any person or group that provides information without the expectation of copyright or privacy—the information, the relationship, or both is not protected against public disclosure. Open sources include but are not limited to—
    • Academia. Courseware, dissertations, lectures, presentations, research papers, and studies, in both hardcopy and softcopy, covering subjects and topics on economics, geography (physical, cultural, and political-military), international relations, regional security, and science and technology.
    • Government agencies and nongovernmental organizations. Databases, posted information, and printed reports on a wide variety of economic, environmental, geographic, humanitarian, security, and science and technology issues.
    • Commercial and public information services. Broadcasted, posted, and printed news on current international, regional, and local topics.
    • Libraries and research centers. Printed documents and digital databases on a range of topics.
    • Individuals and groups. Handwritten, painted, posted, printed, and broadcasted information such as art, graffiti, leaflets, posters, tattoos, and Web sites.
    • Gray literature. Materials and information found using advanced Internet search techniques on the Deep Web, consisting of technical reports, scientific research papers, and white papers.


DETERMINE COLLECTION TECHNIQUE

3-8. Collection implies gathering, by a variety of means, raw data and information from which finalized intelligence is created, synthesized, and disseminated. Collected information is analyzed and incorporated into all-source and other intelligence discipline products. These products are disseminated per unit SOPs, OPORDs, other established feedback mechanisms, or the intelligence architecture. Collection techniques confirm the presence of planned targets and provide a baseline of activity and information on sources within the AO for further development and future validation. When gathering information, the selected technique includes specific information requests, objectives, priorities, the timeframe of expected activity, the latest (or earliest) time the information is of value (LTIOV), and reporting instructions.
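
A minimal sketch of a collection tasking record carrying the elements listed above; the field names and the LTIOV check are illustrative, not a prescribed format.

```python
# Minimal sketch of a collection tasking record with the elements listed in
# paragraph 3-8. Field names are illustrative, not a prescribed format.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CollectionTask:
    information_request: str   # the specific information requested
    objective: str             # why the information is needed
    priority: int              # 1 = highest priority
    activity_window: tuple[datetime, datetime]  # timeframe of expected activity
    ltiov: datetime            # latest time the information is of value
    reporting_instructions: str  # format, channel, and recipient

    def overdue(self, now: datetime) -> bool:
        """Information collected after the LTIOV no longer supports the decision."""
        return now > self.ltiov
```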

3-9. Open-source information that satisfies a CCIR is disseminated as quickly as possible to the commander and other staff personnel per unit SOPs or OPORDs. OSINT can use unintrusive collection techniques to cue more technical collection assets. Collection techniques, depending on operation complexities, can enhance the chances of satisfying intelligence and information requirements.

3-10. Intelligence and information requirements suited to open-source acquisition are assigned to OSINT personnel. Open-source collection includes the acquisition of material in the public domain. The extent to which open-source collection yields valuable information varies greatly with the nature of the target and the subject involved. The information might be collected by individuals who buy books and journals, observe military parades, or record television and radio programs.

RESEARCH

3-11. After determining the collection technique, OSINT personnel conduct research to satisfy intelligence and information requirements.

DETERMINE RESEARCH QUESTION

3-15. Research begins with the determination of a research question, expressed in the form of CCIRs, regarding a given topic. In OSINT exploitation, the research question can be based on the mission variables (METT-TC) and operational variables (PMESII-PT). The research question is refined through the development of the information and intelligence requirements to be satisfied. Requirements that are not satisfied are included in the planning requirements and assessing collection plan, where more technical means of collection can be used.

DEVELOP RESEARCH PLAN

3-16. Different facets of a question may be expressed as information and intelligence requirements. These requirements form the basis for the research plan. A research plan can use both field research and practical research. The plan consists of—

  • Identification of information sources (both primary and secondary).
  • Description of how to access those sources.
  • Format for compiling the data.
  • Research methodology.
  • Dissemination format.

IMPLEMENT RESEARCH PLAN

3-17. Using open-source media (the means of sending, receiving, and recording information), media components, and associated elements (see table 3-1), OSINT personnel implement the research plan. The primary media used to implement a research plan include—

  • Public speaking forums.
  • Public documents.
  • Public broadcasts.
  • Internet Web sites.

Public Speaking Forums

3-18. OSINT personnel conduct research by attending public speaking forums such as conferences, lectures, public meetings, working groups, debates, and demonstrations. Attending these and similar events provides opportunities to build relationships with nonmilitary professionals and organizations. Intelligence personnel require a thorough understanding of the local culture and laws to ensure that any collection activities are unintrusive and do not violate local customs, laws, or conventions such as the Chatham House Rule.

Public Documents

3-20. When acquiring public documents, OSINT personnel must be aware of the local environment and use a technique that is unintrusive and appropriate for the situation. These techniques include but are not limited to—

  • Photographing and copying documents available in public forums such as town halls, libraries, and museums.
  • Finding discarded documents in a public area such as streets, markets, and restrooms.
  • Photographing documents in public areas such as banners, graffiti, and posters.
  • Purchasing documents directly from street vendors, newspaper stands, bookstores, and publishers.
  • Purchasing documents through a third party such as a wholesale distributor or book club.
  • Receiving documents upon request without charge from authors, conferences, trade fairs, or direct mail advertising.

Public Broadcasts

3-21. Regional bureaus of the DNI OSC collect on regional and international broadcast networks in accordance with open-source information and intelligence requirements. Coverage of regional and international broadcasts enables OSINT personnel and organizations to use assets from already identified sources. The four techniques used to acquire information from public broadcasts are—

  • Spectrum search. Searching the entire spectrum to detect, identify, and locate all emitters to confirm overall activity. This search provides an overview of the amount and types of activities and where they are located in the spectrum.
  • Band search. Searching a particular segment of the spectrum to confirm overall activity. By limiting the size of the search band, the asset can improve the odds of acquiring a signal.
  • Frequency search. Searching for radio or television frequencies.
  • Program search. Searching for radio or television programs. Programs vary by type, content characteristics, and media format. Program surveillance verifies and expands upon initial results.

Internet Web Sites

3-23. The four steps to acquire information on Internet Web sites are—

  • Plan Internet search.
  • Conduct Internet search.
  • Refine Internet search.
  • Record results.
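
A compressed sketch of these four steps using the Python requests library; the search URL, parameters, and terms are placeholders, not an authorized or prescribed service.

```python
# Sketch of the four steps in paragraph 3-23: plan, conduct, refine, record.
# The search endpoint and terms below are placeholders for illustration.

import csv
from datetime import datetime, timezone

import requests

queue = ["regional port construction"]   # step 1: plan the Internet search
seen, results = set(), []

while queue:
    term = queue.pop(0)
    if term in seen:
        continue
    seen.add(term)
    resp = requests.get("https://example.org/search",   # step 2: conduct the search
                        params={"q": term}, timeout=30)
    # Step 3: refine the search once if the first pass looks unproductive.
    if resp.ok and "harbor" not in term and "harbor" not in resp.text.lower():
        queue.append(term + " harbor expansion")
    results.append({                                    # step 4: record the results
        "term": term,
        "url": resp.url,
        "retrieved": datetime.now(timezone.utc).isoformat(),
        "status": resp.status_code,
    })

with open("search_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["term", "url", "retrieved", "status"])
    writer.writeheader()
    writer.writerows(results)
```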


Chapter 4

Producing OSINT

The Army operates in diverse environments around the world. This diversity requires the proper use of publicly available information and open sources in the production of OSINT. Given the volume of existing publicly available information and the unpredictability of requests for information and of intelligence requirements, OSINT personnel engaging in open-source exploitation must remain aware and flexible when producing OSINT. Effective production ensures that commanders and subordinates receive timely, relevant, and accurate intelligence. OSINT personnel produce OSINT by evaluating, analyzing, reporting, and disseminating intelligence as assessments, studies, and estimates.

CATEGORIES OF INTELLIGENCE PRODUCTS

4-1. After receiving a mission through the MDMP and commander’s intent—expressed in terms of describe, visualize, and direct—intelligence and information requirements are identified. Personnel engaging in OSINT exploitation typically gather and receive information, perform research, and report and disseminate information in accordance with the categories of intelligence products. (See table 4-1.) OSINT products are categorized by intended use and purpose. Categories can overlap and some publicly available and open-source information can be used in more than one product.

EVALUATE INFORMATION

4-2. Open sources are overt and unclassified. Due to these aspects of publicly available information and open sources, deception, bias, and disinformation are of particular concern when evaluating sources of information during OSINT exploitation. Information is evaluated in terms of—

  • Communications.
  • Information reliability and credibility.

COMMUNICATIONS

4-3.  A simple communications model is typically two-way and consists of six parts:

  • Intended message.
  • Speaker (sender).
  • Speaker’s encoded message.
  • Listener (receiver).
  • Listener’s decoded message.
  • Perceived message.
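
A toy illustration of how the six parts interact; the “codebooks” below are stand-ins for the language, culture, and context that shape encoding and decoding.

```python
# Toy illustration of the six-part model in paragraph 4-3: the perceived
# message can differ from the intended one because encoding and decoding
# each introduce distortion. Codebooks are stand-ins for language, culture,
# and context; the words are placeholders.

speaker_codebook = {"withdrawal": "redeployment"}   # speaker's encoding
listener_codebook = {"redeployment": "retreat"}     # listener's decoding

intended = "withdrawal"                              # intended message
encoded = speaker_codebook.get(intended, intended)   # speaker's encoded message
decoded = listener_codebook.get(encoded, encoded)    # listener's decoded message
perceived = decoded                                  # perceived message

assert perceived != intended  # same statement, different perceived meaning
```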

4-4. The speaker and listener each bring different perspectives and aspects to communication (as shown in table 4-2 on page 4-4). Communicators face great challenges as the message is encoded by the speaker and decoded by the listener.

4-5. Communications during public speaking engagements are often difficult to evaluate given the myriad elements that can prevent a successfully transmitted message. Because multiple elements act simultaneously, public speaking events are subjective and can be misunderstood.

4-6. The speaker sends an intended message through a verbal, nonverbal, vocal, or visual media channel, or a combination thereof. Within communications, the areas that typically prevent the true intent of the message from being conveyed are the sending method, the environment, and the receiving method. Understanding these areas generally yields a greater success rate between speaker and listener.

4-8. Speakers communicate verbally and nonverbally based on their beliefs, emotions, or goals. It is important to understand the differences in communication styles and how an audience interprets them in order to communicate the intended message effectively and avoid misunderstandings. Evaluating information acquired through public speaking venues can be challenging because of these factors. Comparing these types of communication (see table 4-2) can assist collection personnel in determining the influences surrounding communicators and predicting how messages may be perceived.

INFORMATION RELIABILITY AND CREDIBILITY

4-9. OSINT personnel evaluate information with respect to reliability and credibility. It is important to evaluate the reliability of open sources in order to distinguish objective, factual information; bias; or deception. The rating is based on the subjective judgment of the evaluator and the accuracy of previous information produced by the same source.

4-10. OSINT personnel must assess the reliability and the credibility of the information independently of each other to avoid bias. The three types of sources used to evaluate and analyze received information are—

  • Primary sources. Have direct access to the information and convey it directly and completely.
  • Secondary sources. Convey information through intermediary sources, often using the vernacular, and summarize or paraphrase the information.
  • Authoritative sources. Accurately report information from the leader, government, or ruling party.


PROCESS INFORMATION

4-14. Process is an information management activity: to raise the meaning of information from data to knowledge (FM 6-0). The function of processing, although not a component of the intelligence process, is a critical element in analyzing and producing OSINT. Publicly available information answers intelligence and information requirements; based on the type of information received, it must be processed before being reported and disseminated as finalized OSINT. Intelligence personnel transform publicly available information and open sources into a form suitable for processing by—

  • Digitizing.
  • Transcribing and translating.

DIGITIZING

4-15. OSINT personnel create a digital record of documents by scanning or taking digital photographs. Pertinent information about the document must be annotated to ensure accountability and traceability. Digitization enables the dissemination of the document to external databases and organizations and enables the use of machine translation tools to screen documents for keywords, names, and phrases.
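
A sketch, under assumed field names and keywords, of annotating a digitized document for accountability and traceability and screening its extracted text for keywords.

```python
# Sketch of paragraph 4-15: annotate a digitized document for accountability
# and traceability, and screen its extracted text for keywords, names, and
# phrases. Field names, file names, and keywords are illustrative.

import hashlib
import json
from datetime import datetime, timezone

def annotate(image_path: str, extracted_text: str, collector: str,
             keywords: list[str]) -> dict:
    return {
        "file": image_path,
        "sha256": hashlib.sha256(extracted_text.encode("utf-8")).hexdigest(),
        "digitized": datetime.now(timezone.utc).isoformat(),
        "collector": collector,  # accountability and traceability
        "keyword_hits": [k for k in keywords
                         if k.lower() in extracted_text.lower()],
    }

record = annotate("scan_0001.jpg",
                  "Checkpoint schedule for the river bridge crossing...",
                  "OSINT section, 1st BCT", ["checkpoint", "bridge"])
print(json.dumps(record, indent=2))
```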


ANALYSIS OF MEDIA SOURCES

4-20. Analysis of the media is the systematic comparison of the content, behavior, patterns, and trends of a country’s organic media organizations and sources. Media analysis was developed from methods and experience gained during OSINT exploitation of authoritarian political systems in the World War II and Cold War eras, when media was government-controlled. Publicly available information and open sources must be analyzed for proper inclusion in OSINT processing. OSINT personnel weigh media analysis against set criteria. These criteria assist OSINT personnel in discerning facts, indicators, patterns, and trends in information and relationships. This involves inductive or deductive reasoning to understand the meaning of past events and predict future actions.

4-21. Comparison of trends in the content of individual media with shifts in official policy suggests that some media continues to mirror the dominant policy line. By establishing a track record for media that is vulnerable to external and internal pressure to follow the central policy line, OSINT personnel can identify potential policy shifts. Comparison of what is said and what is not said against the background of what others are saying and what has been said before is the core of media source analysis.

4-22. Media source analysis is also important in semi-controlled and independent media environments. In media environments where both official and nonofficial media are present, official media may be pressured to follow the central policy line. Analyzing media in these environments must encompass both the journalist and commentator levels. It is important to establish the track record of such individuals to discover whether they have access to insider information from parts of the government or are being used by officials to float policies.

4-23. The three aspects of media source analysis are—

  • Media control.
  • Media structure.
  • Media content.


Media Control

4-24. Analyzing media environments in terms of media control requires awareness by intelligence personnel of how different elements of the media act, exert influence, and are of intelligence value. Careful examination of the differences in how media is handled in different types of environments can provide insight into domestic and foreign government strategies. Media environments are categorized as—

Government-controlled.

  • Control over the media is centralized.
  • The dominant element of control is the government and higher tiers of political leadership.
  • Governments use censorship mechanisms to exercise control over media content prior to dissemination of information.

Semi-controlled.

  • Control over the media is semi-centralized.
  • Governments exercise and promote self-censorship by pressuring media managers and journalists prior to the dissemination of information.

Independent.

  • Control over the media is decentralized.
  • Governments may regulate allocation of broadcast frequencies, morality in content, ownership in media markets, and occasionally apply political pressure against media or journalists.
  • Economic factors, norms of the journalist profession, the preferences of people who manage media, and the qualities of individual journalists who report or comment on the news all influence or control media content.

4-25. All media environments are controlled to some degree, which makes media source analysis easier to perform. The challenge for OSINT personnel is to determine the level, factors, and elements of control (see table 4-3) that elites, institutions, or individuals exercise; how much power each possesses; and what areas are of interest to satisfy intelligence and information requirements.


Media Structure

4-26. Media structure encompasses the attributes of media material. Structural elements affect the meaning and significance of an item’s content and are often as important as the content itself. Analysis of these elements uncovers insights into the points of view of personnel in government-controlled, semi-controlled, and independent environments and establishes the structure of media elements.

4-27. The media structural elements are—

  • Selection, omission, and slant.
  • Hierarchy of power.
  • Format.
  • Media type.
  • Prominence.
  • Dissemination.
  • Timing.

Selection, Omission, and Slant

4-28. Selection of media items is a fundamental editorial decision at the core of news reporting. Selection includes media manager decisions about which stories are covered, which stories are not covered, and which slant (viewpoint), images, and information should be included, emphasized, deemphasized, or omitted in a news item.

Hierarchy of Power

4-29. All political systems involve a hierarchy of power (see table 4-4), and official statements issued by elements of the system follow a corresponding hierarchy of authoritativeness. Authoritativeness is the likelihood that the views expressed in a statement represent the dominant viewpoint within the political system. The hierarchy is obvious at the political level—a statement by the prime minister trumps a statement by a minister. In other cases, the hierarchy may not be so obvious—for example, whether a speech by the party chairman is more authoritative than one by the head of state.

Format

4-30. Format consists of how media is produced and disseminated for public consumption. Format can be a live news report, a live interview, or a prerecorded report or interview; prerecorded formats give individuals more opportunity to influence the context delivered to consumers.

Media Type

4-31. Television is the medium with the largest potential audience in most media environments and has a significant impact in shaping the impressions of the general viewing public. Television has replaced radio as the main source of news except in media environments where poverty prevents mass access to television. Fewer people may get information from newspapers and Internet news Web sites, but these people may be richer, better educated, and more influential than the general television audience. Specialized print publications and Internet Web sites reach a still smaller audience, but that audience will likely include officials and experts who have influence on policy debates and outcomes.

Prominence

4-32. Questions to consider pertaining to prominence of media stories are—

  • Does the story appear on the front page of newspapers or on the home page of news Web sites?
  • How much space is the story given?
  • In what order does the story appear in the news broadcast?
  • Is it featured in the opening previews of the newscast?
  • How frequently is the story rebroadcast on subsequent newscasts or bulletins?
  • How much airtime did it get?
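
These questions lend themselves to a simple weighted score for comparing how prominently different outlets play the same story. The weights below are arbitrary assumptions for illustration; analysts would tune them to the media environment under study.

```python
# Simple weighted prominence score built from the questions in paragraph 4-32.
# All weights are arbitrary assumptions chosen only for illustration.

def prominence_score(front_page: bool, column_inches: float,
                     broadcast_position: int, in_preview: bool,
                     rebroadcasts: int, airtime_minutes: float) -> float:
    score = 0.0
    score += 3.0 if front_page else 0.0      # front page / home page placement
    score += 0.1 * column_inches             # space given to the story
    score += max(0, 3 - broadcast_position)  # earlier in the lineup scores higher
    score += 1.0 if in_preview else 0.0      # featured in opening previews
    score += 0.5 * rebroadcasts              # repetition on later newscasts
    score += 0.2 * airtime_minutes           # total airtime
    return score

lead_story = prominence_score(True, 25.0, 1, True, 3, 4.0)
brief_item = prominence_score(False, 3.0, 7, False, 0, 0.5)
assert lead_story > brief_item  # the lead story is played far more prominently
```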

Dissemination

4-33. Attention to patterns of dissemination of leader statements is important in government-controlled media environments. Leaders communicate publicly in a variety of ways, such as formal policy statements, formal interviews, and impromptu remarks. By comparing the volume of media attention given to a statement, a determination is made as to whether the statement was intended as a pronouncement of established policy or merely as an ad hoc, uncoordinated expression prompted by narrow contextual or temporal conditions.

Timing

4-34. OSINT personnel have traditionally paid close attention to the timing of the appearance of information in the media as the information corresponds to the news cycle. A news cycle is the process and timing by which different types of media sources obtain information, incorporate or turn the information into a product, and make the product available to the public.

Media Content

4-35. Understanding the significance of media content can enhance the value of media source analysis. Media content encompasses the elements of—

  • Manifest content.
  • Latent content.

Manifest Content

4-36. Manifest content is the actual words, images, and sounds conveyed by open sources. One of the most important forms of media source analysis involves the careful comparison of the content of authoritative official statements to identify the policies or intentions represented. Governments, political entities, and actors use statements and information released to the media to strengthen, support, and promote policies.

4-37. Manifest content analysis of authoritative public statements is an effective tool to discern leadership intentions and attitudes. Effective manifest content analysis considers the following:

  • Esoteric communications, or “reading between the lines,” are public statements whose surface meaning (manifest content) does not reveal the author’s real purpose, meaning, or significance (latent content). Esoteric communication is particularly evident in political systems with strong taboos against public contention or in cases where sensitive issues are at stake. Esoteric communication is more formalized in some media environments than in others but is common in all political communications.
  • Multimedia content analysis considers elements of content beyond the words used. Facial expressions, the voice inflections of leaders giving speeches or being interviewed, and the reading of a script by a news broadcaster all provide indicators about the views on a subject or topic. These indicators help determine whether a statement was seriously considered, intended to be humorous, or simply impromptu.
  • The historical or past behavior of open sources must be considered. Influences on the media outlet, journalist, newsmaker, or news broadcaster are factors beyond immediate control. Other issues, such as time pressures, deadlines, or technical malfunctions, may also affect the content or context of public information. Analysts’ judgments about source behavior must be made with careful consideration of previous behavior.

Latent Content

4-38. Latent content refers to the hidden or unstated meaning of media content. Latent content can reveal patterns in the views and actions of media controllers. These patterns and rules come from the unstated content that provides the underlying meaning of media content and behavior. When a pattern of content changes, an analyst can infer a change in the viewpoint of the controller or a change in the balance of power among controlling elements.
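
One way to operationalize a change in a pattern of content is to track how often a recurring official formulation appears per reporting period and flag departures from the established baseline. The threshold below is an assumption, not a doctrinal value.

```python
# Sketch of the pattern-change inference in paragraph 4-38: track how often a
# recurring official formulation appears per period and flag a departure from
# the established baseline. The 50 percent threshold is an assumption.

def flag_pattern_change(counts_by_period: list[int], threshold: float = 0.5) -> bool:
    """Flag when the latest period departs from the baseline of all earlier
    periods by more than the threshold fraction, in either direction."""
    if len(counts_by_period) < 2:
        return False
    baseline = sum(counts_by_period[:-1]) / (len(counts_by_period) - 1)
    latest = counts_by_period[-1]
    return baseline > 0 and abs(latest - baseline) / baseline > threshold

# Weekly mentions of a stock phrase: steady, then it nearly disappears.
mentions = [12, 11, 13, 12, 3]
if flag_pattern_change(mentions):
    print("Pattern change: possible shift in controller viewpoint; analyze further.")
```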

REPORT AND DISSEMINATE INFORMATION

4-39. Intelligence and information requirements satisfied through publicly available information and open sources should be immediately reported and disseminated in accordance with unit SOPs, which are generally centered on intelligence requirements, information criticality, and information sensitivity.

4-40. Finalized OSINT serves no purpose unless it is timely, accurate, and properly disseminated to commanders and customers in a useable form. Reported and disseminated OSINT products that satisfy intelligence and information requirements include but are not limited to—

  • Single discipline or multidiscipline estimates or assessments.
  • Statements of facts.
  • Evaluations of threat capabilities and limitations.
  • The threat’s likely COAs.

REPORTING GUIDELINES AND METHODS

4-41. Effective dissemination creates a feedback mechanism for assessing usefulness and predicting or assessing future intelligence and information requirements. The objective in reporting and disseminating intelligence and information is to provide relevant information that supports conducting (planning, preparing, executing, and assessing) operations.

4-42.  The basic guidelines in preparing products for reporting and disseminating information are—

    • Timely. Information should be reported to affected units without delay; reporting is never held back solely to ensure the correct format.
    • Relevant. Information must contribute to the answering of intelligence requirements. Relevant information reduces collection, organization, and transmission times.
    • Complete. Prescribed formats and SOPs ensure completeness of transmitted information.

4-43.  The three reporting methods used to convey intelligence and information are—

    • Written. Methods include formatted reports such as spot reports, tactical reports (TACREPs), and intelligence information reports (IIRs).
    • Graphic. Web-based report dissemination is an effective technique to ensure the widest awareness of written and graphical information across echelons. OSINT personnel can collaborate and provide statuses of intelligence requirements through Web sites. Information can also be uploaded to various databases to support future open-source missions and operations.
    • Verbal and voice. The most common way to disseminate intelligence and information verbally is through a military briefing. Based on the criticality, sensitivity, and timeliness of the information, ad hoc and impromptu verbal communication methods are the most efficient to deliver information to commanders.


REPORTING AND DISSEMINATION CONSIDERATIONS

4-49. When reporting and disseminating OSINT products, considerations include but are not limited to—

  • Classification. When creating products from raw information, write to release at the lowest classification level to facilitate the widest distribution of the intelligence. Use tearline report formats to facilitate the separation of classified and unclassified information for users operating on communications networks of differing security levels (see the sketch after this list). Organizations with original classification authority or personnel with derivative classification responsibilities must provide subordinate organizations and personnel with a security classification guide or guidance for information and intelligence derived from publicly available information and open sources in accordance with the policy and procedures in AR 380-5.
  • Feedback-mechanism development. E-mail, postal addresses, rating systems, and survey forms are mechanisms that OSINT personnel can use to understand the information requirements of customers.
  • Intellectual property identification. Identify intellectual property that an author or an organization has copyrighted, patented, or trademarked to preserve rights to the information. OSINT exploitation does not involve the selling, importing, or exporting of intellectual property. OSINT personnel engaging in exploitation should cite all sources used in reported and disseminated products. When uncertain, OSINT personnel should contact the supporting SJA office before reporting and disseminating a finalized OSINT product.
  • Use of existing dissemination methods, when and if possible. Creating new dissemination methods can complicate existing ones.
  • Analytical pitfalls. Analysts need to be cognizant of pitfalls when reporting and disseminating OSINT. The errors, referred to as fallacies (of omission and assumption), are usually committed accidentally, although sometimes they are deliberately used to persuade, convince, or deceive. Analysts must also be aware of hasty generalization, false cause, misuse of analogies and language, biases (cultural, personal, organizational, cognitive), and hindsight. (For more information on analytical pitfalls, see TC 2-33.4.)
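
The tearline-handling sketch referenced in the classification consideration above; the marker string and report text are illustrative placeholders, not a prescribed format.

```python
# Hypothetical tearline handling: everything below the marker is written for
# release at the lower level. Marker and report text are illustrative only.

TEARLINE = "-----TEARLINE-----"

def below_tearline(report: str) -> str:
    """Return only the releasable portion below the tearline marker."""
    _, _, releasable = report.partition(TEARLINE)
    return releasable.strip()

report = """(Classified analytic comment on sourcing.)
-----TEARLINE-----
(U) Local press reports road construction north of the city."""
print(below_tearline(report))  # only the unclassified portion is passed on
```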


Appendix A

Legal Restrictions and Regulatory Limitations

Publicly available information and open sources cover a wide array of areas. Exploring, assessing, and collecting publicly available information and open sources has the potential to adversely affect organizations that execute OSINT missions. In some regards, OSINT missions could involve information either gathered against or delivered by U.S. persons. Given the scope of OSINT and its applicability within the intelligence community, having a firm awareness of intelligence oversight and its regulatory applications is necessary.


EXECUTIVE ORDER 12333

A-3. EO 12333 originated from operations that DOD intelligence units conducted against U.S. persons involved in the Civil Rights and anti-Vietnam War movements. DOD intelligence personnel used overt and covert means to collect information on the political positions of U.S. persons, retained the information in a nationwide database, and disseminated the information to law enforcement authorities.

A-4. The purpose of EO 12333 is to enhance human and technical collection techniques, especially those undertaken abroad; the acquisition of significant foreign intelligence; and the detection and countering of international terrorist activities and espionage conducted by foreign powers. Accurate and timely information about the capabilities, intentions, and activities of foreign powers, organizations, and their agents is essential to informed national defense decisions. Collection of such information is a priority objective, pursued in a vigorous, innovative, and responsible manner that is consistent with the U.S. Constitution and applicable laws and principles.

INTERPRETATION AND IMPLEMENTATION

A-5. AR 381-10 interprets and implements EO 12333 and DOD 5240.1-R. AR 381-10 enables the intelligence community to perform authorized intelligence functions in a manner that protects the constitutional rights of U.S. persons. The regulation does not authorize intelligence activity. An Army intelligence unit or organization must have the mission to conduct any intelligence activity directed against U.S. persons. In accordance with the Posse Comitatus Act (Section 1385, Title 18, USC), the regulation does not apply to Army intelligence units or organizations when engaged in civil disturbance or law enforcement activities without prior approval by the Secretary of Defense.


ASSIGNED FUNCTIONS

A-6. Based on EO 12333, the assigned intelligence functions of the Army are to—

  • Collect, produce, and disseminate military-related foreign intelligence as required for execution of responsibility of the Secretary of Defense.
  • Conduct programs and missions necessary to fulfill departmental foreign intelligence requirements.
  • Conduct activities in support of DOD components outside the United States in coordination with the Central Intelligence Agency (CIA) and within the United States in coordination with the Federal Bureau of Investigation (FBI) pursuant to procedures agreed upon by the Secretary of Defense and the Attorney General.
  • Protect the security of DOD installations to include its activities, property, information, and employed U.S. persons by appropriate means.
  • Cooperate with appropriate law enforcement agencies to protect employed U.S. persons, information, property, and facilities of any agency within the intelligence community.
  • Participate with law enforcement agencies to investigate or prevent clandestine intelligence activities by foreign powers or international terrorists.
  • Provide specialized equipment, technical knowledge, or the assistance of expert personnel for use by any department or agency or, when lives are endangered, to support local law enforcement agencies.

ARMY REGULATION 381-10

A-7. AR 381-10 enables any Army component to perform intelligence functions in a manner that protects the constitutional rights of U.S. persons. It also provides guidance on collection techniques used to obtain information for foreign intelligence and CI purposes. Intelligence activity is not authorized by this regulation.

A-13. AR 381-10 does not authorize the collection of any information relating to a U.S. person solely because of personal lawful advocacy of measures opposed to government policy as embodied in the First Amendment to the U.S. Constitution. The First Amendment states that Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.

RETENTION OF U.S. PERSON INFORMATION

A-14. Retention refers only to maintaining information about U.S. persons that the Army intelligence component can retrieve by the person’s name or other personal identifying data. AR 381-10, procedure 3, describes the kinds of U.S. person information that an Army intelligence component may knowingly retain without the individual’s consent.

DISSEMINATION OF U.S. PERSON INFORMATION

A-19. Disseminate, an information management activity, refers to communicating relevant information of any kind from one person or place to another in a usable form by any means to improve understanding or to initiate or govern action (FM 6-0). In other words, dissemination is the delivery of intelligence to users in a suitable form with application of the intelligence to appropriate missions, tasks, and functions.


QUESTIONABLE INTELLIGENCE ACTIVITY

A-20. Questionable intelligence activity occurs when intelligence operations potentially violate—

  • Laws.
  • EOs.
  • Presidential directives.
  • DOD or Army policies.

A-21. Intelligence personnel should report questionable intelligence activity through the chain of command, the inspector general, or directly to the Assistant to the Secretary of Defense for Intelligence Oversight in accordance with AR 381-10. The following are examples of questionable intelligence activity involving the improper collection, retention, or dissemination of U.S. person information:

  • Collecting and gathering information about U.S. domestic groups not connected with a foreign power or international terrorism.
  • Producing and disseminating intelligence threat assessments containing U.S. person information without a clear explanation of the intelligence purpose for which the information was collected.
  • Collecting and gathering U.S. person information for force protection purposes without determining if the intelligence function is authorized.
  • Collecting and gathering U.S. person information from open sources without a logical connection to the mission of the unit.

 

Appendix B

Cyberspace Internet Awareness

Intelligence and nonintelligence personnel conducting open-source research must maintain awareness of the digital operational environment by minimizing their cyber “footprints,” practicing effective cyber OPSEC, using safe online browsing techniques and habits, and understanding that documents can contain embedded metadata.

CYBERSPACE SITUATIONAL AWARENESS AND CYBER SECURITY

B-1. More than any other intelligence discipline, research involving publicly available information and open sources could unintentionally reveal CCIRs. In the areas of computer information assurance and Internet security, Internet awareness is needed to conduct open-source research and exploitation effectively and aggressively. At the same time, unjustified Web site restrictions can severely impede the acquisition and subsequent processing, reporting, and dissemination of publicly available information and open sources.

B-2. Awareness is the beginning of effective cyber security. When searching the Internet, computers transmit machine specifications such as the operating system, the type and version of each enabled program, security levels, a history of Web sites visited, cookie information, user preferences, IP address, enabled languages, and the referring URL. Visitors are frequently redirected to alternate Web sites based on search criteria, location, language, and the time the search is conducted.
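Part of this footprint can be seen directly by asking a public echo service to return the headers a browser or script just sent. A minimal sketch, assuming Python with the widely used requests library and outbound Internet access; httpbin.org is a public test service used here only for illustration.

```python
import requests

# Ask a public echo service to return the headers our client just sent.
# These headers are part of the "footprint" every Web visit leaves behind.
response = requests.get("https://httpbin.org/headers", timeout=10)

for name, value in response.json()["headers"].items():
    # Typical entries include User-Agent (software and version),
    # Accept-Language (enabled languages), and Accept-Encoding.
    print(f"{name}: {value}")
```

The server also sees the connecting IP address and, in ordinary browsing, a Referer header naming the previous page, which is the OPSEC concern raised in B-7 below.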

B-3. The Internet is described as a “network of networks” due to the hundreds of thousands of interconnected networks consisting of millions of computers. Computers and users connected to the Internet are identified by a system-specific IP address that designates location. The IP address serves as the address to which transferred information and data are delivered. The concern is that, while visiting nonstandard or questionable Web sites in accordance with official duties, sensitive unit information could inadvertently be revealed.

B-5. Cyber situational awareness is the knowledge of friendly, neutral, and threat relevant information regarding activities in and through cyberspace and the electromagnetic spectrum (FM 1-02). Cyberspace and cyber security involve increasing cyber situational awareness by—

  • Identifying threat operations to determine the effect on friendly operations and countermeasures.
  • Determining how to use cyberspace to gain support from friendly and neutral entities.
  • Determining how to gain, maintain, and exploit technical and operational advantages.

B-7. The referring URL from the previously visited Web site is frequently an OPSEC issue because it identifies characteristics and interests of the user. While necessary for effective research, the use of specific and focused search terms has potential OPSEC implications.

B-8. All actions on a Web site are logged and saved. The information is saved and linked to what is referred to as cookie data. User actions include but are not limited to—

  • Words typed in search parameter fields.
  • Drop-down menu choices.
  • Web site movement patterns such as changing domain name or Web site address.

B-9. On many Web sites, information that the user provides or fills in becomes part of the Web site and is searchable. Key information to avoid sharing includes but is not limited to—

  • Military plans.
  • Operations.
  • Exercises.
  • Maps and charts.
  • Locations.
  • Schedules.
  • Equipment vulnerabilities, capabilities, and shortfalls.
  • Names and related numbers:
    • Telephone numbers.
    • Birth dates.
    • Identification numbers.
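As the appendix introduction notes, documents themselves also carry embedded metadata. A minimal sketch of inspecting it, assuming Python with the third-party python-docx package; the file name is hypothetical.

```python
from docx import Document  # pip install python-docx

doc = Document("briefing.docx")  # hypothetical document under review
props = doc.core_properties

# Word documents carry properties that can reveal names, organizations,
# and editing history even when the visible text has been sanitized.
print("Author:        ", props.author)
print("Last modified: ", props.last_modified_by)
print("Created:       ", props.created)
print("Revision:      ", props.revision)
```

Checking such properties before posting or sharing a file is one practical way to reduce the footprint this appendix describes.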

B-10. Traditional and irregular threats are disruptive in nature and use the cyberspace domain to conduct operations against the Army. These threats are innovative, networked, and technologically adept. They capitalize on emerging technologies to establish and maintain a cultural and social advantage in areas including, but not limited to, mission command, recruiting, logistics, fundraising and money laundering, IO, and propaganda.

B-11. When conducting OSINT exploitation using computer systems and the Internet, personnel should develop cyberspace awareness assessments covering areas including, but not limited to, network vulnerabilities, network threats (physical and virtual), and future risks.

 

 

Appendix C

Basic and Advanced Internet Search Techniques

The ability to search the Internet is an essential skill for open-source research and acquisition. The Internet, considered a reconnaissance and surveillance research tool, provides access to Web sites and databases that hold a wide range of information on current, planned, and potential areas of operation. The exponential growth in computer technology and the Internet has placed more publicly available information and processing power at the fingertips of Soldiers than ever before. A body of knowledge on culture, economics, geography, military affairs, and politics that was once inaccessible to some degree now rests in the hands of high school and college students—future leaders of the Army.

 

OPEN-SOURCE DATABASES, SOFTWARE, AND TOOLS

C-26. There are numerous COTS software applications, tools, and databases that are searchable using query words for research. Search engines used for research include but are not limited to—

 

Google Scholar. Google Scholar provides a simple way to broadly search for scholarly literature. From one place, searches expand across many disciplines and sources that include articles, theses, books, and abstracts. Google Scholar helps locate relevant work across the world of scholarly research.

Spokeo. Spokeo specializes in organizing people-related information (names, addresses, phone numbers) from phone books, social networks, marketing lists, business Web sites, and other public sources. Spokeo uses algorithms to piece together data into coherent profiles.

BlogPulse. BlogPulse is an automated trend discovery system for blogs that applies machine-learning and natural language processing techniques.

Pipl. Pipl query engine helps locate Deep Web pages that cannot be found on regular or standard search engines. Pipl uses advanced language-analysis and ranking algorithms to retrieve the most relevant information about an individual.

Monitter. Monitter is a browser-based Twitter search engine. Monitter displays three constantly updated keyword searches side by side in the browser.

Maltego. Maltego is a forensic application that offers data mining and the gathering of information into packaged representations. Maltego allows the identification of key relationships between pieces of information and the discovery of previously unknown relationships.
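Most of the engines above also accept advanced query operators that narrow a search before any tool-specific analysis begins. A minimal sketch of composing such a query string; the site: and filetype: operators are widely supported by major search engines, and the topic and domain are hypothetical.

```python
# Compose an advanced search query string from common operators.
# site: restricts results to one domain; filetype: restricts the file format.
base_terms = "port construction schedule"   # hypothetical research topic
operators = {
    "site": "example.gov",                  # hypothetical domain
    "filetype": "pdf",
}

query = base_terms + " " + " ".join(f"{k}:{v}" for k, v in operators.items())
print(query)  # port construction schedule site:example.gov filetype:pdf
```

Focused queries return fewer, better results, but as B-7 notes, highly specific search terms are themselves an OPSEC consideration.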

 

Notes from Knowledge Management in the Intelligence Enterprise

Knowledge Management in the Intelligence Enterprise

This book is about the application of knowledge management (KM) principles to the practice of intelligence to fulfill those consumers’ expectations.

Unfortunately, too many have reduced intelligence to a simple metaphor of “connecting the dots.” This process, it seems, appears all too simple after the fact—once you have seen the picture and you can ignore irrelevant, contradictory, and missing dots. Real-world intelligence is not a puzzle of connecting dots; it is the hard daily work of planning operations, focusing the collection of data, and then processing the collected data for deep analysis to produce a flow of knowledge for dissemination to a wide range of consumers.

this book… is an outgrowth of a 2-day military KM seminar that I teach in the United States to describe the methods to integrate people, processes, and technologies into knowledge- creating enterprises.

The book progresses from an introduction to KM applied to intelligence (Chapters 1 and 2) to the principles and processes of KM (Chapter 3). The characteristics of collaborative knowledge-based intelligence organizations are described (Chapter 4) before detailing their principal craft of analysis and synthesis (Chapter 5 introduces the principles and Chapter 6 illustrates the practice). The wide range of technology tools to support analytic thinking and allow analysts to interact with information is explained (Chapter 7) before describing the automated tools that perform all-source fusion and mining (Chapter 8). The organizational, systems, and technology concepts throughout the book are brought together in a representative intelligence enterprise (Chapter 9) to illustrate the process of architecture design for a small intelligence cell. An overview of core, enabling, and emerging KM technologies in this area is provided in conclusion (Chapter 10).

Knowledge Management and Intelligence

This is a book about the management of knowledge to produce and deliver a special kind of knowledge: intelligence—that knowledge that is deemed most critical for decision making both in the nation-state and in business.

  • Knowledge management refers to the organizational disciplines, processes, and information technologies used to acquire, create, reveal, and deliver knowledge that allows an enterprise to accomplish its mission (achieve its strategic or business objectives). The components of knowledge management are the people, their operations (practices and processes), and the information technology (IT) that move and transform data, information, and knowledge. All three of these components make up the entity we call the enterprise.
  • Intelligence refers to a special kind of knowledge necessary to accomplish a mission—the kind of strategic knowledge that reveals critical threats and opportunities that may jeopardize or assure mission accomplishment. Intelligence often reveals hidden secrets or conveys a deep understanding that is covered by complexity, deliberate denial, or outright deception. The intelligence process has been described as the process of the discovery of secrets by secret means. In business and in national security, secrecy is a process of protection for one party; discovery of the secret is the object of competition or security for the competitor or adversary… While a range of definitions of intelligence exist, perhaps the most succinct is that offered by the U.S. Central Intelligence Agency (CIA): “Reduced to its simplest terms, intelligence is knowledge and foreknowledge of the world around us—the prelude to decision and action by U.S. policymakers.”
  • The intelligence enterprise encompasses the integrated entity of people, processes, and technologies that collects and analyzes intelligence data to synthesize intelligence products for decision-making consumers.

intelligence (whether national or business) has always involved the management (acquisition, analysis, synthesis, and delivery) of knowledge.

At least three factors continue to drive the increasing need for automation. These factors include:

  • Breadth of data to be considered.
  • Depth of knowledge to be understood.
  • Speed required for decision making.

Throughout this book, we distinguish between three levels of abstraction of knowledge, each of which may be referred to as intelligence in forms that range from unprocessed reporting to finished intelligence products:

  1. Data. Individual observations, measurements, and primitive messages form the lowest level. Human communication, text messages, electronic queries, or scientific instruments that sense phenomena are the major sources of data. The terms raw intelligence and evidence (data that is determined to be relevant) are frequently used to refer to elements of data.
  2. Information. Organized sets of data are referred to as information. The organization process may include sorting, classifying, or indexing and linking data to place data elements in relational context for subsequent searching and analysis.
  3. Knowledge. Information once analyzed, understood, and explained is knowledge or foreknowledge (predictions or forecasts). In the context of this book, this level of understanding is referred to as the intelligence product. Understanding of information provides a degree of comprehension of both the static and dynamic relationships of the objects of data and the ability to model structure and past (and future) behavior of those objects. Knowledge includes both static content and dynamic processes. (A toy illustration of the three levels follows this list.)
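A toy sketch of the three levels, assuming Python; all names and observations are hypothetical and purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:            # level 1: raw data from a single source
    source: str
    text: str

@dataclass
class InformationRecord:      # level 2: data organized and indexed by topic
    topic: str
    observations: list = field(default_factory=list)

obs = [Observation("open source", "new pier visible in harbor"),
       Observation("open source", "dredging equipment arrived")]
record = InformationRecord(topic="harbor expansion", observations=obs)

# Level 3: knowledge is the analyzed, explained product built on the record.
judgment = (f"{record.topic}: {len(record.observations)} observations "
            f"suggest construction is under way.")
print(judgment)
```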

These abstractions are often organized in a cognitive hierarchy, which includes a level above knowledge: human wisdom.

In this text, we consider wisdom to be a uniquely human cognitive capability—the ability to correctly apply knowledge to achieve an objective. This book describes the use of IT to support the creation of knowledge but considers wisdom to be a human capacity out of the realm of automation and computation.

1.1 Knowledge in a Changing World

This strategic knowledge we call intelligence has long been recognized as a precious and critical commodity for national leaders.

the Hebrew leader Moses commissioned and documented an intelligence operation to explore the foreign land of Canaan. That classic account clearly describes the phases of the intelligence cycle, which proceeds from definition of the requirement for knowledge through planning, tasking, collection, and analysis to the dissemination of that knowledge. He first detailed the intelligence requirements by describing the eight essential elements of information to be collected, and he described the plan to covertly enter and reconnoiter the denied area.

requirements articulation, planning, collection, analysis-synthesis, and dissemination

The U.S. defense community has developed a network-centric approach to intelligence and warfare that utilizes the power of networked information to enhance the speed of command and the efficiency of operations. Sensors are linked to shooters, commanders efficiently coordinate agile forces, and engagements are based on prediction and preemption. The keys to achieving information superiority in this network-centric model are network breadth (or connectivity) and bandwidth; the key technology is information networking.

The ability to win will depend upon the ability to select and convert raw data into accurate decision-making knowledge. Intelligence superiority will be defined by the ability to make decisions most quickly and effectively—with the same information available to virtually all parties. The key enabling technology in the next century will become processing and cognitive power to rapidly and accurately convert data into comprehensive explanations of reality—sufficient to make rapid and complex decisions.

Consider several of the key premises about the significance of knowledge in this information age that are bringing the importance of intelligence to the forefront. First, knowledge has become the central resource for competitive advantage, displacing raw materials, natural resources, capital, and labor. This resource is central to both wealth creation and warfare waging. Second, the management of this abstract resource is quite complex; it is more difficult (than material resources) to value and audit, more difficult to create and exchange, and much more difficult to protect. Third, the processes for producing knowledge from raw data are as diverse as the manufacturing processes for physical materials, yet are implemented in the same virtual manufacturing plant—the computer. Because of these factors, the management of knowledge to produce strategic intelligence has become a necessary and critical function within nation-states and business enterprises—requiring changes in culture, processes, and infrastructure to compete.

with rapidly emerging information technologies, the complexities of globalization and diverse national interests (and threats), businesses and militaries must both adopt radically new and innovative agendas to enable continuous change in their entire operating concept. Innovation and agility are the watchwords for organizations that will remain competitive in Hamel’s age of nonlinear revolution.

Business concept innovation will be the defining competitive advantage in the age of revolution. Business concept innovation is the capacity to reconceive existing business models in ways that create new value for customers, rude surprises for competitors, and new wealth for investors. Business concept innovation is the only way for newcomers to succeed in the face of enormous resource disadvantages, and the only way for incumbents to renew their lease on success.

 

A functional taxonomy based on the type of analysis and the temporal distinction of knowledge and foreknowledge (warning, prediction, and forecast) distinguishes two primary categories of analysis and five subcategories of intelligence products.

Descriptive analyses provide little or no evaluation or interpretation of collected data; rather, they enumerate collected data in a fashion that organizes and structures the data so the consumer can perform subsequent interpretation.

Inferential analyses require the analysis of collected relevant data sets (evidence) to infer and synthesize explanations that describe the meaning of the underlying data. We can distinguish four different focuses of inferential analysis:

  1. Analyses that explain past events (How did this happen? Who did it?);
  2. Analyses that explain current structure (What is the organization? What is the order of battle?);
  3. Analyses that explain current behaviors and states (What is the competitor’s research and development process? What is the status of development?);
  4. Foreknowledge analyses that forecast future attributes and states (What is the expected population and gross national product growth over the next 5 years? When will force strength exceed that of a country’s neighbors? When will a competitor release a new product?).

1.3 The Intelligence Disciplines and Applications

While the taxonomy of intelligence products by analytic methods is fundamental, the more common distinctions of intelligence are by discipline or consumer.

The KM processes and information technologies used in all cases are identical (some say, “bits are bits,” implying that all digital data at the bit level is identical), but the content and mission objectives of these four intelligence disciplines are unique and distinct.

Nation-state security interests deal with sovereignty; ideological, political, and economic stability; and threats to those areas of national interest. Intelligence serves national leadership and military needs by providing strategic policymaking knowledge, warnings of foreign threats to national security interests (economic, military, or political), and tactical knowledge to support day-to-day operations and crisis responses. Nation-state intelligence also serves a public function by collecting and consolidating open sources of foreign information for analysis and publication by the government on topics of foreign relations, trade, treaties, economies, humanitarian efforts, environmental concerns, and other foreign and global interests to the public and businesses at large.

Similar to the threat-warning intelligence function to the nation-state, business intelligence is chartered with the critical task of foreseeing and alerting management of marketplace discontinuities. The consumers of business intelligence range from corporate leadership to employees who access supply-chain data, and even to customers who access information to support purchase decisions.

A European Parliament study has enumerated concern over the potential for national intelligence sources to be used for nation-state economic advantage by providing competitive intelligence directly to national business interests. The United States has acknowledged a policy of applying national intelligence to protect U.S. business interests from fraud and illegal activities, but not for the purpose of providing competitive advantage.

1.3.1 National and Military Intelligence

National intelligence refers to the strategic knowledge obtained for the leadership of nation-states to maintain national security. National intelligence is focused on national security—providing strategic warning of imminent threats, knowledge on the broad spectrum of threats to national interests, and foreknowledge regarding future threats that may emerge as technologies, economies, and the global environment change.

The term intelligence refers to both a process and its product.

The U.S. Department of Defense (DoD) provides the following product definitions that are rich in description of the processes involved in producing the product:

  1. The product resulting from the collection, processing, integration, analysis, evaluation, and interpretation of available information concerning foreign countries or areas;
  2. Information and knowledge about an adversary obtained through observation, investigation, analysis, or understanding.

Michael Herman accurately emphasizes the essential components of the intelligence process: “The Western intelligence system is two things. It is partly the collection of information by special means; and partly the subsequent study of particular subjects, using all available information from all sources. The two activities form a sequential process.”

Martin Libicki has provided a practical definition of information dominance, and the role of intelligence coupled with command and control and information warfare:

Information dominance may be defined as superiority in the generation, manipulation, and use of information sufficient to afford its possessors military dominance. It has three sources:

  • Command and control that permits everyone to know where they (and their cohorts) are in the battlespace, and enables them to execute operations when and as quickly as necessary.
  • Intelligence that ranges from knowing the enemy’s dispositions to knowing the location of enemy assets in real-time with sufficient precision for a one-shot kill.
  • Information warfare that confounds enemy information systems at various points (sensors, communications, processing, and command), while protecting one’s own.

 

The superiority is achieved by gaining superior intelligence and protecting information assets while fiercely degrading the enemy’s information assets. The goal of such superiority is not the attrition of physical military assets or troops—it is the attrition of the quality, speed, and utility of the adversary’s decision-making ability.

“A knowledge environment is an organization’s (business) environment that enhances its capability to deliver on its mission (competitive advantage) by enabling it to build and leverage its intellectual capital.”

1.3.2 Business and Competitive Intelligence

The focus of business intelligence is on understanding all aspects of a business enterprise: internal operations and the external environment, which includes customers and competitors (the marketplace), partners, and suppliers. The external environment also includes independent variables that can impact the business, depending on the business (e.g., technology, the weather, government policy actions, financial markets). All of these are the objects of business intelligence in the broadest definition. But the term business intelligence is also used in a narrower sense to focus on only the internals of the business, while the term competitor intelligence refers to those aspects of intelligence that focus on the externals that influence competitiveness: competitors.

Each of the components of business intelligence has distinct areas of focus and uses in maintaining the efficiency, agility, and security of the business; all are required to provide active strategic direction to the business. In large companies with active business intelligence operations, all three components are essential parts of the strategic planning process, and all contribute to strategic decision making.

1.4 The Intelligence Enterprise

The intelligence enterprise includes the collection of people, knowledge (both internal tacit and explicitly codified), infrastructure, and information processes that deliver critical knowledge (intelligence) to the consumers. This enables them to make accurate, timely, and wise decisions to accomplish the mission of the enterprise.

This definition describes the enterprise as a process—devoted to achieving an objective for its stakeholders and users. The enterprise process includes the production, buying, selling, exchange, and promotion of an item, substance, service, or system.

the DoD three-view architecture description, which defines three interrelated perspectives or architectural descriptions that define the operational, system, and technical aspects of an enterprise [29]. The operational architecture is a people- or organization-oriented description of the operational elements, intelligence business processes, assigned tasks, and information and work flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and the tasks that are supported by these information exchanges. The systems architecture is a description of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users, and specifies system and component performance parameters. The technical architecture is the minimal set of rules (i.e., standards, protocols, interfaces, and services) governing the arrangement, interaction, and interdependence of the elements of the system.

 

These three views of the enterprise (Figure 1.4) describe three layers of people-oriented operations, system structure, and procedures (protocols) that must be defined in order to implement an intelligence enterprise.

The operational layer is the highest (most abstract) description of the concept of operations (CONOPS), human collaboration, and disciplines of the knowledge organization. The technical architecture layer describes the most detailed perspective, noting specific technical components and their operations, protocols, and technologies.

The intelligence supply chain, which describes the flow of data into knowledge to create consumer value, is measured by the value it provides to intelligence consumers. Measures of human intellectual capital and organizational knowledge describe the intrinsic value of the organization.

1.5 The State of the Art and the State of the Intelligence Tradecraft

The subject of intelligence analysis remained largely classified through the 1980s, but the 1990s brought the end of the Cold War and, thus, open publication of the fundamental operations of intelligence and the analytic methods employed by businesses and nation-states. In that same period, the rise of commercial information sources and systems produced the new disciplines of open source intelligence (OSINT) and business/competitor intelligence. In each of these areas, a wealth of resources is available for tracking the rapidly changing technology state of the art as well as the state of the intelligence tradecraft.

1.5.1 National and Military Intelligence

Numerous sources of information provide management, legal, and technical insight for national and military intelligence professionals with interests in analysis and KM.

These sources include:

  • Studies in Intelligence—Published by the U.S. CIA Center for the Study of Intelligence and the Sherman Kent School of Intelligence, unclassified versions are published on the school’s Web site (http://www.odci.gov/csi), along with periodically issued monographs on technical topics related to intelligence analysis and tradecraft.
  • International Journal of Intelligence and Counterintelligence—This quarterly journal covers the breadth of intelligence interests within law enforcement, business, nation-state policymaking, and foreign affairs.
  • Intelligence and National Security—A quarterly international journal published by Frank Cass & Co. Ltd., London, this journal covers broad intelligence topics ranging from policy, operations, users, analysis, and products to historical accounts and analyses.
  • Defense Intelligence Journal—This is a quarterly journal published by the U.S. Defense Intelligence Agency’s Joint Military Intelligence College.
  • American Intelligence Journal—Published by the National Military Intelligence Association (NMIA), this journal covers operational, organizational, and technical topics of interest to national and military intelligence officers.
  • Military Intelligence Professional Bulletin—This is a quarterly bulletin of the U.S. Army Intelligence Center (Ft. Huachuca) that is available on-line and provides information to military intelligence officers on studies of past events, operations, processes, military systems, and emerging research and development.
  • Jane’s Intelligence Review—This monthly magazine provides open source analyses of international military organizations, NGOs that threaten or wage war, conflicts, and security issues.

1.5.2 Business and Competitive Intelligence

Several sources focus on the specific areas of business and competitive intelligence with attention to the management, ethical, and technical aspects of collection, analysis, and valuation of products.

  • Competitive Intelligence Magazine—This is a source for general applications-related articles on CI, published bimonthly by John Wiley & Sons with the Society of Competitive Intelligence Professionals (SCIP).
  • Competitive Intelligence Review—This quarterly journal, also published by John Wiley with the SCIP, contains best-practice case studies as well as technical and research articles.
  • Management International Review—This is a quarterly refereed journal that covers the advancement and dissemination of international applied research in the fields of management and business. It is published by Gabler Verlag, Germany, and is available on-line.
  • Journal of Strategy and Business—This quarterly journal, published by Booz Allen and Hamilton, focuses on strategic business issues, including regular emphasis on both CI and KM topics in business articles.

1.5.3 KM

The developments in the field of KM are covered by a wide range of business, information science, organizational theory, and dedicated KM sources that provide information on this diverse and fast-growing area.

  • CIO Magazine—This monthly trade magazine for chief information officers and staff includes articles on KM, best practices, and related leadership topics.
  • Harvard Business Review, Sloan Management Review—These management journals cover organizational leadership, strategy, learning and change, and the application of supporting ITs.
  • Journal of Knowledge Management—This is a quarterly academic journal of strategies, tools, techniques, and technologies published by Emerald (UK). In addition, Emerald also publishes quarterly The Learning Organization—An International Journal.
  • IEEE Transactions on Knowledge and Data Engineering—This is an archival journal published bimonthly to inform researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.
  • Knowledge and Process Management—A John Wiley (UK) journal for executives responsible for leading performance improvement and contributing thought leadership in business. Emphasis areas include KM, organizational learning, core competencies, and process management.
  • American Productivity and Quality Center (APQC)—The APQC is a nonprofit organization that provides the tools, information, expertise, and support needed to discover and implement best practices in KM. Its mission is to discover, research, and understand emerging and effective methods of both individual and organizational improvement, to broadly disseminate these findings, and to connect individuals with one another and with the knowledge, resources, and tools they need to successfully manage improvement and change. They maintain an on-line site at www.apqc.org.
  • Data Mining and Knowledge Discovery—This Kluwer (Netherlands) journal provides technical articles on the theory, techniques, and practice of knowledge extraction from large databases.

1.6 The Organization of This Book

This book is structured to introduce the unique role, requirements, and stakeholders of intelligence (the applications) before introducing the KM processes, technologies, and implementations.

2
The Intelligence Enterprise

Intelligence, the strategic information and knowledge about an adversary and an operational environment obtained through observation, investigation, analysis, or understanding, is the product of an enterprise operation that integrates people and processes in an organizational and networked computing environment.

The intelligence enterprise exists to produce intelligence goods and services—knowledge and foreknowledge to decision- and policy-making customers. This enterprise is a production organization whose prominent infrastructure is an information supply chain. As in any business, it has a “front office” to manage its relations with customers, with the information supply chain in the “back office.”

The intellectual capital of this enterprise includes sources, methods, workforce competencies, and the intelligence goods and services produced. As in virtually no other business, the protection of this capital is paramount, and therefore security is integrated into every aspect of the enterprise.

2.1 The Stakeholders of Nation-State Intelligence

The intelligence enterprise, like any other enterprise providing goods and services, includes a diverse set of stakeholders in the enterprise operation. The business model for any intelligence enterprise, as for any business, must clearly identify the stakeholders who own the business and those who produce and consume its goods and services.

  • The owners of the process include the U.S. public and its elected officials, who measure intelligence value in terms of the degree to which national security is maintained. These owners seek awareness and warning of threats to prescribed national interests.
  • Intelligence consumers (customers or users) include national, military, and civilian user agencies that measure value in terms of intelligence contribution to the mission of each organization, measured in terms of its impact on mission effectiveness.
  • Intelligence producers, the most direct users of raw intelligence, include the collectors (HUMINT and technical), processor agencies, and analysts. The principal value metrics of these users are performance based: information accuracy, coverage breadth and depth, confidence, and timeliness.

The purpose and value chains for intelligence (Figure 2.2) are defined by the stakeholders to provide a foundation for the development of specific value measures that assess the contribution of business components to the overall enterprise. The corresponding chains in the U.S. IC include:

  • Source—the source or basis for defining the purpose of intelligence is found in the U.S. Constitution, derivative laws (i.e., the National Security Act of 1947, Central Intelligence Agency Act of 1949, National Security Agency Act of 1959, Foreign Intelligence Surveillance Act of 1978, and Intelligence Organization Act of 1992), and orders of the executive branch [2]. Derived from this are organizational mission documents, such as the Director of Central Intelligence (DCI) Strategic Intent [3], which documents communitywide purpose and vision, as well as derivative guidance documents prepared by intelligence providers.
  • Purpose chain—the causal chain of purposes (objectives) for which the intelligence enterprise exists. The ultimate purpose is national security, enabled by information (intelligence) superiority that, in turn, is enabled by specific purposes of intelligence providers that will result in information superiority.
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—Specific metrics by which values are quantified and articulated by stakeholders and by which the value of the intelligence enterprise is evaluated.

In a similar fashion, business and competitive intelligence have stakeholders that include customers, shareholders, corporate officers, and employees… there must exist a purpose and value chain that guides the KM operations. These typically include:

  • Source—the business charter and mission statement of a business elaborate the market served and the vision for the business’s role in that market.
  • Purpose chain—the objectives of the business require knowledge about internal operations and the market (BI objectives) as well as competitors (CI).
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—Specific metrics by which values are quantified. A balanced set of measures includes vision and strategy, customer, internal, financial, and learning-growth metrics.

2.2 Intelligence Processes and Products

The process that delivers strategic and operational intelligence products is generally depicted in cyclic form (Figure 2.3), with five distinct phases.

In every case, the need is the basis for a logical process to deliver the knowledge to the requestor.

  1. Planning and direction. The process begins as policy and decision makers define, at a high level of abstraction, the knowledge that is required to make policy, strategic, or operational decisions. The requests are parsed into information required, then to data that must be collected to estimate or infer the required answers. Data requirements are used to establish a plan of collection, which details the elements of data needed and the targets (people, places, and things) from which the data may be obtained.
  2. Collection. Following the plan, human and technical sources of data are tasked to collect the required raw data. The next section introduces the major collection sources, which include both openly available and closed sources that are accessed by both human and technical methods.

These sources and methods are among the most fragile [5]—and most highly protected—elements of the process. Sensitive and specially compartmented collection capabilities that are particularly fragile exist across all of the collection disciplines.

  3. Processing. The collected data is processed (e.g., machine translation, foreign language translation, or decryption), indexed, and organized in an information base. Progress on meeting the requirements of the collection plan is monitored and the tasking may be refined on the basis of received data.
  4. All-source analysis-synthesis and production. The organized information base is processed using estimation and inferential (reasoning) techniques that combine all-source data in an attempt to answer the requestor’s questions. The data is analyzed (broken into components and studied) and solutions are synthesized (constructed from the accumulating evidence). The topics or subjects (intelligence targets) of study are modeled, and requests for additional collection and processing may be made to acquire sufficient data and achieve a sufficient level of understanding (or confidence to make a judgment) to answer the consumer’s questions.
  5. Dissemination. Finished intelligence is disseminated to consumers in a variety of formats, ranging from dynamic operating pictures of warfighters’ weapon systems to formal reports to policymakers. Three categories of formal strategic and tactical intelligence reports are distinguished by their past, present, and future focus: current intelligence reports are news-like reports that describe recent events or indications and warnings, basic intelligence reports provide complete descriptions of a specific situation (e.g., order of battle or political situation), and intelligence estimates attempt to predict feasible future outcomes as a result of current situation, constraints, and possible influences [6]. (A toy sketch of the cycle follows this list.)
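A toy sketch of the five phases as a loop with the feedback path described below; this is purely illustrative, not any agency’s actual workflow, and all names are hypothetical.

```python
PHASES = ["planning", "collection", "processing",
          "analysis-synthesis", "dissemination"]

def run_cycle(requirement, max_passes=3):
    """Toy model of the intelligence cycle with an analytic feedback path."""
    for attempt in range(1, max_passes + 1):
        for phase in PHASES[:-1]:
            print(f"pass {attempt}: {phase} -> {requirement}")
        confident = attempt >= 2   # stand-in for an analytic confidence test
        if confident:
            print(f"pass {attempt}: dissemination -> estimate delivered")
            return
        # Feedback: analysis refines the requirement and the collection plan.
        requirement += " (refined)"

run_cycle("status of port construction")
```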

Though introduced here in the classic form of a cycle, in reality the process operates as a continuum of actions with many more feedback (and feedforward) paths that require collaboration between consumers, collectors, and analysts.

2.3 Intelligence Collection Sources and Methods

A taxonomy of intelligence data sources includes sources that are openly accessible or closed (e.g., denied areas, secured communications, or clandestine activities). Due to the increasing access to electronic media (i.e., telecommunications, video, and computer networks) and the global expansion of democratic societies, OSINT is becoming an increasingly important source of global data. While OSINT must be screened and cross-validated to filter errors, duplications, and deliberate misinformation (as must all sources), it provides an economical source of public information and is a contributor to other sources for cueing, indications, and confirmation.

Measurements and signatures intelligence (MASINT) is technically derived knowledge from a wide variety of sensors, individual or fused, either to perform special measurements of objects or events of interest or to obtain signatures for use by the other intelligence sources. MASINT is used to characterize the observable phenomena (observables) of the environment and objects of surveillance.

U.S. intelligence studies have pointed out specific changes in the use of these sources as the world increases globalization of commerce and access to social, political, economic, and technical information [10–12]:

  • The increase in unstructured and transnational threats requires the robust use of clandestine HUMINT sources to complement extensive technical verification means.
  • Technical means of collection are required for both broad area coverage and detailed assessment of the remaining denied areas of the world.

2.3.1 HUMINT Collection

HUMINT refers to all information obtained directly from human sources.

HUMINT sources may be overt or covert (clandestine); the most common categories include:

  • Clandestine intelligence case officers. These officers are own-country individuals who operate under a clandestine “cover” to collect intelligence and “control” foreign agents to coordinate collections.
  • Agents. These are foreign individuals with access to targets of intelligence who conduct clandestine collection operations as representatives of their controlling intelligence officers. These agents may be recruited or “walk-in” volunteers who act for a variety of ideological, financial, or personal motives.
  • Émigrés, refugees, escapees, and defectors. The open, overt (yet discreet) programs to interview these recently arrived foreign individuals provide background information on foreign activities as well as occasional information on high-value targets.
  • Third party observers. Cooperating third parties (e.g., third-party countries and travelers) can also provide a source of access to information.

The HUMINT discipline follows a rigorous seven-step process for acquiring, employing, and terminating the use of human assets. The sequence followed by case officers includes:

  1. Spotting—locating, identifying, and securing low-level contact with agent candidates;
  2. Evaluation—assessment of the potential (i.e., value or risk) of the spotted individual, based on a background investigation;
  3. Recruitment—securing the commitment from the individual;
  4. Testing—evaluation of the loyalty of the agent;
  5. Training—supporting the agent with technical experience and tools;
  6. Handling—supporting and reinforcing the agent’s commitment;
  7. Termination—completion of the agent assignment by ending the relationship.

 

HUMINT is dependent upon the reliability of the individual source and lacks the collection control of technical sensors. Furthermore, the level of security required to protect human sources often limits the fusion of HUMINT reports with other sources and their dissemination to wider customer bases. Directed high-risk HUMINT collections are generally viewed as a precious resource to be used for high-value targets to obtain information unobtainable by technical means or to validate hypotheses created by technical collection analysis.

2.3.2 Technical Intelligence Collection

Technical collection is performed by a variety of electronic (e.g., electromechanical, electro-optical, or bioelectronic) sensors placed on platforms in space, the atmosphere, on the ground, and at sea to measure physical phenomena (observables) related to the subjects of interest (intelligence targets).

The operational utility of these collectors for each intelligence application depends upon several critical factors:

  • Timeliness—the time from collection of event data to delivery of a tactical targeting cue, operational warnings and alerts, or formal strategic report;
  • Revisit—the frequency with which a target of interest can be revisited to understand or model (track) dynamic behavior;
  • Accuracy—the spatial, identity, or kinematic accuracy of estimates and predictions;
  • Stealth—the degree of secrecy with which the information is gathered and the measure of intrusion required.

2.4 Collection and Process Planning

The technical collection process requires the development of a detailed collection plan, which begins with the decomposition of the subject target into activities, observables, and then collection requirements.

From this plan, technical collectors are tasked and data is collected and fused (a composition, or reconstruction that is the dual of the decomposition process) to derive the desired intelligence about the target.

2.5 KM in the Intelligence Process

The intelligence process must deal with large volumes of source data, converting a wide range of text, imagery, video, and other media types into organized information, then performing the analysis-synthesis process to deliver knowledge in the form of intelligence products.

IT is providing increased automation of the information indexing, discovery, and retrieval (IIDR) functions for intelligence, especially the exponentially increasing volumes of global open-source data.

 

The functional information flow in an automated or semiautomated facility (depicted in Figure 2.5) requires digital archiving and analysis to ingest continuous streams of data and manage large volumes of analyzed data. The flow can be broken into three phases:

  1. Capture and compile;
  2. Preanalysis;
  3. Exploitation (analysis-synthesis).

The preanalysis phase indexes each data item (e.g., article, message, news segment, image, book or chapter) by assigning a reference for storage; generating an abstract that summarizes the content of the item and metadata with a description of the source, time, reliability-confidence, and relationship to other items (abstracting); and extracting critical descriptors of content that characterize the contents (e.g., keywords) or meaning (deep indexing) of the item for subsequent analysis. Spatial data (e.g., maps, static imagery, or video imagery) must be indexed by spatial context (spatial location) and content (imagery content).

The indexing process applies standard subjects and relationships, maintained in a lexicon and thesaurus that is extracted from the analysis information base. Following indexing, data items are clustered and linked before entry into the analysis base. As new items are entered, statistical analyses are performed to monitor trends or events against predefined templates that may alert analysts or cue their focus of attention in the next phase of processing.
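A minimal sketch of the indexing idea, assuming Python: it builds a toy inverted index from keywords to item identifiers, a drastic simplification of the abstracting and deep indexing described above, with hypothetical data items.

```python
from collections import defaultdict

def index_items(items):
    """Build a toy inverted index mapping each keyword to the items containing it."""
    index = defaultdict(set)
    for item_id, text in items.items():
        for token in set(text.lower().split()):
            index[token].add(item_id)
    return index

items = {  # hypothetical data items (e.g., message, article, news segment)
    "msg-001": "dredging equipment observed at harbor",
    "art-002": "harbor expansion contract announced",
}
index = index_items(items)
print(sorted(index["harbor"]))  # ['art-002', 'msg-001']
```

Real preanalysis would add the lexicon- and thesaurus-based normalization noted above; the retrieval tools in the next list query exactly this kind of structure.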

The categories of automated tools that are applied to the analysis information base include the following:

  • Interactive search and retrieval tools permit analysts to search by content, topic, or related topics using the lexicon and thesaurus subjects.
  • Structured judgment analysis tools provide visual methods to link data, synthesize deductive logic structures, and visualize complex relationships between data sets. These tools enable the analyst to hypothesize, explore, and discover subtle patterns and relationships in large data volumes—knowledge that can be discerned only when all sources are viewed in a common context (see the link-analysis sketch after this list).
  • Modeling and simulation tools model hypothetical activities, allowing modeled (expected) behavior to be compared to evidence for validation or projection of operations under scrutiny.
  • Collaborative analysis tools permit multiple analysts in related subject areas, for example, to collaborate on the analysis of a common subject.
  • Data visualization tools present synthetic views of data and information to the analyst to permit patterns to be examined and discovered.
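A minimal link-analysis sketch in the spirit of the structured judgment tools above, assuming Python with the third-party networkx library; the entities and relationships are hypothetical.

```python
import networkx as nx  # pip install networkx

g = nx.Graph()
# Hypothetical entities and reported associations
g.add_edge("Person A", "Company X", relation="employee")
g.add_edge("Company X", "Shipping Co", relation="contract")
g.add_edge("Person B", "Shipping Co", relation="owner")

# Surface a previously unnoticed chain connecting two people of interest.
print(nx.shortest_path(g, source="Person A", target="Person B"))
# ['Person A', 'Company X', 'Shipping Co', 'Person B']
```

Commercial tools add visualization and far richer data handling, but the underlying idea is the same: represent items as nodes and relationships as edges, then search the graph.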

2.6 Intelligence Process Assessments and Reengineering

The U.S. IC has been assessed throughout and since the close of the Cold War to study the changes necessary to adapt to advanced collection capabilities, changing security threats, and the impact of global information connectivity and information availability. Published results of these studies provide insight into the areas of intelligence effectiveness that may be enhanced by organizing the community into a KM enterprise. We focus here on the technical aspects of the changes rather than the organizational aspects recommended in numerous studies.

2.6.1 Balancing Collection and Analysis

Intelligence assessments have evaluated the utility of intelligence products and the balance of investment between collection and analysis.

2.6.2 Focusing Analysis-Synthesis

An independent study [21] of U.S. intelligence recommended a need for intelligence to sharpen the focus of analysis-synthesis resources to deal with the increased demands by policymakers for knowledge on a wider range of topics, the growing breadth of secret and open sources, and the availability of commercial open-source analysis.

2.6.3 Balancing Analysis-Synthesis Processes

One assessment conducted by the U.S. Congress reviewed the role of analysis-synthesis and the changes necessary for the community to reengineer its processes from a Cold War to a global awareness focus. Emphasizing the crucial role of analysis, the commission noted:

The raison d’etre of the Intelligence Community is to provide accurate and meaningful information and insights to consumers in a form they can use at the time they need them. If intelligence fails to do that, it fails altogether. The expense and effort invested in collecting and processing the information have gone for naught.

The commission identified the KM challenges faced by large-scale intelligence analysis that encompasses global issues and serves a broad customer base.

The commission’s major observations provide insight into the emphasis on people-related (rather than technology-related) issues that must be addressed for intelligence to be valued by the policy and decision makers who consume intelligence:

  1. Build relationships. A concerted effort is required to build relationships between intelligence producers and the policymakers they serve. Producer-consumer relationships range from assignment of intelligence liaison officers with consumers (the closest relationship and greatest consumer satisfaction) to holding regular briefings, or simple producer-subscriber relationships for general broadcast intelligence. Across this range of relationships, four functions must be accomplished for intelligence to be useful:
  • Analysts must understand the consumer’s level of knowledge and the issues they face.
  • Intelligence producers must focus on issues of significance and make information available when needed, in a format appropriate to the unique consumer.
  • Consumers must develop an understanding of what intelligence can and—equally important—cannot do.
  • Both consumer and producer must be actively engaged in a dialogue with analysts to refine intelligence support to decision making.
  2. Increase and expand the scope of analytic expertise. The expertise of the individual analysts and the community of analysts must be maintained at the highest level possible. This expertise is in two areas: domain, or region of focus (e.g., nation, group, weapon systems, or economics), and analytic-synthetic tradecraft. Expertise development should include the use of outside experts, travel to countries of study, sponsorship of topical conferences, and other means (e.g., simulations and peer reviews).
  3. Enhance use of open sources. Open-source data (i.e., publicly available data in electronic and broadcast media, journals, periodicals, and commercial databases) should be used to complement (cue, provide context, and in some cases, validate) special, or closed, sources. The analyst must have command of all available information and the means to access and analyze both categories of data in complementary fashion.
  4. Make analysis available to users. Intelligence producers must increasingly apply dynamic, electronic distribution means to reach consumers for collaboration and distribution. The DoD Joint Deployable Intelligence Support System (JDISS) and IC Intelink were cited as early examples of networked intelligence collaboration and distribution systems.
  5. Enhance strategic estimates. The United States produces national intelligence estimates (NIEs) that provide authoritative statements and forecast judgments about the likely course of events in foreign countries and their implications for the United States. These estimates must be enhanced to provide timely, objective, and relevant data on a wider range of issues that threaten security.
  6. Broaden the analytic focus. As the national security threat envelope has broadened (beyond the narrower focus of the Cold War), a more open, collaborative environment is required to enable intelligence analysts to interact with policy departments, think tanks, and academia to analyze, debate, and assess these new world issues.

In the half decade since the commission recommendations were published, the United States has implemented many of the recommendations. Several examples of intelligence reengineering include:

  • Producer-consumer relationships. The introduction of collaborative networks, tools, and soft-copy products has permitted less formal interaction and more frequent exchange between consumers and producers. This allows intelligence producers to better understand consumer needs and decision criteria. This has enabled the production of more focused, timely intelligence.
  • Analytic expertise. Enhancements in analytic training and the increased use of computer-based analytic tools and even simulation are providing greater experience—and therefore expertise—to human analysts.
  • Open source. Increased use of open-source information via commercial providers (e.g., LexisNexis™ subscription clipping services tailored to topics) and the Internet has provided an effective source for obtaining background information. This enables special sources and methods to focus on validation of critical implications.
  • Analysis availability. The use of networks continues to expand for both collaboration (between analysts and consumers as well as between analysts) and distribution. This collaboration was enabled by the introduction and expansion of the classified Internet (Intelink) that interconnects the IC [24].
  • Broadened focus. The community has coordinated open panels to discuss, debate, and collaboratively analyze and openly publish strategic perspectives of future security issues. One example is the “Global Trends 2015” report that resulted from a long-term collaboration with academia, the private sector, and topic area experts [25].

2.7 The Future of Intelligence

The two primary dimensions of future threats to national (and global) security are the source (from nation-state actors to non-state actors) and the threat-generating mechanism (from the continuous results of rational nation-state behaviors to discontinuities in complex world affairs). These threat changes and the contrast in intelligence are summarized in Table 2.4. Notice that these changes coincide with the transition from sensor-centric to network- and knowledge-centric approaches to intelligence introduced in Chapter 1.

Intelligence must focus on knowledge creation in an enterprise environment that is prepared to rapidly reinvent itself to adapt to emergent threats.

3
Knowledge Management Processes

KM is the term adopted by the business community in the mid-1990s to describe a wide range of strategies, processes, and disciplines that formalize and integrate an enterprise’s approach to organizing and applying its knowledge assets. Some have wondered what is truly new about the concept of managing knowledge. Indeed, many pure knowledge-based organizations (insurance companies, consultancies, financial management firms, futures brokers, and of course, intelligence organizations) have long “managed” knowledge—and such management processes have been the core competency of their business.

The scope of knowledge required by intelligence organizations has increased in depth and breadth as commerce has networked global markets and world threats have diversified from a monolithic Cold War posture. The global reach of networked information, both open and closed sources, has produced a deluge of data—requiring computing support to help human analysts sort, locate, and combine specific data elements to provide rapid, accurate responses to complex problems. Finally, the formality of the KM field has grown significantly in the past decade—developing theories for valuing, auditing, and managing knowledge as an intellectual asset; strategies for creating, reusing, and leveraging the knowledge asset; processes for conducting collaborative transactions of knowledge among humans and machines; and network information technologies for enabling and accelerating these processes.

3.1 Knowledge and Its Management

In the first chapter, we introduced the growing importance of knowledge as the central resource for competition, for nation-states and businesses alike. Because of this, the importance of intelligence organizations providing strategic knowledge to public- and private-sector decision makers is paramount. We can summarize this importance of intelligence to the public or private enterprise in three assertions about knowledge.

First, knowledge has become the central asset or resource for competitive advantage. In the Tofflers’ third wave, knowledge displaces capital, labor, and natural resources as the principal reserve of the enterprise. This is true in wealth creation by businesses and in national security and the conduct of warfare for nation-states.

Second, it is asserted that the management of the knowledge resource is more complex than that of other resources. The valuation and auditing of knowledge is unlike that of physical labor or natural resources; knowledge is not measured by “head counts” or capital valuation of physical inventories, facilities, or raw materials (like stockpiles of iron ore, fields of cotton, or petroleum reserves). New methods of quantifying the abstract entity of knowledge—both in people and in explicit representations—are required. To meet this complex challenge, knowledge managers must develop means to capture, store, create, and exchange knowledge, while dealing with the sensitive security issues of knowing when to protect and when to share (the trade-off between the restrictive “need to know” and the collaborative “need to share”).

The third assertion about knowledge is that its management therefore requires a delicate coordination of people, processes, and supporting technologies to achieve the enterprise objectives of security, stability, and growth in a dynamic world:

  • People. KM must deal with cultures and organizational structures that enable and reward the growth of knowledge through collaborative learning, reasoning, and problem solving.
  • Processes. KM must also provide an environment for exchange, discovery, retention, use, and reuse of knowledge across the organization.
  • Technologies. Finally, IT must be applied to enable the people and processes to leverage the intellectual asset of actionable knowledge.

 

Definitions of KM as a formal activity are as diverse as its practitioners (Table 3.1), but all have in common the following general characteristics:

KM is based on a strategy that accepts knowledge as the central resource to achieve business goals and that knowledge—in the minds of its people, embedded in processes, and in explicit representations in knowledge bases—must be regarded as an intellectual form of capital to be leveraged. Organizational values must be coupled with the growth of this capital.

KM involves a process that, like a supply chain, moves from raw materials (data) toward knowledge products. The process is involved in acquiring (data), sorting, filtering, indexing and organizing (information), reasoning (analyzing and synthesizing) to create knowledge, and finally disseminating that knowledge to users. But this supply chain is not a “stovepiped” process (a narrow, vertically integrated and compartmented chain); it horizontally integrates the organization, allowing collaboration across all areas of the enterprise where knowledge sharing provides benefits.
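
To make the supply-chain metaphor concrete, the chain can be sketched as a pipeline of composable stages. The following Python sketch is illustrative only; the function names, record fields, and grouping-by-topic step are assumptions, not drawn from the text:

```python
# A minimal sketch of the KM supply chain: raw data is acquired, organized
# into information, analyzed into knowledge, and disseminated to users.
# All names and fields are illustrative assumptions.

def acquire(sources):
    """Collect raw data items from all sources."""
    return [item for source in sources for item in source]

def organize(data):
    """Sort, filter, and index data into information (here: group by topic)."""
    info = {}
    for item in data:
        info.setdefault(item["topic"], []).append(item["text"])
    return info

def analyze(info):
    """Reason over organized information to produce knowledge summaries."""
    return {topic: f"{len(texts)} reports on {topic}" for topic, texts in info.items()}

def disseminate(knowledge, users):
    """Distribute knowledge products to consumers."""
    for user in users:
        print(f"to {user}: {knowledge}")

# One pass through the chain, from two hypothetical sources
raw = [[{"topic": "PLF", "text": "sighting"}], [{"topic": "PLF", "text": "intercept"}]]
disseminate(analyze(organize(acquire(raw))), ["analyst-portal"])
```

The horizontal integration described above corresponds to making each stage’s output visible across the enterprise rather than confining it to one vertical chain.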

KM embraces a discipline and cultural values that accept the necessity for sharing purpose, values, and knowledge across the enterprise to leverage group diversity and perspectives to promote learning and intellectual problem solving. Collaboration, fully engaged communication and cognition, is required to network the full intellectual power of the enterprise.

The U.S. National Security Agency (NSA) has adopted the following “people-oriented” definition of KM to guide its own intelligence efforts:

Strategies and processes to create, identify, capture, organize and leverage vital skills, information and knowledge to enable people to best accomplish the organizational mission.

The DoD has further recognized that KM is the critical enabler for information superiority:

The ability to achieve and sustain information superiority depends, in large measure, upon the creation and maintenance of reusable knowledge bases; the ability to attract, train, and retain a highly skilled work force proficient in utilizing these knowledge bases; and the development of core business processes designed to capitalize upon these assets.

The processes by which abstract knowledge results in tangible effects can be examined as a net of influences that affect knowledge creation and decision making.

The flow of influences in the figure illustrates the essential contributions of shared knowledge.

  1. Dynamic knowledge. At the central core is a comprehensive and dynamic understanding of the complex (business or national security) situation that confronts the enterprise. This understanding accumulates over time to provide a breadth and depth of shared experience, or organizational memory.
  2. Critical and systems thinking. Situational understanding and accumulated experience enable dynamic modeling to provide forecasts from current situations—supporting the selection and adaptation of organizational goals. Comprehensive understanding (perception) and thorough evaluation of optional courses of action (judgment) enhance decision making. As experience accumulates and situational knowledge is refined, critical explicit thinking and tacit sensemaking about current situations and the consequences of future actions are enhanced.
  3. Shared operating picture. Shared pictures of the current situation (common operating picture), past situations and outcomes (experience), and forecasts of future outcomes enable the analytic workforce to collaborate and self-synchronize in problem solving.
  4. Focused knowledge creation. Underlying these functions is a focused data and experience acquisition process that tracks and adapts as the business or security situation changes.

While Figure 3.1 maps the general influences of knowledge on goal setting, judgment, and decision making in an enterprise, an understanding of how knowledge influences a particular enterprise in a particular environment is necessary to develop a KM strategy. Such a strategy seeks to enhance organizational knowledge of these four basic areas as well as information security to protect the intellectual assets.

3.2 Tacit and Explicit Knowledge

In the first chapter, we offered a brief introduction to the hierarchical taxonomy of data, information, and knowledge, but here we must refine our understanding of knowledge and its structure before we delve into the details of management processes.

In this chapter, we distinguish among the knowledge-creation processes within the knowledge-creating hierarchy (Figure 3.2). The hierarchy illustrates the distinctions we make, in common terminology, between explicit processes (represented and defined) and those that are implicit, or tacit (knowledge processes that are unconscious and not readily articulated).

3.2.1 Knowledge As Object

The most common understanding of knowledge is as an object—the accumulation of things perceived, discovered, or learned. From this perspective, data (raw measurements or observations), information (data organized, related, and placed in context), and knowledge (information explained and the underlying processes understood) are also objects. The KM field has adopted two basic distinctions in the categories of knowledge as object:

  1. Explicit knowledge. This is the better known form of knowledge that has been captured and codified in abstract human symbols (e.g., mathematics, logical propositions, and structured and natural language). It is tangible, external (to the human), and logical. This documented knowledge can be stored, repeated, and taught by books because it is impersonal and universal. It is the basis for logical reasoning and, most important of all, it enables knowledge to be communicated electronically and reasoning processes to be automated.
  2. Tacit knowledge. This is the intangible, internal, experiential, and intuitive knowledge that is undocumented and maintained in the human mind. It is a personal knowledge contained in human experience. Philosopher Michael Polanyi pioneered the description of such knowledge in the 1950s, considering the results of Gestalt psychology and the philosophic conflict between moral conscience and scientific skepticism. In The Tacit Dimension, he describes a kind of knowledge that we cannot tell. This tacit knowledge is characterized by intangible factors such as perception, belief, values, skill, “gut” feel, intuition, “know-how,” or instinct; this knowledge is unconsciously internalized and cannot be explicitly described (or captured) without effort.

An understanding of the relationship between knowledge and mind is of particular interest to the intelligence discipline, because such an understanding serves two purposes:

  1. Mind as knowledge manager. Understanding of the processes of exchanging tacit and explicit knowledge will, of course, aid the KM process itself. This understanding will enhance the efficient exchange of knowledge between mind and computer—between internal and external representations.
  2. Mind as intelligence target. Understanding of the complete human processes of reasoning (explicit logical thought) and sensemaking (tacit, emotional insight) will enable more representative modeling of adversarial thought processes. This is required to understand the human mind as an intelligence target—representing perceptions, beliefs, motives, and intentions.

Previously, we have used the terms resource and asset to describe knowledge, but it is not only an object or a commodity to be managed. Knowledge can also be viewed as dynamic, embedded in processes that lead to action. In the next section, we explore this complementary perspective of knowledge.

3.2.2 Knowledge As Process

Knowledge can also be viewed as the action, or dynamic process of creation, that proceeds from unstructured content to structured understanding. This perspective considers knowledge as action—as knowing. Because knowledge explains the basis for information, it relates static information to a dynamic reality. Knowing is uniquely tied to the creation of meaning.

Karl Weick introduced the term sensemaking to describe the tacit knowing process of retrospective rationality—the method by which individuals and organizations seek to rationally account for things by going back in time to structure events and explanations holistically. We do this to “make sense” of reality as we perceive it and to create a base of experience, shared meaning, and understanding.

To model and manage the knowing process of an organization requires attention to both of these aspects of knowledge—one perspective emphasizing cognition, the other emphasizing culture and context. The general knowing process includes four basic phases that can be described in process terms that apply to tacit and explicit knowledge, in human and computer terms, respectively.

  1. Acquisition. This process acquires knowledge by accumulating data through human observation and experience or technical sensing and measurement. The capture of e-mail discussion threads, point-of-sale transactions, or other business data, as well as digital imaging or signals analysis, are but examples of the wide diversity of acquisition methods.
  2. Maintenance. Acquired explicit data is represented in a standard form, organized, and stored for subsequent analysis and application in digital databases. Tacit knowledge is stored by humans as experience, skill, or expertise, though it can be elicited and converted to explicit form in terms of accounts, stories (rich explanations), procedures, or explanations.
  3. Transformation. The conversion of data to knowledge and knowledge from one form to another is the creative stage of KM. This knowledge-creation stage involves more complex processes like internalization, intuition, and conceptualization (for internal tacit knowledge) and correlation and analytic-synthetic reasoning (for explicit knowledge). In the next subsection, this process is described in greater detail.
  4. Transfer. The distribution of acquired and created knowledge across the enterprise is the fourth phase. Tacit distribution includes the sharing of experiences, collaboration, stories, demonstrations, and hands-on training. Explicit knowledge is distributed by mathematical, graphical, and textual representations, from magazines and textbooks to electronic media.

These phases can be compared with the three phases of organizational knowing (focusing on culture) described by Davenport and Prusak in their text Working Knowledge [17]:

  1. Generation. Organizational networks generate knowledge by social processes of sharing, exploring, and creating tacit knowledge (stories, experiences, and concepts) and explicit knowledge (raw data, organized databases, and reports). But these networks must be properly organized for diversity of both experience and perspective and placed under appropriate stress (challenge) to perform. Dedicated cross-functional teams, appropriately supplemented by outside experts and provided a suitable challenge, are the incubators for organizational knowledge generation.
  2. Codification and coordination. Codification explicitly represents generated knowledge and the structure of that knowledge by a mapping process. The map (or ontology) of the organization’s knowledge allows individuals within the organization to locate experts (tacit knowledge holders), databases (of explicit knowledge), and tacit-explicit networks. The coordination process models the dynamic flow of knowledge within the organization and allows the creation of narratives (stories) to exchange tacit knowledge across the organization.
  3. Transfer. Knowledge is transferred within the organization as people interact; this occurs as they are mentored, temporarily exchanged, transferred, or placed in cross-functional teams to experience new perspectives, challenges, or problem-solving approaches.

3.2.3 Knowledge Creation Model

Nonaka and Takeuchi describe four modes of conversion, derived from the possible exchanges between the two knowledge types (Figure 3.5); a brief sketch of the four modes follows the list:

  1. Tacit to tacit—socialization. Through social interactions, individuals within the organization exchange experiences and mental models, transferring the know-how of skills and expertise. The primary form of transfer is narrative—storytelling—in which rich context is conveyed and subjective understanding is compared, “reexperienced,” and internalized. Classroom training, simulation, observation, mentoring, and on-the-job training (practice) build experience; moreover, these activities build teams that develop shared experience, vision, and values. The socialization process also allows consumers and producers to share tacit knowledge about needs and capabilities, respectively.
  2. Tacit to explicit—externalization. The articulation and explicit codification of tacit knowledge moves it from the internal to the external. This can be done by capturing narration in writing, and then moving to the construction of metaphors, analogies, and ultimately models. Externalization is the creative mode where experience and concept are expressed in explicit concepts—and the effort to express is in itself a creative act. (This mode is found in the creative phase of writing, invention, scientific discovery, and, for the intelligence analyst, hypothesis creation.)
  3. Explicit to explicit—combination. Once explicitly represented, different objects of knowledge can be characterized, indexed, correlated, and combined. This process can be performed by humans or computers and can take on many forms. Intelligence analysts compare multiple accounts, cable reports, and intelligence reports regarding a common subject to derive a combined analysis. Military surveillance systems combine (or fuse) observations from multiple sensors and HUMINT reports to derive aggregate force estimates. Market analysts search (mine) sales databases for patterns of behavior that indicate emerging purchasing trends. Business developers combine market analyses, research and development results, and cost analyses to create strategic plans. These examples illustrate the diversity of processes that combine explicit knowledge.
  4. Explicit to tacit—internalization. Individuals and organizations internalize knowledge by hands-on experience in applying the results of combination. Combined knowledge is tested, evaluated, and results in new tacit experience. New skills and expertise are developed and integrated into the tacit knowledge of individuals and teams.
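
The sketch referenced above encodes the four modes as a lookup keyed by the (source, target) knowledge types of each exchange. It is a minimal illustration; the constants and function names are assumptions, not from the text:

```python
# The four Nonaka-Takeuchi conversion modes, keyed by the pair of knowledge
# types they exchange. All identifiers are illustrative.

TACIT, EXPLICIT = "tacit", "explicit"

SECI_MODES = {
    (TACIT, TACIT): "socialization",       # shared experience, storytelling
    (TACIT, EXPLICIT): "externalization",  # articulating models and hypotheses
    (EXPLICIT, EXPLICIT): "combination",   # correlating and fusing reports
    (EXPLICIT, TACIT): "internalization",  # hands-on testing of combined results
}

def conversion_mode(source, target):
    """Return the SECI mode for a given tacit/explicit exchange."""
    return SECI_MODES[(source, target)]

# The spiral visits the modes in sequence:
spiral = [(TACIT, TACIT), (TACIT, EXPLICIT), (EXPLICIT, EXPLICIT), (EXPLICIT, TACIT)]
print([conversion_mode(s, t) for s, t in spiral])
# ['socialization', 'externalization', 'combination', 'internalization']
```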

Nonaka and Takeuchi further showed how these four modes of conversion operate in an unending spiral sequence to create and transfer knowledge throughout the organization.

Organizations that have redundancy of information (in people, processes, and databases) and diversity in their makeup (also in people, processes, and databases) will enhance the ability to move along the spiral. The modes of activity benefit from a diversity of people: socialization requires some who are stronger in dialogue to elicit tacit knowledge from the team; externalization requires others who are skilled in representing knowledge in explicit forms; and internalization benefits from those who experiment, test ideas, and learn from experience, with the new concepts or hypotheses arising from combination.

Organizations can also benefit from creative chaos—changes that punctuate states of organizational equilibrium. These states include static presumptions, entrenched mindsets, and established processes that may have lost validity in a changing environment. Rather than destabilizing the organization, the injection of appropriate chaos can bring reflection from new perspectives, reassessment, and renewal of purpose. Such change can restart tacit-explicit knowledge exchange where equilibrium has brought it to a halt.

3.3 An Intelligence Use Case Spiral

We follow a distributed crisis intelligence cell, using networked collaboration tools, through one complete spiral cycle to illustrate the knowledge spiral. This case is deliberately chosen because it stresses the spiral (no face-to-face interaction by the necessarily distributed team, very short time to interact, the temporary nature of the team, and no common “organizational” membership), yet it clearly illustrates the phases of tacit-explicit exchange and the practical insight into actual intelligence-analysis activities provided by the model.

3.3.1 The Situation

The crisis in small but strategic Kryptania emerged rapidly. Vital national interests—security of U.S. citizens, U.S. companies and facilities, and the stability of the fledgling democratic state—were at stake. Subtle but cascading effects in the environmental, economic, and political domains triggered the small Political Liberation Front (PLF) to initiate overt acts of terrorism against U.S. citizens, facilities, and embassies in the region while seeking to overthrow the fledgling democratic government.

3.3.2 Socialization

Within 10 hours of the team formation, all members participate in an on-line SBU kickoff meeting (same-time, different-place teleconference collaboration) that introduces all members, describes the group’s intelligence charter and procedures, explains security policy, and details the use of the portal/collaboration workspace created for the team. The team leader briefs the current situation and the issues: areas of uncertainty, gaps in knowledge or collection, needs for information, and possible courses of events that must be better understood. The group is allowed time to exchange views and form their own subgroups on areas of contribution that each individual can bring to the problem. Individuals express concepts for new sources for collection and methods of analysis. In this phase, the dialogue of the team, even though not face to face, is invaluable in rapidly establishing trust and a shared vision for the critical task over the ensuing weeks of the crisis.

3.3.3 Externalization

The initial discussions lead to the creation of initial explicit models of the threat, which are developed by various team members and posted on the portal for all to see.

The team collaboratively reviews and refines these models by updating new versions (annotated by contributors) and suggesting new submodels (or linking these models into supermodels). This externalization process codifies the team’s knowledge (beliefs) and speculations (to be evaluated) about the threat. Once externalized, the team can apply the analytic tools on the portal to search for data, link evidence, and construct hypothesis structures. The process also allows the team to draw on support from resources outside the team to conduct supporting collections and searches of databases for evidence to affirm, refine, or refute the models.

3.3.4 Combination

The codified models become archetypes that represent current thinking—current prototype hypotheses formed by the group about the threat (who—their makeup; why—their perceptions, beliefs, intents, and timescales; what—their resources, constraints and limitations, capacity, feasible plans, alternative courses of action, vulnerabilities). This prototype-building process requires the group to structure its arguments about the hypotheses and combine evidence to support its claims. The explicit evidence models are combined into higher level explicit explanations of threat composition, capacity, and behavioral patterns.

Initial (tentative) intelligence products are forming in this phase, and the team begins to articulate these prototype products—resulting in alternative hypotheses and even recommended courses of action.

3.3.5 Internalization

As the evidentiary and explanatory models are developed on the portal, the team members discuss (and argue over) the details, internally struggling with acceptance or rejection of the various hypotheses. Individual team members search for confirming or refuting evidence in their own areas of expertise and discuss the hypotheses with others on the team or colleagues in their domain of expertise (often expressing them in the form of stories or metaphors) to test support or refutation. This process allows the members to further refine and develop internal belief and confidence in the predictive aspects of the models. As accumulating evidence over the ensuing days strengthens (or refutes) the hypotheses, the team internalizes the explanations that have proved most accurate; they also internalize confidence in the sources and collaborative processes that were most productive for this ramp-up phase of the crisis.

3.3.6 Socialization

As the group periodically reconvenes, the discussion shifts from “what we must do” to the evidentiary and explanatory models that have been produced. The dialogue turns from issues of startup processes to model-refinement processes. The group now socializes around a new level of the problem: Gaps in the models, new problems revealed by the models, and changes in the evolving crisis move the spiral toward new challenges—creating knowledge about vulnerabilities in the PLF and supporting networks, specific locations of black propaganda creation and distribution, finances of certain funding organizations, and identification of specific operational cells within the Kryptanian government.

3.3.7 Summary

This example illustrates the emergent processes of knowledge creation over the several-day ramp-up period of a distributed crisis intelligence team.

The full spiral moved from team members socializing to exchange the tacit knowledge of the situation toward the development of explicit representations of their tacit knowledge. These explicit models allowed other supporting resources to be applied (analysts external to the group and online analytic tools) to link further evidence to the models and structure arguments for (or against) the models. As the models developed, team members discussed, challenged, and internalized their understanding of the abstractions, developing confidence and hands-on experience as they tested them against emerging reports and discussed them with team members and colleagues. The confidence and internalized understanding then led to a drive for further dialogue—initializing a second cycle of the spiral.

3.4 Taxonomy of KM

Using the fundamental tacit-explicit distinctions, and the conversion processes of socialization, externalization, internalization, and combination, we can establish a helpful taxonomy of the processes, disciplines, and technologies of the broad KM field applied to the intelligence enterprise. A basic taxonomy that categorizes the breadth of the KM field can be developed by distinguishing three areas of distinct (though very related) activities:

  1. People. The foremost area of KM emphasis is on the development of intellectual capital by people and the application of that knowledge by those people. The principal knowledge-conversion process in this area is socialization, and the focus of improvement is on human operations, training, and human collaborative processes. The basis of collaboration is human networks, known as communities of practice—sharing purpose, values, and knowledge toward a common mission. The barriers that challenge this area of KM are cultural in nature.
  2. Processes. The second KM area focuses on human-computer interaction (HCI) and the processes of externalization and internalization. Tacit-explicit knowledge conversions have required the development of tacit-explicit representation aids in the form of information visualization and analysis tools, thinking aids, and decision support systems. This area of KM focuses on the efficient networking of people and machine processes (such autonomous support processes are referred to as agents) to enable the shared reasoning between groups of people and their agents through computer networks. The barrier to achieving robustness in such KM processes is the difficulty of creating a shared context of knowledge among humans and machines.
  3. Processors. The third KM area is the technological development and implementation of computing networks and processes to enable explicit-explicit combination. Network infrastructures, components, and protocols for representing explicit knowledge are the subject of this fast-moving field. The focus of this technology area is networked computation, and the challenges to collaboration lie in the ability to sustain growth and interoperability of systems and protocols.

 

Because the KM field can also be described by its domains of expertise (or disciplines of study and practice), we can distinguish five distinct areas of focus that help characterize the field. The first two disciplines view KM as a competence of people and emphasize making people knowledgeable:

  1. Knowledge strategists. Enterprise leaders, such as the chief knowledge officer (CKO), focus on the enterprise mission and values, defining value propositions that assign contributions of knowledge to value (i.e., financial or operational). These leaders develop business models to grow and sustain intellectual capital and to translate that capital into organizational values (e.g., financial growth or organizational performance). KM strategists develop, measure, and reengineer business processes to adapt to the external (business or world) environment.
  2. Knowledge culture developers. Knowledge culture development and sustainment is promoted by those who map organizational knowledge and then create training, learning, and sharing programs to enhance the socialization performance of the organization. This includes the cadre of people who make up the core competencies of the organization (e.g., intelligence analysis, intelligence operations, and collection management). In some organizations, a chief learning officer (CLO) is designated to fill this role, overseeing enterprise human capital just as the chief financial officer (CFO) manages (tangible) financial capital.

The next three disciplines view KM as an enterprise capability and emphasize building the infrastructure to make knowledge manageable:

  1. KM applications. Those who apply KM principles and processes to specific business applications create both processes and products (e.g., software application packages) to provide component or end-to-end services in a wide variety of areas listed in Table 3.10. Some commercial KM applications have been sufficiently modularized to allow them to be outsourced to application service providers (ASPs) [20] that “package” and provide KM services on a per-operation (transaction) basis. This allows some enterprises to focus internal KM resources on organizational tacit knowledge while outsourcing architecture, infrastructure, tools, and technology.
  2. Enterprise architecture. Architects of the enterprise integrate people, processes, and IT to implement the KM business model. The architecting process defines business use cases and process models to develop requirements for data warehouses, KM services, network infrastructures, and computation.
  3. KM technology and tools. Technologists and commercial vendors develop the hardware and software components that physically implement the enterprise. Table 3.10 provides only a brief summary of the key categories of technologies that make up this broad area that encompasses virtually all ITs.

3.5 Intelligence As Capital

We have described knowledge as a resource (or commodity) and as a process in previous sections. Another important perspective of both the resource and the process is that of the valuation of knowledge. The value (utility or usefulness) of knowledge is first and foremost quantified by its impact on the user in the real world.

The value of intelligence goes far beyond financial considerations in national and military intelligence (MI) applications. In these cases, the value of knowledge must be measured by its impact on national interests: the warning time to avert a crisis, the accuracy necessary to deliver a weapon, the completeness to back up a policy decision, or the evidential depth to support an organized-crime conviction. Knowledge, as an abstraction, has no intrinsic value—its value is measured by its impact in the real world.

In financial terms, the valuation of the intangible aspects of knowledge is referred to as capital—intellectual capital. These intangible resources include the personal knowledge, skills, processes, intellectual property, and relationships that can be leveraged to produce assets of equal or greater importance than other organizational resources (land, labor, and capital).

What is this capital value in our representative business? It comprises four intangible components:

  1. Customer capital. This is the value of established relationships with customers, such as trust and reputation for quality.

Intelligence tradecraft recognizes this form of capital as credibility with consumers—“the ability to speak to an issue with sufficient authority to be believed and relied upon by the intended audience.”

  2. Innovation capital. Innovation in the form of unique strategies, new concepts, processes, and products based on unique experience forms this second category of capital. In intelligence, new and novel sources and methods for unique problems form this component of intellectual capital.
  3. Process capital. Methodologies and systems or infrastructure (also called structural capital) that are applied by the organization make up its process capital. Collection sources and both collection and analytic methods form a large portion of the intelligence organization’s process (and innovation) capital; they are often fragile (once discovered, they may be forever lost) and are therefore carefully protected.
  4. Human capital. The people, individually and in virtual organizations, comprise the human capital of the organization. Their collective tacit knowledge—expressed as dedication, experience, skill, expertise, and insight—forms this critical intangible resource.

O’Dell and Grayson have defined three fundamental categories of value propositions in If Only We Knew What We Know [23]:

  1. Operational excellence. These value propositions seek to boost revenue by reducing the cost of operations through increased operating efficiencies and productivity. These propositions are associated with business process reengineering (BPR), and even business transformation using electronic commerce methods to revolutionize the operational process. These efforts contribute operational value by raising performance in the operational value chain.
  2. Product-to-market excellence. These propositions value the reduction in time to market, from product inception to product launch. Efforts that achieve these values ensure that new ideas move to development and then to product by accelerating the product development process. This value emphasizes the transformation of the business itself (as explained in Section 1.1).
  3. Customer intimacy. These values seek to increase customer loyalty, customer retention, and customer base expansion by increasing intimacy (understanding, access, trust, and service anticipation) with customers. Actions that accumulate and analyze customer data to reduce selling cost while increasing customer satisfaction contribute to this proposition.

For each value proposition, specific impact measures must be defined to quantify the degree to which the value is achieved. These measures quantify the benefits and utility delivered to stakeholders. Using these measures, the value added by KM processes can be observed along the sequential processes in the business operation. This sequence of processes forms a value chain that adds value from raw materials to delivered product.

Different kinds of measures are recommended for organizations in transition from legacy business models. During periods of change, three phases are recognized [24]. In the first phase, users (i.e., consumers, collection managers, and analysts) must be convinced of the benefits of the new approach, and the measures include metrics as simple as the number of consumers taking training and beginning to use services. In the crossover phase, when users begin to transition to the systems, measures change to usage metrics. Once the system approaches steady-state use, financial-benefit measures are applied. Numerous methods have been defined and applied to describe and quantify economic value, including:

  1. Economic value added (EVA) subtracts the cost of capital invested from net operating profit (see the formula following this list);
  2. Portfolio management approaches treat IT projects as individual investments, computing risks, yields, and benefits for each component of the enterprise portfolio;
  3. Knowledge capital is an aggregate measure of management value added (by knowledge) divided by the price of capital [25];
  4. Intangible asset monitor (IAM) [26] computes value in four categories—tangible capital, intangible human competencies, intangible internal structure, and intangible external structure [27].
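
The EVA measure in item 1 can be written compactly. A standard finance formulation (the NOPAT and WACC terms are conventional usage, assumed here rather than taken from the text) is:

```latex
% EVA: net operating profit after taxes (NOPAT) less the charge for invested capital
\mathrm{EVA} = \mathrm{NOPAT} - \bigl(\mathrm{WACC} \times C_{\text{invested}}\bigr)
```

where WACC is the weighted average cost of capital and C_invested is the capital invested in the enterprise or project.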

The four views of the balanced scorecard (BSC) not only provide a means of “balancing” the measurement of the major causes and effects of organizational performance but also provide a framework for modeling the organization.

3.6 Intelligence Business Strategy and Models

The commercial community has explored a wide range of business models that apply KM (in the widest sense) to achieve key business objectives. These objectives include enhancing customer service to provide long-term customer satisfaction and retention, expanding access to customers (introducing new products and services, expanding to new markets), increasing efficiency in operations (reduced cost of operations), and introducing new network-based goods and services (eCommerce or eBusiness). All of these objectives can be described by value propositions that couple with business financial performance.

The strategies that leverage KM to achieve these objectives fall into two basic categories. The first emphasizes the use of analysis to understand the value chain from first customer contact to delivery. Understanding the value added to the customer by the transactions (as well as delivered goods and services) allows the producer to increase value to the customer. Values that may be added to intelligence consumers by KM include:

• Service values. Greater value in services is provided to policymakers by anticipating their intelligence needs, earning greater user trust in the accuracy and focus of estimates and warnings, and providing more timely delivery of intelligence. Service value is also increased as producers personalize (tailor) and adapt services to the consumer’s interests (needs) as they change.

• Intelligence product values. The value of intelligence products is increased when greater value is “added” by improving accuracy, providing deeper and more robust rationale, focusing conclusions, and building increased consumer confidence (over time).

The second category of strategies (prompted by the eBusiness revolution) seeks to transform the value chain by the introduction of electronic transactions between the customer and retailer. These strategies use network-based advertising, ordering, and even delivery (for information services like banking, investment, and news) to reduce the “friction” of physical-world retailer-customer transactions.

These strategies introduce several benefits—all applicable to intelligence:

  • Disintermediation. This is the elimination of intermediate processes and entities between the customer and producer to reduce transaction friction. This friction adds cost and increases the difficulty for buyers to locate sellers (cost of advertising), for buyers to evaluate products (cost of travel and shopping), for buyers to purchase products (cost of sales), and for sellers to maintain local inventories (cost of delivery). The elimination of “middlemen” (e.g., wholesalers, distributors, and local retailers) in eRetailers such as Amazon.com has reduced transaction and intermediate costs and allowed direct transaction and delivery from producer to customer with only the eRetailer in between. The effect of disintermediation in intelligence is to give users greater and more immediate access to intelligence products (via networks such as the U.S. Intelink) and to analysis services via intelligence portals that span all sources of intelligence.
  • Infomediation. The effect of disintermediation has introduced the role of the information broker (infomediary) between customer and seller, providing navigation services (e.g., shopping agents or auctioning and negotiating agents) that act on the behalf of customers [31]. Intelligence communities are moving toward greater cross-functional collection management and analysis, reducing the stovepiped organization of intelligence by collection disciplines (i.e., imagery, signals, and human sources). As this happens, the traditional analysis role requires a higher level of infomediation and greater automation because the analyst is expected (by consumers) to become a broker across a wider range of intelligence sources (including closed and open sources).
  • Customer aggregation. The networking of customers to producers allows rapid analysis of customer actions (e.g., queries for information, browsing through catalogs of products, and purchasing decisions based on information). This analysis enables the producers to better understand customers, aggregate their behavior patterns, and react to (and perhaps anticipate) customer needs. Commercial businesses use these capabilities to measure individual customer patterns and mass market trends to more effectively personalize and target sales and new product developments. Intelligence producers likewise are enabled to analyze warfighter and policymaker needs and uses of intelligence to adapt and tailor products and services to changing security threats.

 

These value chain transformation strategies have produced a simple taxonomy that distinguishes eBusiness models into four categories by the level of transaction between businesses and customers:

  1. Business to business (B2B). The large volume of trade between businesses (e.g., suppliers and manufacturers) has been enhanced by network-based transactions (releases of specifications, requests for quotations, and bid responses) reducing the friction between suppliers and producers. High-volume manufacturing industries such as the automakers are implementing B2B models to increase competition among suppliers and reduce bid-quote-purchase transaction friction.
  2. Business to customer (B2C). Direct networked outreach from producer to consumer has enabled the personal computer (e.g., Dell Computer) and book distribution (e.g., Amazon.com) industries to disintermediate local retailers and reach out on a global scale directly to customers. Similarly, intelligence products are now being delivered (pushed) to consumers on secure electronic networks, via subscription and express order services, analogous to the B2C model.
  3. Customer to business (C2B). Networks also allow customers to reach out to a wider range of businesses to gain greater competitive advantage in seeking products and services.

In intelligence, the introduction of secure intelligence networks and on-line intelligence product libraries (e.g., common operating picture and map and imagery libraries) allows consumers to pull intelligence from a broader range of sources. (This model enables even greater competition between source providers and provides a means of measuring some aspects of intelligence utility based on actual use of product types.)

  4. Customer to customer (C2C). The C2C model automates the mediation process between consumers, enabling consumers to locate those with similar purchasing-selling interests.

3.7 Intelligence Enterprise Architecture and Applications

Just like commercial businesses, intelligence enterprises:

  • Measure and report to stakeholders the returns on investment. These returns are measured in terms of intelligence performance (i.e., knowledge provided, accuracy and timeliness of delivery, and completeness and sufficiency for decision making) and outcomes (i.e., effects of warnings provided, results of decisions based on knowledge delivered, and utility to set long-term policies).
  • Service customers, the intelligence consumers. This is done by providing goods (intelligence products such as reports, warnings, analyses, and target folders) and services (directed collections and analyses or tailored portals on intelligence subjects pertinent to the consumers).
  • Require intimate understanding of business operations and must adapt those operations to the changing threat environment, just as businesses must adapt to changing markets.
  • Manage a supply chain that involves the anticipation of future needs of customers, the adjustment of the delivery of raw materials (intelligence collections), the production of custom products to a diverse customer base, and the delivery of products to customers just in time [33].

3.7.1 Customer Relationship Management

CRM processes that build and maintain customer loyalty focus on managing the relationship between provider and consumer. The short-term goal is customer satisfaction; the long-term goal is loyalty. Intelligence CRM seeks to provide intelligence content to consumers that anticipates their needs, focuses on the specific information that supports their decision making, and provides drill-down to the supporting rationale and data behind all conclusions. To accomplish this, the consumer-producer relationship must be fully described in models that include:

  • Consumer needs and uses of intelligence—applications of intelligence for decision making, key areas of customer uncertainty and lack of knowledge, and specific impact of intelligence on the consumer’s decision making;
  • Consumer transactions—the specific actions that occur between the enterprise and intelligence consumers, including urgent requests, subscriptions (standing orders) for information, incremental and final report deliveries, requests for clarifications, and issuances of alerts.
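
A relationship model of this kind can be represented as simple typed records. The following Python sketch is purely illustrative (the class and field names are assumptions, not a documented CRM schema) of how consumer needs and transactions might be tracked to infer changing interests:

```python
# Illustrative records for an intelligence CRM model: each transaction
# between producer and consumer is logged, and interests are inferred
# from the transaction history. All names are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ConsumerTransaction:
    consumer: str     # decision maker or organization
    kind: str         # "urgent request", "subscription", "delivery",
                      # "clarification", or "alert"
    subject: str      # intelligence topic
    timestamp: datetime = field(default_factory=datetime.utcnow)

@dataclass
class ConsumerProfile:
    name: str
    needs: List[str]  # key areas of uncertainty and lack of knowledge
    history: List[ConsumerTransaction] = field(default_factory=list)

    def interests(self):
        """Infer changing interests from the transaction history."""
        return {t.subject for t in self.history}

policy_maker = ConsumerProfile("policymaker-1", needs=["PLF financing"])
policy_maker.history.append(
    ConsumerTransaction("policymaker-1", "urgent request", "PLF financing"))
print(policy_maker.interests())  # {'PLF financing'}
```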

CRM offers the potential to personalize intelligence delivery to individual decision makers while tracking their changing interests as they browse subject offerings and issue requests through their own custom portals.

3.7.2 Supply Chain Management

The SCM function monitors and controls the flow of the supply chain, providing internal control of planning, scheduling, inventory control, processing, and delivery.

SCM is the core of B2B business models, seeking to integrate front-end suppliers into an extended supply chain that optimizes the entire production process to slash inventory levels, improve on-time delivery, and reduce the order-to-delivery (and payment) cycle time. In addition to throughput efficiency, the B2B models seek to aggregate orders to leverage the supply chain to gain greater purchasing power, translating larger orders to reduced prices. The key impact measures sought by SCM implementations include:

  • Cash-to-cash cycle time (time from order placement to delivery/ payment);
  • Delivery performance (percentage of orders fulfilled on or before request date);
  • Initial fill rate (percentage of orders shipped in supplier’s first shipment);
  • Initial order lead time (supplier response time to fulfill order);
  • On-time receipt performance (percentage of supplier orders received on time).
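
Two of these measures can be computed directly from order records. The following Python sketch is illustrative only; the record fields and sample values are assumptions:

```python
# Computing delivery performance and initial fill rate from simple order
# records. Field names are hypothetical; dates are day-numbers for brevity.

orders = [
    {"requested": 5, "delivered": 4, "ordered_qty": 100, "first_shipment_qty": 90},
    {"requested": 3, "delivered": 3, "ordered_qty": 50,  "first_shipment_qty": 50},
]

# Delivery performance: percentage of orders fulfilled on or before request date
on_time = sum(o["delivered"] <= o["requested"] for o in orders)
delivery_performance = 100.0 * on_time / len(orders)

# Initial fill rate: percentage of ordered quantity shipped in the first shipment
fill_rate = 100.0 * sum(o["first_shipment_qty"] for o in orders) \
            / sum(o["ordered_qty"] for o in orders)

print(f"delivery performance: {delivery_performance:.0f}%")  # 100%
print(f"initial fill rate: {fill_rate:.1f}%")                # 93.3%
```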

Like the commercial manufacturer, the intelligence enterprise operates a supply chain that “manufactures” all-source intelligence products from raw sources of intelligence data and relies on single-source suppliers (i.e., imagery, signals, or human reports).

3.7.3 Business Intelligence

The BI function provides all levels of the organization with relevant information on internal operations and the external business environment (via marketing) to be exploited (analyzed and applied) to gain a competitive advantage. The BI function serves to provide strategic insight into overall enterprise operations based on ready access to operating data.

The emphasis of BI is on explicit data capture, storage, and analysis; through the 1990s, BI was the predominant driver for the implementation of corporate data warehouses, and the development of online analytic processing (OLAP) tools. (BI preceded KM concepts, and the subsequent introduction of broader KM concepts added the complementary need for capture and analysis of tacit and explicit knowledge throughout the enterprise.)

The intelligence BI function should collect and analyze real-time workflow data to provide answers to questions such as:

  • What are the relative volumes of requests (for intelligence) by type?
  • What is the “cost” of each category of intelligence product?
  • What are the relative transaction costs of each stage in the supply chain?
  • What are the trends in usage (by consumers) of all forms of intelligence over the past 12 months? Over the past 6 months? Over the past week?
  • Which single sources of incoming intelligence (e.g., SIGINT, IMINT, and MASINT) have greatest utility in all-source products, by product category?
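
The first of these questions reduces to a simple aggregation over workflow records. A minimal Python sketch, with illustrative record fields and product types (none drawn from the text):

```python
# Relative volumes of intelligence requests by type, computed from
# hypothetical workflow records.

from collections import Counter

requests = [
    {"type": "warning", "consumer": "policymaker"},
    {"type": "target folder", "consumer": "warfighter"},
    {"type": "warning", "consumer": "warfighter"},
]

volumes = Counter(r["type"] for r in requests)
total = sum(volumes.values())
for kind, count in volumes.most_common():
    print(f"{kind}: {count} ({100.0 * count / total:.0f}%)")
# warning: 2 (67%)
# target folder: 1 (33%)
```

The remaining questions follow the same pattern, grouping workflow records by product category, supply-chain stage, time period, or contributing source.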

Like its commercial counterparts, the intelligence BI function should track not only the operational flows but also the history of operational decisions—and their effects.

Both operational and decision-making data should be easy to navigate and analyze, providing timely operational insight to senior leadership, who often ask the question, “What is the cost of a pound of intelligence?”

3.8 Summary

KM provides a strategy and organizational discipline for integrating people, processes, and IT into an effective enterprise.

As noted by Tom Davenport, a leading observer of the discipline:

The first generation of knowledge management within enterprises emphasized the “supply side” of knowledge: acquisition, storage, and dissemination of business operations and customer data. In this phase knowledge was treated much like physical resources and implementation approaches focused on building “warehouses” and “channels” for supply processing and distribution. This phase paid great attention to systems, technology and infrastructure; the focus was on acquiring, accumulating and distributing explicit knowledge in the enterprise [35].

Second-generation KM has turned attention to the demand side of the knowledge economy—seeking to identify value in the collected data to allow the enterprise to add value from the knowledge base, enhance the knowledge spiral, and accelerate innovation. This generation has brought more focus to people (the organization) and the value of tacit knowledge; the issues of sustainable knowledge creation and dissipation throughout the organization are emphasized in this phase. The attention in this generation has moved from understanding knowledge systems to understanding knowledge workers. The third generation to come may be that of KM innovation, in which the knowledge process is viewed as a complete life cycle within the organization, and the emphasis will turn to revolutionizing the organization and reducing the knowledge cycle time to adapt to an ever-changing world environment.

 

4

The Knowledge-Based Intelligence Organization

National intelligence organizations following World War II were characterized by compartmentalization (insulated specialization for security purposes) that required individual learning, critical analytic thinking, and problem solving by small, specialized teams working in parallel (stovepipes or silos). These stovepipes were organized under hierarchical organizations that exercised central control. The approach was appropriate for the centralized organizations and bipolar security problems of the relatively static Cold War, but the global breadth and rapid dynamics of twenty-first century intelligence problems require more agile networked organizations that apply organization-wide collaboration to replace the compartmentalization of the past. Founded on the virtues of integrity and trust, the disciplines of organizational collaboration, learning, and problem solving must be developed to support distributed intelligence collection, analysis, and production.

This chapter focuses on the most critical factor in organizational knowledge creation—the people, their values, and organizational disciplines. The chapter is structured to proceed from foundational virtues, structures, and communities of practice (Section 4.1) to the four organizational disciplines that support the knowledge creation process: learning, collaboration, problem solving, and best practices—called intelligence tradecraft.

The people perspective of KM presented in this chapter can be contrasted with the process and technology perspectives (Table 4.1) in five ways:

  1. Enterprise focus. The focus is on the values, virtues, and mission shared by the people in the organization.
  2. Knowledge transaction. Socialization, the sharing of tacit knowledge by methods such as story and dialogue, is the essential mode of transaction between people for collective learning, or collaboration to solve problems.
  3. The basis for human collaboration lies in shared purpose, values, and a common trust.
  4. A culture of trust develops communities that share their best practices and experiences; collaborative problem solving enables the growth of the trusting culture.
  5. The greatest barrier to collaboration is the inability of an organization’s culture to transform and embrace the sharing of values, virtues, and disciplines.

The numerous implementation failures of early-generation KM enterprises have most often occurred because organizations have not embraced the new business models introduced, nor have they used the new systems to collaborate. As a result, these KM implementations have failed to deliver the intellectual capital promised. These cases were generally not failures of process, technology, or infrastructure; rather, they were failures of organizational culture change to embrace the new organizational model. In particular, they failed to address the cultural barriers to organizational knowledge sharing, learning, and problem solving.

Numerous texts have examined these implementation challenges, and all have emphasized that organizational transformation must precede KM system implementations.

4.1 Virtues and Disciplines of the Knowledge-Based Organization

At the core of an agile knowledge-based intelligence organization is the ability to sustain the creation of organizational knowledge through learning and collaboration. Underlying effective collaboration are values and virtues that are shared by all. The U.S. IC, recognizing the need for such agility as its threat environment changes, has adopted knowledge-based organizational goals as the first two of five objectives in its Strategic Intent:

  • Unify the community through collaborative processes. This includes the implementation of training and business processes to develop an inter-agency collaborative culture and the deployment of supporting technologies.
  • Invest in people and knowledge. This area includes the assessment of customer needs and the conduct of events (training, exercises, experiments, and conferences/seminars) to develop communities of practice and build expertise in the staff to meet those needs. Supporting infrastructure developments include the integration of collaborative networks and shared knowledge bases.

Clearly identified organizational propositions of values and virtues (e.g., integrity and trust) shared by all enable knowledge sharing—and form the basis for organizational learning, collaboration, problem solving, and best-practices (intelligence tradecraft) development introduced in this chapter. This is a necessary precondition before KM infrastructure and technology are introduced to the organization. The intensely human values, virtues, and disciplines introduced in the following sections are essential and foundational to building an intelligence organization whose business processes are based on the value of shared knowledge.

4.1.1 Establishing Organizational Values and Virtues

The foundation of all organizational discipline (ordered, self-controlled, and structured behavior) is a common purpose and set of values shared by all. For an organization to pursue a common purpose, the individual members must conform to a common standard and a common set of ideals for group conduct.

The knowledge-based intelligence organization is a society that requires virtuous behavior of its members to enable collaboration. Dorothy Leonard-Barton, in Wellsprings of Knowledge, distinguishes two categories of values: those that relate to basic human nature and those that relate to performance of the task. In the first category are big V values (also called moral virtues) that include basic human traits such as personal integrity (consistency, honesty, and reliability), truthfulness, and trustworthiness. For the knowledge worker’s task, the second category (of little v values) includes those values long sought by philosophers to arrive at knowledge or justify true belief. Some epistemologies define intellectual virtue as the foundation of knowledge: Knowledge is a state of belief arising out of intellectual virtue. Intellectual virtues include organizational conformity to a standard of right conduct in the exchange of ideas, in reasoning and in judgment.

Organizational integrity is dependent upon the individual integrity of all contributors—as participants cooperate and collaborate around a central purpose, the virtue of trust (built upon the shared trustworthiness of individuals) opens the doors of sharing and exchange. Essential to this process is the development of networks of conversations that are built on communication transactions (e.g., assertions, declarations, queries, or offers) that are ultimately based in personal commitments. Ultimately, the virtue of organizational wisdom—seeking the highest goal by the best means—must be embraced by the entire organization in recognition of a common purpose.

Trust and cooperative knowledge sharing must also be complemented by an objective openness. Groups that place consensus over objectivity become subject to dangerous decision-making errors, such as groupthink.

4.1.2 Mapping the Structures of Organizational Knowledge

Every organization has a structure and flow of knowledge—a knowledge environment or ecology (emphasizing the self-organizing and balancing characteristics of organizational knowledge networks). The overall process of studying and characterizing this environment is referred to as mapping—explicitly representing the network of nodes (competencies) and links (relationships, knowledge flow paths) within the organization. The fundamental role of KM organizational analysis is the mapping of knowledge within an existing organization.

Knowledge mapping identifies the intangible tacit assets of the organization. The mapping process is conducted by a variety of means: passive observation (where the analyst works within the community), active interviewing, formal questionnaires, and analysis. As an ethnographic research activity, mapping requires the analyst to understand the unspoken, informal flows and sources of knowledge in the day-to-day operations of the organization. The five stages of mapping (Figure 4.1) must be conducted in partnership with the owners, users, and KM implementers.

The first phase is the definition of the formal organization chart—the formal flows of authority, command, reports, intranet collaboration, and information systems reporting. In this phase, the boundaries, or focus, of mapping interest are established. The second phase audits (identifies, enumerates, and quantifies as appropriate) the following characteristics of the organization:

  1. Knowledge sources—the people and systems that produce and articulate knowledge in the form of conversation, developed skills, reports, implemented (but perhaps not documented) processes, and databases.
  2. Knowledge flowpaths—the flows of knowledge, tacit and explicit, formal and informal. These paths can be identified by analyzing the transactions between people and systems; the participants in the transactions provide insight into the organizational network structure by which knowledge is created, stored, and applied. The analysis must distinguish between seekers and providers of knowledge and their relationships (e.g., trust, shared understanding, or cultural compatibility) and mutual benefits in the transaction.
  3. Boundaries and constraints—the boundaries and barriers that control, guide, or constrict the creation and flow of knowledge. These may include cultural, political (policy), personal, or electronic system characteristics or incompatibilities.
  4. Knowledge repositories—the means of maintaining organizational knowledge, including tacit repositories (e.g., communities of experts that share experience about a common practice) and explicit storage (e.g., legacy hardcopy reports in library holdings, databases, or data warehouses).

In the third phase, the audit data is organized by clustering the categories of knowledge, nodes (sources and sinks), and links unique to the organization. The structure of this organization, usually a table or a spreadsheet, provides insight into the categories of knowledge, transactions, and flow paths; it provides a format to review with organization members to convey initial results, make corrections, and refine the audit. This phase also provides the foundation for quantifying the intellectual capital of the organization, and the audit categories should follow the categories of the intellectual capital accounting method adopted.

The fourth phase, mapping, transforms the organized data into a structure (often, but not necessarily, graphical) that explicitly identifies the current knowledge network. Explicit and tacit knowledge flows and repositories are distinguished, as are the social networks that support them. This process of visualizing the structure may also identify clusters of expertise, gaps in the flows, chokepoints, and areas of best (and worst) practices within the network.

Once the organization’s current structure is understood, it can be compared to similar structures in other organizations by benchmarking in the final phase. Benchmarking is the process of identifying, learning, and adapting outstanding practices and processes from any organization, anywhere in the world, to help an organization improve its performance. Benchmarking gathers the tacit knowledge—the know-how, judgments, and enablers—that explicit knowledge often misses. This process allows quantitative performance data and qualitative best-practice knowledge to be shared and compared with similar organizations to explore areas for potential improvement and potential risks.
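The product of the audit-and-map phases can be pictured as a simple graph of knowledge nodes and flow links. The sketch below is a minimal illustration of such a structure; the node names, link types, and the chokepoint heuristic are all invented for the example, not drawn from any particular audit method.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str        # e.g., "source", "repository", "community"
    knowledge: str   # "tacit" or "explicit"

@dataclass
class Link:
    seeker: str        # node that requests knowledge
    provider: str      # node that supplies it
    relationship: str  # e.g., "trust", "formal reporting"

@dataclass
class KnowledgeMap:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_node(self, node: Node) -> None:
        self.nodes[node.name] = node

    def add_link(self, link: Link) -> None:
        self.links.append(link)

    def chokepoints(self) -> set:
        # Toy heuristic: a provider supplying more than half of all
        # audited flows is a candidate chokepoint worth reviewing.
        providers = [link.provider for link in self.links]
        return {p for p in providers if providers.count(p) > len(providers) // 2}

# Phases 2-4 in miniature: audit nodes and links, then inspect the structure.
km = KnowledgeMap()
km.add_node(Node("imagery cell", "community", "tacit"))
km.add_node(Node("report archive", "repository", "explicit"))
km.add_link(Link("all-source team", "imagery cell", "trust"))
km.add_link(Link("all-source team", "report archive", "formal reporting"))
print(km.chokepoints())
```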

Because the repository provides a pointer to the originating authors, it also provides critical pointers to people: a directory that identifies people within the agency with experience and expertise, organized by subject.

4.1.3 Identifying Communities of Organizational Practice

A critical result of any mapping analysis is the identification of the clusters of individuals who constitute formal and informal groups that create, share, and maintain tacit knowledge on subjects of common interest.

The functional workgroup benefits from stability, established responsibilities, processes, and storage, and a high potential for sharing. Functional workgroups provide the high-volume knowledge production of the organization but lack the agility to respond to projects and crises.

Cross-functional project teams are shorter-term project groups that can be formed rapidly (and dismissed just as rapidly) to solve special intelligence problems, maintain special surveillance watches, prepare for threats, or respond to crises. These groups include individuals from all appropriate functional disciplines—with the diversity often characteristic of the makeup of the larger organization, but on a small scale—with reachback to expertise in functional departments.

KM researchers have recognized that such organized communities provide a significant contribution to organizational learning by providing a forum for:

  • Sharing current problems and issues;
  • Capturing tacit experience and building repositories of best practices;
  • Linking individuals with similar problems, knowledge, and experience;
  • Mentoring new entrants to the community and other interested parties.

Because participation in communities of practice is based on individual interest, not organizational assignment, these communities may extend beyond the duration of temporary assignments and cut across organizational boundaries.

The activities of working, learning, and innovating have traditionally been treated as independent (and conflicting) activities performed in the office, in the classroom, and in the lab. However, studies by John Seely Brown, chief scientist of Xerox PARC, have indicated that once these activities are unified in communities of practice, they have the potential to significantly enhance knowledge transfer and creation.

4.1.4 Initiating KM Projects

The knowledge mapping and benchmarking process must precede implementation of KM initiatives, forming the understanding of current competencies and processes and the baseline for measuring any benefits of change. KM implementation plans within intelligence organizations generally consider four components, framed by the kind of knowledge being addressed and the areas of investment in KM initiatives:

  1. Organizational competencies. The first area includes assessment of workforce competencies and forms the basis of an intellectual capital audit of human capital. This area also includes the capture of best practices (the intelligence business processes, or tradecraft) and the development of core competencies through training and education.
  2. Social collaboration. Initiatives in this area reinforce established face-to-face communities of practice and develop new communities. These activities enhance the socialization process through meetings and media (e.g., newsletters, reports, and directories).
  3. KM networks. Infrastructure initiatives implement networks (e.g., corporate intranets) and processes (e.g., databases, groupware, applications, and analytic tools) to provide for the capture and exchange of explicit knowledge.
  4. Virtual collaboration. The emphasis in this area is applying technology to create connectivity among and between communities of practice. Intranets and collaboration groupware (discussed in Section 4.3.2) enable collaboration at different times and places for virtual teams—and provide the ability to identify and introduce communities with similar interests that may be unaware of each other.

4.1.5 Communicating Tacit Knowledge by Storytelling

The KM community has recognized the strength of narrative communication—dialogue and storytelling—to communicate the values, emotion (feelings, passion), and sense of immersed experience that make up personalized, tacit knowledge.


The introduction of KM initiatives can bring significant organizational change because it may require cultural transitions in several areas:

  • Changes in purpose, values, and collaborative virtues;
  • Construction of new social networks of trust and communication;
  • Organizational structure changes (networks replace hierarchies);
  • Business process agility, resulting in a new culture of continual change (training to adopt new procedures and to create new products).

All of these changes require participation by the workforce and the communication of tacit knowledge across the organization.

Storytelling provides a complement to abstract, analytical thinking and communication, allowing humans to share experience, insight, and issues (e.g., unarticulated concerns about evidence expressed as “negative feelings,” or general “impressions” about repeated events not yet explicitly defined as threat patterns).

The organic school of KM that applies storytelling to cultural transformation emphasizes a human behavioral approach to organizational socialization, accepting the organization as a complex ecology that may be changed in a large way by small effects.

These effects include the use of a powerful, effective story that communicates in a way that spreads credible tacit knowledge across the entire organization.

This school classifies tacit knowledge into artifacts, skills, heuristics, experience, and natural talents (the so-called ASHEN classification of tacit knowledge) and categorizes an organization’s tacit knowledge in these classes to understand the flow within informal communities.

Nurturing informal sharing within secure communities of practice and distinguishing such sharing from formal sharing (e.g., shared data, best practices, or eLearning) enables the rich exchange of tacit knowledge when creative ideas are fragile and emergent.

4.2 Organizational Learning

Senge asserted that the fundamental distinction between traditional controlling organizations and adaptive self-learning organizations is a set of five key disciplines, including both virtues (commitment to personal and team learning, vision sharing, and organizational trust) and skills (developing holistic thinking, team learning, and tacit mental model sharing). Senge’s core disciplines, moving from the individual to organizational disciplines, included:

• Personal mastery. Individuals must be committed to lifelong learning toward the end of personal and organizational growth. The desire to learn must seek a clarification of one’s personal vision and role within the organization.

• Systems thinking. Senge emphasized holistic thinking, the approach for high-level study of life situations as complex systems. An element of learning is the ability to study interrelationships within complex dynamic systems and explore and learn to recognize high-level patterns of emergent behavior.

• Mental models. Senge recognized the importance of tacit knowledge (mental, rather than explicit, models) and its communication through the process of socialization. The learning organization builds shared mental models by sharing tacit knowledge in the storytelling process and the planning process. Senge emphasized planning as a tacit-knowledge sharing process that causes individuals to envision, articulate, and share solutions—creating a common understanding of goals, issues, alternatives, and solutions.

• Shared vision. The organization that shares a collective aspiration must learn to link together personal visions without conflicts or competition, creating a shared commitment to a common organizational goal set.

• Team learning. Finally, a learning organization acknowledges and understands the diversity of its makeup—and adapts its behaviors, patterns of interaction, and dialogue to enable growth in personal and organizational knowledge.

It is important here to distinguish the kind of transformational learning that Senge was referring to (which brings cultural change across an entire organization) from the smaller-scale group learning that takes place when an intelligence team or cell conducts a long-term study or must rapidly “get up to speed” on a new subject or crisis.

4.2.1 Defining and Measuring Learning

The process of group learning and personal mastery requires the development of both reasoning and emotional skills. The level of learning achievement can be assessed by the degree to which those skills have been acquired.

The taxonomy of cognitive and affective skills can be related to explicit and tacit knowledge categories, respectively, to provide a helpful scale for measuring the level of knowledge achieved by an individual or group on a particular subject.

4.2.2 Organizational Knowledge Maturity Measurement

The goal of organizational learning is the development of maturity at the organizational level—a measure of the state of an organization’s knowledge about its domain of operations and its ability to continuously apply that knowledge to increase corporate value to achieve business goals.

The Carnegie Mellon University Software Engineering Institute has defined the five-level People Capability Maturity Model® (P-CMM®), which distinguishes five levels of organizational maturity that can be measured to assess and quantify the maturity of the workforce and its organizational KM performance. The P-CMM® framework can be applied, for example, to an intelligence organization’s analytic unit to measure current maturity and develop a strategy to move to higher levels of performance. The levels are successive plateaus of practice, each building on the preceding foundation.
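As a rough illustration, the five plateaus can be treated as an ordered scale. The level names below follow the published SEI model; the one-line glosses and the helper function are only a sketch.

```python
from enum import IntEnum

# The five successive maturity plateaus of the P-CMM (names per the
# published SEI model; the glosses are paraphrases, not definitions).
class Maturity(IntEnum):
    INITIAL = 1      # inconsistent, ad hoc workforce practices
    MANAGED = 2      # repeatable unit-level people management
    DEFINED = 3      # organization-wide competency framework
    PREDICTABLE = 4  # quantitatively managed capability
    OPTIMIZING = 5   # continuous capability improvement

def next_plateau(current: Maturity) -> Maturity:
    # Each level builds on the preceding foundation; no level skipping.
    return Maturity(min(current + 1, Maturity.OPTIMIZING))

print(next_plateau(Maturity.DEFINED))   # Maturity.PREDICTABLE
```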

4.2.3 Learning Modes

4.2.3.1 Informal Learning

We gain experience by informal modes of learning on the job alone, with mentors, with team members, or while mentoring others. The methods of informal learning are as broad as the methods of exchanging knowledge introduced in the last chapter. But the essence of the learning organization is the ability to translate what has been learned into changed organizational behavior. David Garvin has identified five fundamental organizational methodologies that are essential to translating the feedback from learning into change; all have direct application in an intelligence organization.

  1. Systematic problem solving. Organizations require a clearly defined methodology for describing and solving problems, and then for implementing the solutions across the organization. Methods for acquiring and analyzing data, synthesizing hypotheses, and testing new ideas must be understood by all to permit collaborative problem solving. The process must also allow for the communication of lessons learned and best practices developed (the intelligence tradecraft) across the organization.
  2. Experimentation. As the external environment changes, the organization must be enabled to explore changes in the intelligence process. This is done by conducting experiments that take excursions from the normal processes to attack new problems and evaluate alternative tools and methods, data sources, or technologies. A formal policy to encourage experimentation, with the acknowledgment that some experiments will fail, allows new ideas to be tested, adapted, and adopted in the normal course of business, not as special exceptions. Experimentation can be performed within ongoing programs (e.g., use of new analytic tools by an intelligence cell) or in demonstration programs dedicated to exploring entirely new ways of conducting analysis (e.g., the creation of a dedicated Web-based pilot project independent of normal operations and dedicated to a particular intelligence subject domain).
  3. Internal experience. As collaborating teams solve a diversity of intelligence problems, experimenting with new sources and methods, the lessons that are learned must be exchanged and applied across the organization. This process of explicitly codifying lessons learned and making them widely available for others to adopt seems trivial, but in practice requires significant organizational discipline. One of the great values of communities of common practice is their informal exchange of lessons learned; organizations need such communities and must support formal methods that reach beyond these communities. Learning organizations take the time to elicit the lessons from project teams and explicitly record (index and store) them for access and application across the organization. Such databases allow users to locate teams with similar problems and lessons learned from experimentation, such as approaches that succeeded and failed, expected performance levels, and best data sources and methods.
  4. External sources of comparison. While the lessons learned just described apply to self-learning, intelligence organizations must look to external sources (in the commercial world, academia, and other cooperating intelligence organizations) to gain different perspectives and experiences not possible within their own organizations. A wide variety of methods can be employed to secure knowledge from external perspectives, such as making acquisitions (in the business world), establishing strategic relationships, using consultants, and establishing consortia. The process of sharing, then critically comparing, qualitative and quantitative data about processes and performance across organizations (or units within a large organization) enables leaders and process owners to objectively review the relative effectiveness of alternative approaches. Benchmarking is the process of improving performance by continuously identifying, understanding, and adapting outstanding practices and processes found inside and outside the organization [23]. The benchmarking process is an analytic process that requires compared processes to be modeled, quantitatively measured, deeply understood, and objectively evaluated. The insight gained is an understanding of how best performance is achieved; the knowledge is then leveraged to predict the impact of improvements on overall organizational performance.
  5. Transferring knowledge. Finally, an intelligence organization must develop the means to transfer people (tacit transfer of skills, experience, and passion by rotation, mentoring, and integrating process teams) and processes (explicit transfer of data, information, business processes on networks) within the organization. In Working Knowledge [24], Davenport and Prusak point out that spontaneous, unstructured knowledge exchange (e.g., discussions at the water cooler, exchanges among informal communities of interest, and discussions at periodic knowledge fairs) is vital to an organization’s success, and the organization must adopt strategies to encourage such sharing.

4.2.3.2 Formal Learning

In addition to informal learning, formal modes provide the classical introduction to subject-matter knowledge.

Information technologies have enabled four distinct learning modes, defined by distinguishing both the time and space of interaction between the learner and the instructor (a simple encoding of these modes follows the list below):

  1. Residential learning (RL). Traditional residential learning places the students and instructor in the physical classroom at the same time and place. This proximity allows direct interaction between the student and instructor and allows the instructor to tailor the material to the students.
  2. Distance learning remote (DL-remote). Remote distance learning provides live transmission of the instruction to multiple, distributed locations. The mode effectively extends the classroom across space to reach a wider student audience. Two-way audio and video can permit limited interaction between extended classrooms and the instructor.
  3. Distance learning canned (DL-canned). This mode simply packages (or cans) the instruction in some media for later presentation at the student’s convenience (e.g., traditional hardcopy texts, recorded audio or video, or softcopy materials on compact discs). DL-canned materials include computer-based training courseware that has built-in features to interact with the student to test comprehension, adaptively present material to meet a student’s learning style, and link to supplementary materials on the Internet.
  4. Distance learning collaborative (DL-collaborative). The collaborative mode of learning (often described as e-learning) integrates canned material while allowing on-line asynchronous interaction between the student and the instructor (e.g., via e-mail, chat, or videoconference). Collaboration may also occur between the student and software agents (personal coaches) that monitor progress, offer feedback, and recommend effective paths to on-line knowledge.
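Because the four modes are distinguished by the time and place of learner-instructor interaction, they can be encoded compactly. The following sketch is illustrative only; the boolean attributes paraphrase the list above, with interactive meaning two-way exchange with the instructor (live or asynchronous), so DL-canned, whose interaction is with courseware only, is marked False.

```python
from dataclasses import dataclass

# The four IT-enabled learning modes, keyed by the time and space of
# learner-instructor interaction (attributes paraphrase the list above).
@dataclass(frozen=True)
class Mode:
    name: str
    same_time: bool   # learner and instructor interact live?
    same_place: bool  # shared physical classroom?
    interactive: bool # two-way exchange with the instructor?

MODES = [
    Mode("RL",               same_time=True,  same_place=True,  interactive=True),
    Mode("DL-remote",        same_time=True,  same_place=False, interactive=True),
    Mode("DL-canned",        same_time=False, same_place=False, interactive=False),
    Mode("DL-collaborative", same_time=False, same_place=False, interactive=True),
]

def select(same_time: bool, same_place: bool, interactive: bool) -> list:
    # Pick the modes compatible with a student's constraints.
    return [m.name for m in MODES
            if (m.same_time, m.same_place, m.interactive)
               == (same_time, same_place, interactive)]

# A student who cannot attend live sessions but wants instructor feedback:
print(select(False, False, True))   # ['DL-collaborative']
```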

4.3 Organizational Collaboration

The knowledge-creation process of socialization occurs as communities (or teams) of people collaborate (commit to communicate, share, and diffuse knowledge) to achieve a common purpose.

Collaboration is a stronger term than cooperation because participants are formed around and committed to a common purpose, and all participate in shared activity to achieve the end. If a problem is parsed into independent pieces (e.g., financial analysis, technology analysis, and political analysis), cooperation may be necessary—but not collaboration. At the heart of collaboration is intimate participation by all in the creation of the whole—not in cooperating to merely contribute individual parts to the whole.


Collaboration is widely believed to enable a team to perform a wide range of functions together:

  • Coordinate tasking and workflow to meet shared goals;
  • Share information, beliefs, and concepts;
  • Perform cooperative problem-solving analysis and synthesis;
  • Perform cooperative decision making;
  • Author team reports of decisions and rationale.

This process of collaboration requires a team (two or more individuals) that shares a common purpose, enjoys mutual respect and trust, and has an established process to allow collaboration to take place. Four levels (or degrees) of intelligence collaboration can be distinguished, moving toward increasing degrees of interaction and dependence among team members.

Sociologists have studied the sequence of collaborative groups as they move from inception to decision commitment. Decision emergence theory (DET) defines four stages of collaborative decision making within an individual group: orientation of all members to a common perspective; conflict, during which alternatives are compared and competed; emergence of collaborative alternatives; and finally reinforcement, when members develop consensus and commitment to the group decisions.
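A minimal sketch of DET’s fixed stage sequence follows; the enum and helper function are illustrative conveniences, not part of the theory’s formal statement.

```python
from enum import Enum

# The four DET stages of collaborative decision making, in order.
class DETStage(Enum):
    ORIENTATION = 1    # members converge on a common perspective
    CONFLICT = 2       # alternatives are compared and competed
    EMERGENCE = 3      # collaborative alternatives take shape
    REINFORCEMENT = 4  # consensus and commitment develop

def advance(stage: DETStage) -> DETStage:
    # Groups move through the stages in sequence; the final stage ends
    # with commitment to the group decision.
    order = list(DETStage)
    i = order.index(stage)
    return order[min(i + 1, len(order) - 1)]

print(advance(DETStage.CONFLICT))   # DETStage.EMERGENCE
```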

4.3.1 Collaborative Culture

First among the means to achieve collaboration is the creation of a collaborating culture—a culture that shares the belief that collaboration (as opposed to competition or other models) is the best approach to achieve a shared goal and that shares a commitment to collaborate to achieve organizational goals.

The collaborative culture must also recognize that teams are heterogeneous in nature. Team members have different tacit (experience, personality style) and cognitive (reasoning style) preferences that influence their unique approach to participating in the collaborative process.

The mix of personalities within a team must be acknowledged and rules of collaborative engagement (and even groupware) must be adapted to allow each member to contribute within the constraints and strengths of their individual styles.

Collaboration facilitators may use Myers-Briggs or other categorization schemes to analyze a particular team’s structure to assess the team’s strengths, weaknesses, and overall balance.

4.3.2 Collaborative Environments

Collaborative environments describe the physical, temporal, and functional setting within which organizations interact.

4.3.3 Collaborative Intelligence Workflow

The representative team includes:

• Intelligence consumer. The State Department personnel requesting the analysis define high-level requirements and are the ultimate customers for the intelligence product. They specify what information is needed: the scope or breadth of coverage, the level of depth, the accuracy required, and the timeframe necessary for policy making.

• All-source analytic cell. The all-source analysis cell, which may be a distributed virtual team across several different organizations, has the responsibility to produce the intelligence product and certify its accuracy.

• Single-source analysts. Open-source and technical-source analysts (e.g., imagery, signals, or MASINT) are specialists who analyze the raw data collected as a result of special tasking; they deliver reports to the all-source team and certify the conclusions of special analysis.

• Collection managers. The collection managers translate all-source requests for essential information (e.g., surveillance of shipping lines, identification of organizations, or financial data) into specific collection tasks (e.g., schedules, collection parameters, and coordination between different sources). They provide the all-source team with a status of their ability to satisfy the team’s requests.

4.3.3.3 The Collaboration Paths

  1. Problem statement. Interacting with the all-source analytic leader (LDR)—and all-source analysts on the analytic team—the problem is articulated in terms of scope (e.g., area of world, focus nations, and expected depth and accuracy of estimates), needs (e.g., specific questions that must be answered and policy issues), urgency (e.g., time to first results and final products), and expected format of results (e.g., product as emergent results portal or softcopy document).

  2. Problem refinement. The analytic leader (LDR) frames the problem with an explicit description of the consumer requirements and intelligence reporting needs. This description, once approved by the consumer, forms the terms of reference for the activity. The problem statement-refinement loop may be iterated as the situation changes or as intelligence reveals new issues to be studied.
  3. Information requests to collection tasking. Based on the requirements, the analytic team decomposes the problem to deduce specific elements of information needed to model and understand the level of trafficking. (The decomposition process was described earlier in Section 2.4.) The LDR provides these intelligence data requirements to the collection manager (CM) to prepare a collection plan. This planning requires the translation of information needs into a coordinated set of data-collection tasks for humans and technical collection systems. The CM prepares a collection plan that traces planned collection data and means to the analytic team’s information requirements.
  4. Collection refinement. The collection plan is fed back to the LDR to allow the analytic team to verify the completeness and sufficiency of the plan—and to allow a review of any constraints (e.g., limits to coverage, depth, or specificity) or the availability of previously collected relevant data. The information request–collection planning and refinement loop iterates as the situation changes and as the intelligence analysis proceeds. The value of different sources, the benefits of coordinated collection, and other factors are learned by the analytic team as the analysis proceeds, causing adjustments to the collection plan to satisfy information needs.
  5. Cross cueing. The single-source analysts acquire data by searching existing archived data and open sources and by receiving data produced by special collections tasked by the CM. Single-source analysts perform source-unique analysis (e.g., imagery analysis; open-source foreign news report, broadcast translation, and analysis; and human report analysis). As the single-source analysts gain an understanding of the timing of event data and the relationships between data observed across the two domains, they share these temporal and functional relationships. The cross-cueing collaboration includes one analyst cueing the other to search for corroborating evidence in another domain; one analyst cueing the other to a possible correlated event; or both analysts recommending tasking for the CM to coordinate a special collection to obtain time- or functionally correlated data on a specific target. It is important to note that this cross-cueing collaboration, shown here at the single-source analysis level, is also performed within the all-source analysis unit (8), where more subtle cross-source relations may be identified.
  6. Single-source analysis reporting. Single-source analysts report the interim results of analysis to the all-source team, describing the emerging picture of the trafficking networks as well as gaps in information. This path provides the all-source team with an awareness of the progress and contribution of collections, and the added value of the analysis that is delivering an emerging trafficking picture.
  7. Single-source analysis refinement. The all-source team can provide direction for the single-source analysts to focus (“Look into that organization in greater depth”), broaden (“Check out the neighboring countries for similar patterns”), or change (“Drop the study of those shipping lines and focus on rail transport”) the emphasis of analysis and collection as the team gains a greater understanding of the subject. This reporting-refinement collaboration (paths 6 and 7, respectively) precedes publication of analyzed data (e.g., annotated images, annotated foreign reports on trafficking, maps of known and suspect trafficking routes, and lists of known and suspect trafficking organizations) into the analysis base.
  8. All-source analysis collaboration. The all-source team may allocate components of the trafficking-analysis task to individuals with areas of subject-matter specialty (e.g., topical components might include organized crime, trafficking routes, finances, and methods), but all contribute to the construction of a single picture of illegal trafficking. The team shares raw and analyzed data in the analysis base, as well as the intelligence products in progress in a collaborative workspace. The LDR approves all product components for release onto the digital production system, which places them onto the intelligence portal for the consumer.

In the initial days, the portal is populated with an initial library of related subject-matter data (e.g., open-source and intelligence reports and data on illegal trafficking in general). As the analysis proceeds, analytic results are posted to the portal.

4.4 Organizational Problem Solving

Intelligence organizations face a wide range of problems that require planning, searching, and explanation to provide solutions. These problems require reactive solution strategies to respond to emergent situations as well as opportunistic (proactive) strategies to identify potential future problems to be solved (e.g., threat assessments, indications, and warnings).

The process of solving these problems collaboratively requires a defined strategy for groups to articulate a problem and then proceed to collectively develop a solution. In the context of intelligence analysis, organizational problem solving focuses on the following kinds of specific problems:

  • Planning. Decomposing intelligence needs into data requirements, developing analysis-synthesis procedures to apply to the collected data to draw conclusions, and scheduling the coordinated collection of data to meet those requirements.
  • Discovery. Searching and identifying previously unknown patterns (of objects, events, behaviors, or relationships) that reveal new understanding about intelligence targets. (The discovery reasoning approach is inductive in nature, creating new, previously unrevealed hypotheses.)
  • Detection. Searching and matching evidence against previously known target hypotheses (templates). (The detection reasoning approach is deductive in nature, testing evidence against known hypotheses; the discovery-detection contrast is illustrated in the sketch after this list.)
  • Explanation. Estimating (providing mathematical proof in uncertainty) and arguing (providing logical proof in uncertainty) are required to provide an explanation of evidence. Inferential strategies require the description of multiple hypotheses (explanations), the confidence in each one, and the rationale for justifying a decision. Problem-solving descriptions may include the explanation of explicit knowledge via technical portrayals (e.g., graphical representations) and tacit knowledge via narrative (e.g., dialogue and story).
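The contrast between inductive discovery and deductive detection can be made concrete with a toy example. Everything below (the event stream, the trafficking-signature template, and the repetition threshold) is invented for illustration.

```python
from collections import Counter

# Toy event stream: (actor, action) pairs.
events = [("shipA", "port_call"), ("shipA", "transfer"),
          ("shipB", "port_call"), ("shipA", "port_call"),
          ("shipA", "transfer"), ("shipC", "loiter")]

# Detection (deductive): match evidence against a previously known
# target template.
TEMPLATE = {"port_call", "transfer"}          # invented signature

def detect(actor: str) -> bool:
    seen = {action for who, action in events if who == actor}
    return TEMPLATE <= seen                   # all template elements observed

# Discovery (inductive): surface previously unknown repeated patterns
# rather than testing a known hypothesis.
def discover(min_count: int = 2) -> list:
    return [pair for pair, n in Counter(events).items() if n >= min_count]

print(detect("shipA"))    # True: the evidence fits the known template
print(discover())         # repeated (actor, action) pairs worth a look
```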

To perform organizational (or collaborative) problem solving in each of these areas, the individuals in the organization must share an awareness of the reasoning and solution strategies embraced by the organization. In each of these areas, organizational training, formal methodologies, and procedural templates provide a framework to guide the thinking process across a group. These methodologies also form the basis for structuring collaboration tools to guide the way teams organize shared knowledge, structure problems, and proceed from problem to solution.

Collaborative intelligence analysis is a difficult form of collaborative problem solving, where the solution often requires the analyst to overcome the efforts of a subject of study (the intelligence target) to both deny the analyst information and provide deliberately deceptive information.

4.4.1 Critical, Structured Thinking

Critical, or structured, thinking is rooted in methods of careful, disciplined reasoning, following the legacy of the philosophers and theologians who diligently articulated their bases for reasoning from premises to conclusions.

Critical thinking is based on the application of a systematic method to guide the collection of evidence, reason from evidence to argument, and apply objective decision-making judgment (Table 4.10). The systematic methodology assures completeness (breadth of consideration), objectivity (freedom from bias in sources, evidence, reasoning, or judgment), consistency (repeatability over a wide range of problems), and rationality (consistency with logic). In addition, critical thinking methodology requires the explicit articulation of the reasoning process to allow review and critique by others. These common methodologies form the basis for academic research, peer review, and reporting—as well as for intelligence analysis and synthesis.

Structured methods that move from problem to solution provide a helpful common framework for groups to communicate knowledge and coordinate the process. The TQM initiatives of the 1980s expanded the practice of teaching entire organizations common strategies for articulating problems and moving toward solutions. A number of general problem-solving strategies have been developed and applied to intelligence applications, for example (moving from general to specific):

  • Kepner-Tregoe™. This general problem-solving methodology, introduced in the classic text The Rational Manager [38] and taught to generations of managers in seminars, has been applied to management, engineering, and intelligence-problem domains. This method carefully distinguishes problem analysis (specifying deviations from expectations, hypothesizing causes, and testing for probable causes) from decision analysis (establishing and classifying decision objectives, generating alternative decisions, and comparing consequences).
  • Multiattribute utility analysis (MAUA). This structured approach to decision analysis quantifies a utility function, or value of all decision factors, as a weighted sum of contributing factors for each alternative decision. Relative weights of each factor sum to unity, so the overall utility scale (for each decision option) ranges from 0 to 1 (see the worked sketch after this list).
  • Analysis of competing hypotheses (ACH). This methodology develops and organizes alternative hypotheses to explain evidence, evaluates the evidence across multiple criteria, and provides the rationale for reasoning to the best explanation.
  • Lockwood analytic method for prediction (LAMP). This methodology exhaustively structures and scores alternative-futures hypotheses for complicated intelligence problems with many factors. The process enumerates, then compares, the relative likelihood of courses of action (COAs) for all actors (e.g., military or national leaders) and their possible outcomes. The method provides a structure to consider all COAs while attempting to minimize the exponential growth of hypotheses.
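As a worked illustration of the MAUA arithmetic, the sketch below scores two hypothetical decision options. The factor weights and scores are invented; real applications would elicit them from analysts and decision makers.

```python
# MAUA in miniature: the utility of each option is the weighted sum of
# its factor scores; weights sum to 1, so utility falls in [0, 1].
weights = {"performance": 0.5, "cost": 0.3, "risk": 0.2}
assert abs(sum(weights.values()) - 1.0) < 1e-9

options = {
    "option_A": {"performance": 0.9, "cost": 0.4, "risk": 0.6},
    "option_B": {"performance": 0.6, "cost": 0.8, "risk": 0.9},
}

def utility(scores: dict) -> float:
    return sum(weights[factor] * scores[factor] for factor in weights)

for name, scores in options.items():
    print(f"{name}: {utility(scores):.2f}")   # A: 0.69, B: 0.72

best = max(options, key=lambda o: utility(options[o]))
print("highest-utility option:", best)        # option_B
```

Note that the ranking can be sensitive to the weights; varying them is a simple way to test how robust the preferred option is.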

A basic problem-solving process flow (Figure 4.7), which encompasses the essence of each of these approaches, includes five fundamental component stages, sketched in skeleton form after the list:

  1. Problem assessment. The problem must be clearly defined, and criteria for decision making must be established at the beginning. The problem, as well as boundary conditions, constraints, and the format of the desired solution, is articulated.
  2. Problem decomposition. The problem is broken into components by modeling the “situation” or context of the problem. If the problem is a corporate need to understand and respond to the research and development initiatives of a particular foreign company, for example, a model of that organization’s financial operations, facilities, organizational structure (and research and development staffing), and products is constructed. The decomposition (or analysis) of the problem into the need for different kinds of information necessarily requires the composition (or synthesis) of the model. This models the situation of the problem and provides the basis for gathering more data to refine the problem (refine the need for data) and better understand the context.
  3. Alternative analysis. In concert with problem decomposition, alternative solutions (hypotheses) are conceived and synthesized. Conjecture and creativity are necessary in this stage; the set of solutions is categorized to describe the range of the solution space. In the example of the problem of understanding a foreign company’s research and development, these solutions must include alternative explanations of what the competitor might be doing and what business responses should be taken to respond if there is a competitive threat. The competitor analyst must explore the wide range of feasible solutions and associated constraints and variables; alternatives may range from no research and development investment to significant but hidden investment in a new, breakthrough product development. Each solution (or explanation, in this case) must be compared to the model, and this process may cause the model to be expanded in scope, refined, and further decomposed into smaller components.
  4. Decision analysis. In this stage the alternative solutions are applied to the model of the situation to determine the consequences of each solution. In the foreign firm example, consequences are related to both the likelihood of the hypothesis being true and the consequences of actions taken. The decision factors, defined in the first stage, are applied to evaluate the performance, effectiveness, cost, and risk associated with each solution. This stage also reveals the sensitivity of the decision factors to the situation model (and its uncertainties) and may send the analyst back to gather more information about the situation to refine the model [42].
  5. Solution evaluation. The final stage, judgment, compares the outcome of decision analysis with the decision criteria established at the onset. Here, the uncertainties (about the problem, the model of the situation, and the effects of the alternative solutions) are considered and other subjective (tacit) factors are weighed to arrive at a solution decision.

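The iteration built into this flow can be seen in the skeleton below. Every function body is a placeholder; the loop shows only how decision analysis can send the analyst back to refine the situation model before a solution is judged.

```python
# Skeleton of the five-stage flow; each stage body is a stand-in.
def assess(problem):     return {"criteria": "decision criteria", "constraints": []}
def decompose(problem):  return {"model": "situation model"}
def alternatives(model): return ["H1", "H2"]          # candidate solutions

def decision_analysis(model, alts):
    # Returns (consequences, need_more_data); sensitivity to model
    # uncertainty may send the analyst back to refine the model.
    return {"H1": 0.4, "H2": 0.7}, False

def evaluate(consequences, criteria):
    # Judgment: compare outcomes against the criteria set at the start.
    return max(consequences, key=consequences.get)

def solve(problem):
    setup = assess(problem)          # stage 1: problem assessment
    model = decompose(problem)       # stage 2: problem decomposition
    while True:
        alts = alternatives(model)   # stage 3: alternative analysis
        consequences, need_more = decision_analysis(model, alts)  # stage 4
        if not need_more:
            break
        model = decompose(problem)   # gather more data, refine the model
    return evaluate(consequences, setup["criteria"])  # stage 5

print(solve("foreign R&D question"))   # 'H2'
```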
This approach underlies traditional analytic intelligence methods because it provides structure, rationale, and formality. But most recognize that the solid tacit knowledge of an experienced analyst provides a complementary basis—an unspoken confidence that underlies final decisions—that is recognized but not articulated as explicitly as the quantified decision data.

4.4.2 Systems Thinking

In contrast with the reductionism of a purely analytic approach, a more holistic approach to understanding complex processes acknowledges the inability to fully decompose many complex problems into a finite and complete set of linear processes and relationships. This approach, referred to as holism, seeks to understand high-level patterns of behavior in dynamic or complex adaptive systems that transcend complete decomposition (e.g., weather, social organizations, or large-scale economies and ecologies). Rather than being analytic, systems approaches tend to be synthetic—that is, these approaches construct explanations at the aggregate or large scale and compare them to real-world systems under study.

Complexity refers to the property of real-world systems that prohibits any formalism from representing or completely describing their behavior. In contrast with simple systems that may be fully described by some formalism (i.e., mathematical equations that fully describe a real-world process to some level of satisfaction for the problem at hand), complex systems lack a fully descriptive formalism that captures all of their properties, especially global behavior.

Systems of subatomic scale, human organizational systems, and large-scale economies, where very large numbers of independent causes interact in large numbers of interactive ways, are characterized by an inability to model global behavior—and a frustrating inability to predict future behavior.

The expert’s judgment is based not on an external and explicit decomposition of the problem, but on an internal matching of high-level patterns of prior experience with the current situation. The experienced detective as well as the experienced analyst applies such high-level comparisons of current behaviors with previous tacit (unarticulated, even unconscious) patterns gained through experience.

It is important to recognize that analytic and systems-thinking approaches, though in contrast, are usually applied in a complementary fashion by individuals and teams alike. The analytic approach provides the structure, record keeping, and method for articulating decision rationale, while the systems approach guides the framing of the problem, provides the synoptic perspective for exploring alternatives, and provides confidence in judgments.

4.4.3 Naturalistic Decision Making

In times of crisis, when time does not permit careful methodologies, humans apply more naturalistic methods that, like the systems-thinking mode, rely entirely on the only basis available—prior experience.

“Uncontrolled, [information] will control you and your staffs … and lengthen your decision-cycle times.” (Insightfully, the admiral also noted, “You can only manage from your Desktop Computer … you cannot lead from it.”)

While long-term intelligence analysis applies the systematic, critical analytic approaches described earlier, crisis intelligence analysis may be forced to the more naturalistic methods, where tacit experience (via informal on-the-job learning, simulation, or formal learning) and confidence are critical.

4.5 Tradecraft: The Best Practices of Intelligence

The capture and sharing of best practices was developed and matured throughout the 1980s, when the total quality movement institutionalized the processes of benchmarking and recording lessons learned. Two forms of best-practice and lessons-learned capture and recording are often cited:

  1. Explicit process descriptions. The most direct approach is to model and describe the best collection, analytic, and distribution processes, their performance properties, and applications. These may be indexed, linked, and organized for subsequent reuse by a team posed with similar problems and by instructors preparing formal curricula.
  2. Tacit learning histories. The methods of storytelling, described earlier in this chapter, are also applied to develop a “jointly told” story by the team developing the best practice. Once formulated, such learning histories provide powerful tools for oral, interactive exchanges within the organization; the written form of the exchanges may be linked to the best-practice description to provide context.

While explicit best-practices databases explain the how, learning histories provide the context to explain the why of particular processes.

The CIA maintains a product evaluation staff to evaluate intelligence products, learn from the large range of products produced (estimates, forecasts, technical assessments, threat assessments, and warnings), and maintain a database of best practices for training and distribution to the analytic staff.

4.6 Summary

In this chapter, we have introduced the fundamental cultural qualities, in terms of virtues and disciplines, that characterize the knowledge-based intelligence organization. The emphasis has necessarily been on organizational disciplines—learning, collaborating, and problem solving—that provide the agility to deliver accurate and timely intelligence products in a changing environment. These virtues and disciplines require support: technology to support collaboration over time and space, to support the capture and retrieval of explicit knowledge, to enable the exchange of tacit knowledge, and to support the cognitive processes in analytic and holistic problem solving.

5

Principles of Intelligence Analysis and Synthesis

At the core of all knowledge creation are the seemingly mysterious reasoning processes that proceed from the known to the assertion of entirely new knowledge about the previously unknown. For the intelligence analyst, this is the process by which evidence [1], the data determined to be relevant to a problem, is used to infer knowledge about a subject of investigation—the intelligence target. The process must deal with evidence that is often inadequate, undersampled in time, ambiguous, and of questionable pedigree.

We refer to this knowledge-creating discipline as intelligence analysis and the practitioner as analyst. But analysis properly includes both the processes of analysis (breaking things down) and synthesis (building things up).

5.1 The Basis of Analysis and Synthesis

The process known as intelligence analysis employs both the functions of analysis and synthesis to produce intelligence products.

In a criminal investigation, this leads from a body of evidence, through feasible explanations, to an assembled case. In intelligence, the process leads from intelligence data, through alternative hypotheses, to an intelligence product. Along this trajectory, the problem solver moves forward and backward iteratively seeking a path that connects the known to the solution (that which was previously unknown).

Intelligence analysis-synthesis is deeply concerned with financial, political, economic, military, and many other evidential relationships that may not be causal but that provide understanding of the structure and behavior of human, organizational, physical, and financial entities.

Descriptions of the analysis-synthesis processes can be traced from its roots in philosophy and problem solving to applications in intelligence assessments.

Philosophers distinguish between propositions as analytic or synthetic based on the direction in which they are developed. Propositions in which the predicate (conclusion) is contained within the subject are called analytic because the predicate can be derived directly by logical reasoning forward from the subject; the subject is said to contain the solution (“all bachelors are unmarried” is a classic example). Synthetic propositions, on the other hand, have predicates and subjects that are independent. The synthetic proposition affirms a connection between otherwise independent concepts.

The empirical scientific method applies analysis and synthesis to develop and then to test hypotheses:

  • Observation. A phenomenon is observed and recorded as data.
  • Hypothesis creation. Based upon a thorough study of the data, a working hypothesis is created (by the inductive analysis process or by pure inspiration) to explain the observed phenomenon.
  • Experiment development. Based on the assumed hypothesis, the expected results (the consequences) of a test of the hypothesis are synthesized (by deduction).
  • Hypothesis testing. The experiment is performed to test the hypothesis against the data.
  • Hypothesis verification. When the consequences of the test are confirmed, the hypothesis is verified (as a theory or law, depending upon the degree of certainty).

The analyst iteratively applies analysis and synthesis to move forward from evidence and backward from hypothesis to explain the available data (evidence). In the process, the analyst identifies more data to be collected, critical missing data, and new hypotheses to be explored. This iterative analysis-synthesis process provides the necessary traceability from evidence to conclusion that will allow the results (and the rationale) to be explained with clarity and depth when completed.


5.2 The Reasoning Processes

Reasoning processes that analyze evidence and synthesize explanations perform inference (i.e., they create, manipulate, evaluate, modify, and assert belief). We can characterize the most fundamental inference processes by their process and products:

  • Process. The direction of the inference process refers to the way in which beliefs are asserted. The process may move from specific (or particular) beliefs toward more general beliefs, or from general beliefs to assert more specific beliefs.
  • Products. The certainty associated with an inference distinguishes two categories of results. The asserted beliefs that result from inference may be infallible (e.g., an analytic conclusion derived from infallible beliefs and infallible logic is certain) or fallible judgments (e.g., a synthesized judgment is asserted with a measure of uncertainty: “probably true,” “true with 0.95 probability,” or “more likely true than false”).


5.2.1 Deductive Reasoning

Deduction is the method of inference by which a conclusion is inferred by applying the rules of a logical system to manipulate statements of belief to form new logically consistent statements of belief. This form of inference is infallible, in that the conclusion (belief) must be as certain as the premise (belief). It is belief preserving in that conclusions reveal no more than that expressed in the original premises. Deduction can be expressed in a variety of syllogisms, including the more common forms of propositional logic.
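Deduction’s infallible, belief-preserving character can be checked mechanically. The sketch below enumerates every truth assignment to confirm the classic modus ponens form (if P then Q; P; therefore Q).

```python
from itertools import product

# Modus ponens: whenever the premises "P implies Q" and "P" both hold,
# the conclusion "Q" must hold -- in every possible truth assignment.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

valid = all(q                                   # conclusion holds...
            for p, q in product([True, False], repeat=2)
            if implies(p, q) and p)             # ...whenever premises hold
print(valid)   # True: the conclusion is as certain as the premises
```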

5.2.2 Inductive Reasoning

Induction is the method of inference by which a more general or more abstract belief is developed by observing a limited set of observations or instances.

Induction moves from specific beliefs about instances to general beliefs about larger and future populations of instances. It is a fallible means of inference.

The form of induction most commonly applied to extend belief from a sample of instances to a larger population is inductive generalization:

By this method, analysts extend the observations about a limited number of targets (e.g., observations of the money laundering tactics of several narcotics rings within a drug cartel) to a larger target population (e.g., the entire drug cartel).

Inductive prediction extends belief from a population to a specific future sample.

By this method, an analyst may use several observations of behavior (e.g., the repeated surveillance behavior of a foreign intelligence unit) to create a general detection template to be used to detect future surveillance activities by that or other such units. The induction presumes future behavior will follow past patterns.

In addition to these forms, induction can provide a means of analogical reasoning (induction on the basis of analogy or similarity) and inference to relate cause and effect. The basic scientific method applies the principles of induction to develop hypotheses and theories that can subsequently be tested by experimentation over a larger population or over future periods of time. The subject of induction is central to the challenge of developing automated systems that generalize and learn by inducing patterns and processes (rules).
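A minimal sketch of inductive generalization and its fallibility follows. The sample counts below are invented, and the rough 95% interval holds only under the assumption that the observed rings are representative of the cartel at large.

```python
import math

# Inductive generalization: extend a sample proportion to the larger
# population, with a rough 95% interval to flag the fallibility.
sample_size = 40      # narcotics rings observed (invented)
with_pattern = 31     # rings using the laundering tactic (invented)

p_hat = with_pattern / sample_size
stderr = math.sqrt(p_hat * (1 - p_hat) / sample_size)
low, high = p_hat - 1.96 * stderr, p_hat + 1.96 * stderr

print(f"sample rate: {p_hat:.2f}")
print(f"~95% interval for the cartel at large: [{low:.2f}, {high:.2f}]")
# The inference presumes the sampled rings are representative; the
# generalization fails if the sample, or the future, differs.
```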

Koestler uses the term bisociation to describe the process of viewing multiple explanations (or multiple associations) of the same data simultaneously. In the example in the figure, the data can be projected onto a common plane of discernment in which the data represents a simple curved line; projected onto an orthogonal plane, the data can explain a sinusoid. Though undersampled, as much intelligence data is, the sinusoid represents a new and novel explanation that may remain hidden if the analyst does not explore more than the common, immediate, or simple interpretation.

In a similar sense, the inductive discovery by an intelligence analyst (aha!) may take on many different forms, following the simple geometric metaphor. For example:

  • A subtle and unique correlation between the timing of communications (by traffic analysis) and money transfers of a trading firm may lead to the discovery of an organized crime operation.
  • A single anomalous measurement may reveal a pattern of denial and deception to cover the true activities at a manufacturing facility, in which many points of evidence are, in fact, deceptive data “fed” by the deceiver. Only a single piece of anomalous evidence (D5 in the figure) is the clue that reveals the existence of the true operations (a new plane in the figure). The discovery of this new plane will cause the analyst to search for additional supporting evidence to support the deception hypothesis.

Each frame of discernment (or plane in Koestler’s metaphor) is a framework for creating a single hypothesis or a family of multiple hypotheses to explain the evidence. The creative analyst is able to entertain multiple frames of discernment, alternately analyzing possible “fits” and constructing new explanations, exploring the many alternative explanations. This is Koestler’s constructive-destructive process of discovery.

Collaborative intelligence analysis (like collaborative scientific discovery) may produce a healthy environment for creative induction or an unhealthy competitive environment that stifles induction and objectivity. The goal of collaborative analysis is to allow alternative hypotheses to be conceived and objectively evaluated against the available evidence and to guide the tasking for evidence to confirm or disconfirm the alternatives.

5.2.3 Abductive Reasoning

Abduction is the informal or pragmatic mode of reasoning that describes how we “reason to the best explanation” in everyday life. It is the practical description of the interactive use of analysis and synthesis to arrive at a solution or explanation by creating and evaluating multiple hypotheses.

Unlike infallible deduction, abduction is fallible because it is subject to errors (there may be other hypotheses not considered or another hypothesis, however unlikely, may be correct). But unlike deduction, it has the ability to extend belief beyond the original premises. Peirce contended that this is the logic of discovery and is a formal model of the process that scientists apply all the time.

Consider a simple intelligence example that implements the basic abductive syllogism. Data has been collected on a foreign trading company, TraderCo, indicating that its reported financial performance is not consistent with (is less than) its level of operations. In addition, a number of its executives have subtle ties to organized crime figures.

The operations of the company can be explained by at least three hypotheses:

Hypothesis (H1)—TraderCo is a legitimate but poorly run business; its board is unaware of a few executives with unhealthy business contacts.

Hypothesis (H2)—TraderCo is a legitimate business with a naïve board that is unaware that several executives who gamble are using the business to pay off gambling debts to organized crime.

Hypothesis (H3)—TraderCo is an organized crime front operation that is trading in stolen goods and laundering money through the business, which reports a loss.

Hypothesis H3 best explains the evidence.

∴ Therefore, Accept Hypothesis H3 as the best explanation.

Of course, the critical stage of abduction left unexplained in this example is the judgment that H3 is the best explanation. The process requires criteria for ranking hypotheses, a method for judging which is best, and a method to assure that the set of candidate hypotheses covers all possible (or feasible) explanations.
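
One simple ranking criterion, sketched below under our own assumptions rather than prescribed by the text, scores each hypothesis by the evidence it explains minus the evidence it contradicts; with judgments that mirror the TraderCo example, H3 ranks highest:

```python
# Minimal sketch of ranking competing hypotheses by explanatory coverage.
# Evidence items and per-hypothesis judgments (+1 explains, -1 contradicts,
# 0 silent) are invented to mirror the TraderCo example.

evidence = {
    "underreported_revenue":  {"H1": +1, "H2": +1, "H3": +1},
    "executive_crime_ties":   {"H1": -1, "H2": +1, "H3": +1},
    "loss_despite_activity":  {"H1":  0, "H2":  0, "H3": +1},
}

def score(h):
    """Net explanatory coverage: explained items minus contradicted items."""
    return sum(judgments[h] for judgments in evidence.values())

ranked = sorted(["H1", "H2", "H3"], key=score, reverse=True)
print([(h, score(h)) for h in ranked])  # H3 scores highest here
```

A complete method would also weigh evidence credibility and verify that the hypothesis set covers all feasible explanations, as the text requires.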

 

5.2.3.1 Creating and Testing Hypotheses

Abduction introduces competition among multiple hypotheses, each an attempt to explain the available evidence. These alternative hypotheses can be compared, or competed, on the basis of how well they explain (or fit) the evidence. Furthermore, the created alternative hypotheses provide a means of identifying three categories of evidence important to explanation:

  • Positive evidence. This is evidence revealing the presence of an object or the occurrence of an event posited by a hypothesis.
  • Missing evidence. Some hypotheses may fit the available evidence, but the hypothesis “predicts” that additional evidence that should exist if the hypothesis were true is “missing.” Subsequent searches and testing for this evidence may confirm or disconfirm the hypothesis.
  • Negative evidence. Evidence of the nonoccurrence of an event (or the nonexistence of an object) may confirm a hypothesis that predicts that nonoccurrence.

5.2.3.2 Hypothesis Selection

Abduction also poses the issue of defining which hypothesis provides the best explanation of the evidence. The criteria for comparing hypotheses, at the most fundamental level, can be based on two principal approaches established by philosophers for evaluating truth propositions about objective reality [18]. The correspondence theory holds that a proposition p is true when “p corresponds to the facts.”

For the intelligence analyst this equates to “hypothesis h corresponds to the evidence”—it explains all of the pieces of evidence, with no expected evidence missing, and without having to leave out any contradictory evidence. The coherence theory of truth holds that a proposition’s truth consists of its fitting into a coherent system of propositions that create the hypothesis. Both concepts contribute to practical criteria for evaluating competing hypotheses.

5.3 The Integrated Reasoning Process

The analysis-synthesis process combines each of the fundamental modes of reasoning to accumulate, explore, decompose to fundamental elements, and then fit together evidence. The process also creates hypothesized explanations of the evidence and uses these hypotheses to search for more confirming or refuting elements of evidence to affirm or prune the hypotheses, respectively.

This process of proceeding from an evidentiary pool to detections, explanations, or discovery has been called evidence marshaling because the process seeks to marshal (assemble and organize) the evidence into a representation (a model) that:

  • Detects the presence of evidence that matches previously known premises (or patterns of data);
  • Explains underlying processes that gave rise to the evidence;
  • Discovers new patterns in the evidence—patterns of circumstances or behaviors not known before (learning).

The figure illustrates four basic paths that can proceed from the pool of evidence—our three fundamental inference modes and a fourth feedback path:

  1. Deduction. The path of deduction tests the evidence in the pool against previously known patterns (or templates) that represent hypotheses of activities that we seek to detect. When the evidence fits the hypothesis template, we declare a match. When the evidence fits multiple hypotheses simultaneously, the likelihood of each hypothesis (determined by the strength of evidence for each) is assessed and reported. (This likelihood may be computed probabilistically using Bayesian methods, where evidence uncertainty is quantified as a probability and prior probabilities of the hypotheses are known; a minimal sketch of such an update follows this list.)
  2. Retroduction. This feedback path, recognized and named by C.S. Peirce as yet another process of reasoning, occurs when the analyst conjectures (synthesizes) a new conceptual hypothesis (beyond the current frame of discernment) that causes a return to the evidence to seek evidence to match (or test) this new hypothesis. The insight Peirce provided is that in the testing of hypotheses, we are often inspired to realize new, different hypotheses that might also be tested. In the early implementation of reasoning systems, the forward path of deduction was often referred to as forward chaining, by which the system attempts to automatically fit data to previously stored hypothesis templates; the path of retroduction was referred to as backward chaining, where the system searched for data to match hypotheses queried by an inspired human operator.
  3. Abduction. The abduction process, like induction, creates explanatory hypotheses inspired by the pool of evidence and then, like deduction, attempts to fit items of evidence to each hypothesis to seek the best explanation. In this process, the candidate hypotheses are refined and new hypotheses are conjectured. The process leads to comparison and ranking of the hypotheses, and ultimately the best is chosen as the explanation. As a part of the abductive process, the analyst returns to the pool of evidence to seek support for these candidate explanations; this return path is called retroduction.
  4. Induction. The path of induction considers the entire pool of evidence to seek general statements (hypotheses) about the evidence. Not seeking point matches to small sets of evidence, the inductive path conjectures new and generalized explanations of clusters of similar evidence; these generalizations may be tested across the evidence to determine the breadth of applicability before being declared as a new discovery.
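
The parenthetical note on the deduction path can be made concrete with a minimal Bayesian update (our sketch; the priors and likelihoods are invented for illustration):

```python
# Minimal sketch of weighting hypotheses matched by deduction using Bayes' rule.

priors = {"H1": 0.6, "H2": 0.3, "H3": 0.1}       # P(H), known in advance
likelihood = {"H1": 0.2, "H2": 0.5, "H3": 0.9}   # P(E | H): fit of the evidence

# Posterior: P(H | E) = P(E | H) * P(H) / sum over all hypotheses.
unnormalized = {h: likelihood[h] * priors[h] for h in priors}
total = sum(unnormalized.values())
posterior = {h: round(v / total, 3) for h, v in unnormalized.items()}

print(posterior)  # {'H1': 0.333, 'H2': 0.417, 'H3': 0.25}
```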

5.4 Analysis and Synthesis As a Modeling Process

The fundamental reasoning processes are applied to a variety of practical analytic activities performed by the analyst:

  • Explanation and description. Find and link related data to explain entities and events in the real world.
  • Detection. Detect and identify the presence of entities and events based on known signatures. Detect potentially important deviations, including anomaly detection of changes relative to “normal” or “expected” state or change detection of changes or trends over time.
  • Discovery. Detect the presence of previously unknown patterns in data (signatures) that relate to entities and events.
  • Estimation. Estimate the current qualitative or quantitative state of an entity or event.
  • Prediction. Anticipate future events based on detection of known indicators; extrapolate current state forward, project the effects of linear factors forward, or simulate the effects of complex factors to synthesize possible future scenarios to reveal anticipated and unanticipated (emergent) futures.

In each of these cases, we can view the analysis-synthesis process as an evidence-decomposing and model-building process.

The objective of this process is to sort through and organize data (analyze) and then to assemble (synthesize), or marshal, related evidence to create a hypothesis—an instantiated model that represents one feasible representation of the intelligence subject (target). The model is used to marshal evidence, evaluate logical argumentation, and provide a tool for explanation of how the available evidence best fits the analyst’s conclusion. The model also serves to help the analyst understand what evidence is missing, what strong evidence supports the model, and where negative evidence might be expected. The terminology we use here can be clarified by the following distinctions:

  • A real intelligence target is abstracted and represented by models.
  • A model has descriptive and stated attributes or properties.
  • A particular instance of a model, populated with evidence-derived and conjectured properties, is a hypothesis.

A target may be described by multiple models, each with multiple instances (hypotheses). For example, if our target is the financial condition of a designated company, we might represent the financial condition with a single financial model in the form of a spreadsheet that enumerates many financial attributes. As data is collected, the model is populated with data elements, some reported publicly and others estimated. We might maintain three instances of the model (legitimate company, faltering legitimate company, and illicit front organization), each being a competing explanation (or hypothesis) of the incomplete evidence. These hypotheses help guide the analyst to identify the data required to refine, affirm, or discard existing hypotheses or to create new hypotheses.
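
A minimal sketch of this arrangement (ours; the attribute names and values are invented placeholders) maintains the single financial model as three competing instances:

```python
# Minimal sketch of one financial model maintained as three competing
# instances (hypotheses) for TraderCo. Attributes and values are invented.

financial_attributes = ["reported_revenue", "estimated_actual_revenue",
                        "reported_loss", "cash_transfers_offshore"]

# Each hypothesis is an instance of the same model, populated separately.
hypotheses = {
    "legitimate":           {a: None for a in financial_attributes},
    "faltering_legitimate": {a: None for a in financial_attributes},
    "illicit_front":        {a: None for a in financial_attributes},
}

# Populate instances as evidence arrives (publicly reported or estimated).
for h in hypotheses:
    hypotheses[h]["reported_loss"] = 1.2e6   # same public filing for all

# Unpopulated attributes identify the data needed to refine, affirm, or
# discard each hypothesis, or to create new ones.
gaps = [a for a, v in hypotheses["illicit_front"].items() if v is None]
print(gaps)
```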

Explicit model representations provide a tool for collaborative construction, marshaling of evidence, decomposition, and critical examination. Mental and explicit modeling are complementary tools of the analyst; judgment must be applied to balance the use of both.

Former U.S. National Intelligence Officer for Warning (1994–1996) Mary McCarthy has emphasized the importance of explicit modeling to analysis:

Rigorous analysis helps overcome mindset, keeps analysts who are immersed in a mountain of new information from raising the bar on what they would consider an alarming threat situation, and allows their minds to expand other possibilities. Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do [19].

 

The model is an abstract representation that serves two functions:

  1. Model as hypothesis. Based on partial data or conjecture alone, a model may be instantiated as a feasible proposition to be assessed, a hypothesis. In a homicide investigation, each conjecture for “who did it” is a hypothesis, and the associated model instance is a feasible explanation for “how they did it.” The model provides a framework around which data is assembled, a mechanism for examining feasibility, and a basis for exploring data to confirm or refute the hypothesis.
  2. Model as explanation. As evidence (relevant data that fits into the model) is assembled on the general model framework to form a hypothesis, different views of the model provide more robust explanations of that hypothesis. Narrative (story), timeline, organization relationships, resources, and other views may be derived from a common model.

The process of implementing data decomposition (analysis) and model construction-examination (synthesis) can be depicted in three process phases or spaces of operation (Figure 5.6):

  1. Data space. In this space, data (relevant and irrelevant, certain and ambiguous) are indexed and accumulated. Indexing by time (of collection and arrival), source, content topic, and other factors is performed to allow subsequent search and access across many dimensions.
  2. Argumentation space. The data is reviewed; selected elements of potentially relevant data (evidence) are correlated, grouped, and assembled into feasible categories of explanations, forming a set (structure) of high-level hypotheses to explain the observed data. This process applies exhaustive searches of the data space, accepting some data as relevant and discarding the rest. In this phase, patterns are discovered in the data even when not all of the data in a pattern is present; these partial patterns lead to the creation of hypotheses. Examination of the data may also lead to the creation of hypotheses by conjecture, even when no data yet supports them. The hypotheses are examined to determine what data would be required to reinforce or reject each; hypotheses are ranked in terms of likelihood and needed data (to reinforce or refute). The models are tested and various excursions are examined. This space is the court in which the case is made for each hypothesis, and the hypotheses are judged for completeness, sufficiency, and feasibility. This examination can lead to requests for additional data, refinements of the current hypotheses, and creation of new hypotheses.
  3. Explanation space. Different “views” of the hypothesis model provide explanations that articulate the hypothesis and relate the supporting evidence. The intelligence report can include a single model and explanation that best fits the data (when data is adequate to assert the single answer) or alternative competing models, as well as the supporting evidence for each and an assessment of the implications of each. Figure 5.6 illustrates several of the views often used: timelines of events, organization-relationship diagrams, annotated maps and imagery, and narrative story lines.

For a single target under investigation, we may create and consider (or entertain) several candidate hypotheses, each with a complete set of model views. If, for example, we are trying to determine the true operations of the foreign company introduced earlier, TraderCo, we may hold several hypotheses:

  1. H1—The company is a legal clothing distributor, as advertised.
  2. H2—The company is a legal clothing distributor, but company executives are diverting business funds for personal interests.
  3. H3—The company is a front operation to cover organized crime, where hypothesis 3 has two sub-hypotheses:
     • H31—The company is a front for drug trafficking.
     • H32—The company is a front for terrorism money laundering.

In this case, H1, H2, H31, and H32 are the four root hypotheses, and the analyst identifies the need to create an organizational model, an operations flow-process model, and a financial model for each of the four hypotheses—creating 4 × 3 = 12 models.

 

5.5 Intelligence Targets in Three Domains

We have noted that intelligence targets may be objects, events, or dynamic processes—or combinations of these. The development of information operations has brought a greater emphasis on intelligence targets that exist not only in the physical domain, but in the realms of information (e.g., networked computers and information processes) and human decision making.

Information operations (IO) are those actions taken to affect an adversary’s information and information systems, while defending one’s own information and information systems. The U.S. Joint Vision 2020 describes the Joint Chiefs of Staff view of the ultimate purpose of IO as “to facilitate and protect U.S. decision-making processes, and in a conflict, degrade those of an adversary”.

The JV2020 builds on the earlier JV2010 [26] and retains its fundamental operational concepts, with two significant refinements that emphasize IO. The first is the expansion of the vision to encompass the full range of operations (nontraditional, asymmetric, unconventional ops), while retaining warfighting as the primary focus. The second refinement moves information superiority concepts beyond technology solutions that deliver information to the concept of superiority in decision making. This means that IO will deliver increased information at all levels and increased choices for commanders. Conversely, it will also reduce information to adversary commanders and diminish their decision options. Core to these concepts and challenges is the notion that IO uniquely requires the coordination of intelligence, targeting, and security in three fundamental realms, or domains, of human activities.

 

These are likewise the three fundamental domains of intelligence targets, and each must be modeled:

  1. The physical domain encompasses the material world of mass and energy. Military facilities, vehicles, aircraft, and personnel make up the principal target objects of this domain. The orders of battle that measure military strength, for example, are determined by enumerating objects of the physical world.
  2. The abstract symbolic domain is the realm of information. Words, numbers, and graphics all encode and represent the physical world, storing and transmitting it in electronic formats, such as radio and TV signals, the Internet, and newsprint. This is the domain that is expanding at unprecedented rates, as global ideas, communications, and descriptions of the world are represented in it. The domain includes the cyberspace that has become the principal means by which humans shape their perception of the world; it interfaces the physical domain to the cognitive domain.
  3. The cognitive domain is the realm of human thought. This is the ultimate locus of all information flows. The individual and collective thoughts of government leaders and populations at large form this realm. Perceptions, conceptions, mental models, and decisions are formed in this cognitive realm. This is the ultimate target of our adversaries: the realm where uncertainties, fears, panic, and terror can coerce and influence our behavior.

Current IO concepts have appropriately emphasized the targeting of the second domain—especially electronic information systems and their information content. The expansion of networked information systems and the reliance on those systems has focused attention on network-centric forms of warfare. Ultimately, though, IO must move toward a focus on the full integration of the cognitive realm with the physical and symbolic realms to target the human mind.

Intelligence must understand and model the complete system or complex of the targets of IO: the interrelated systems of physical behavior, information perceived and exchanged, and the perception and mental states of decision makers.

Of importance to the intelligence analyst is the clear recognition that most intelligence targets exist in all three domains, and models must consider all three aspects.

The intelligence model of such an organization must include linked models of all three domains—to provide an understanding of how the organization perceives, decides, and communicates through a networked organization, as well as where the people and other physical objects are moving in the physical world. The concepts of detection, identification, and dynamic tracking of intelligence targets apply to objects, events, and processes in all three domains.

5.6 Summary

The analysis-synthesis process proceeds from intelligence analysis to operations analysis and then to policy analysis.

The knowledge-based intelligence enterprise requires the capture and explicit representation of such models to permit collaboration among these three disciplines to achieve the greatest effectiveness and sharing of intellectual capital.

6

The Practice of Intelligence Analysis and Synthesis

This chapter moves from high-level functional flow models toward the processes implemented by analysts.

A practical description of the process by one author summarizes the perspective of the intelligence user:

A typical intelligence production consists of all or part of three main elements: descriptions of the situation or event with an eye to identifying its essential characteristics; explanation of the causes of a development as well as its significance and implications; and the prediction of future developments. Each element contains one or both of these components: data, provided by knowledge and incoming information, and assessment, or judgment, which attempts to fill the gaps in the data.

Consumers expect description, explanation, and prediction; as we saw in the last chapter, the process that delivers such intelligence is based on evidence (data), assessment (analysis-synthesis), and judgment (decision).

6.1 Intelligence Consumer Expectations

The U.S. General Accounting Office (GAO) noted the need for greater clarity in the intelligence delivered in U.S. national intelligence estimates (NIEs) in a 1996 report, enumerating five specific standards for analysis, from the perspective of policymakers.

Based on a synthesis of the published views of current and former senior intelligence officials, the reports of three independent commissions, and a CIA publication that addressed the issue of national intelligence estimating, an objective NIE should meet the following standards [2]:

  • [G1]: quantify the certainty level of its key judgments by using percentages or bettors’ odds, where feasible, and avoid overstating the certainty of judgments (note: bettors’ odds state the chance as, for example, “one out of three”; a minimal conversion sketch follows this list);
  • [G2]: identify explicitly its assumptions and judgments;
  • [G3]: develop and explore alternative futures: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred;
  • [G4]: allow dissenting views on predictions or interpretations;
  • [G5]: note explicitly what the IC does not know when the information gaps could have significant consequences for the issues under consideration.
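
As an aside on standard G1, the following minimal sketch (ours) converts a probability into bettors’ odds with a small denominator:

```python
from fractions import Fraction

# Minimal sketch of expressing a judgment's certainty as bettors' odds (G1).

def bettors_odds(p, max_denominator=10):
    """Render probability p as 'about m out of n' with a small denominator."""
    f = Fraction(p).limit_denominator(max_denominator)
    return f"about {f.numerator} out of {f.denominator}"

print(bettors_odds(0.33))  # about 1 out of 3
print(bettors_odds(0.7))   # about 7 out of 10
```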

 

The Commission would urge that the [IC] adopt as a standard of its meth- odology that in addition to considering what they know, analysts consider as well what they know they don’t know about a program and set about fill- ing gaps in their knowledge by:

  • [R1] taking into account not only the output measures of a program, but the input measures of technology, expertise and personnel from both internal sources and as a result of foreign assistance. The type and rate of foreign assistance can be a key indicator of both the pace and objective of a program into which the IC otherwise has little insight.
  • [R2] comparing what takes place in one country with what is taking place in others, particularly among the emerging ballistic missile powers. While each may be pursuing a somewhat different development program, all of them are pursuing programs fundamentally different from those pursued by the US, Russia and even China. A more systematic use of comparative methodologies might help to fill the information gaps.
  • [R3] employing the technique of alternative hypotheses. This technique can help make sense of known events and serve as a way to identify and organize indicators relative to a program’s motivation, purpose, pace and direction. By hypothesizing alternative scenarios a more adequate set of indicators and collection priorities can be established. As the indicators begin to align with the known facts, the importance of the information gaps is reduced and the likely outcomes projected with greater confidence. The result is the possibility for earlier warning than if analysts wait for proof of a capability in the form of hard evidence of a test or a deployment. Hypothesis testing can provide a guide to what characteristics to pursue, and a cue to collection sensors as well.
  • [R4] explicitly tasking collection assets to gather information that would disprove a hypothesis or fill a particular gap in a list of indicators. This can prove a wasteful use of scarce assets if not done in a rigorous fashion. But moving from the highly ambiguous absence of evidence to the collection of specific evidence of absence can be as important as finding the actual evidence [3].


In short, intelligence consumers want more than estimates or judgments; they expect concise explanations of the evidence and reasoning processes behind judgments, with substantiation that multiple perspectives, hypotheses, and consequences have been objectively considered.

They expect a depth of analysis-synthesis that explicitly distinguishes assumptions, evidence, alternatives, and consequences—with a means of quantifying each contribution to the outcomes (judgments).

6.2 Analysis-Synthesis in the Intelligence Workflow

Analysis-synthesis is one process within the intelligence cycle… It represents a process that is practically implemented as a continuum rather than a cycle, with all phases being implemented concurrently and addressing a multitude of different intelligence problems or targets.

The stimulus-hypothesis-option-response (SHOR) model, described by Joseph Wohl in 1986, emphasizes the consideration of multiple perception hypotheses to explain sensed data and assess options for response.

The observe-orient-decide-act (OODA) loop, developed by Col. John Boyd, is a high-level abstraction of the military command and control loop that considers the human decision-making role and its dependence on observation and orientation—the process of placing the observations in a perceptual framework for decision making.

The tasking, processing, exploitation, dissemination (TPED) model used by U.S. technical collectors and processors [e.g., the U.S. National Reconnaissance Office (NRO), the National Imagery and Mapping Agency (NIMA), and the National Security Agency (NSA)] distinguishes between the processing elements of the national technical-means intelligence channels (SIGINT, IMINT, and MASINT) and the all-source analytic exploitation roles of the CIA and DIA.

The DoD Joint Directors of Laboratories (JDL) data fusion model is a more detailed technical model that considers the use of multiple sources to produce a common operating picture of individual objects, situations (the aggregate of objects and their behaviors), and the consequences or impact of those situations. The model includes a hierarchy of data correlation and combination processes at four levels (level 0: signal refinement; level 1: object refinement; level 2: situation refinement; level 3: impact refinement) and a corresponding feedback control process (level 4: process refinement) [10]. The JDL model is a functional representation that accommodates automated processes and human processes and provides detail within both the processing and analysis steps. The model is well suited to organize the structure of automated processing stages for technical sensors (e.g., imagery, signals, and radar).

  • Level 0: signal refinement automated processing correlates and combines raw signals (e.g., imagery pixels or radar signals intercepted from multiple locations) to detect objects and derive their location, dynamics, or identity.
  • Level 1: object refinement processing detects individual objects and correlates and combines these objects across multiple sources to further refine location, dynamics, or identity information.
  • Level 2: situation refinement analysis correlates and combines the detected objects across all sources within the background context to produce estimates of the situation—explaining the aggregate of static objects and their behaviors in context to derive an explanation of activities with estimated status, plans, and intents.
  • Level 3: impact refinement analysis estimates the consequences of alternative courses of action.
  • The level 4 process refinement flows are not shown in the figure, though all forward processing levels can provide inputs to refine the process to: focus collection or processing on high-value targets, refine processing parameters to filter unwanted content, adjust database indexing of intermediate data, or improve overall efficiency of the production process. The level 4 process effectively performs the KM business intelligence functions introduced in Section 3.7. (A minimal pipeline sketch of these levels follows this list.)
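
The following minimal sketch (ours; the function bodies are placeholders, only the level structure follows the text) arranges the JDL levels as a staged pipeline with level 4 feedback:

```python
# Minimal sketch of the JDL fusion levels as a staged pipeline.

def level0_signal_refinement(raw_signals):
    """Correlate raw signals (e.g., pixels, intercepts) into detections."""
    return [{"detection": s} for s in raw_signals]

def level1_object_refinement(detections):
    """Combine detections across sources into located, identified objects."""
    return [{"object": d, "location": None, "identity": None} for d in detections]

def level2_situation_refinement(objects, context=None):
    """Aggregate objects and behaviors in context into a situation estimate."""
    return {"situation": objects, "context": context}

def level3_impact_refinement(situation, courses_of_action):
    """Estimate consequences of alternative courses of action."""
    return {coa: "estimated impact" for coa in courses_of_action}

def level4_process_refinement(products):
    """Feedback: retask collection, retune processing, adjust indexing."""
    return {"tasking": "focus on high-value targets"}

# Forward flow, with level 4 feeding tasking adjustments back to the start.
detections = level0_signal_refinement(["sig_a", "sig_b"])
objects = level1_object_refinement(detections)
situation = level2_situation_refinement(objects)
impact = level3_impact_refinement(situation, ["coa_1", "coa_2"])
feedback = level4_process_refinement([situation, impact])
print(feedback)
```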

The analysis stage employs semiautomated detection and discovery tools to access the data in large databases produced by the processing stage. In general, the processing stage can be viewed as a factory of processors, while the analysis stage is a lower volume shop staffed by craftsmen—the analytic team.

6.3 Applying Automation

Automated processing has been widely applied to level 1 object detection (e.g., statistical pattern recognition) and to a lesser degree to level 2 situation recognition problems (e.g., symbolic artificial intelligence systems) for intelligence applications.

Viewing these dimensions as the number of nodes (causes) and the number of interactions (influencing the scale of effects) in a dynamic system, the problem space depicts the complexity of the situation being analyzed:

  • Causal diversity. The first dimension relates to the number of causal factors, or actors, that influence the situation behavior.
  • Scale of effects. The second dimension relates to the degree of interaction between actors, or the degree to which causal factors influence the behavior of the situation.

As both dimensions increase, the potential for nonlinear behavior increases, making it more difficult to model the situation being analyzed.

These problems include the detection of straightforward objects in images, content patterns in text, and matches to emitted signals. More difficult problems in this category include dynamic situations with moderately higher numbers of actors and scales of effects that require qualitative (propositional logic) or quantitative (statistical modeling) reasoning processes.

The most difficult category 3 problems, intractable to fully automated analysis, are those complex situations characterized by high numbers of actors with large-scale interactions that give rise to emergent behaviors.

6.4 The Role of the Human Analyst

The analyst applies tacit knowledge to search through explicit information to create tacit knowledge in the form of mental models and explicit intelligence reports for consumers.

The analysis process requires the analyst to integrate the cognitive reasoning and more emotional sensemaking processes with large bodies of explicit information to produce explicit intelligence products for consumers. To effectively train and equip analysts to perform this process, we must recognize and account for both the cognitive and the emotional components of comprehension. The complete process includes the automated workflow, which processes explicit information, and the analyst’s internal mental workflow, which integrates the cognitive and emotional modes.

 

Complementary logical and emotional frameworks are based on the current mental model of beliefs and feelings, and new information is compared to these frameworks; differences have the potential for affirming the model (agreement), learning and refining the model (acceptance and model adjustment), or rejecting the new information. Judgment integrates feelings about consequences and values (based on experience) with reasoned alternative consequences and courses of action that construct the meaning of the incoming stimulus. Decision making makes an intellectual-emotional commitment to the impact of the new information on the mental model (acceptance, affirmation, refinement, or rejection).

6.5 Addressing Cognitive Shortcomings

The intelligence analyst is not only confronted with ambiguous information about complex subjects, but is often placed under time pressures and expectations to deliver accurate, complete, and predictive intelligence. Consumer expectations often approach infallibility and omniscience.

In this situation, the analyst must be keenly aware of human cognitive shortcomings and take measures to mitigate their consequences. The natural limitations in cognition (perception, attention span, short- and long-term memory recall, and reasoning capacity) constrain the objectivity of our reasoning processes, producing errors in our analysis.

In “Combatting Mind-Set,” respected analyst Jack Davis noted that analysts must recognize the subtle influence of mindset—the cumulative mental model that distills analysts’ beliefs about a complex subject—and find “strategies that simultaneously harness its impressive energy and limit the potential damage.”

Davis recommends two complementary strategies:

  1. Enhancing mindset. Creating an explicit representation of the mindset—externalizing the mental model—allows broader collaboration, evaluation from multiple perspectives, and discovery of subtle biases.
  2. Insuring against mindset. Maintaining multiple explicit explanations, projections, and opportunity analyses provides insurance against single-point judgments and prepares the analyst to switch to alternatives when discontinuities occur.

Davis has also cautioned analysts to beware the paradox of expertise, a phenomenon that can distract attention from the purpose of an analysis. This error occurs when discordant evidence is present and subject experts are distracted into situation analysis (resolving the discordance to understand the subject situation) rather than addressing the impact of the discrepancy on the analysis. In such cases, the analyst must focus on providing value added by addressing what action alternatives exist and their consequences in cost-benefit terms.

Heuer emphasized the importance of supporting tools and techniques to overcome natural analytic limitations [20]: “Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.”

6.6 Marshaling Evidence and Structuring Argumentation

Instinctive analysis focuses on a single or limited range of alternatives, moves on a path to satisfy minimum needs (satisficing, or finding an acceptable explanation), and is performed implicitly using tacit mental models. Structured analysis follows the principles of critical thinking introduced in Chapter 4, organizing the problem to consider all reasonable alternatives, systematically and explicitly representing the alternative solutions to comprehensively analyze all factors.

6.6.1 Structuring Hypotheses

6.6.2 Marshaling Evidence and Structuring Arguments

There exist a number of classical approaches to representing hypotheses, marshaling evidence to them, and arguing for their validity. Argumentation structures propositions to move from premises to conclusions. Three perspectives or disciplines of thought have developed the most fundamental approaches to this process.

Each discipline has contributed methods to represent knowledge and to provide a structure for reasoning to infer from data to relevant evidence, through intermediate hypotheses to conclusion. The term knowledge representation refers to the structure used to represent data and show its relevance as evidence, the representation of rules of inference, and the asserted conclusions.

6.6.3 Structured Inferential Argumentation

Philosophers, rhetoricians, and lawyers have long sought accurate means of structuring and then communicating, in natural language, the lines of reasoning that lead from complicated sets of evidence to conclusions. Lawyers and intelligence analysts alike seek to provide a clear and compelling case for their conclusions, reasoned from a mass of evidence about a complex subject.

We first consider the classical forms of argumentation described as informal logic, whereby the argument connects premises to conclusions. The common forms include:

  1. Multiple premises, when taken together, lead to but one conclusion. For example: The radar at location A emits at a high pulse repetition frequency (PRF); when it emits at high PRF, it emits on frequency (F) → the radar at A is a fire control radar.
  2. Multiple premises independently lead to the same conclusion. For example: The radar at A is a fire control radar. Also, location A stores canisters for missiles. → A surface-to-air missile (SAM) battery must be at location A.
  3. A single premise leads to but one conclusion. For example: A SAM battery is located at A → the battery at A must be linked to a command and control (C2) center.
  4. A single premise can support more than one conclusion. For example: The SAM battery could be controlled by the C2 center at golf, or the SAM battery could be controlled by the C2 center at hotel.

 

These four basic forms may be combined to create complex sets of argumentation, as in the simple sequential combination and simplification of these examples:

  • The radar at A emits at a high PRF; when it emits at high PRF, it emits on frequency F, so it must be a fire control radar. Also, location A stores canisters for missiles, so there must be a SAM battery there. The battery at A must be linked to a C2 center. It could be controlled by the C2 centers at golf or at hotel.

The structure of this argument can be depicted as a chain of reasoning or argumentation (Figure 6.7) using the four premise structures in sequence.

Toulmin distinguished six elements of all arguments [24]:

  1. Data (D), at the beginning point of the argument, are the explicit elements of data (relevant data, or evidence) that are observed in the external world.
  2. Claim (C) is the assertion of the argument.
  3. Qualifier (Q) imposes any qualifications on the claim.
  4. Rebuttals (R) are any conditions that may refute the claim.
  5. Warrants (W) are the implicit propositions (rules, principles) that permit inference from data to claim.
  6. Backing (B) comprises assurances that provide authority and currency to the warrants.

Applying Toulmin’s argumentation scheme requires the analyst to distinguish each of the six elements of argument and to fit them into a standard structure of reasoning—see Figure 6.8(a)—which leads from datum (D) to claim (C). The scheme separates the domain-independent structure from the warrants and backing, which are dependent upon the field in which we are working (e.g., legal cases, logical arguments, or morals).

The general structure, described in natural language, then proceeds from datum (D) to claim (C) as follows (a minimal data-structure sketch appears after the example):

  • The datum (D), supported by the warrant (W), which is founded upon the backing (B), leads directly to the claim (C), qualified to the degree (Q), with the caveat that rebuttal (R) is present.
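
A minimal data-structure sketch of Toulmin’s six elements (ours; the field contents reuse the radar example from earlier in this section):

```python
from dataclasses import dataclass, field
from typing import List

# Minimal sketch of Toulmin's six argument elements as a data structure.

@dataclass
class ToulminArgument:
    data: str        # D: observed evidence
    warrant: str     # W: rule permitting inference from D to C
    backing: str     # B: authority for the warrant
    claim: str       # C: the assertion
    qualifier: str   # Q: degree of certainty
    rebuttals: List[str] = field(default_factory=list)  # R: defeating conditions

arg = ToulminArgument(
    data="Radar at A emits at high PRF on frequency F",
    warrant="High-PRF emitters on frequency F are fire control radars",
    backing="Signature database of known emitter types",
    claim="The radar at A is a fire control radar",
    qualifier="probably",
    rebuttals=["The emitter may be a decoy simulating a fire control radar"],
)

print(f"{arg.claim} ({arg.qualifier}); rebuttals recorded: {len(arg.rebuttals)}")
# Elements left empty (e.g., no recorded rebuttals) should be reported
# explicitly with the claim, per the scheme's requirement below.
```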

Such a structure requires the analyst to identify all of the key components of the argument—and to report explicitly if any components are missing (e.g., if rebuttals or contradicting evidence are nonexistent).

The benefit of this scheme is the potential for using automation to aid analysts in the acquisition, examination, and evaluation of natural-language arguments. As an organizing tool, the Toulmin scheme distinguishes data (evidence) from the warrants (the universal premises of logic) and their backing (the basis for those premises).

It must be noted that formal logicians have criticized Toulmin’s scheme for its lack of logical rigor and its inability to address probabilistic arguments. Yet it has contributed greater insight and formality to the development of structured natural-language argumentation.

6.6.4 Inferential Networks

Moving beyond Toulmin’s structure, we must consider the approaches to create network structures to represent complex chains of inferential reasoning.

The use of graph theory to describe complex arguments allows the analyst to represent two crucial aspects of an argument:

  • Argument structure. The directed graph represents evidence (E), events, or intermediate hypotheses inferred by the evidence (i), and the ultimate, or final, hypotheses (H) as graph nodes. The graph is directed because the lines connecting nodes include a single arrow indicating the single direction of inference. The lines move from a source element of evidence (E) through a series of inferences (i1, i2, i3, … in) toward a terminal hypothesis (H). The graph is acyclic because the directions of all arrows move from evidence, through intermediate inferences to hypothesis, but not back again: there are no closed-loop cycles.
  • Force of evidence and propagation. In common terms, we refer to the force, strength, or weight of evidence to describe the relative degree of contribution of evidence in support of an intermediate inference (in) or the ultimate hypothesis (H). The graph structure provides a means of describing supporting and refuting evidence and, if evidence is quantified (e.g., probabilities, fuzzy variables, or other belief functions), a means of propagating the accumulated weight of evidence in an argument. (A minimal propagation sketch follows this list.)
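
A minimal sketch (ours; the link strengths are invented, standing in for probabilities or belief functions) of a directed acyclic inference chain with multiplicative propagation of force:

```python
# Minimal sketch of a directed acyclic inference graph with a simple
# multiplicative propagation of evidential force.

# Edges: (source, target, strength of the inferential link in [0, 1]).
edges = [
    ("E1", "i1", 0.9),   # belief in the evidence (source credibility)
    ("i1", "i2", 0.8),   # relevance inference
    ("i2", "H",  0.7),   # final step toward the hypothesis
]

def force(node, graph):
    """Accumulate the force reaching `node` by chaining link strengths."""
    incoming = [(src, w) for src, dst, w in graph if dst == node]
    if not incoming:
        return 1.0       # evidence nodes anchor the chain
    return sum(force(src, graph) * w for src, w in incoming)

print(force("H", edges))  # 0.9 * 0.8 * 0.7 = 0.504
```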

Like a vector, evidence includes a direction (toward certain hypotheses) and a magnitude (the inferential force). The basic forms of argument can be structured to describe four basic categories of evidence combination (illustrated in Figure 6.9):

Direct. The most basic serial chain of inference moves from evidence (E) that the event E occurred, to the inference (i1) that E did in fact occur. This inference expresses belief in the evidence (i.e., belief in the veracity and objectivity of human testimony). The chain may go on serially to further inferences because of the belief in E.

Consonance. Multiple items of evidence may be synergistic, one item enhancing the force of another so that their joint contribution provides more inferential force than their individual contributions. Two items of evidence may provide collaborative consonance; the figure illustrates the case where ancillary evidence (E2) is favorable to the credibility of the source of evidence (E1), thereby increasing the force of E1. Evidence may also be convergent, when E1 and E2 provide evidence of the occurrence of different events but those events, together, favor a common subsequent inference. The enhancing contribution of (i1) to (i2) is indicated by the dashed arrow.

Redundant. Multiple items of evidence (E1, E2) that redundantly lead to a common inference (i1) can also diminish the force of each other in two basic ways. Corroborative redundancy occurs when two or more sources supply identical evidence of a common event inference (i1). If one source is perfectly credible, the redundant source does not contribute inferential force; if both have imperfect credibility, one may diminish the force of the other to avoid double counting the force of the redundant evidence. Cumulative redundancy occurs when multiple items of evidence (E1, E2), though inferring different intermediate hypotheses (i1, i2), respectively, lead to a common hypothesis (i3) farther up the reasoning chain. This redundant contribution to (i3), indicated by the dashed arrow, necessarily reduces the contribution of inferential force from E2.

Dissonance. Dissonant evidence may be contradictory, as when items of evidence E1 and E2 report, mutually exclusively, that the event E did occur and did not occur, respectively. Conflicting evidence, on the other hand, occurs when E1 and E2 report two separate events i1 and i2 (both of which may have occurred, but not jointly), but these events favor mutually exclusive hypotheses at i3.

The graph moves from bottom to top in the following sequence:

  1. Direct evidence at the bottom;
  2. Evidence credibility inferences are the first row above evidence, inferring the veracity, objectivity, and sensitivity of the source of evidence;
  3. Relevance inferences move from credibility-conditioned evidence through a chain of inferences toward the final hypothesis;
  4. The final hypothesis is at the top.

Some may wonder why such rigor is employed for such a simple argument. This relatively simple example illustrates the level of inferential detail required to formally model even the simplest of arguments. It also illustrates the real problem faced by the analyst in dealing with the nuances of redundant and conflicting evidence. Most significantly, the example illustrates the degree of care required to accurately represent arguments to permit machine-automated reasoning about all-source analytic problems.

We can see how this simple model demands the explicit representation of often-hidden assumptions, every item of evidence, the entire sequence of inferences, and the structure of relationships that leads to our conclusion that H1 is true.

Inferential networks provide a logical structure upon which quantified calculations may be performed to compute values of inferential force of evidence and the combined contribution of all evidence toward the final hypothesis.

6.7 Evaluating Competing Hypotheses

Heuer’s research indicated that the single most important technique to overcome cognitive shortcomings is to apply a systematic analytic process that allows objective comparison of alternative hypotheses.

“The simultaneous evaluation of multiple, competing hypotheses entails far greater cognitive strain than examining a single, most-likely hypothesis.”

Inferential networks are useful at the detail level, where evidence is rich; the analysis of competing hypotheses (ACH) approach is useful at higher levels of abstraction, where evidence is sparse. Networks are valuable for automated computation; ACH is valuable for collaborative analytic reasoning, presentation, and explanation. The ACH approach provides a methodology for the concurrent competition of multiple explanations, rather than a focus on the currently most plausible one.

The ACH structure approach described by Heuer uses a matrix to organize and describe the relationship between evidence and alternative hypotheses. The sequence of the analysis-synthesis process (Figure 6.11) includes:

  1. Hypothesis synthesis. A multidisciplinary team of analysts creates a set of feasible hypotheses, derived from imaginative consideration of all possibilities before constructing a complete set that merits detailed consideration.
  2. Evidence analysis. Available data is reviewed to locate relevant evidence and inferences that can be assigned to support or refute the hypotheses. Explicitly identify the assumptions regarding evidence and the arguments of inference. Following the processes described in the last chapter, list the evidence-argument pairs (or chains of inference) and identify, for each, the intrinsic value of its contribution and the potential for being subject to denial or deception (D&D).
  3. Matrix synthesis. Construct an ACH matrix that relates evidence- inference to the hypotheses defined in step 1.
  4. Matrix analysis. Assess the diagnosticity (the significance or diagnostic value of the contribution of each component of evidence and related inferences) of each evidence-inference component to each hypothesis. This process proceeds for each item of evidence-inference across the rows, considering how each item may contribute to each hypothesis. An entry may be supporting (consistent with), refuting (inconsistent with), or irrelevant (not applicable) to a hypothesis; a contribution notation (e.g., +, –, or N/A, respectively) is marked within the cell. Where possible, annotate the likelihood (or probability) that this evidence would be observed if the hypothesis is true. Note that the diagnostic significance of an item of evidence is reduced as it is consistent with multiple hypotheses; it has no diagnostic contribution when it supports, to any degree, all hypotheses.
  5. Matrix synthesis (refinement). Evidence assignments are refined, eliminating evidence and inferences that have no diagnostic value.
  6. Hypotheses analysis. The analyst now proceeds to evaluate the likelihood of each hypothesis by evaluating entries down the columns. The likelihood of each hypothesis is estimated by the characteristics of supporting and refuting evidence (as described in the last chapter). Inconsistencies and gaps in expected evidence provide a basis for retasking; a small but high-confidence item that refutes the preponderance of expected evidence may be a significant indicator of deception. The analyst also assesses the sensitivity of the likely hypothesis to contributing assumptions, evidence, and the inferences; this sensitivity must be reported with conclusions, along with the consequences if any of these items are in error. This process may lead to retasking of collectors to acquire more data to support or refute hypotheses and to reduce the sensitivity of a conclusion.
  7. Decision synthesis (judgment). Reporting the analytic judgment requires the description of all of the alternatives (not just the most likely), the assumptions, evidence, and inferential chains. The report must also describe the gaps, inconsistencies, and their consequences on judgments. The analyst must also specify what should be done to provide an update on the situation and what indicators might point to significant changes in current judgments. (A minimal matrix sketch follows this list.)
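
A minimal sketch of steps 3 through 6 (ours; the matrix entries are invented, and the scoring follows Heuer’s rule that the hypothesis with the fewest inconsistencies is the most likely):

```python
# Minimal sketch of an ACH matrix: evidence rows vs. hypothesis columns.
# Entries (+1 consistent, -1 inconsistent) are illustrative.

hypotheses = ["H1", "H2", "H3"]
matrix = {
    # evidence item:              H1  H2  H3
    "underreported_revenue":     [+1, +1, +1],   # consistent with all
    "executive_crime_ties":      [-1, +1, +1],
    "loss_despite_high_volume":  [-1, -1, +1],
}

# Step 5 (refinement): drop rows consistent with every hypothesis --
# they have no diagnostic value.
diagnostic = {e: row for e, row in matrix.items() if len(set(row)) > 1}

# Step 6 (hypotheses analysis): count inconsistencies down the columns.
inconsistencies = {
    h: sum(1 for row in diagnostic.values() if row[i] < 0)
    for i, h in enumerate(hypotheses)
}
print(inconsistencies)  # {'H1': 2, 'H2': 1, 'H3': 0} -> H3 most likely
```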

 

Notice that the ACH approach deliberately focuses the analyst’s attention on the contribution, significance, and relationships of evidence to hypotheses, rather than on building a case for any one hypothesis. The analytic emphasis is, first, on evidence and inference across the rows, before evaluating hypotheses, down the columns.

The stages of the structured analysis-synthesis methodology (Figure 6.12) are summarized in the following list:

  • Organize. A data mining tool (described in Chapter 8, Section 8.2.2) automatically clusters related data sets by identifying linkages (relationships) across the different data types. These linked clusters are visualized using link-clustering tools that allow the analyst to consider the meaningfulness of data links and discover potentially relevant relationships in the real world.
  • Conceptualize. The linked data is translated from the abstract relationship space to diagrams in the temporal and spatial domains to assess real-world implications of the relationships. These temporal and spatial models allow the analyst to conceptualize alternative explanations that will become working hypotheses. Analysis in the time domain considers the implications of sequence, frequency, and causality, while the spatial domain considers the relative location of entities and events.
  • Hypothesize. The analyst synthesizes hypotheses, structuring evidence and inferences into alternative arguments that can be evaluated using the method of alternative competing hypotheses. In the course of this process, the analyst may return to explore the database and linkage diagrams further to support or refute the working hypotheses.

 

6.8 Countering Denial and Deception

Because the targets of intelligence are usually high-value subjects (e.g., intentions, plans, personnel, weapons or products, facilities, or processes), they are generally protected by some level of secrecy to prevent observation. The means of providing this secrecy generally includes two components:

  1. Denial. Information about the existence, characteristics, or state of a target is denied to the observer by methods of concealment. Camouflage of military vehicles, emission control (EMCON), operational security (OPSEC), and encryption of e-mail messages are common examples of denial, also referred to as dissimulation (hiding the real).
  2. Deception. Deception is the insertion of false information, or simulation (showing the false), with the intent to distort the perception of the observer. The deception can include misdirection (m-type) deception, which reduces ambiguity and directs the observer to a simulation—away from the truth—or ambiguity (a-type) deception, which simulates effects to increase the observer’s ambiguity or uncertainty about the truth.

D&D methods are used independently or in concert to distract or disrupt the intelligence analyst, introducing distortions in the collection channels, ambiguity in the analytic process, errors in the resulting intelligence product, and misjudgment in decisions based on the product. Ultimately, this will lead to distrust of the intelligence product by the decision maker or consumer. Strategic D&D poses an increasing threat to the analyst, as an increasing number of channels for D&D are available to deceivers. Six distinct categories of strategic D&D operations have different target audiences, means of implementation, and objectives.

Propaganda or psychological operations (PSYOP) target a general population using several approaches. White propaganda openly acknowledges the source of the information; gray propaganda uses undeclared sources. Black propaganda purports to originate from a source other than its actual sponsor, protecting the true source (e.g., clandestine radio and Internet broadcasts, independent organizations, or agents of influence). Coordinated white, gray, and black propaganda efforts were strategically conducted by the Soviet Union throughout the Cold War as active measures of disinformation.

Leadership deception targets leadership or intelligence consumers, attempting to bypass the intelligence process by appealing directly to the intelligence consumer via other channels. Commercial news channels, untrustworthy diplomatic channels, suborned media, and personal relationships can be exploited to deliver deception messages to leadership (before intelligence can offer D&D cautions) in an effort to establish mindsets in decision makers.

Intelligence deception specifically targets intelligence collectors (technical sensors, communications interceptors, and humans) and subsequently analysts by combining denial of the target data and by introducing false data to disrupt, distract, or deceive the collection or analysis processes (or both processes). The objective is to direct the attention of the sensor or the analyst away from a correct knowledge of a specific target.

Denial operations by means of OPSEC seek to deny access to true intentions and capabilities by minimizing the signatures of entities and activities.

Two primary categories of countermeasures for intelligence deception must be orchestrated to counter either the simple deception of a parlor magician or the complex intelligence deception program of a rogue nation-state. Both collection and analysis measures are required to provide the careful observation and critical thinking necessary to avoid deception. Improvements in collection can provide broader and more accurate coverage, even limited penetration of some covers.

The problem of mitigating intelligence surprise, therefore, must be addressed by considering both large numbers of models or hypotheses (analysis) and large sets of data (collection, storage, and analysis).

In his classic treatise, Stratagem, Barton Whaley exhaustively studied over 100 historical D&D efforts and concluded, “Indeed, this is the general finding of my study—that is, the deceiver is almost always successful regardless of the sophistication of his victim in the same art. On the face of it, this seems an intolerable conclusion, one offending common sense. Yet it is the irrefutable conclusion of historical evidence.”

 

The components of a rigorous counter-D&D methodology, then, include the estimate of the adversary's D&D plan as an intelligence subject (target) and the analysis of specific D&D hypotheses as alternatives. Incorporating this process within the ACH process described earlier amounts to ensuring that reasonable and feasible D&D hypotheses (for which there may be no evidence to induce a hypothesis) are explicitly considered as alternatives.

The methodology then adds two active searches for evidence to support, refute, or refine the D&D hypotheses [44]:

  1. Reconstructive inference. This deductive process seeks to detect the presence of spurious signals (Harris calls these sprignals) that are indicators of D&D—the faint evidence predicted by conjectured D&D plans. Such sprignals can be strong evidence confirming hypothesis A (the simulation), weak contradictory evidence of hypothesis C (leakage from the adversary's dissimulation effort), or missing evidence that should be present if hypothesis A were true.
  2. Incongruity testing. This process searches for inconsistencies in the data and inductively generates alternative explanations that attribute the incongruities to D&D (i.e., D&D explains the incongruity of evidence for more than one reality in simultaneous existence).

These processes should be a part of any rigorous alternative hypothesis process, developing evidence for potential D&D hypotheses while refining the estimate of the adversaries’ D&D intents, plans, and capabilities. The processes also focus attention on special collection tasking to support, refute, or refine current D&D hypotheses being entertained.
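To make this concrete, the following minimal sketch shows how an explicit D&D hypothesis and its expected sprignals can be carried alongside conventional hypotheses in an ACH-style matrix. The hypotheses, evidence items, and scoring scheme are invented for illustration; a real matrix would be far richer.

```python
# Minimal ACH-style matrix extended with an explicit D&D hypothesis.
# All hypotheses, evidence items, and scores are hypothetical.

hypotheses = ["H1: genuine buildup", "H2: routine exercise", "H3: D&D simulation"]

# Each evidence item maps to consistency scores per hypothesis:
# +1 consistent, 0 neutral, -1 inconsistent.
evidence = {
    "imagery shows new vehicles":         [+1,  0, +1],  # a simulation would also produce this
    "no matching logistics traffic":      [-1,  0, +1],  # missing evidence -- a possible sprignal
    "incongruent unit insignia in photo": [-1, -1, +1],  # incongruity attributable to D&D
}

# Score each hypothesis by accumulating inconsistencies
# (ACH weighs disconfirming evidence most heavily).
for i, h in enumerate(hypotheses):
    inconsistency = sum(min(scores[i], 0) for scores in evidence.values())
    print(f"{h}: inconsistency score {inconsistency}")
```

Note how the D&D hypothesis (H3) survives evidence that appears to disconfirm the others; that asymmetry is exactly what the reconstructive inference and incongruity testing searches are designed to probe.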

Summary

Central to the intelligence cycle, analysis-synthesis requires the integration of human skills and automation to provide description, explanation, and prediction with explicit and quantified judgments that include alternatives, missing evidence, and dissenting views carefully explained. The challenge of discovering the hidden, forecasting the future, and warning of the unexpected cannot be performed with infallibility, yet expectations remain high for the analytic community.

The practical implementation of collaborative analysis-synthesis requires a range of tools to coordinate the process within the larger intelligence cycle, augment the analytic team with reasoning and sensemaking support, overcome human cognitive shortcomings, and counter adversarial D&D.

 

7

Knowledge Internalization and Externalization

The process of conducting knowledge transactions between humans and computing machines occurs at the intersection between tacit and explicit knowledge, between human reasoning and sensemaking, and the explicit computation of automation. The processes of externalization (tacit-to-explicit transactions) and internalization (explicit-to-tacit transactions) of knowledge, however, are not just interfaces between humans and machines; more properly, the intersection is between human thought, symbolic representations of thought, and the observed world.

7.1 Externalization and Internalization in the Intelligence Workflow

The knowledge-creating spiral described in Chapter 3 introduced the four phases of knowledge creation.

Externalization

Following social interactions with collaborating analysts, an analyst begins to explicitly frame the problem. The process includes the decomposition of the intelligence problem into component parts (as described in Section 2.2) and explicit articulation of essential elements of information required to solve the problem. The tacit-to-explicit transfer includes the explicit listing of these essential elements of information needed, candidate sources of data, the creation of searches for relevant SMEs, and the initiation of queries for relevant knowledge within current holdings and collected all-source data. The primary tools to interact with all-source holdings are query and retrieval tools that search and retrieve information for assessment of relevance by the analyst.

Combination

This explicit-explicit transfer process correlates and combines the collected data in two ways:

  1. Interactive analytic tools. The analyst uses a wide variety of analytic tools to compare and combine data elements to identify relationships and marshal evidence against hypotheses.
  2. Automated data fusion and mining services. Automated data combination services also process high-volume data to bring detections of known patterns and discoveries of “interesting” patterns to the attention of the analyst.

Internalization

The analyst integrates the results of combination in two domains: externally, hypotheses (explicit models and simulations) and decision models (like the alternative competing hypothesis decision model introduced in the last chapter) are formed to explicitly structure the rationale between hypotheses; internally, the analyst develops tacit experience with the structured evidence, hypotheses, and decision alternatives.

Services in the data tier capture incoming data from processing pipelines (e.g., imagery and signals producers), reporting sources (news services, intelligence reporting sources), and open Internet sources being monitored. Content appropriate for immediate processing and production, such as news alerts, indications and warning events, and critical change data, is routed to the operational storage for immediate processing. All data are indexed, transformed, and loaded into the long-term data warehouse or into specialized data stores (e.g., imagery, video, or technical databases). The intelligence services tier includes six basic service categories:

  1. Operational processing. Information filtered for near-real-time criticality is processed to extract and tag content, correlate and combine with related content, and provide updates to operational watch officers. This path applies the automated processes of data fusion and data mining to provide near-real-time indicators, tracks, metrics, and situation summaries.
  2. Indexing, query, and retrieval. Analysts use these services to access the cumulating holdings by both automated subscriptions for topics of interest to be pushed to the user upon receipt and interactive query and retrieval of holdings.
  3. Cognitive (analytic) services. The analysis-synthesis and decision-making processes described in Chapters 5 and 6 are supported by cognitive services (thinking-support tools).
  4. Collaboration services. These services, described in Chapter 4, allow synchronous and asynchronous collaboration between analytic team members.
  5. Digital production services. Analyst-generated and automatically created dynamic products are produced and distributed to consumers based on their specified preferences.
  6. Workflow management. The workflow is managed across all tiers to monitor the flow from data to product, to monitor resource utilization, to assess satisfaction of current priority intelligence requirements, and to manage collaborating workgroups.

7.2 Storage, Query, and Retrieval Services

At the center of the enterprise is the knowledge base, which stores explicit knowledge and provides the means to access that knowledge to create new knowledge.

7.2.1 Data Storage

Intelligence organizations receive a continuous stream of data from their own tasked technical sensors and human sources, as well as from tasked collections of data from open sources. One example might be Web spiders that are tasked to monitor Internet sites for new content (e.g., foreign news services), then to collect, analyze, and index the data for storage. The storage issues posed by the continual collection of high-volume data are numerous:

Diversity. All-source intelligence systems require large numbers of independent data stores for imagery, text, video, geospatial, and special technical data types. These data types are served by an equally high number of specialized applications (e.g., image and geospatial analysis and signal analysis).

Legacy. Storage system designers are confronted with the integration of existing (legacy) and new storage systems; this requires the integration of diverse logical and physical data types.

Federated retrieval and analysis. The analyst needs retrieval, application, and analysis capabilities that span across the entire storage system.

7.2.2 Information Retrieval

Information retrieval (IR) is formally defined as “… [the] actions, methods and procedures for recovering stored data to provide information on a given subject” [2]. Two approaches to query and retrieve stored data or text are required in most intelligence applications:

  1. Data query and retrieval is performed on structured data stored in relational database applications. Imagery, signals, and MASINT data are generally structured and stored in structured formats that employ structured query language (SQL) and SQL extensions for a wide variety of databases (e.g., Access, IBM DB2 and Informix, Microsoft SQL Server, Oracle, and Sybase). SQL allows the user to retrieve data by context (e.g., by location in data tables, such as date of occurrence) or by content (e.g., retrieve all records with a defined set of values).
  2. Text query and retrieval is performed on both structured and unstructured text in multiple languages by a variety of natural language search engines to locate text containing specific words, phrases, or general concepts within a specified context.
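The context/content distinction can be made concrete with a minimal sketch using Python's built-in sqlite3 module. The observation schema and values are hypothetical; a production system would use one of the enterprise databases named above.

```python
import sqlite3

# In-memory example database of observation records (schema is illustrative).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE observations (id INTEGER, sensor TEXT, ts TEXT, value REAL)")
con.executemany(
    "INSERT INTO observations VALUES (?, ?, ?, ?)",
    [(1, "IMINT", "2003-05-01", 0.82), (2, "SIGINT", "2003-05-02", 0.41)],
)

# Retrieval by context: records selected by their position in the table's
# dimensions, here the date of occurrence.
by_context = con.execute(
    "SELECT * FROM observations WHERE ts >= ?", ("2003-05-02",)
).fetchall()

# Retrieval by content: records whose fields match a defined set of values.
by_content = con.execute(
    "SELECT * FROM observations WHERE sensor = ? AND value > ?", ("IMINT", 0.5)
).fetchall()

print(by_context, by_content)
```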

Data query methods are employed within the technical data processing pipelines (IMINT, SIGINT, and MASINT). The results of these analyses are then described by analysts in structured or unstructured text in an analytic database for subsequent retrieval by text query methods.

Moldovan and Harabagiu have defined a five-level taxonomy of Q&A systems (Table 7.1) that range from the common keyword search engine that searches for relevant content (class 1) to reasoning systems that solve complex natural language problems (class 5) [3]. Each level requires increasing scope of knowledge, depth of linguistic understanding, and sophistication of reasoning to translate relevant knowledge to an answer or solution.

 

The first two levels of current search capabilities locate and return relevant content based on keywords (content) or the relationships between clusters of words in the text (concept).

While class 1 capabilities only match and return content that matches the query, class 2 capabilities integrate the relevant data into a simple response to the question.

Class 3 capabilities require the retrieval of relevant knowledge and reasoning about that knowledge to deduce answers to queries, even when the specific answer is not explicitly stated in the knowledge base. This capability requires the ability to both reason from general knowledge to specific answers and provide rationale for those answers to the user.

Class 4 and 5 capabilities represent advanced capabilities, which require robust knowledge bases that contain sophisticated knowledge representation (assertions and axioms) and reasoning (mathematical calculation, logical inference, and temporal reasoning).

7.3 Cognitive (Analytic Tool) Services

Cognitive services support the analyst in the process of interactively analyzing data, synthesizing hypotheses, and making decisions (choosing among alternatives). These interactive services support the analysis-synthesis activities described in Chapters 5 and 6. Alternatively called thinking tools, analytics, knowledge discovery, or analytic tools, these services enable the human to transform and view data, create and model hypotheses, and compare alternative hypotheses and consequences of decisions.

  • Exploration tools allow the analyst to interact with raw or processed multimedia (text, numerical data, imagery, video, or audio) to locate and organize content relevant to an intelligence problem. These tools provide the ability to search and navigate large volumes of source data; they also provide automated taxonomies of clustered data and summaries of individual documents. The information retrieval functions described in the last subsection are within this category. The product of exploration is generally a relevant set of data/text organized and metadata tagged for subsequent analysis. The analyst may drill down to detail from the lists and summaries to view the full content of all items identified as relevant.
  • Reasoning tools support the analyst in the process of correlating, comparing, and combining data across all of the relevant sources. These tools support a wide variety of specific intelligence target analyses:
    • Temporal analysis. This is the creation of timelines of events, dynamic relationships, event sequences, and temporal transactions (e.g., electronic, financial, or communication).
    • Link analysis. This involves automated exploration of relationships among large numbers of different types of objects (entities and events).
    • Spatial analysis. This is the registration and layering of 3D data sets and creation of 3D static and dynamic models from all-source evidence. These capabilities are often met by commercial geospatial information system and computer-aided design (CAD) software.
    • Functional analysis. This is the analysis of processes and expected observables (e.g., manufacturing, business, and military operations; social networks and organizational analysis; and traffic analysis).

These tools aid the analyst in five key analytic tasks:

  1. Correlation: detection and structuring of relationships or linkages between different entities or events in time, space, function, or interaction; association of different reports or content related to a common entity or event;
  2. Combination: logical, functional, or mathematical joining of related evidence to synthesize a structured argument, process, or quantitative estimate;
  3. Anomaly detection: detection of differences between the expected (or modeled) and observed characteristics of a target;
  4. Change detection: detection of changes in a target over time—the changes may include spectral, spatial, or other phenomenological changes;
  5. Construction: synthesis of a model or simulation of entities or events and their interactions based upon evidence and conjecture.
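As an illustration of the correlation and link-analysis tasks above, the following minimal sketch builds a small association graph and finds an indirect linkage between two entities. The entities, relationships, and the choice of the networkx library are illustrative assumptions, not a prescribed toolset.

```python
import networkx as nx

# Hypothetical entities and observed transactions (people, phones, accounts).
g = nx.Graph()
g.add_edge("Person A", "Phone 555-0001", kind="uses")
g.add_edge("Person B", "Phone 555-0001", kind="uses")       # shared phone links A and B
g.add_edge("Person B", "Account X", kind="transfers to")
g.add_edge("Person C", "Account X", kind="transfers to")    # shared account links B and C

# Link analysis: surface an indirect association between entities of interest.
path = nx.shortest_path(g, "Person A", "Person C")
print(" -> ".join(path))
# Person A -> Phone 555-0001 -> Person B -> Account X -> Person C
```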

Sensemaking tools support the exploration, evaluation, and refinement of alternative hypotheses and explanations of the data. Argumentation structuring, modeling, and simulation tools in this category allow analysts to be immersed in their hypotheses and share explicit representations with other collaborators. This immersion process allows the analytic team to create shared meaning as they experience the alternative explanations.

Decision support (judgment) tools assist analytic decision making by explicitly estimating and comparing the consequences and relative merits of alternative decisions.

These tools include models and simulations that permit the analyst to create and evaluate alternative courses of action (COAs) and weigh the decision alternatives against objective decision criteria. Decision support systems (DSSs) apply the principles of probability to express uncertainty and decision theory to create and assess attributes of decision alternatives and quantify the relative utility of alternatives. Normative, or decision-analytic, DSSs aid the analyst in structuring the decision problem and in computing the many factors that lead from alternatives to quantifiable attributes and resulting utilities. These tools often relate attributes to utility by influence diagrams and compute utilities (and associated uncertainties) using Bayes networks.
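A minimal sketch of this decision-analytic computation, reduced to a simple expected-utility comparison: the COAs, probabilities, and utilities below are invented for illustration, and a real DSS would derive them from influence diagrams and Bayes networks rather than hard-coded values.

```python
# Expected-utility comparison of two hypothetical courses of action (COAs).
# Probabilities and utilities are illustrative stand-ins for analyst judgments.

coas = {
    "COA 1: interdict shipment":    {"success": (0.7, 100), "failure": (0.3, -40)},
    "COA 2: continue surveillance": {"success": (0.9, 30),  "failure": (0.1, -5)},
}

for name, outcomes in coas.items():
    # Expected utility: sum of (probability * utility) over all outcomes.
    eu = sum(p * u for p, u in outcomes.values())
    print(f"{name}: expected utility = {eu:.1f}")
```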

The tools progressively move from data as the object of analysis (for exploration) to clusters of related information, to hypotheses, and finally on to decisions, or analytic judgments.

Intelligence workflow management software can provide a means to organize the process by providing the following functions:

  • Requirements and progress tracking: maintains a list of current intelligence requirements, monitors tasking to meet the requirements, links evidence and hypotheses to those requirements, tracks progress toward meeting requirements, and audits results;
  • Relevant data linking: maintains an ontology of subjects relevant to the intelligence requirements and their relationships and maintains a database of all relevant data (evidence);
  • Collaboration directory: automatically locates and updates a directory of relevant subject matter experts as the problem topic develops.

In this example, an intelligence consumer has requested specific intelligence on a drug cartel named “Zehga” to support counter-drug activities in a foreign country. The sequence of one analyst's use of tools in the example includes:

  1. The process begins with synchronous collaboration with other analysts to discuss the intelligence target (Zehga) and the intelligence requirements to understand the cartel organization structure, operations, and finances. The analyst creates a peer-to-peer collaborative workspace that contains requirements, essential elements of information (EEIs) needed, current intelligence, and a directory of team members before inviting additional counter-drug subject matter experts to the shared space.
  2. The analyst opens a workflow management tool to record requirements, key concepts and keywords, and team members; the analyst will link results to the tool to track progress in delivering finished intelligence. The tool is also used to request special tasking from technical collectors (e.g., wiretaps) and field offices.
  3. Once the problem has been externalized in terms of requirements and EEIs needed, the sources and databases to be searched are selected (e.g., country cables, COMINT, and foreign news feeds and archives). Key concepts and keywords are entered into IR tools; these tools search current holdings and external sources, retrieving relevant multimedia content. The analyst also sets up monitor parameters to continually check certain sources (e.g., field office cables and foreign news sites) for changes or detections of relevant topics; when detected, the analyst will be alerted to the availability of new information.
  4. The IR tools also create a taxonomy of the collected data sets, structuring the catch into five major categories: Zehga organization (personnel), events, finances, locations, and activities. The taxonomy breaks each category into subcategories of clusters of related content. Documents located in open-source foreign news reports are translated into English, and all documents are summarized into 55-word abstracts.
  5. The analyst views the taxonomy and drills down to summaries, then views the full content of the items most critical to the investigation. Selected items (or hyperlinks) are saved to the shared knowledge base as a local repository relevant to the investigation.
  6. The retrieved catch is analyzed with text mining tools that discover and list the multidimensional associations (linkages or relationships) between entities (people, phone numbers, bank account numbers, and addresses) and events (meetings, deliveries, and crimes).
  7. The linked lists are displayed on a link-analysis tool to allow the analyst to manipulate and view the complex web of relationships between people, communications, finances, and the time sequence of activities. From these network visuals, the analyst begins discovering the Zehga organizational structure, relationships to other drug cartels and financial institutions, and the timeline of explosive growth of the cartel's influence.
  8. The analyst internalizes these discoveries by synthesizing a Zehga organization structure and associated financial model, filling in the gaps with conjectures that result in three competing hypotheses: a centralized model, a federated model, and a loose network model. These models are created using a standard financial spreadsheet and a network relationship visualization tool. The process of creating these hypotheses causes the analyst to frequently return to the knowledge base to review retrieved data, to issue refined queries to fill in the gaps, and to further review the results of link analyses. The model synthesis process causes the analyst to internalize impressions of confidence, uncertainty, and ambiguity in the evidence, and the implications of potential missing or negative evidence. Here, the analyst ponders the potential for denial and deception tactics and the expected subtle “sprignals” that might appear in the data.
  9. An ACH matrix is created to compare the accrued evidence and argumentation structures supporting each of the competing models. At any time, this matrix and the associated organizational-financial models summarize the status of the intelligence process; these may be posted on the collaboration space and used to identify progress on the workflow management tool.
  10. The analyst further internalizes the situation by applying a decision support tool to consider the consequences or implications of each model on counter-drug policy courses of action relative to the Zehga cartel.
  11. Once the analyst has reached a level of confidence to make objective analytic judgments about hypotheses, results can be digitally published to the requesting consumers and to the collaborative workgroup to begin socialization—and another cycle to further refine the results. (The next section describes the digital publication process.)

 

Commercial tool suites such as Wincite's eWincite, Wisdom Builder's Wisdombuilder, and Cipher's Knowledge.Works similarly integrate text-based tools to support competitive intelligence analysis.

Tacit capture and collaborative filtering monitor the activities of all users on the network and use statistical clustering methods to identify the emergent clusters of interest that indicate communities of common practice. Such filtering could identify and alert analysts to other analysts converging on a common suspect from other directions (e.g., money laundering and drug trafficking).

7.4 Intelligence Production, Dissemination, and Portals

The externalization-to-internalization workflow results in the production of digital intelligence content suitable for socialization (collaboration) across users and consumers. This production and dissemination of intelligence from KM enterprises has transitioned from static, hardcopy reports to dynamically linked digital softcopy products presented on portals.

Digital production processes employ content technologies that index, structure, and integrate fragmented components of content into deliverable products. In the intelligence context, content includes:

  1. Structured numerical data (imagery, relational database queries) and text [e.g., extensible markup language (XML)-formatted documents] as well as unstructured information (e.g., audio, video, text, and HTML content from external sources);
  2. Internally or externally created information;
  3. Formally created information (e.g., cables, reports, and imagery or signals analyses) as well as informal or ad hoc information (e.g., e-mail and collaboration exchanges);
  4. Static or active (e.g., dynamic video or even interactive applets) content.

The key to dynamic assembly is the creation and translation of all content to a form that is understood by the KM system. While most intelligence data is transactional and structured (e.g., imagery, signals, and MASINT), intelligence and open-source documents are unstructured. The volume of open-source content available on the Internet and of closed-source intelligence content grows exponentially, yet the content remains largely unstructured.

Content technology provides the capability to transform all sources to a common structure for dynamic integration and personalized publication. XML offers a method of embedding content descriptions by tagging each component with descriptive information that allows automated assembly and distribution of multimedia content.

Intelligence standards being developed include an intelligence community markup language (ICML) specification for intelligence reporting and metadata standards for security, specifying digital signatures (XML-DSig), security/encryption (XML-Sec), key management (XML-KMS), and information security marking (XML-ISM) [12]. Such tagging makes the content interoperable; it can be reused and automatically integrated in numerous ways:

  • Numerical data may be correlated and combined.
  • Text may be assembled into a complete report (e.g., target abstract, targetpart1, targetpart2, …, related targets, most recent photo, threat summary, assessment).
  • Various formats may be constructed from a single collection of contents to suit unique consumer needs (e.g., portal target summary format, personal digital assistant format, or pilot’s cockpit target folder format).

A document object model (DOM) tree can be created from the integrated result to transform it into a variety of formats (e.g., HTML or PDF) for digital publication.
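The following minimal sketch, using Python's standard xml.etree.ElementTree module, illustrates the idea of component-level tagging; the element names are hypothetical and do not follow any actual ICML or XML-ISM schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical XML-tagged intelligence report fragment; tag names are
# illustrative, not an actual intelligence community schema.
report = ET.Element("report", {"classification": "UNCLASSIFIED-EXAMPLE"})
meta = ET.SubElement(report, "metadata")
ET.SubElement(meta, "source").text = "open-source news feed"
ET.SubElement(meta, "collected").text = "2003-05-02T14:00Z"
body = ET.SubElement(report, "targetSummary")
body.text = "Observed activity at facility consistent with prior reporting."

# Because every component is tagged, it can be reassembled or transformed
# downstream (e.g., serialized for a portal page or a PDF pipeline).
print(ET.tostring(report, encoding="unicode"))
```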

The analysis and single-source publishing architecture adopted by the U.S. Navy Command 21 K-Web (Figure 7.7) illustrates a highly automated digital production process for intelligence and command applications [14]. The production workflow in the figure includes the processing, analysis, and dissemination steps of the intelligence cycle:

  1. Content collection and creation (processing and analysis). Both quantitative technical data and unstructured text are received, and content is extracted and tagged for subsequent processing. This process is applied to legacy data (e.g., IMINT and SIGINT reports), structured intelligence message traffic, and unstructured sources (e.g., news reports and intelligence e-mail). Domain experts may support the process by creating metadata in a predefined XML metadata format to append to audio, video, or other nontext sources. Metadata includes source, pedigree, time of collection, and format information. New content created by analysts is entered in standard XML DTD templates.
  2. Content applications. XML-tagged content is entered in the data mart, where data applications recognize, correlate, consolidate, and summarize content across the incoming components. A correlation agent may, for example, correlate all content relative to a new event or entity and pass the content on to a consolidation agent to index the components for subsequent integration into an event or target report. The data (and text) fusion and mining functions described in the next chapter are performed here.
  3. Content management-product creation (production). Product templates dictate the aggregation of content into standard intelligence products: warnings, current intelligence, situation updates, and target status. These composite XML-tagged products are returned to the data mart.
  4. Content publication and distribution. Intelligence products are personalized in terms of both style (presentation formats) and distribution (to users with an interest in the product). Users may explicitly define their areas of interests, or the automated system may monitor user activities (through queries, collaborative discussion topics, or folder names maintained) to implicitly estimate areas of interest to create a user’s personal profile. Presentation agents choose from the style library and user profiles to create distribution lists for content to be delivered via e-mail, pushed to users’ custom portals, or stored in the data mart for subsequent retrieval. The process of content syndication applies an information and content exchange (ICE) standard to allow a single product to be delivered in multiple styles and to provide automatic content update across all users.

The user’s single entry point is a personalized portal (or Web portal) that provides an organized entry into the information available on the intelligence enterprise.

7.5 Human-Machine Information Transactions and Interfaces

In all of the services and tools described in the previous sections, the intelligence analyst interacts with explicitly collected data, applying his or her own tacit knowledge about the domain of interest to create estimates, descriptions, explanations, and predictions based on collected data. This interaction between the analyst and KM systems requires efficient interfaces to conduct the transaction between the analyst and machine.

7.5.1 Information Visualization

Edward Tufte introduced his widely read text Envisioning Information with the prescient observation that, “Even though we navigate daily through a perceptual world of three dimensions and reason occasionally about higher dimensional arenas with mathematical ease, the world portrayed on our information displays is caught up in the two-dimensionality of the flatlands of paper and video screen.” Indeed, intelligence organizations are continually seeking technologies that will allow analysts to escape from this flatland.

The essence of visualization is to provide multidimensional information to the analyst in a form that allows immediate understanding by this visual form of thinking.

A wide range of visualization methods are employed in analysis (Table 7.6) to allow the user to:

  • Perceive patterns and rapidly grasp the essence of large complex (multi-dimensional) information spaces, then navigate or rapidly browse through the space to explore its structure and contents;
  • Manipulate the information and visual dimensions to identify clusters of associated data, patterns of linkages and relationships, trends (temporal behavior), and outlying data;
  • Combine the information by registering, mathematically or logically joining (fusing), or overlaying.

 

7.5.2 Analyst-Agent Interaction

Intelligent software agents tailored to support knowledge workers are being developed to provide autonomous automated support in the information retrieval and exploration tasks introduced throughout this chapter. These collaborative information agents, operating in multiagent networks, provide the potential to amplify the analyst's exploration of large bodies of data, as they search, organize, structure, and reason about findings before reporting results. Information agents are being developed to perform a wide variety of functions, as an autonomous collaborating community under the direction of a human analyst, including:

  • Personal information agents (PIMs) coordinate an analyst’s searches and organize bookmarks to relevant information; like a team of librarians, the PIMs collect, filter, and recommend relevant materials for the analyst.
  • Brokering agents mediate the flow of information between users and sources (databases, external sources, collection processors); they can also act as sentinels to monitor sources and alert users to changes or the availability of new information.
  • Planning agents accept requirements and create plans to coordinate agents and task resources to meet user goals.
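A minimal sketch of the brokering/sentinel pattern just described: the SentinelAgent class, its polling interface, and the feed are hypothetical constructions for illustration, not an actual agent framework.

```python
# A minimal sentinel-style brokering agent: it monitors a source and alerts
# subscribers when new content matches their interest profile.

class SentinelAgent:
    def __init__(self, source):
        self.source = source          # callable returning new items
        self.subscriptions = []       # (keyword, callback) pairs

    def subscribe(self, keyword, callback):
        self.subscriptions.append((keyword.lower(), callback))

    def poll(self):
        # Check the monitored source and alert any interested subscriber.
        for item in self.source():
            for keyword, callback in self.subscriptions:
                if keyword in item.lower():
                    callback(item)

def feed():
    # Stand-in for a monitored source (e.g., a news or cable stream).
    return ["Cartel finances report updated", "Weather bulletin"]

agent = SentinelAgent(feed)
agent.subscribe("cartel", lambda item: print("ALERT:", item))
agent.poll()
```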

Interactive agents also offer the promise of a means of interaction with the analyst that emulates face-to-face conversation and will ultimately allow information agents to collaborate as (near) peers with individuals and teams of human analysts. These interactive agents (or avatars) will track the analyst (or analytic team) activities and needs to conduct dialogue with the analysts—in terms of the semantic concepts familiar to the topic of interest—to contribute the following kinds of functions:

  • Agent conversationalists that carry on dialogue to provide high-bandwidth interactions that include multimodal input from the analyst (e.g., spoken natural language, keyboard entries, and gestures and gaze) and multimodal replies (e.g., text, speech, and graphics). Such conversationalists will increase “discussions” about concepts, relevant data, and possible hypotheses [23].
  • Agent observers that monitor analyst activity, attention, intention, and task progress to converse about suggested alternatives, potentials for denial and deception, or warnings that the analyst's actions imply that cognitive shortcomings (discussed in Chapter 6) may be influencing the analysis process.
  • Agent contributors that will enter into collaborative discussions to interject alternatives, suggestions, or relevant data.

The integration of collaborating information agents and information visualization technologies holds the promise of more efficient means of helping analysts find and focus on relevant information, but these technologies require greater maturity to manage uncertainty, dynamically adapt to the changing analytic context, and understand the analyst's intentions.

7.6 Summary

The analytic workflow requires a constant interaction between the cognitive and visual-perceptive processes in the analyst’s mind and the explicit representations of knowledge in the intelligence enterprise.

 

8

Explicit Knowledge Capture and Combination

In the last chapter, we introduced analytic tools that allow the intelligence analyst to interactively correlate, compare, and combine numerical data and text to discover clusters and relationships among events and entities within large databases. These interactive combination tools are considered to be goal-driven processes: the analyst is driven by a goal to seek solutions within the database, and the reasoning process is interactive, with the analyst and machine in a common reasoning loop. This chapter focuses on the largely automated combination processes that tend to be data driven: as data continuously arrives from intelligence sources, the incoming data drives a largely automated process that continually detects, identifies, and tracks emerging events of interest to the user. These parallel goal-driven and data-driven processes were depicted as complementary combination processes in the last chapter.

In all cases, the combination processes help sources to cross-cue each other, locate and identify target events and entities, detect anomalies and changes, and track dynamic targets.

8.1 Explicit Capture, Representation, and Automated Reasoning

The term combination introduced by Nonaka and Takeuchi in the knowledge-creation spiral is an abstraction to describe the many functions that are performed to create knowledge, such as correlation, association, reasoning, inference, and decision (judgment). This process requires the explicit representation of knowledge; in the intelligence application this includes knowledge about the world (e.g., incoming source information), knowledge of the intelligence domain (e.g., characteristics of specific weapons of mass destruction and their production and deployment processes), and the more general procedural knowledge about reasoning.

 

The DARPA Rapid Knowledge Formation (RKF) project and its predecessor, the High-Performance Knowledge Base project, represent ambitious research aimed at providing a robust explicit knowledge capture, representation, and combination (reasoning) capability targeted toward the intelligence analysis application [1]. The projects focused on developing the tools to create and manage shared, reusable knowledge bases on specific intelligence domains (e.g., biological weapons subjects); the goal is to enable creation of over one million axioms of knowledge per year by collaborating teams of domain experts. Such a knowledge base requires a computational ontology—an explicit specification that defines a shared conceptualization of reality that can be used across all processes.

The challenge is to encode knowledge through the instantiation and assembly of generic knowledge components that can be readily entered and understood by domain experts (appropriate semantics) and provide sufficient coverage to encompass an expert level of understanding of the domain. The knowledge base must have fundamental knowledge of entities (things that are), events (things that happen), states (descriptions of stable event characteristics), and roles (entities in the context of events). It must also describe knowledge of the relationships between these components (e.g., cause, object of, part of, purpose of, or result of) and the properties (e.g., color, shape, capability, and speed) of each.
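A toy rendering of such generic knowledge components is sketched below using Python dataclasses; the component fields and example instances are invented for illustration and are far simpler than the RKF representations.

```python
from dataclasses import dataclass, field

# Toy knowledge components: entities (things that are) and events (things
# that happen), each carrying property and relationship slots. Field names
# and example values are illustrative only.

@dataclass
class Entity:
    name: str
    properties: dict = field(default_factory=dict)   # e.g., color, capability

@dataclass
class Event:
    name: str
    relations: dict = field(default_factory=dict)    # e.g., cause, object-of

lab = Entity("fermentation facility", {"capability": "biological production"})
shipment = Event("equipment delivery",
                 {"object-of": lab, "purpose-of": "capacity expansion"})

# Relationships let reasoning traverse from an event to entity properties.
print(shipment.relations["object-of"].properties["capability"])
```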

8.2 Automated Combination

Two primary categories of the combination processes can be distinguished, based on their approach to inference; each is essential to intelligence processing and analysis.

The inductive process of data mining discovers previously unrecognized patterns in data (new knowledge about characteristics of an unknown pattern class) by searching for patterns (relationships in data) that are in some sense “interesting.” The discovered candidates are usually presented to human users for analysis and validation before being adopted as general cases [3].

The deductive process, data fusion, detects the presence of previously known patterns in many sources of data (new knowledge about the existence of a known pattern in the data). This is performed by searching for specific pattern templates in sensor data streams or databases to detect entities, events, and complex situations comprised of interconnected entities and events.

The data sets used by these processes for knowledge creation are incomplete and dynamic, and they contain data contaminated by noise. These factors lead to the following process characteristics:

  • Pattern descriptions. Data mining seeks to induce general pattern descriptions (reference patterns, templates, or matched filters) to characterize the data understood so far, while data fusion applies those descriptions to detect the presence of patterns in new data.
  • Uncertainty in inferred knowledge. The data and reference patterns are uncertain, leading to uncertain beliefs or knowledge.
  • Dynamic state of inferred knowledge. The process is sequential and inferred knowledge is dynamic, being refined as new data arrives.
  • Use of domain knowledge. Knowledge about the domain (e.g., constraints, context) may be used in addition to collected raw intelligence data.

8.2.1 Data Fusion

Data fusion is an adaptive knowledge creation process in which diverse elements of similar or dissimilar observations (data) are aligned, correlated, and combined into organized and indexed sets (information), which are further assessed to model, understand, and explain (knowledge) the makeup and behavior of a domain under observation.

The data-fusion process seeks to explain an adversary (or uncooperative) intelligence target by abstracting the target and its observable phenomena into a causal or relationship model, then applying all-source observation to detect entities and events to estimate the properties of the model. Consider the levels of representation in the simple target-observer processes in Figure 8.2 [6]. The adversary leadership holds to goals and values that create motives; these motives, combined with beliefs (created by perception of the current situation), lead to intentions. These intentions lead to plans and responses to the current situation; from alternative plans, decisions are made that lead to commands for action. In a hierarchical military, or a networked terrorist organization, these commands flow to activities (communication, logistics, surveillance, and movements). Using the three domains of reality terminology introduced in Chapter 5, the motive-to-decision events occur in the adversary’s cognitive domain with no observable phenomena.

The data-fusion process uses observable evidence from both the symbolic and physical domains to infer the operations, communications, and even the intentions of the adversary.

The emerging concept of effects-based military operations (EBO) requires intelligence products that provide planners with the ability to model the various effects influencing a target that make up a complex system. Planners and operators require intelligence products that integrate models of the adversary physical infrastructure, information networks, and leadership and decision making.

The U.S. DoD JDL has established a formal process model of data fusion that decomposes the process into five basic levels of information-refining processes (based upon the concept of levels of information abstraction) [8]:

  • Level 0: Data (or subobject) refinement. This is the correlation across signals or data (e.g., pixels and pulses) to recognize components of an object and the correlation of those components to recognize an object.
  • Level 1: Object refinement. This is the correlation of all data to refine individual objects within the domain of observation. (The JDL model uses the term object to refer to real-world entities; however, the subject of interest may be a transient event in time as well.)
  • Level 2: Situation refinement. This is the correlation of all objects (information) within the domain to assess the current situation.
  • Level 3: Impact refinement. This is the correlation of the current situation with environmental and other constraints to project the meaning of the situation (knowledge). The meaning of the situation refers to its implications to the user: threat, opportunity, change, or consequence.
  • Level 4: Process refinement. This is the continual adaptation of the fusion process to optimize the delivery of knowledge against a defined mission objective.

 

8.2.1.1 Level 0: Data Refinement

Raw data from sensors may be calibrated, corrected for bias and gain errors, limited (thresholded), and filtered to remove systematic noise sources. Object detection may occur at this point—in individual sensors or across multiple sensors (so-called predetection fusion). The object-detection process forms observation reports that contain data elements such as observation identifier, time of measurement, measurement or decision data, decision, and uncertainty data.

8.2.1.2 Level 1: Object Refinement

Sensor and source reports are first aligned to a common spatial reference (e.g., a geographic coordinate system) and temporal reference (e.g., samples are propagated forward or backward to a common time). These alignment transformations place the observations in a common time-space coordinate system to allow an association process to determine which observations from different sensors have their source in a common object. The association process uses a quantitative correlation metric to measure the relative similarity between observations. The typical correlation metric, C, takes on the following form:

C = \sum_{i=1}^{n} w_i x_i

where
w_i = the weighting coefficient for attribute i;
x_i = the ith correlation attribute metric.

The correlation metric may be used to make a hard decision (an association), choosing the most likely pairings of observations, or a deferred decision, assigning more than one hypothetical pairing and deferring a hard decision until more observations arrive (a minimal sketch of this decision rule follows the list below). Once observations have been associated, two functions are performed on each associated set of measurements for a common object:

  1. Tracking. For dynamic targets (vehicles or aircraft), the current state of the object is correlated with previously known targets to determine if the observation can update an existing model (track). If the newly associated observations are determined to be updates to an existing track, the state estimation model for the track (e.g., a Kalman filter) is updated; otherwise, a new track is initiated.
  2. Identification. All associated observations are used to determine if the object identity can be classified to any one of several levels (e.g., friend/foe, vehicle class, vehicle type or model, or vehicle status or intent).
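As referenced above, here is a minimal sketch of the weighted association metric and the hard/deferred decision rule. The attributes, weights, similarity normalization, and thresholds are hypothetical stand-ins for values a fielded fusion system would calibrate.

```python
# Sketch of the weighted association metric C = sum(w_i * x_i) with a
# hard/deferred decision rule. All values here are illustrative.

def correlation(obs_a, obs_b, weights):
    # x_i: similarity of each attribute pair, crudely normalized to [0, 1].
    attrs = [1.0 - abs(obs_a[k] - obs_b[k]) / max(obs_a[k], obs_b[k], 1e-9)
             for k in weights]
    return sum(w * x for w, x in zip(weights.values(), attrs))

weights = {"range_km": 0.5, "speed_mps": 0.3, "heading_deg": 0.2}
obs_a = {"range_km": 10.2, "speed_mps": 21.0, "heading_deg": 92.0}
obs_b = {"range_km": 10.5, "speed_mps": 20.0, "heading_deg": 95.0}

c = correlation(obs_a, obs_b, weights)
if c > 0.95:
    print("hard decision: associate observations")       # likely same object
elif c > 0.80:
    print("deferred decision: hold as hypothetical pairing")
else:
    print("do not associate")
```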

8.2.1.3 Level 2: Situation Refinement

All objects placed in space-time context in an information base are analyzed to detect relationships based on spatial or temporal characteristics. Aggregate sets of objects are detected by their coordinated behavior, dependencies, proximity, common point of origin, or other characteristics using correlation metrics with high-level attributes (e.g., spatial geometries or coordinated behavior). The synoptic understanding of all objects, in their space-time context, provides situation knowledge, or awareness.

8.2.1.4 Level 3: Impact (or Threat) Refinement

Situation knowledge is used to model and analyze feasible future behaviors of objects, groups, and environmental constraints to determine future possible outcomes. These outcomes, when compared with user objectives, provide an assessment of the implications of the current situation. Consider, for example, a simple counter-terrorism intelligence situation that is analyzed in the sequence in Figure 8.4.

8.2.1.5 Level 4: Process Refinement

This process provides feedback control of the collection and processing activities to achieve the intelligence requirements. At the top level, current knowledge (about the situation) is compared to the intelligence requirements needed to achieve operational objectives to determine knowledge shortfalls. These shortfalls are parsed downward into information needs, then data needs, which direct the future acquisition of data (sensor management) and the control of internal processes. Processes may be refined, for example, to focus on certain areas of interest, object types, or groups. This forms the feedback loop of the data-fusion process.

8.2.2 Data Mining

Data mining is the process by which large sets of data (or text in the specific case of text mining) are cleansed and transformed into organized and indexed sets (information), which are then analyzed to discover hidden and implicit, but previously undefined, patterns. These patterns are reviewed by domain experts to determine if they reveal new understandings of the general structure and relationships (knowledge) in the data of a domain under observation.

The object of discovery is a pattern, which is defined as a statement in some language, L, that describes relationships in subset Fs of a set of data, F, such that:

  1. The statement holds with some certainty, c;
  2. The statement is simpler (in some sense) than the enumeration of all facts in Fs [13].

This is the inductive generalization process described in Chapter 5. Mined knowledge, then, is formally defined as a pattern that is interesting, according to some user-defined criterion, and certain to a user-defined measure of degree.

In application, the mining process is extended from explanations of limited data sets to more general applications (induction). In this example, a relationship pattern between three terrorist cells may be discovered that includes intercommunication, periodic travel to common cities, and correlated statements posted on the Internet.

Data mining (also called knowledge discovery) is distinguished from data fusion by two key characteristics:

  1. Inference method. Data fusion employs known patterns and deductive reasoning, while data mining searches for hidden patterns using inductive reasoning.
  2. Temporal perspective. The focus of data fusion is retrospective (determining current state based on past data), while data mining is both retrospective and prospective—focused on locating hidden patterns that may reveal predictive knowledge.

Beginning with sensors and sources, the data warehouse is populated with data, and successive functions move the data toward learned knowledge at the top. The sources, queries, and mining processes may be refined, similar to data fusion. The functional stages in the figure are described next.

  • Data warehouse. Data from many sources are collected and indexed in the warehouse, initially in the native format of the source. One of the chief issues facing many mining operations is the reconciliation of diverse databases that have different formats (e.g., field and record sizes and parameter scales), incompatible data definitions, and other differences. The warehouse collection process (flow in) may mediate between these input sources to transform the data before storing it in common form [20].
  • Data cleansing. The warehoused data must be inspected and cleansed to identify and correct or remove conflicts, incomplete sets, and incompatibilities common to combined databases. Cleansing may include several categories of checks:
  1. Uniformity checks verify the ranges of data, determine if sets exceed limits, and verify that format versions are compatible.
  2. Completeness checks evaluate the internal consistency of data sets to ensure, for example, that aggregate values are consistent with individual data components (e.g., “verify that total sales is equal to sum of all sales regions, and that data for all sales regions is present”).
  3. Conformity checks exhaustively verify that each index and reference exists.
  4. Genealogy checks generate and check audit trails to primitive data to permit analysts to drill down from high-level information.
  • Data selection and transformation. The types of data that will be used for mining are selected on the basis of relevance. For large operations, initial mining may be performed on a small set, then extended to larger sets to check for the validity of abducted patterns. The selected data may then be transformed to organize all data into common dimensions and to add derived dimensions as necessary for analysis.
  • Data mining operations. Mining operations may be performed in a supervised manner in which the analyst presents the operator with a selected set of training data, in which the analyst has manually determined the existence of pattern classes. Alternatively, the operation may proceed without supervision, performing an automated search for patterns. A number of techniques are available (Table 8.4), depending upon the type of data and search objectives (interesting pattern types).
  • Discovery modeling. Prediction or classification models are synthesized to fit the data patterns detected. This is the predictive aspect of mining: modeling the historical data in the database (the past) to provide a model to predict the future. The model attempts to abduct a generalized description that explains discovered patterns of interest and, using statistical inference from larger volumes of data, seeks to induct generally applicable models. Simple extrapolation, time-series trends, complex linked relationships, and causal mathematical models are examples of models created.
  • Visualization. The analyst uses visualization tools that allow discovery of interesting patterns in the data. The automated mining operations cue the operator to discovered patterns of interest (candidates), and the analyst then visualizes the pattern and verifies if, indeed, it contains new and useful knowledge. Online analytical processing (OLAP) refers to the manual visualization process in which a data manipulation engine allows the analyst to create data “views” from the human perspective and to perform the following categories of functions:
  1. Multidimensional analysis of the data across dimensions, through relationships (e.g., command hierarchies and transaction networks) and in perspectives natural to the analyst (rather than inherent in the data);
  2. Transformation of the viewing dimensions or slicing of the multidimensional array to view a subset of interest;
  3. Drill down into the data from high levels of aggregation, downward into successively deeper levels of information;
  4. Reach through from information levels to the underlying raw data, including reaching beyond the information base, back to raw data by the audit trail generated in genealogy checking;
  5. Modeling of hypothetical explanations of the data, in terms of trend analysis and extrapolation.
  • Refinement feedback. The analyst may refine the process, by adjusting the parameters that control the lower level processes, as well as requesting more or different data on which to focus the mining operations.
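The unsupervised case of the mining operations above can be illustrated with a minimal clustering sketch: grouping transaction records to surface a candidate pattern for analyst review. The data, features, and the choice of scikit-learn's KMeans are illustrative assumptions, not a recommended pipeline.

```python
from sklearn.cluster import KMeans
import numpy as np

# Unsupervised mining sketch: cluster transactions by two features
# (amount, hour of day). Data and the choice of k are illustrative.
X = np.array([[100, 9], [120, 10], [105, 9],      # routine daytime payments
              [9000, 2], [8700, 3], [9100, 2]])   # large late-night transfers

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # two groupings an analyst would review as candidate patterns
```

The automated step ends with candidates; as the text notes, a domain expert must still verify that a discovered grouping represents new and useful knowledge rather than an artifact of the data.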

 

 

8.2.3 Integrated Data Fusion and Mining

In a practical intelligence application, the full reasoning process integrates the discovery processes of data mining with the detection processes of data fusion. This integration helps the analyst to coordinate learning about new signatures and patterns and apply that new knowledge, in the form of templates, to detect other cases of the situation. A general application of these integrated tools can support the search for nonliteral target signatures and the use of those learned and validated signatures to detect new targets [21]. (Nonliteral target signatures refer to those signatures that extend across many diverse observation domains and are not intuitive or apparent to analysts, but may be discovered only by deeper analysis of multidimensional data.)

The mining component searches the accumulated database of sensor data, with discovery processes focused on relationships that may have relevance to the nonliteral target sets. Discovered models (templates) of target objects or processes are then tested, refined, and verified using the data-fusion process. Finally, the data-fusion process applies the models deductively for knowledge detection in incoming sensor data streams.

8.3 Intelligence Modeling and Simulation

Modeling activities take place in externalization (as explicit models are formed to describe mental models), combination (as evidence is combined and compared with the model), and in internalization (as the analyst ponders the matches, mismatches, and incongruities between evidence and model).

While we have used the general term model to describe any abstract representation, we now distinguish between two implementations made by the modeling and simulation (M&S) community. Models refer to physical, mathematical, or otherwise logical representations of systems, entities, phenomena, or processes, while simulations refer to methods to implement models over time (i.e., a simulation is a time-dynamic model).

Models and simulations are inherently collaborative; their explicit representations (versus mental models) allow analytic teams to collectively assemble and explore the accumulating knowledge that they represent. They support the analysis-synthesis process in multiple ways:

  • Evidence marshaling. As described in Chapter 5, models and simulations provide the framework within which inference and evidence are assembled; they provide an audit trail of reasoning.
  • Exploration. Models and simulations also provide a means for analysts to be immersed in the modeled situation, its structure, and dynamics. They are tools for experimentation and exploration that provide deeper understanding to determine necessary confirming or falsifying evidence, to evaluate potential sensing measures, and to examine potential denial and deception effects.
  • Dynamic process tracking. Simulations model the time-dynamic behavior of targets to forecast future behavior, compare with observations, and refine the behavior model over time. Dynamic models provide the potential for estimation, anticipation, forecasting, and even prediction (these words imply increasing accuracy and precision in their estimates of future behavior).
  • Explanation. Finally, the models and simulations provide a tool for presenting alternative hypotheses, final judgments, and rationale.

Chance favors the prepared prototype: models and simulations can and should be media to create and capture surprise and serendipity.

Table 8.5 illustrates independent models and simulations in all three domains; however, these domains can be coupled to create a robust model to explore how an adversary thinks (cognitive domain), transacts (e.g., finances, command, and intelligence flows), and acts (physical domain).

A recent study of the advanced methods required to support counter-terrorism analysis recommended the creation of scenarios using top-down synthesis (manual creation by domain experts and large-scale simulation) to create synthetic evidence for comparison with real evidence discovered by bottom-up data mining.

8.3.1 M&S for I&W

The challenge of I&W demands predictive analysis, where “the analyst is looking at something entirely new, a discontinuous phenomenon, an outcome that he or she has never seen before. Furthermore, the analyst only sees this new pattern emerge in bits and pieces.”

The tools monitor world events to track the state and time-sequence of state transitions for comparison with indicators of stress. These analytic tools apply three methods to provide indicators to analysts:

  1. Structural indicator matching. Previously identified crisis patterns (statistical models) are matched to current conditions to seek indications in background conditions and long-term trends.
  2. Sequential tracking models. Simulations track the dynamics of events to compare temporal behavior with statistical conflict accelerators in current situations that indicate imminent crises.
  3. Complex behavior analysis. Simulations are used to support inductive exploration of the current situation, so the analyst can examine possible future scenarios to locate potential triggering events that may cause instability (though not in prior indicator models).

A general I&W system architecture (Figure 8.7), organized following the JDL data-fusion structure, accepts incoming news-feed text reports of current situations and encodes the events into a common format (by human or automated coding). The event data is encoded into time-tagged actions (assault, kidnap, flee, assassinate), proclamations (threaten, appeal, comment), and other pertinent events from relevant actors (governments, NGOs, terror groups). The level 1 fusion process correlates and combines similar reports to produce a single set of current events, organized in time series for structural analysis of background conditions and sequential analysis of behavioral trends by groups and of interactions between groups. This statistical analysis is an automatic target-recognition process, comparing current state and trends with known clusters of unstable behaviors. The level 2 process correlates and aggregates individual events into larger patterns of behavior (situations). A dynamic simulation tracks the current situation (and is refined by the tracking loop shown) to enable the analyst to explore future excursions from the present condition. By analyzing the dynamics of the situation, the analyst can explore a wide range of feasible futures, including those that may reveal surprising, nonintuitive behavior—increasing the analyst’s awareness of unstable regions of behavior or of subtle but potent triggering events.
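
To make the event-coding and level 1 correlation steps concrete, the sketch below collapses duplicate reports and builds per-actor time series. The Event fields and coding categories are assumptions echoing the examples above; real systems use validated event ontologies and far richer correlation logic.

from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    time: str    # ISO date of the reported event
    actor: str   # e.g., a government, NGO, or terror group
    action: str  # coded category: assault, kidnap, threaten, appeal, ...

def level1_fuse(reports):
    """Correlate similar reports into per-actor time series; duplicate
    reports of the same event collapse into a single entry."""
    unique = set(reports)  # frozen dataclasses hash by value, so exact duplicates merge
    series = defaultdict(list)
    for event in sorted(unique, key=lambda e: e.time):
        series[event.actor].append(event)
    return dict(series)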

8.3.2 Modeling Complex Situations and Human Behavior

The complex behavior noted in the prior example may result from random events, human free will, or the nonlinearity introduced by the interactions of many actors. The most advanced applications of M&S are those that seek to model environments (introduced in Section 4.4.2) that exhibit complex behaviors—emergent behaviors (surprises) that are not predictable from the individual contributing actors within the system. Complexity is the property of a system that prohibits the description of its overall behavior even when all of the components are described completely. Complex environments include social behaviors of significant interest to intelligence organizations: populations of nation states, terrorist organizations, military commands, and foreign leaders [32]. Perhaps the grand challenge of intelligence analysis is to understand an adversary’s cognitive behavior to provide both warning and insight into the effects of alternative preemptive actions that may avert threats.

Nonlinear mathematical solutions are intractable for most practical problems, and the research community has applied dynamic systems modeling and agent-based simulation (ABS) to represent systems that exhibit complex behavior [34]. ABS research is being applied to the simulation of a wide range of organizations to assess intent, decision making and planning (cognitive), command and finances (symbolic), and actions (physical). The applications of these simulations include national policies [35], military C2 [36], and terrorist organizations [37].

9
The Intelligence Enterprise Architecture

The processing, analysis, and production components of intelligence operations are implemented by enterprises—complex networks of people and their business processes, integrated information and communication systems and technology components organized around the intelligence mission. As we have emphasized throughout this text, an effective intelligence enterprise requires more than just these components; the people require a collaborative culture, integrated electronic networks require content and contextual compatibility, and the implementing components must constantly adapt to technology trends to remain competitive. The effective implementation of KM in such enterprises requires a comprehensive requirements analysis and enterprise design (synthesis) approach to translate high-level mission statements into detailed business processes, networked systems, and technology implementations.

9.1 Intelligence Enterprise Operations

In the early 1990s, the community implemented Intelink, a communitywide network allowing the exchange of intelligence between agencies that maintained internal compartmented networks [2]. The DCI’s mid-1990s vision of “a unified IC optimized to provide a decisive information advantage…” led the IC CIO to establish an IC Operational Network (ICON) office to perform enterprise architecture analysis and engineering, defining the system and communication architectures needed to integrate the many agency networks within the IC [3]. This architecture must provide the ability to collaborate securely and synchronously from users’ desktops across the IC and with customers (e.g., federal government intelligence consumers), partners (component agencies of the IC), and suppliers (intelligence data providers within and external to the IC).

The undertaking illustrates the challenge of implementing a mammoth intelligence enterprise, which comprises four components:

  1. Policies. These are the strategic vision and derivative policies that explicitly define objectives and the approaches to achieve the vision.
  2. Operational processes. These are collaborative and operationally secure processes that enable people to share knowledge and assets securely and freely across large, diverse, and in some cases necessarily compartmented organizations. This requires processes for dynamic modification of security controls, public key infrastructure, standardized intelligence product markup, the availability of common services, and enterprisewide search, collaboration, and application sharing.
  3. System (network). This is an IC system for information sharing (ICSIS) that includes an agreed set of databases and applications hosted within shared virtual spaces within agencies and across the IC. The system architecture (Figure 9.1) defines three virtual collaboration spaces: one internal to each organization and a second accessible across the community (an intranet and extranet, respectively). The internal space provides collaboration at the Sensitive Compartmented Information (SCI) level within the organization; owners tightly control their (organizationally sensitive) data holdings. The community space enables IC-wide collaboration at the SCI level; resource protection and control are provided by a central security policy. A third, collateral community space provides a space for data shared with DoD and other federal agencies.
  4. Technology. The enterprise requires the integration of large installed bases of legacy components and systems with new technologies. The integration requires definition of standards (e.g., metadata, markup languages, protocols, and data schemas) and plans for incremental technology transitions.

9.2 Describing the Enterprise Architecture

Two major approaches to architecture design, both immediately applicable to the intelligence enterprise, have been applied by the U.S. DoD and the IC for intelligence and related applications. Both provide an organizing methodology to assure that all aspects of the enterprise are explicitly defined, analyzed, and described, assuring compatibility, completeness, and traceability back to the mission objectives. The approaches provide guidance for developing a comprehensive abstract model of the enterprise; the model may be understood from different views, in which the model is observed from a particular perspective (e.g., that of the user or the developer) and described by the specific products that make up the viewpoint.

The first methodology is the Zachman Architecture Framework™, developed by John Zachman in the late 1980s while he was at IBM. Zachman pioneered the concept of multiple perspectives (views) and descriptions (viewpoints) to completely define the information architecture [6]. The framework is organized as a matrix of 30 perspective products, defined by the cross product of two dimensions:

  1. Rows of the matrix represent the viewpoints of architecture stakeholders: the owner, planner, designer, builder (e.g., prime contractor), and subcontractor. The rows progress from higher-level, more abstract descriptions by the owner toward lower-level, implementation-detail descriptions by the subcontractor.
  2. Columns represent the descriptive aspects of the system across the dimensions of data handled, functions performed, network, people involved, time sequence of operations, and motivation of each stakeholder.

Each cell in the framework matrix represents a descriptive product required to describe an aspect of the architecture.
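
The cross-product structure can be made explicit in a few lines of Python. This sketch mirrors only the row and column labels given above; the cell contents are placeholders, not Zachman’s actual product names.

# The 5 stakeholder rows crossed with the 6 descriptive columns yield the
# 30 cells of the framework; each cell holds one descriptive product.
ROWS = ["owner", "planner", "designer", "builder", "subcontractor"]
COLUMNS = ["data", "function", "network", "people", "time", "motivation"]

framework = {(row, col): f"{row} description of {col}" for row in ROWS for col in COLUMNS}

assert len(framework) == 30  # one descriptive product per cell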

 

This framework identifies a single descriptive product per cell but permits a wide range of specific descriptive approaches to implement each product:

  • Mission needs statements, value propositions, balanced scorecard, and organizational model methods are suitable to structure and define the owner’s high-level view.
  • Business process modeling, the object-oriented Unified Modeling Language (UML), or functional decomposition using Integrated Definition Models (IDEF) explicitly describe entities and attributes, data, functions, and relationships. These methods also support enterprise functional simulation at the owner and designer level to permit evaluation of expected enterprise performance.
  • Detailed functional standards (e.g., IEEE and DoD standards specification guidelines) provide guidance to structure detailed builder- and subcontractor-level descriptions that define component designs.

The second descriptive methodology is the U.S. DoD Architecture Framework (formerly the C4ISR Architecture Framework), which defines three interrelated perspectives, or architectural views, each with a number of defined products [7]. The three interrelated views (Figure 9.2) are as follows:

    1. Operational architecture is a description (often graphical) of the operational elements, intelligence business processes, assigned tasks, workflows, and information flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and the tasks supported by these information exchanges.
    2. Systems architecture is a description, including graphics, of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users and specifies system and component performance parameters. It is constructed to satisfy operational architecture requirements per standards defined in the technical architecture. This architecture view shows how multiple systems within a subject area link and interoperate and may describe the internal construction or operations of particular systems within the architecture.
    3. Technical architecture is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. The technical architecture identifies the services, interfaces, standards, and their relationships. It provides the technical guidelines for implementation of systems upon which engineering specifications are based, common building blocks are built, and product lines are developed.

 

 

Both approaches provide a framework to decompose the enterprise into a comprehensive set of perspectives that must be defined before building; following either approach introduces the necessary discipline to structure the enterprise architecture design process.

The emerging foundation for enterprise architecting using framework models is distinguished from the traditional systems engineering approach, which focuses on optimization, completeness, and a build-from-scratch originality [11]. Enterprise (or system) architecting recognizes that most enterprises will be constructed from a combination of existing and new integrating components:

  • Policies, based on the enterprise strategic vision;
  • People, including current cultures that must change to adopt new and changing value propositions and business processes;
  • Systems, including legacy data structures and processes that must work with new structures and processes until retirement;
  • IT, including legacy hardware and software that must be integrated with new technology and scheduled for planned retirement.

The architecture framework models and system-architecting methodologies are developed in greater detail in a number of foundational papers and texts [12].

9.3 Architecture Design Case Study: A Small Competitive Intelligence Enterprise

The enterprise architecture design principles can best be illustrated by developing the architecture description for a fictional small-scale intelligence enterprise: a typical CI unit for a Fortune 500 business, here called FaxTech. This simple example defines the introduction of a new CI unit, deliberately avoiding the challenges of introducing significant culture change across an existing organization and integrating numerous legacy systems.

The CI unit provides legal and ethical development of descriptive and inferential intelligence products for top management to assess the state of competitors’ businesses and estimate their future actions within the current marketplace. The unit is not the traditional marketing function (which addresses the marketplace of customers) but focuses specifically on the competitive environment, especially competitors’ operations, their business options, and likely decision-making actions.

The enterprise architect recognizes the assignment as a corporate KM project that should be evaluated against O’Dell and Grayson’s four-question checklist for KM projects [14]:

  1. Select projects to advance your business performance. This project will enhance competitiveness and allow FaxTech to position and adapt its product and services (e.g., reduce cycle time and enhance product development to remain competitive).
  2. Select projects that have a high success probability. This project is small, does not confront integration with legacy systems, and has a high probability of technical success. The contribution of KM can be articulated (to deliver competitive intelligence for executive decision making), there is a champion on the board (the CIO), and the business case (to deliver decisive competitor knowledge) is strong. The small CI unit implementation does not require culture change in the larger FaxTech organization—and it may set an example of the benefits of collaborative knowledge creation to set the stage for a larger organization-wide transformation.
  3. Select projects appropriate for exploring emerging technologies. The project is an ideal opportunity to implement a small KM enterprise in FaxTech that can demonstrate intelligence product delivery to top management and can support critical decision making.
  4. Select projects with significant potential to build KM culture and discipline within the organization. The CI enterprise will develop reusable processes and tools that can be scaled up to support the larger organization; the lessons learned in implementation will be invaluable in planning for an organization-wide KM enterprise.

9.3.1 The Value Proposition

The CI value proposition must define the specific value that competitive intelligence delivers to the business.

The quantitative measures may be difficult to define; the financial return on CI investment, for example, requires careful consideration of how the derived intelligence couples with strategy and impacts revenue gains. Kilmetz and Bridge define a top-level CI return on investment (ROI) metric that considers the time frame of the payback period (t, usually updated quarterly and accumulated to measure the long-term return on strategic decisions) and applies the traditional ROI formula, which subtracts the cost of the CI investment ($C_{CI+I}$, the initial implementation cost plus accumulating quarterly operations costs, in net present value) from the revenue gain [17]:

$$\mathrm{ROI}_{CI} = \sum_{t} \left[ (P \times Q) - C_{CI+I} \right]$$

The expected revenue gain is estimated by the increase in sales (units sold, Q, multiplied by price, P, in this case) that are attributable to CI-induced decisions. Of course, the difficulty in defining such quantities is the issue of assuring that the gains are uniquely attributable to decisions possible only by CI information [18].
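
A few lines of arithmetic make the accumulation over t concrete. All figures below are invented for illustration; the point is only that early quarters are dominated by the implementation cost and the sum turns positive over the payback period.

# Invented quarterly figures. P*Q is the CI-attributable revenue gain; the
# cost term bundles initial implementation (Q1 only) with quarterly
# operations. Discounting to net present value is omitted for brevity.
quarters = [
    # (price P, units Q attributable to CI, CI cost this quarter)
    (1200.0, 40, 150_000.0),  # Q1: includes initial implementation cost
    (1200.0, 65, 30_000.0),   # Q2: operations only
    (1200.0, 80, 30_000.0),   # Q3: operations only
]

roi_ci = sum(p * q - c for p, q, c in quarters)
print(f"Cumulative ROI_CI after {len(quarters)} quarters: ${roi_ci:,.0f}")
# Q1 alone is negative (-$102,000); the accumulated sum turns positive
# ($12,000) only in Q3, which is why t spans the full payback period.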

In building the scorecard, the enterprise architect should seek lessons learned from others, using sources such as the Society of Competitive Intelligence Professionals or the American Productivity and Quality Center.

9.3.2 The CI Business Process

The Society of Competitive Intelligence Professionals has defined a CI business cycle that corresponds to the intelligence cycle; the cycle differs by distinguishing primary and published source information, while eliminating the automated processing of technical intelligence sources. The five stages, or business processes, of this high-level business model include:

  1. Planning and direction. The cycle begins with the specific identification of management needs for competitive intelligence. Management defines the specific categories of competitors (companies, alliances) and threats (new products or services, mergers, market shifts, technology discontinuities) for focus and the specific issues to be addressed. The priorities of intelligence needed, routine reporting expectations, and schedules for team reporting enable the CI unit manager to plan specific tasks for analysts, establish collection and reporting schedules, and direct day-to-day operations.
  2. Published source collection. The collection of articles, reports, and financial data from open sources (Internet, news feeds, clipping services, commercial content providers) includes both manual searches by analysts and active, automated searches by software agents that explore (crawl) the networks and cue analysts to rank-ordered findings. This collection provides broad, background knowledge of CI targets; the results of these searches provide cues to support deeper, more focused primary source collection.
  3. Primary source collection. The primary sources of deep competitor information are humans with expert knowledge; the ethical collection process includes identifying, contacting, and interviewing these individuals. Such collections range from phone interviews, formal meetings, and consulting assignments to brief discussions with competitor sales representatives at trade shows. The results of all primary collections are recorded on standard format reports (date, source, qualifications, response to task requirement, results, further sources suggested, references learned) for subsequent analysis.
  4. Analysis and production. Once indexed and organized, the corpus of data is analyzed to answer the questions posed by the initial tasks. Collected information is placed in a framework that includes organizational, financial, and product-service models that allow analysts to estimate the performance and operations of the competitor and predict likely strategies and planned activities. This process relies on a synoptic view of the organized information, experience, and judgment. SMEs may be called in from within FaxTech or from the outside (consultants) to support the analysis of data and synthesis of models.
  5. Reporting. Once approved by the CI unit manager, these quantitative models and the more qualitative estimative judgments of competitor strategies are published for presentation in a secure portal or for formal presentation to management. As a result of this reporting, management provides further refining direction and the cycle repeats.

9.3.4 The CI Unit Organizational Structure and Relationships

The CI unit manager accepts tasking from executive management, issues detailed tasks to the analytic team, and then reviews and approves results before release to management. The manager also manages the budget, secures consultants for collection or analysis support, manages special collections, and coordinates team training and special briefings by SMEs.

9.3.5 A Typical Operational Scenario

For each of the five processes, a number of use cases may be developed to describe specific actions that actors (CI team members or system components) perform to complete the process. In object-oriented design processes, the development of such use cases drives the design by first describing the many ways in which actors interact to perform the business process [22]. A scenario, or process thread, provides a view of one completed sequence through one or more use cases to complete an enterprise task. A typical crisis response scenario is summarized in Table 9.3 to illustrate the sequence of interactions between the actors (management, CI manager, deputy, knowledge-base manager and analysts, system, portal, and sources) to complete a quick-response thread. The scenario can be further modeled by an activity diagram [23] that models the behavior between objects.

The development of the operational scenario also raises nonfunctional performance issues that must be identified and defined, generally in parametric terms, for example (a hypothetical parametric statement follows the list):

  • Rate and volume of data ingested daily;
  • Total storage capacity of the online and offline archived holdings;
  • Access time for online and offline holdings;
  • Number of concurrent analysts, searches, and portal users;
  • Information assurance requirements (access, confidentiality, and attack rejection).
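
Stated parametrically, such requirements might be captured in a single structure like the sketch below; every figure is an assumption invented for this example, not a value from the text.

from dataclasses import dataclass

@dataclass(frozen=True)
class PerformanceRequirements:
    daily_ingest_gb: float = 5.0      # rate and volume of data ingested daily
    online_storage_tb: float = 2.0    # capacity of online holdings
    offline_storage_tb: float = 20.0  # capacity of offline archived holdings
    online_access_sec: float = 2.0    # maximum access time, online holdings
    offline_access_hr: float = 24.0   # maximum access time, offline holdings
    concurrent_users: int = 25        # analysts, searches, and portal users
    # Information assurance (access, confidentiality, attack rejection) is
    # better stated as policies than as single numbers, so it is omitted here.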

9.3.6 CI System Abstraction

The purpose of use cases and narrative scenarios is to capture enterprise behavior and then to identify the classes of an object-oriented design. The italicized text in the scenario identifies the actors; the remaining nouns are candidates for objects (instantiated software classes). From these use cases, software designers can identify the objects of the design, their attributes, and their interactions. Based upon the use cases, object-oriented design proceeds to develop sequence diagrams that model messages passing between objects, state diagrams that model the dynamic behavior within each object, and object diagrams that model the static description of objects. Each object encapsulates state attributes and provides services to manipulate those internal attributes.

 

Based on the scenario of the last section, the enterprise designer defines the class diagram (Figure 9.7) that relates the objects that carry the input CI requirements through the entire CI process to a summary of finished intelligence. This diagram does not include all objects; those presented illustrate the acquisition of data related to specific competitors and are only a subset of the classes required to meet the full enterprise requirements defined earlier. (The objects in this diagram are included in the analysis package described in the next section.) The requirement object accepts new CI requirements for a defined competitor; requirements are specified in terms of essential elements of information (EEI), financial data, SWOT characteristics, and organization structure. In this object, key intelligence topics may be selected from predefined templates to specify specific intelligence requirements for a competitor or for a marketplace event [24]. The analyst translates the requirements to tasks in the task object; the task object generates search and collect objects that specify the terms for automated search and human collection from primary sources, respectively. The results of these activities generate data objects that organize and present accumulated evidence, linked to the corresponding search and collect objects.

The analyst reviews the acquired data, creating text reports and completing analysis templates (SWOT, EEI, financial) in the analysis object. Analysis entries are linked to the appropriate competitor in the competitor list and to the supporting evidence in data objects. As results accumulate in the templates, the status (e.g., the percentage of required information completed in each template) is computed and reported by the status object. Summaries of current intelligence and status are rolled up in the summary object, which may be used to drive the CI portal.
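
The object chain just described can be sketched as skeletal Python classes. Only the relationships follow the text (a requirement generates tasks; tasks generate search and collect objects; results accumulate in data objects and roll up into a status measure); the attribute names and the simple completion metric are inventions of the sketch.

from dataclasses import dataclass, field

@dataclass
class Data:
    evidence: list = field(default_factory=list)  # accumulated findings

@dataclass
class Search:  # terms for automated (software agent) search
    terms: list
    results: Data = field(default_factory=Data)

@dataclass
class Collect:  # tasking for human primary-source collection
    sources: list
    results: Data = field(default_factory=Data)

@dataclass
class Task:
    description: str
    searches: list = field(default_factory=list)
    collects: list = field(default_factory=list)

@dataclass
class Requirement:  # EEIs, financial data, SWOT, organization structure
    competitor: str
    eeis: list
    tasks: list = field(default_factory=list)

    def completion(self) -> float:
        """Status roll-up: the fraction of tasks with at least some evidence."""
        if not self.tasks:
            return 0.0
        done = sum(
            1 for t in self.tasks
            if any(s.results.evidence for s in t.searches)
            or any(c.results.evidence for c in t.collects)
        )
        return done / len(self.tasks)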

9.3.7 System and Technical Architecture Descriptions

The abstractions that describe functions and data form the basis for partitioning packages of software services and the system hardware configuration. The system architecture description includes a network hardware view (Figure 9.8, top) and a comparable view of the packaged software objects (Figure 9.8, bottom).

The enterprise technical architecture is described by the standards for commercial and custom software packages (e.g., the commercial and developed software components with versions, as illustrated in Table 9.4) to meet the requirements developed in system model row of the matrix. Fuld & Company has published periodic reviews of software tools to support the CI process; these reviews provide a helpful evaluation of available commercial packages to support the CI enterprise [25]. The technical architecture is also described by the standards imposed on the implementing components—both software and hardware. These standards include general implementation standards [e.g., American National Standards Institute (ANSI), International Standards Organization (ISO), and Institute of Electrical and Electronics Engineers (IEEE)] and federal standards regulating workplace environments and protocols. The applicable standards are listed to identify applicability to various functions within the enterprise.

A technology roadmap should also be developed to project future transitions as new components are scheduled to be integrated and old components are retired. It is particularly important to plan for integration of new software releases and products to assure sustained functionality and compatibility across the enterprise.

10
Knowledge Management Technologies

IT has enabled the growth of organizational KM in business and government; it will continue to be the predominant influence on the progress in creating knowledge and foreknowledge within intelligence organizations.

10.1 Role of IT in KM

When we refer to technology, the application of science by use of engineering principles to solve a practical problem, it is essential to distinguish among three categories of technologies that all contribute to our ability to create and disseminate knowledge (Table 10.1). We may view these as three technology layers, with the basic computing materials sciences providing the foundation for technology applications of increasing complexity and scale in communications and computing.

10.4.1 Explicit Knowledge Combination Technologies

Future explicit knowledge combination technologies include those that transform explicit knowledge into useable forms and those that perform combination processes to create new knowledge.

  • Multimedia content- and context-tagged knowledge bases. Knowledge-base technology will support the storage of multimedia data (structured and unstructured) with tagging of both content and context to allow comprehensive searches for knowledge across heterogeneous sources.
  • Multilingual natural language. Global natural language technologies will allow accurate indexing, tagging, search, linking, and reasoning about multilingual text (and recognized human speech) at both the content level and the concept level. This technology will allow analysts to conduct multilingual searches by topic and concept at a global scale.
  • Integrated deductive-inductive reasoning. Data-fusion and data-mining technologies will become integrated to allow interactive deductive and inductive reasoning over structured and unstructured (text) data sources. Data-fusion technology will develop level 2 (situation) and level 3 (impact, or explanation) capabilities, using simulations to represent complex and dynamic situations for comparison with observed situations.
  • Purposeful deductive-inductive reasoning. Agent-based intelligence will coordinate inductive (learning and generalization) and deductive (decision and detection) reasoning processes (as well as abductive explanatory reasoning) across unstructured multilingual natural language, common sense, and structured knowledge bases. This reasoning will be goal-directed based upon agent awareness of purpose, values, goals, and beliefs.
  • Automated ontology creation. Agent-based intelligence will learn the structure of content and context, automatically populating knowledge bases under configuration management by humans.

 

10.4.3 Knowledge-Based Organization Technologies

Technologies that support the socialization processes of tacit knowledge exchange will enhance the performance and effectiveness of organizations; these technologies will increasingly integrate intelligence agents into the organization as aids, mentors, and ultimately as collaborating peers.

  • Tailored naturalistic collaboration. Collaboration technologies will provide environments with automated capabilities that will track the con- text of activities (speech, text, graphics) and manage the activity toward defined goals. These environments will also recognize and adapt to individual personality styles, tailoring the collaborative process (and the mix of agents-humans) to the diversity of the human-team composition.
  • Intimate tacit simulations. Simulation and game technologies will enable human analysts to be immersed in the virtual physical, symbolic, and cognitive environments they are tasked to understand. These technologies will allow users to explore data, information, and complex situations in all three domains of reality to gain tacit experience and to be able to share the experience with others.
  • Human-like agent partners. Multiagent system technologies will enable the formation of agent communities of practice and teams—and the creation of human-agent organizations. Such hybrid organizations will enable new analytic cultures and communities of problem-solving.
  • Combined human-agent learning. Personal agent tutors, mentors, and models will shadow their human partners, share experiences and observations, and show what they are learning. These agents will learn to monitor subtle human cues about the capture and use of tacit knowledge in collaborative analytic processes.
  • Direct brain tacit knowledge. Direct brain biological-to-machine connections will allow monitors to provide awareness, tracking, articulation, and capture of tacit experiences to augment human cognitive performance.

10.5 Summary

KM technologies are built upon materials and ITs that enable the complex social (organizational) and cognitive processes of collaborative knowledge creation and dissemination to occur over large organizations, over massive scales of knowledge. Technologists, analysts, and developers of intelligence enterprises must monitor these fast-paced technology developments to continually reinvent the enterprise to remain competitive in the global competition for knowledge. This continual reinvention process requires a wise application of technology in three modes. The first mode is the direct adoption of technologies by upgrade and integration of COTS and GOTS products. This process requires the continual monitoring of industry standards, technologies, and the marketplace to project the lifecycle of products and forecast adoption transitions. The second application mode is adaptation, in which a commercial product component may be adapted for use by wrapping, modifying, and integrating with commercial or custom components to achieve a desired capability. The final mode is custom development of a technology unique to the intelligence application. Often, such technologies may be classified to protect the unique investment in, the capability of, and in some cases even the existence of the technology.

Technology is enabling, but it is not sufficient; intelligence organizations must also have the vision to apply these technologies while transforming the intelligence business in a rapidly changing world.

 

Notes on Intelligence Analysis: A Target-Centric Approach

A major contribution of the 9/11 Commission and the Iraqi WMD Commission was their focus on a failed process, specifically on that part of the process where intelligence analysts interact with their policy customers.

“Thus, this book has two objectives:

The first objective is to redefine the intelligence process to help make all parts of what is commonly referred to as the “intelligence cycle” run smoothly and effectively, with special emphasis on both the analyst-collector and the analyst-customer relationships.

The second goal is to describe some methodologies that make for better predictive analysis.”

 

“An intelligence process should accomplish three basic tasks. First, it should make it easy for customers to ask questions. Second, it should use the existing base of intelligence information to provide immediate responses to the customer. Third, it should manage the expeditious creation of new information to answer remaining questions. To do these things, intelligence must be collaborative and predictive: collaborative to engage all participants while making it easy for customers to get answers; predictive because intelligence customers above all else want to know what will happen next.”

“the target-centric process outlines a collaborative approach for intelligence collectors, analysts, and customers to operate cohesively against increasingly complex opponents. We cannot simply provide more intelligence to customers; they already have more information than they can process, and information overload encourages intelligence failures. The community must provide what is called “actionable intelligence”—intelligence that is relevant to customer needs, is accepted, and is used in forming policy and in conducting operations.”

“The second objective is to clarify and refine the analysis process by drawing on existing prediction methodologies. These include the analytic tools used in organizational planning and problem solving, science and engineering, law, and economics. In many cases, these are tools and techniques that have endured despite dramatic changes in information technology over the past fifty years. All can be useful in making intelligence predictions, even in seemingly unrelated fields.”

“This book, rather, is a general guide, with references to lead the reader to more in-depth studies and reports on specific topics or techniques. The book offers insights that intelligence customers and analysts alike need in order to become more proactive in the changing world of intelligence and to extract more useful intelligence.”

“The common theme of these and many other intelligence failures discussed in this book is not the failure to collect intelligence. In each of these cases, the intelligence had been collected. Three themes are common in intelligence failures: failure to share information, failure to analyze collected material objectively, and failure of the customer to act on intelligence.”

 

“Though progress has been made in the past decade, the root causes for the failure to share remain, in the U.S. intelligence community as well as in almost all intelligence services worldwide:

Sharing requires openness. But any organization that requires secrecy to perform its duties will struggle with and often reject openness. Most governmental intelligence organizations, including the U.S. intelligence community, place more emphasis on secrecy than on effectiveness. The penalty for producing poor intelligence usually is modest. The penalty for improperly handling classified information can be career-ending. There are legitimate reasons not to share; the U.S. intelligence community has lost many collection assets because details about them were too widely shared. So it comes down to a balancing act between protecting assets and acting effectively in the world. ”

 

“Experts on any subject have an information advantage, and they tend to use that advantage to serve their own agendas. Collectors and analysts are no different. At lower levels in the organization, hoarding information may have job security benefits. At senior levels, unique knowledge may help protect the organizational budget. ”

 

“Finally, both collectors of intelligence and analysts find it easy to be insular. They are disinclined to draw on resources outside their own organizations.12 Communication takes time and effort. It has long-term payoffs in access to intelligence from other sources, but few short-term benefits.”

 

Failure to Analyze Collected Material Objectively

“In each of the cases cited at the beginning of this introduction, intelligence analysts or national leaders were locked into a mindset—a consistent thread in analytic failures. Falling into the trap that Louis Pasteur warned about in the observation that begins this chapter, they believed because, consciously or unconsciously, they wished it to be so.”

 

 

 

  • Ethnocentric bias involves projecting one’s own cultural beliefs and expectations on others. It leads to the creation of a “mirror-image” model, which looks at others as one looks at oneself, and to the assumption that others will act “rationally” as rationality is defined in one’s own culture.
  • Wishful thinking involves excessive optimism or avoiding unpleasant choices in analysis.
  • Parochial interests cause organizational loyalties or personal agendas to affect the analysis process.
  • Status quo biases cause analysts to assume that events will proceed along a straight line. The safest weather prediction, after all, is that tomorrow’s weather will be like today’s.
  • Premature closure results when analysts make early judgments about the answer to a question and then, often because of ego, defend the initial judgments tenaciously. This can lead the analyst to select (usually without conscious awareness) subsequent evidence that supports the favored answer and to reject (or dismiss as unimportant) evidence that conflicts with it.

 

Summary

 

“Intelligence, when supporting policy or operations, is always concerned with a target. Traditionally, intelligence has been described as a cycle: a process starting from requirements, to planning or direction, collection, processing, analysis and production, dissemination, and then back to requirements. That traditional view has several shortcomings. It separates the customer from the process and intelligence professionals from one another. A gap exists in practice between dissemination and requirements. The traditional cycle is useful for describing structure and function and serves as a convenient framework for organizing and managing a large intelligence community. But it does not describe how the process works or should work.”

 

 

 

Intelligence is in practice a nonlinear and target-centric process, operated by a collaborative team of analysts, collectors, and customers collectively focused on the intelligence target. The rapid advances in information technology have enabled this transition.

All significant intelligence targets of this target-centric process are complex systems in that they are nonlinear, dynamic, and evolving. As such, they can almost always be represented structurally as dynamic networks—opposing networks that constantly change with time. In dealing with opposing networks, the intelligence network must be highly collaborative.

 

“Historically, however, large intelligence organizations, such as those in the United States, provide disincentives to collaboration. If those disincentives can be removed, U.S. intelligence will increasingly resemble the most advanced business intelligence organizations in being both target-centric and network-centric.”

 

 

“Having defined the target, the first question to address is, What do we need to learn about the target that our customers do not already know? This is the intelligence problem, and for complex targets, the associated intelligence issues are also complex. ”


Chapter 4

Defining the Intelligence Issue

A problem well stated is a problem half solved.

Inventor Charles Franklin Kettering

“all intelligence analysis efforts start with some form of problem definition.”

“The initial guidance that customers give analysts about an issue, however, almost always is incomplete, and it may even be unintentionally misleading.”

“Therefore, the first and most important step an analyst can take is to understand the issue in detail. He or she must determine why the intelligence analysis is being requested and what decisions the results will support. The success of analysis depends on an accurate issue definition. As one senior policy customer noted in commenting on intelligence failures, “Sometimes, what they [the intelligence officers] think is important is not, and what they think is not important, is.”

 

“The poorly defined issue is so common that it has a name: the framing effect. It has been described as “the tendency to accept problems as they are presented, even when a logically equivalent reformulation would lead to diverse lines of inquiry not prompted by the original formulation.”

 

 

“veteran analysts go about the analysis process quite differently than do novices. At the beginning of a task, novices tend to attempt to solve the perceived customer problem immediately. Veteran analysts spend more time thinking about it to avoid the framing effect. They use their knowledge of previous cases as context for creating mental models to give them a head start in addressing the problem. Veterans also are better able to recognize when they lack the necessary information to solve a problem,6 in part because they spend enough time at the beginning, in the problem definition phase. In the case of the complex problems discussed in this chapter, issue definition should be a large part of an analyst’s work.

Issue definition is the first step in a process known as structured argumentation.”

 

 

“structured argumentation always starts by breaking down a problem into parts so that each part can be examined systematically.”

 

Statement of the Issue

 

In the world of scientific research, the guidelines for problem definition are that the problem should have “a reasonable expectation of results, believing that someone will care about your results and that others will be able to build upon them, and ensuring that the problem is indeed open and underexplored.”8 Intelligence analysts should have similar goals in their profession. But this list represents just a starting point. Defining an intelligence analysis issue begins with answering five questions:

 

When is the result needed? Determine when the product must be delivered. (Usually, the customer wants the report yesterday.) In the traditional intelligence process, many reports are delivered too late—long after the decisions have been made that generated the need—in part because the customer is isolated from the intelligence process… The target-centric approach can dramatically cut the time required to get actionable intelligence to the customer because the customer is part of the process.”

 

Who is the customer? Identify the intelligence customers and try to understand their needs. The traditional process of communicating needs typically involves several intermediaries, and the needs inevitably become distorted as they move through the communications channels.

 

What is the purpose? Intelligence efforts usually have one main purpose. This purpose should be clear to all participants when the effort begins and also should be clear to the customer in the result…Customer involvement helps to make the purpose clear to the analyst.”

 

 

What form of output, or product, does the customer want? Written reports (now in electronic form) are standard in the intelligence business because they endure and can be distributed widely. When the result goes to a single customer or is extremely sensitive, a verbal briefing may be the form of output.”

 

“Studies have shown that customers never read most written intelligence. Subordinates may read and interpret the report, but the message tends to be distorted as a result. So briefings or (ideally) constant customer interaction with the intelligence team during the target-centric process helps to get the message through.”

 

What are the real questions? Obtain as much background knowledge as possible about the problem behind the questions the customer asks, and understand how the answers will affect organizational decisions. The purpose of this step is to narrow the problem definition. A vaguely worded request for information is usually misleading, and the result will almost never be what the requester wanted.”

 

Be particularly wary of a request that has come through several “nodes” in the organization. The layers of an organization, especially those of an intelligence bureaucracy, will sometimes “load” a request as it passes through with additional guidance that may have no relevance to the original customer’s interests. A question that travels through several such layers often becomes cumbersome by the time it reaches the analyst.

 

“The request should be specific and stripped of unwanted excess. ”

 

“The time spent focusing the request saves time later during collection and analysis. It also makes clear what questions the customer does not want answered—and that should set off alarm bells, as the next example illustrates.”

 

“After answering these five questions, the analyst will have some form of problem statement. On large (multiweek) intelligence projects, this statement will itself be a formal product. The issue definition product helps explain the real questions and related issues. Once it is done, the analyst will be able to focus more easily on answering the questions that the customer wants answered.”

 

The Issue Definition Product

 

“When the final intelligence product is to be a written report, the issue definition product is usually in précis (summary, abstract, or terms of reference) form. The précis should include the problem definition or question, notional results or conclusions, and assumptions. For large projects, many intelligence organizations require the creation of a concept paper or outline that provides the stakeholders with agreed terms of reference in précis form.”

 

“Whether the précis approach or the notional briefing is used, the issue definition should conclude with an issue decomposition view.”

 

Issue Decomposition

 

“taking a seemingly intractable problem and breaking it into a series of manageable subproblems.”

 

 

“Glenn Kent of RAND Corporation uses the name strategies-to-task for a similar breakout of U.S. Defense Department problems.12 Within the U.S. intelligence community, it is sometimes referred to as problem decomposition or “decomposition and visualization.”

 

 

 

“Whatever the name, the process is simple: Deconstruct the highest level abstraction of the issue into its lower-level constituent functions until you arrive at the lowest level of tasks that are to be performed or subissues to be dealt with. In intelligence, the deconstruction typically details issues to be addressed or questions to be answered. Start from the problem definition statement and provide more specific details about the problem.”

 

“The process defines intelligence needs from the top level to the specific task level via taxonomy—a classification system in which objects are arranged into natural or related groups based on some factor common to each object in the group.”

 

“At the top level, the taxonomy reflects the policymaker’s or decision maker’s view and reflects the priorities of that customer. At the task level, the taxonomy reflects the view of the collection and analysis team. These subtasks are sometimes called key intelligence questions (KIQs) or essential elements of information (EEIs).”

 

“Issue decomposition follows the classic method for problem solving. It results in a requirements, or needs, hierarchy that is widely used in intelligence organizations. ”
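
The hierarchy lends itself to a simple tree structure. The sketch below uses the Region X elections example discussed shortly; the specific subquestions are invented to illustrate decomposition from the customer’s top-level question down to evaluable task-level questions (KIQs or EEIs).

from dataclasses import dataclass, field

@dataclass
class Issue:
    question: str
    subissues: list = field(default_factory=list)

    def leaves(self):
        """The task-level questions that collectors and analysts act on."""
        if not self.subissues:
            return [self.question]
        return [q for s in self.subissues for q in s.leaves()]

region_x = Issue("What is the political situation in Region X?", [
    Issue("How legitimate are the coming elections?", [
        Issue("Are the voter rolls transparent?"),
        Issue("Is the electoral commission independent?"),
    ]),
    Issue("How stable is the ruling coalition?"),
])

print(region_x.leaves())  # the specific, evaluable task-level questions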

 

it is difficult to evaluate how well an intelligence organization is answering the question, “What is the political situation in Region X?” It is much easier to evaluate the intelligence unit’s performance in researching the transparency, honesty, and legitimacy of elections, because these are very specific issues.

 

“Obviously there can be several different issues associated with a given intelligence target or several different targets associated with a given issue.”

 

Complex Issue Decomposition

 

We have learned that the most important step in the intelligence process is to understand the issue accurately and in detail. Equally true, however, is that intelligence problems today are increasingly complex—often described as nonlinear, or “wicked.” They are dynamic and evolving, and thus their solutions are, too. This makes them difficult to deal with—and almost impossible to address within the traditional intelligence cycle framework. A typical example of a wicked issue is that of a drug cartel—the cartel itself is dynamic and evolving and so are the questions being posed by intelligence consumers who have an interest in it.”

 


“A typical real-world customer’s issue today presents an intelligence officer with the following challenges:

 

It represents an evolving set of interlocking issues and constraints.


 

“There are many stakeholders—people who care about or have something at stake in how the issue is resolved. (Again, this makes the problem-solving process a fundamentally social one, in contrast to the antisocial traditional intelligence cycle.) ”

 

The constraints on the solution, such as limited resources and political ramifications, change over time. The target is constantly changing, as the Escobar example illustrates, and the customers (stakeholders) change their minds, fail to communicate, or otherwise change the rules of the game.”

 

Because there is no final issue definition, there is no definitive solution. The intelligence process often ends when time runs out, and the customer must act on the most currently available information.”

 

“Harvard professor David S. Landes summarized these challenges nicely when he wrote, “The determinants of complex processes are invariably plural and interrelated.”15 Because of this—because complex or wicked problems are an evolving set of interlocking issues and constraints, and because the introduction of new constraints cannot be prevented—the decomposition of a complex problem must be dynamic; it will change with time and circumstances. ”

 

 

“As intelligence customers learn more about the targets, their needs and interests will shift.

Ideally, a complex issue decomposition should be created as a network because of the interrelationship among the elements.

 

 

Although the hierarchical decomposition approach may be less than ideal for complex problems, it works well enough if it is constantly reviewed and revised during the analysis process. It allows analysts to define the issue in sufficient detail and with sufficient accuracy so that the rest of the process remains relevant. There may be redundancy in a linear hierarchy, but the human mind can usually recognize and deal with the redundancy. To keep the decomposition manageable, analysts should continue to use the hierarchy, recognizing the need for frequent revisions, until information technology comes up with a better way.

 

 

 

Structured Analytic Methodologies for Issue Definition

 

Throughout the book we discuss a class of analytic methodologies that are collectively referred to as structured analytic methodologies or SATs. ”

 

 

“a relevancy check needs to be done. To be “key,” an assumption must be essential to the analytic reasoning that follows it. That is, if the assumption turns out to be invalid, then the conclusions also probably are invalid. CIA’s Tradecraft Primer identifies several questions that need to be asked about key assumptions:

 

How much confidence exists that this assumption is correct?

What explains the degree of confidence in the assumption?

What circumstances or information might undermine this assumption?

Is a key assumption more likely a key uncertainty or key factor?

Could the assumption have been true in the past but less so now?

If the assumption proves to be wrong, would it significantly alter the analytic line? How?

Has this process identified new factors that need further analysis?”

 

Example: Defining the Counterintelligence Issue

 

Counterintelligence (CI) in government usually is thought of as having two subordinate problems: security (protecting sources and methods) and catching spies (counterespionage).

 

 

If the issue is defined this way—security and counterespionage—the response in both policy and operations is defensive. Personnel background security investigations are conducted. Annual financial statements are required of all employees. Profiling is used to detect unusual patterns of computer use that might indicate computer espionage. Cipher-protected doors, badges, personal identification numbers, and passwords are used to ensure that only authorized persons have access to sensitive intelligence. The focus of communications security is on denial, typically by encryption. Leaks of intelligence are investigated to identify their source.

 

But whereas the focus on security and counterespionage is basically defensive, the first rule of strategic conflict is that the offense always wins. So, for intelligence purposes, you’re starting out on the wrong path if the issue decomposition starts with managing security and catching spies.

 

A better issue definition approach starts by considering the real target of counterintelligence: the opponent’s intelligence organization. Good counterintelligence requires good analysis of the hostile intelligence services. As we will see in several examples later in this book, if you can model an opponent’s intelligence system, you can defeat it. So we start with the target as the core of the problem and begin an issue decomposition.

 

If the counterintelligence issue is defined in this fashion, then the counterintelligence response will be forward-leaning and will focus on managing foreign intelligence perceptions through a combination of covert action, denial, and deception. The best way to win the CI conflict is to go on the offensive (model the target, anticipate the opponent’s actions, and defeat him or her). Instead of denying information to the opposing side’s intelligence machine, for example, you feed it false information that eventually degrades the leadership’s confidence in its intelligence services.

 

To do this, one needs a model of the opponent’s intelligence system that can be subjected to target-centric analysis, including its communications channels and nodes, its requirements and targets, and its preferred sources of intelligence.

 

Summary

Before beginning intelligence analysis, the analyst must understand the customer’s issue. This usually involves close interaction with the customer until the important issues are identified. The problem then has to be deconstructed in an issue decomposition process so that collection, synthesis, and analysis can be effective.”

 

All significant intelligence issues, however, are complex and nonlinear. The complex problem is a dynamic set of interlocking issues and constraints with many stakeholders and no definitive solution. Although the linear issue decomposition process is not an optimal way to approach such problems, it can work if it is reviewed and updated frequently during the analysis process.

 

 

“Issue definition is the first step in a process known as structured argumentation. As an analyst works through this process, he or she collects and evaluates relevant information, fitting it into a target model (which may or may not look like the issue decomposition); this part of the process is discussed in chapters 5–7. The analyst identifies information gaps in the target model and plans strategies to fill them. The analysis of the target model then provides answers to the questions posed in the issue definition process. The next chapter discusses the concept of a model and how it is analyzed.”

 

 

 

 

 

 

 

 

Chapter 5

Conceptual Frameworks for Intelligence Analysis

 

“If we are to think seriously about the world, and act effectively in it, some sort of simplified map of reality . . . is necessary.”

Samuel P. Huntington, The Clash of Civilizations and the Remaking of World Order

 

 

“Balance of power,” for example, was an important conceptual framework used by policymakers during the Cold War. A different conceptual framework has been proposed for assessing the influence that one country can exercise over another.

 

Analytic Perspectives—PMESII

 

In chapter 2, we discussed the instruments of national power—an actions view that defines the diplomatic, information, military, and economic (DIME) actions that executives, policymakers, and military or law enforcement officers can take to deal with a situation.

 

The customer of intelligence may have those four “levers” that can be pulled, but intelligence must be concerned with the effects of pulling those levers. Viewed from an effects perspective, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII.

 

Political. Describes the distribution of responsibility and power at all levels of governance—formally constituted authorities, as well as informal or covert political powers. (Who are the tribal leaders in the village? Which political leaders have popular support? Who exercises decision-making or veto power in a government, insurgent group, commercial entity, or criminal enterprise?)

 

Military. Explores the military and/or paramilitary capabilities or other ability to exercise force of all relevant actors (enemy, friendly, and neutral) in a given region or for a given issue. (What is the force structure of the opponent? What weaponry does the insurgent group possess? What is the accuracy of the rockets that Hamas intends to use against Israel? What enforcement mechanisms are drug cartels using to protect their territories?)

 

Economic. Encompasses individual and group behaviors related to producing, distributing, and consuming resources. (What is the unemployment rate? Which banks are supporting money laundering? What are Egypt’s financial reserves? What are the profit margins in the heroin trade?)

 

Social. Describes the cultural, religious, and ethnic makeup within an area and the beliefs, values, customs, and behaviors of society members. (What is the ethnic composition of Nigeria? What religious factions exist there? What key issues unite or divide the population?)

Infrastructure. Details the composition of the basic facilities, services, and installations needed for the functioning of a community, business enterprise, or society in an area. (What are the key modes of transportation? Where are the electric power substations? Which roads are critical for food supplies?)

 

Information. Explains the nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information. (How much access does the local population have to news media or the Internet? What are the cyber attack and defense capabilities of the Saudi government? How effective would attack ads be in Japanese elections?)

 

The typical intelligence problem seldom deals with only one of these factors or systems. Complex issues are likely to involve them all. The events of the Arab Spring in 2011, the Syrian uprising that began that year, and the Ukrainian crisis of 2014 involved all of the PMESII factors. But PMESII is also relevant in issues that are not necessarily international. Law enforcement must deal with all of the factors as well (in this case, “military” refers to the use of violence or armed force by criminal elements).

 

Modeling the Intelligence Target

 

Models are used so extensively in intelligence that analysts seldom give them much thought, even as they use them.

 

The model paradigm is a powerful tool in many disciplines.

 

Former national intelligence officer Paul Pillar described them as “guiding images” that policymakers rely on in making decisions. We have discussed one guiding image—the PMESII concept. The second guiding image—that of a map, theory, concept, or paradigm—is merged in this book into a single entity called a model. Or, as the CIA’s Tradecraft Primer puts it succinctly:

 

“all individuals assimilate and evaluate information through the medium of ‘mental models.’”

 

Modeling is usually thought of as being quantitative and using computers. However, all models start in the human mind. Modeling does not always require a computer, and many useful models exist only on paper. Models are used widely in fields such as operations research and systems analysis. With modeling, one can analyze, design, and operate complex systems. One can use simulation models to evaluate real-world processes that are too complex to analyze with spreadsheets or flowcharts (which are themselves models, of course), and to test hypotheses at a fraction of the cost of undertaking the actual activities. Models are an efficient communication tool for showing how the target functions and stimulating creative thinking about how to deal with an opponent.

 

Models are essential when dealing with complex targets (Analysis Principle 5-1). Without a device to capture the full range of thinking and creativity that occurs in the target-centric approach to intelligence, an analyst would have to keep in mind far too many details. Furthermore, in the target-centric approach, the customer of intelligence is part of the collaborative process. Presented with a model as an organizing construct for thinking about the target, customers can contribute pieces to the model from their own knowledge—pieces that the analyst might be unaware of. The primary suppliers of information (the collectors) can do likewise.

 

The Concept of a Model

 

A model, as used in intelligence, is an organizing construct. It is a combination of facts, hypotheses, and assumptions about a target, developed in a form that is useful for analyzing the target and for customer decision making (producing actionable intelligence). Because the type of model used in intelligence typically comprises all three, it is important to distinguish them here:

 

Fact. Something that is indisputably the case.

Hypothesis. A proposition that is set forth to explain developments or observed phenomena. It can be posed as conjecture to guide research (a working hypothesis) or accepted as a highly probable conclusion from established facts.

Assumption. A thing that is accepted as true or as certain to happen, without proof.

 

These are the things that go into a model. But it is important to distinguish them when you present the model. Customers should never wonder whether they are hearing facts, hypotheses, or assumptions.

 

A model is a replica or representation of an idea, an object, or an actual system. It often describes how a system behaves. Instead of interacting with the real system, an analyst can create a model that corresponds to the actual one in certain ways.

 

 

Physical models are a tangible representation of something. A map, a globe, a calendar, and a clock are all physical models. The first two represent the Earth or parts of it, and the latter two represent time. Physical models are always descriptive.

 

Conceptual models—inventions of the mind—are essential to the analytic process. They allow the analyst to describe things or situations in abstract terms both for estimating current situations and for predicting future ones.

 

 


A conceptual model may be either descriptive, describing what it represents, or normative. A normative model may contain some descriptive segments, but its purpose is to describe a best, or preferable, course of action. A decision-support model—that is, a model used to choose among competing alternatives—is normative.

In intelligence analysis, the models of most interest are conceptual and descriptive rather than normative. Some common traits of these conceptual models follow.

 

Descriptive models can be deterministic or stochastic.

In a deterministic model the relationships are known and specified explicitly. A model that has any uncertainty incorporated into it is a stochastic model (meaning that probabilities are involved), even though it may have deterministic properties.
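To make the deterministic–stochastic distinction concrete, here is a minimal Python sketch; the attrition scenario and all numbers are invented for illustration. The deterministic version returns the same answer on every run, while the stochastic version returns a distribution of outcomes.

```python
import random

def deterministic_attrition(force, daily_loss_rate, days):
    """Deterministic model: identical inputs always give identical outputs."""
    for _ in range(days):
        force -= force * daily_loss_rate
    return force

def stochastic_attrition(force, daily_loss_rate, days, volatility=0.3):
    """Stochastic model: the loss rate varies randomly around its mean,
    so repeated runs yield a distribution of outcomes, not a single answer."""
    for _ in range(days):
        rate = random.gauss(daily_loss_rate, daily_loss_rate * volatility)
        force -= force * max(rate, 0.0)
    return force

print(round(deterministic_attrition(10_000, 0.02, 30)))        # always the same
print([round(stochastic_attrition(10_000, 0.02, 30)) for _ in range(3)])
```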

 

Descriptive models can be linear or nonlinear.

Linear models use only linear equations (for example, x = Ay + B) to describe relationships.

 

Nonlinear models use any type of mathematical function. Because nonlinear models are more difficult to work with and often cannot be solved analytically, the usual practice is to make some compromises so that a linear model can be used.
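That compromise often amounts to linearizing a nonlinear relationship around an operating point so that the simple form x = Ay + B applies locally. A sketch, using a hypothetical nonlinear function:

```python
import math

def nonlinear_model(y):
    """Hypothetical nonlinear relationship between two variables."""
    return 100.0 * math.log(y)

def linearize(f, y0, dy=1e-6):
    """Approximate f near y0 by the linear form x = A*y + B."""
    A = (f(y0 + dy) - f(y0)) / dy      # local slope
    B = f(y0) - A * y0                 # intercept
    return A, B

A, B = linearize(nonlinear_model, y0=50.0)
for y in (45.0, 50.0, 55.0):
    # The linear approximation is close near y0 and drifts farther out.
    print(y, round(nonlinear_model(y), 2), round(A * y + B, 2))
```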

 

Descriptive models can be static or dynamic.

A static model assumes that a specific time period is being analyzed and the state of nature is fixed for that time period. Static models ignore time-based variances. For example, one cannot use them to determine the impact of an event’s timing in relation to other events. Consider a combat model: a snapshot of the combat that shows where opposing forces are located and their directions of movement at that instant is static. Static models do not take into account the synergy of the components of a system, where the actions of separate elements can have a different effect on the system than the sum of their individual effects would indicate. Spreadsheets and most relationship models are static.

 

Dynamic modeling (also known as simulation) is a software representation of the time-based behavior of a system. Where a static model involves a single computation of an equation, a dynamic model is iterative; it constantly recomputes its equations as time changes.
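The iteration loop is the defining feature. A minimal sketch of a dynamic model—here a hypothetical supply-level simulation with invented parameters—recomputes the system state at each time step:

```python
def simulate_supplies(stock, daily_resupply, daily_consumption, days):
    """Dynamic (time-stepped) model: state is recomputed at every step."""
    history = []
    for day in range(days):
        stock = max(stock + daily_resupply - daily_consumption, 0.0)
        history.append((day, stock))
    return history

for day, stock in simulate_supplies(500.0, 40.0, 65.0, 10):
    print(f"day {day}: {stock:.0f} tons on hand")
```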

 

Descriptive models can be solvable or simulated.

A solvable model is one in which there is an analytic way of finding the answer. The performance model of a radar, a missile, or a warhead is a solvable problem. But other problems require such a complicated set of equations to describe them that there is no way to solve them. Worse still, complex problems typically cannot be described in a manageable set of equations. In complex cases—such as the performance of an economy or a person—one can turn to simulation.

 

Using Target Models for Analysis

 

Operations

Intelligence services prefer specific sources of intelligence, shaped in part by what has worked for them in the past; by their strategic targets; and by the size of their pocketbooks. The poorer intelligence services rely heavily on open source (including the web) and HUMINT, because both are relatively inexpensive. COMINT also can be cheap, unless it is collected by satellites. The wealthier services also make use of satellite-collected imagery intelligence (IMINT) and COMINT, and other types of technical collection.

 

China relies heavily on HUMINT, working through commercial organizations (particularly trading firms), students, and university professors far more than most other major intelligence powers do.

 

In addition to being acquainted with opponents’ collection habits, CI also needs to understand a foreign intelligence service’s analytic capabilities. Many services have analytic biases, are ethnocentric, or handle anomalies poorly. It is important to understand their intelligence communications channels and how well they share intelligence within the government. In many countries, the senior policymaker or military commander is the analyst. That provides a prime opportunity for “perception management,” especially if a narcissistic leader like Hitler, Stalin, or Saddam Hussein is in charge and doing his own analysis. Leaders and policymakers find it difficult to be objective; they are people of action, and they always have an agenda. They have lots of biases and are prone to wishful thinking.

 

Linkages

Almost all intelligence services have liaison relationships with foreign intelligence or security services. It is important to model these relationships because they can dramatically extend the capabilities of an intelligence service.

 

Summary

Two conceptual frameworks are invaluable for doing intelligence analysis. One deals with the instruments of national or organizational power and the effects of their use. The second involves the use of target models to produce analysis.

 

The intelligence customer has four instruments of national or organizational power, as discussed in chapter 2. Intelligence is concerned with how opponents will use those instruments and the effects that result when customers use them. Viewed from both the opponent’s actions and the effects perspectives, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII:

 

 

Political. The distribution of power and control at all levels of governance.

 

Military. The ability of all relevant actors (enemy, friendly, and neutral) to exercise force.

 

Economic. Behavior relating to producing, distributing, and consuming resources.

 

Social. The cultural, religious, and ethnic composition of a region and the beliefs, values, customs, and behaviors of people.

 

Infrastructure. The basic facilities, services, and installations needed for the functioning of a community or society.

 

Information. The nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information.

 

 

Models in intelligence are typically conceptual and descriptive. The easiest ones to work with are deterministic, linear, static, solvable, or some combination. Unfortunately, in the intelligence business the target models tend to be stochastic, nonlinear, dynamic, and simulated.

 

From an existing knowledge base, a model of the target is developed. Next, the model is analyzed to extract information for customers or for additional collection. The “model” of complex targets will typically be a collection of associated models that can serve the purposes of intelligence customers and collectors.

 

Chapter 6

Overview of Models in Intelligence

 

One picture is worth more than ten thousand words.

Chinese proverb

 

The process of populating the appropriate model is known as synthesis, a term borrowed from the engineering disciplines. Synthesis is defined as putting together parts or elements to form a whole—in this case, a model of the target. It is what intelligence analysts do, and their skill at it is a primary measure of their professional competence.

 

 

Creating a Conceptual Model

 

 

The first step in creating a model is to define the system that encompasses the intelligence issues of interest, so that the resulting model can answer the questions posed during the issue definition process.

 

Few questions in strategic intelligence or in-depth research can be answered by using a narrowly defined target.

 

For the complex targets that are typical of in-depth research, an analyst usually will deal with a complete system—for example, the air defense system that will use a new fighter aircraft.

 

In law enforcement, analysis of an organized crime syndicate involves consideration of people, funds, communications, operational practices, movement of goods, political relationships, and victims. Many intelligence problems will require consideration of related systems as well. The energy production system, for example, will give rise to intelligence questions about related companies, governments, suppliers and customers, and nongovernmental organizations (such as environmental advocacy groups). The questions that customers pose should be answerable by reference only to the target model, without the need to reach beyond it.

 

A major challenge in defining the relevant system is to use restraint. The definition must include essential subsystems or collateral systems, but nothing more. Part of an analyst’s skill lies in being able to include in a definition the relevant components, and only the relevant components, that will address the issue.

 

The systems model can therefore be structural, functional, process oriented, or any combination thereof. Structural models include actors, objects, and the organization of their relationships to each other. Process models focus on interactions and their dynamics. Functional models concentrate on the results achieved, for example, a model that simulates the financial consequences of a proposed trade agreement.

 

After an analyst has defined the relevant system, the next step is to select the generic models, or model templates, to be used. These model templates then will be made specific, or “populated,” using evidence (discussed in chapter 7). Several types of generic models are used in intelligence. The three most basic types are textual, mathematical, and visual.

 

Textual Models

 

Almost any model can be described using written text. The CIA’s World Factbook is an example of a set of textual models—actually a series of models (political, military, economic, social, infrastructure, and information)—of a country. Some common examples of textual models that are used in intelligence analysis are lists, comparative models, profiles, and matrix models.

 

 

 

 

Lists

 

Lists and outlines are the simplest examples of a model.

 

The list continues to be used by analysts today for much the same purpose—to reach a yes-or-no decision.

 

Comparative Models

 

Comparative techniques, like lists, are a simple but useful form of modeling that typically does not require a computer simulation. Comparative techniques are used in government, mostly for weapons systems and technology analyses. Both governments and businesses use comparative models to evaluate a competitor’s operational practices, products, and technologies. This is called benchmarking.

 

A powerful tool for analyzing a competitor’s developments is to compare them with your own organization’s developments. Your own systems or technologies can provide a benchmark for comparison.

 

Comparative models have to be culture specific to help avoid mirror imaging.

 

The Japanese keiretsu is an example. A keiretsu is a network of businesses, usually in related industries, that own stakes in one another and have board members in common as a means of mutual security. A network of essentially captive (because they are dependent on the keiretsu) suppliers provides the raw material for the keiretsu manufacturers, and the keiretsu trading companies and banks provide marketing services. Keiretsu have their roots in prewar Japan.

 

Profiles

 

Profiles are models of individuals—in national intelligence, of leaders of foreign governments; in business intelligence, of top executives in a competing organization; in law enforcement, of mob leaders and serial criminals.

 

 

Profiles depend heavily on understanding the pattern of mental and behavioral traits that are shared by adult members of a society—referred to as the society’s modal personality. Several modal personality types may exist in a society, and their common elements are often referred to as national character.

 

Defining the modal personality type is beyond the capabilities of the journeyman intelligence analyst, and one must turn to experts.

 

 

The modal personality model usually includes at least the following elements:

 

Concept of self—the conscious ideas of what a person thinks he or she is, along with the frequently unconscious motives and defenses against ego-threatening experiences such as withdrawal of love, public shaming, guilt, or isolation.

 

Relation to authority—how an individual adapts to authority figures

Modes of impulse control and expressing emotion

Processes of forming and manipulating ideas

 

 

Three model types are often used for studying modal personalities and creating behavioral profiles:

 

Cultural pattern models are relatively straightforward to analyze and are useful in assessing group behavior.

 

 

Child-rearing systems can be studied to allow the projection of adult personality patterns and behavior. They may allow more accurate assessments of an individual than a simple study of cultural patterns, but they cannot account for the wide range of possible pattern variations occurring after childhood.

 

Individual assessments are probably the most accurate starting points for creating a behavioral model, but they depend on detailed data about the specific individual. Such data are usually gathered from testing techniques; the Rorschach (or “Inkblot”) test—a projective personality assessment based on the subject’s reactions to a series of ten inkblot pictures—is an example.

 

Interaction Matrices

A textual variant of the spreadsheet (discussed later) is the interaction matrix, a valuable analytic tool for certain types of synthesis. It appears in various disciplines and under different names and is also called a parametric matrix or a traceability matrix.

 

Mathematical Models

The most common modeling problem involves solving an equation. Most problems in engineering or technical intelligence are single equations of the form y = f(x1, x2, . . . , xn).

 

Most analysis involves fixing all of the variables and constants in such an equation or system of equations, except for two variables. The equation is then solved repetitively to obtain a graphical picture of one variable as a function of another. A number of software packages perform this type of solution very efficiently. For example, as a part of radar performance analysis, the radar range equation is solved for signal-to-noise ratio as a function of range, and a two-dimensional curve is plotted. Then, perhaps, signal-to-noise ratio is fixed and a new curve plotted for radar cross-section as a function of range.
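As a sketch of that workflow, the following uses a simplified form of the radar range equation; all radar parameters are invented for illustration. The point is the repetitive solution over one free variable (range), not the specific numbers.

```python
import math

K_BOLTZMANN = 1.38e-23  # Boltzmann constant, J/K

def snr(range_m, pt=1e6, gain=10**3.5, wavelength=0.03,
        rcs=1.0, temp=290.0, bandwidth=1e6, losses=10.0):
    """Simplified radar range equation: signal-to-noise ratio vs. range.
    All radar parameters here are hypothetical, chosen for illustration."""
    num = pt * gain**2 * wavelength**2 * rcs
    den = ((4 * math.pi)**3 * range_m**4
           * K_BOLTZMANN * temp * bandwidth * losses)
    return num / den

# Fix everything except range, solve repetitively, and tabulate the curve.
for r_km in (50, 100, 150, 200):
    ratio = snr(r_km * 1000.0)
    print(f"{r_km:4d} km  SNR = {10 * math.log10(ratio):6.1f} dB")
```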

 

Often the requirement is to solve an equation, get a set of ordered pairs, and plug those into another equation to get a graphical picture rather than solving simultaneous equations.

 

Spreadsheets

 

The computer is a powerful tool for handling the equation-solution type of problem. Spreadsheet software has made it easy to create equation-based models. The rich set of mathematical functions that can be incorporated in it, and its flexibility, make the spreadsheet a widely used model in intelligence.

 

Simulation Models

 

A simulation model is a mathematical model of a real object, a system, or an actual situation. It is useful for estimating the performance of its real-world analogue under different conditions. We often wish to determine how something will behave without actually testing it in real life. So simulation models are useful for helping decision makers choose among alternative actions by determining the likely outcomes of those actions.

 

In intelligence, simulation models also are used to assess the performance of opposing weapons systems, the consequences of trade embargoes, and the success of insurgencies.

 

Simulation models can be challenging to build. The main challenge usually is validation: determining that the model accurately represents what it is supposed to represent, under different input conditions.

 

Visual Models

 

Models can be described in written text, as noted earlier. But the models that have the most impact for both analysts and customers in facilitating understanding take a visual form.

 

Visualization involves transforming raw intelligence into graphical, pictorial, or multimedia forms so that our brains can process and understand large amounts of data more readily than is possible from simply reading text. Visualization lets us deal with massive quantities of data and identify meaningful patterns and structures that otherwise would be incomprehensible.

 

 

Charts and Graphs

 

Graphical displays, often in the form of curves, are a simple type of model that can be synthesized both for analysis and for presenting the results of analysis.

 

 

Pattern Models

 

Many types of models fall under the broad category of pattern models. Pattern recognition is a critical element of all intelligence.

 

Most governmental and industrial organizations (and intelligence services) also prefer to stick with techniques that have been successful in the past. An important aspect of intelligence synthesis, therefore, is recognizing patterns of activities and then determining in the analysis phase whether (a) the patterns represent a departure from what is known or expected and (b) the changes in patterns are significant enough to merit attention. The computer is a valuable ally here; it can display trends and allow the analyst to identify them. This capability is particularly useful when trends would be difficult or impossible to find by sorting through and mentally processing a large volume of data. Pattern analysis is one way to effectively handle complex issues.

 

One type of pattern model used by intelligence analysts relies on statistics. In fact, a great deal of pattern modeling is statistical. Intelligence deals with a wide variety of statistical modeling techniques. Some of the most useful techniques are easy to learn and require no previous statistical training.

 

Histograms, which are bar charts that show a frequency distribution, are one example of a simple statistical pattern.
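A histogram can be built directly from raw event counts with nothing more than the standard library; the data here are fabricated for illustration.

```python
from collections import Counter

# Hypothetical data: number of border-crossing events observed each day.
daily_events = [3, 5, 4, 3, 7, 2, 3, 5, 6, 3, 4, 5, 12, 4, 3]

histogram = Counter(daily_events)
for value in sorted(histogram):
    print(f"{value:3d} | {'#' * histogram[value]}")
# An isolated bar far from the rest (12 here) flags a possible
# departure from the normal pattern that merits analytic attention.
```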

 

Advanced Target Models

 

The example models introduced so far are frequently used in intelligence. They’re fairly straightforward and relatively easy to create. Intelligence also makes use of four model types that are more difficult to create and to analyze, but that give more in-depth analysis. We’ll briefly introduce them here.

 

Systems Models

 

Systems models are well known in intelligence for their use in assessing the performance of weapons systems.

 

 

Systems models have been created for all of the following examples:

 

A republic, a dictatorship, or an oligarchy can be modeled as a political system.

 

Air defense systems, carrier strike groups, special operations teams, and ballistic missile systems all are modeled as military systems.

 

Economic systems models describe the functioning of capitalist or socialist economies, international trade, and informal economies.

 

Social systems include welfare or antipoverty programs, health care systems, religious networks, urban gangs, and tribal groups.

 

Infrastructure systems could include electrical power, automobile manufacturing, railroads, and seaports.

 

A news gathering, production, and distribution system is an example of an information system.

Creating a systems model requires an understanding of the system, developed by examining the linkages and interactions between the elements that compose the system as a whole.

 

 

A system has structure. It is composed of parts that are related (directly or indirectly). It has a defined boundary physically, temporally, and spatially, though it can overlap with or be a part of a larger system.

 

A system has a function. It receives inputs from, and sends outputs into, an outside environment. It is autonomous in fulfilling its function. A main battle tank standing alone is not a system. A tank with a crew, fuel, ammunition, and a communications subsystem is a system.

 

A system has a process that performs its function by transforming inputs into outputs.

 

 

Relationship Models

 

Relationships among entities—people, places, things, and events—are perhaps the most common subject of intelligence modeling. There are four levels of such relationship models, each using increasingly sophisticated analytic approaches: hierarchy, matrix, link, and network models. The four are closely related, representing the same fundamental idea at increasing levels of complexity.

 

Relationship models require a considerable amount of time to create, and maintaining the model (known to those who do it as “feeding the beast”) demands much effort. But such models are highly effective in analyzing complex problems, and the associated graphical displays are powerful in persuading customers to accept the results.

 

Hierarchy Models

 

The hierarchy model is a simple tree structure. Organizational modeling naturally lends itself to the creation of a hierarchy, as anyone who ever drew an organizational chart is aware. A natural extension of such a hierarchy is to use a weighting scheme to indicate the importance of individuals or suborganizations in it.
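A weighted hierarchy maps naturally onto a nested data structure. The sketch below uses an invented organization and arbitrary importance weights.

```python
# A weighted hierarchy: each node carries an importance weight (0-1)
# and a list of subordinate nodes. Names and weights are invented.
org = ("Leader", 1.0, [
    ("Finance chief", 0.8, [
        ("Courier", 0.3, []),
    ]),
    ("Operations chief", 0.9, [
        ("Cell A leader", 0.6, []),
        ("Cell B leader", 0.5, []),
    ]),
])

def print_tree(node, indent=0):
    """Walk the tree top-down, showing subordination and weights."""
    name, weight, children = node
    print("  " * indent + f"{name} (weight {weight})")
    for child in children:
        print_tree(child, indent + 1)

print_tree(org)
```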

 

Matrix Models

 

The interaction matrix was introduced earlier. The relationship matrix model is different. It portrays the existence of an association, known or suspected, between individuals. It usually portrays direct connections such as face-to-face meetings and telephone conversations. Analysts can use association matrices to identify those personalities and associations needing a more in-depth analysis to determine the degree of relationships, contacts, or knowledge between individuals.
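An association matrix is easy to populate from a list of observed contacts. The sketch below uses invented names and sightings; each cell counts known direct contacts between two individuals.

```python
# Hypothetical reporting: each tuple is a pair observed in direct contact.
contacts = [("Ali", "Omar"), ("Omar", "Hassan"), ("Ali", "Omar"),
            ("Hassan", "Farid")]

people = sorted({p for pair in contacts for p in pair})
index = {p: i for i, p in enumerate(people)}
matrix = [[0] * len(people) for _ in people]

for a, b in contacts:
    matrix[index[a]][index[b]] += 1   # count of known direct contacts
    matrix[index[b]][index[a]] += 1   # associations are symmetric

print("       " + " ".join(f"{p:>6}" for p in people))
for p in people:
    row = " ".join(f"{n:6d}" for n in matrix[index[p]])
    print(f"{p:>6} {row}")
```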

 

Link Models

 

A link model allows relationships to be viewed in more complex tree structures. Though it physically resembles a hierarchy model (both are trees), a link model differs in that it shows different kinds of relationships but does not indicate subordination.

 

Network Models

 

A network model can be thought of as a flexible interrelationship of multiple tree structures at multiple levels. The key limitation of the matrix model discussed earlier is that although it can deal with the interaction of two hierarchies at a given level, because it is a two-dimensional representation, it cannot deal with interactions at multiple levels or with more than two hierarchies. Network synthesis is an extension of the link or matrix synthesis concept that can handle such complex problems. There are several types of network models. Two are widely used in intelligence:

 

Social network models show patterns of human relationships. The nodes are people, and the links show that some type of relationship exists.

 

Target network models are most useful in intelligence. The nodes can be any type of entity—people, places, things, concepts—and the links show that some type of relationship exists between entities.
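Target network models of this kind are commonly built with graph libraries. The sketch below uses the third-party networkx package, with invented entities, to show how nodes of mixed types and typed links fit together.

```python
import networkx as nx  # third-party graph library

g = nx.Graph()
# Nodes in a target network can be any entity type, not just people.
g.add_node("Ali", kind="person")
g.add_node("Harbor warehouse", kind="place")
g.add_node("Front company", kind="organization")

# Links record that some relationship exists, with its type as an attribute.
g.add_edge("Ali", "Front company", relation="employee")
g.add_edge("Front company", "Harbor warehouse", relation="leases")

for a, b, attrs in g.edges(data=True):
    print(f"{a} --[{attrs['relation']}]-- {b}")
```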

 

Spatial and Temporal Models

 

Another way to examine data and to search for patterns is to use spatial modeling—depicting locations of objects in space. Spatial modeling can be used effectively on a small scale. For example, within a building, computer-aided design/computer-aided modeling, known as CAD/CAM, can be a powerful tool for intelligence synthesis. Layouts of buildings and floor plans are valuable in physical security analysis and in assessing production capacity.


 

Spatial modeling on larger scales is usually called geospatial modeling.

 

Patterns of activity over time are important for showing trends. Pattern changes are often used to compare how things are going now with how they went last year (or last decade). Estimative analysis often relies on chronological models.

 

Scenarios

Arguably the most important model for estimative intelligence purposes is the scenario, a very sophisticated model.

 

Alternative scenarios are used to model future situations. These scenarios increasingly are produced as virtual reality models because they are powerful ways to convey intelligence and are very persuasive.

Target Model Combinations

Almost all target models are actually combinations of many models. In fact, most of the models described in the previous sections can be merged into combination models. One simple example is a relationship-time display.

This is a dynamic model where link or network nodes and links (relationships) change, appear, and disappear over time.

We also typically want to have several distinct but interrelated models of the target in order to be able to answer different customer questions.

Submodels

One type of component model is a submodel, a more detailed breakout of the top-level model. It is typical, for complex targets, to have many such submodels of a target that provide different levels of detail.

Participants in the target-centric process then can reach into the model set to pull out the information they need. The collectors of information can drill down into more detail to refine collection targeting and to fill specific gaps.

The intelligence customer can drill down to answer questions, gain confidence in the analyst’s picture of the target, and understand the limits of the analyst’s work. The target model is a powerful collaborative tool.

Collateral Models

In contrast to the submodel, a collateral model may show particular aspects of the overall target model, but it is not simply a detailed breakout of a top-level model. A collateral model typically presents a different way of thinking about the target for a specific intelligence purpose.

The collateral models in Figures 6-7 to 6-9 are examples of the three general types—structural, functional, and process—used in systems analysis. Figures 6-7 and 6-8 are structural models. Figure 6-9 is both a process model and a functional model. In analyzing complex intelligence targets, all three types are likely to be used.

These models, taken together, allow an analyst to answer a wide range of customer questions.

More complex intelligence targets can require a combination of several model types. They may have system characteristics, take a network form, and have spatial and temporal characteristics.

Alternative and Competitive Target Models

Alternative and competitive models are somewhat different things, though they are frequently confused with each other.

Alternative Models

Alternative models are an essential part of the synthesis process. It is important to keep more than one possible target model in mind, especially as conflicting or contradictory intelligence information is collected.

 

“The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.” As law professor David Schum has noted, “the generation of new ideas in fact investigation usually rests upon arranging or juxtaposing our thoughts and evidence in different ways.” To do that we need multiple alternative models.

And the more inclusive you can be when defining alternative models, the better.

In studies listing the analytic pitfalls that hampered past assessments, one of the most prevalent is failure to consider alternative scenarios, hypotheses, or models.

Analysts have to guard against allowing three things to interfere with their need to develop alternative models:

  • Ego. Former director of national intelligence Mike McConnell once observed that analysts inherently dislike alternative, dissenting, or competitive views. But the opposite becomes true of analysts who operate within the target-centric approach: the focus is no longer on each other but on contributing to a shared target model.
  • Time. Analysts are usually facing tight deadlines. They must resist the temptation to go with the model that best fits the evidence without considering alternatives. Otherwise, the result is premature closure, which can cost dearly in the end.
  • The customer. Customers can view a change in judgment as evidence that the original judgment was wrong, not that new evidence forced the change. Furthermore, when presented with two or more target models, customers will tend to pick the one that they like best, which may or may not be the most likely model. Analysts know this.

 

It is the analyst’s responsibility to establish a tone of setting egos aside and of conveying to all participants in the process, including the customer, that time spent up front developing alternative models is time saved at the end if it keeps them from committing to the wrong model in haste.

Competitive Models

It is well established in intelligence that, if you can afford the resources, you should have independent groups providing competing analyses. This is because we’re dealing with uncertainty. Different analysts, given the same set of facts, are likely to come to different conclusions.

It is important to be inclusive when defining alternative or competitive models.

Summary

Creating a target model starts with defining the relevant system. The system model can be a structural, functional, or process model, or any combination. The next step is to select the generic models or model templates.

Lists and curves are the simplest form of model. In intelligence, comparative models or benchmarks are often used; almost any type of model can be made comparative, typically by creating models of one’s own system side by side with the target system model.

Pattern models are widely used in the intelligence business. Chronological models allow intelligence customers to examine the timing of related events and plan a way to change the course of these events. Geospatial models are popular in military intelligence for weapons targeting and to assess the location and movement of opposing forces.

Relationship models are used to analyze the relationships among elements of the target—organizations, people, places, and physical objects—over time. Four general types of relationship models are commonly used: hierarchy, matrix, link, and network models. The most powerful of these, network models, are increasingly used to describe complex intelligence targets.

 

Competitive and alternative target models are an essential part of the process. Properly used, they help the analyst deal with denial and deception and avoid being trapped by analytic biases. But they take time to create, analysts find it difficult to change or challenge existing judgments, and alternative models give policymakers the option to select the conclusion they prefer—which may or may not be the best choice.

 

 

 

 

 

 

 

Chapter 7

 

Creating the Model

Believe nothing you hear, and only one half that you see. – Edgar Allan Poe

This chapter describes the steps that analysts go through in populating the target model. Here, we focus on the synthesis part of the target-centric approach, often called collation in the intelligence business.

We discuss the importance of existing pieces of intelligence, both finished and raw, and how best to think about sources of new raw data.

We talk about how the credentials of evidence must be established, introduce widely used informal methods of combining evidence, and touch on structured argumentation as a formal methodology for combining evidence.

Analysts generally go through the actions described here in service to collation. They may not think about them as separate steps and in any event aren’t likely to do them in the order presented. They nevertheless almost always do the following:

 

  • Review existing finished intelligence about the target and examine existing raw intelligence
  • Acquire new raw intelligence
  • Evaluate the new raw intelligence
  • Combine the intelligence from all sources into the target model

 

Existing Intelligence

Existing finished intelligence reports typically define the current target model. So information gathering to create or revise a model begins with the existing knowledge base. Before starting an intelligence collection effort, analysts should ensure that they are aware of what has already been found on a subject.

Finished studies or reports on file at an analyst’s organization are the best place to start any research effort. There are few truly new issues.

The databases of intelligence organizations include finished intelligence reports as well as many specialized data files on specific topics. Large commercial firms typically have comparable facilities in-house, or they depend on commercially available databases.

A literature search should be the first step an analyst takes on a new project. The purpose is both to define the current state of knowledge—that is, to understand the existing model(s) of the intelligence target—and to identify the major controversies and disagreements surrounding the target model.

The existing intelligence should not be accepted automatically as fact. Few experienced analysts would blithely accept the results of earlier studies on a topic, though they would know exactly what the studies found. The danger is that, in conducting the search, an analyst naturally tends to adopt a preexisting target model.

In this case, premature closure, or a bias toward the status quo, leads the analyst to keep the existing model even when evidence indicates that a different model is more appropriate.

To counter this tendency, it’s important to do a key assumptions check on the existing model(s).

Do the existing analytic conclusions appear to be valid?

What are the premises on which these conclusions rest, and do they appear to be valid as well?

Has the underlying situation changed so that the premises may no longer apply?

Once the finished reports are in hand, the analyst should review all of the relevant raw intelligence data that already exist. Few things can ruin an analyst’s career faster than sending collectors after information that is already in the organization’s files.

Sources of New Raw Intelligence

Raw intelligence comes from a number of sources, but they typically are categorized as part of the five major “INTs” shown in this section.

 

 

 

The definitions of each INT follow:

  • Open source (OSINT). Information of potential intelligence value that is available to the general public
  • Human intelligence (HUMINT). Intelligence derived from information collected and provided by human sources
  • Measurements and signatures intelligence (MASINT). Scientific and technical intelligence obtained by quantitative and qualitative analysis of data (metric, angle, spatial, wavelength, time dependence, modulation, plasma, and hydromagnetic) derived from specific technical sensors
  • Signals intelligence (SIGINT). Intelligence comprising either individually or in combination all communications intelligence, electronics intelligence, and foreign instrumentation signals intelligence
  • Imagery intelligence (IMINT). Intelligence derived from the exploitation of collection by visual photography, infrared sensors, lasers, electro-optics, and radar sensors such as synthetic aperture radar wherein images of objects are reproduced optically or electronically on film, electronic display devices, or other media

 

The taxonomy approach in this book is quite different. It strives for a breakout that focuses on the nature of the material collected and processed, rather than on the collection means.

Traditional COMINT, HUMINT, and open-source collection are concerned mainly with literal information, that is, information in a form that humans use for communication. The basic product and the general methods for collecting and analyzing literal information are usually well understood by intelligence analysts and the customers of intelligence. Literal information requires no special exploitation after the processing step (which includes translation) to be understood. It literally speaks for itself.

Nonliteral information, in contrast, usually requires special processing and exploitation in order for analysts to make use of it.

 

The logic of this division has been noted by other writers in the intelligence business. British author Michael Herman observed that there are two basic types of collection: one produces evidence in the form of observations and measurements of things (nonliteral), and the other produces access to human thought processes.

 

The automation of data handling has been a major boon to intelligence analysts. Information collected from around the globe arrives at the analyst’s desk through the Internet or in electronic message form, ready for review and often presorted on the basis of keyword searches. A downside of this automation, however, is the tendency to treat all information in the same way. In some cases the analyst does not even know what collection source provided the information; after all, everything looks alike on the display screen. However, information must be treated differently depending on its source. And, no matter the source, all information must be evaluated before it is synthesized into the model—the subject to which we now turn.

Evaluating Evidence

The fundamental problem in weighing evidence is determining its credibility—its completeness and soundness.

Checking the quality of information used in intelligence analysis is an ongoing, continuous process. Having multiple sources on an issue is not a substitute for having good information that has been thoroughly examined. Analysts should perform periodic checks of the information base for their analytic judgments.

Evaluating the Source

  • Is the source competent (knowledgeable about the information being given)?
  • Did the source have the access needed to get the information?
  • Does the source have a vested interest or bias?

Competence

The Anglo-American judicial system deals effectively with competence: It allows people to describe what they observed with their senses because, absent disability, we are presumed competent to sense things. The judicial system does not allow the average person to interpret what he or she sensed unless the person is qualified as an expert in such interpretation.

Access

The issue of source access typically does not arise because it is assumed that the source had access. When there is reason to be suspicious about the source, however, check whether the source might not have had the claimed access.

In the legal world, checks on source access come up regularly in witness cross-examinations. One of the most famous examples was the “Almanac Trial” of 1858, in which Abraham Lincoln conducted the cross-examination. It was the dying wish of an old friend that Lincoln represent his friend’s son, Duff Armstrong, who was on trial for murder. Lincoln gave his client a tough, artful, and ultimately successful defense; in the trial’s highlight, Lincoln consulted an almanac to discredit a prosecution witness who claimed that he saw the murder clearly because the moon was high in the sky. The almanac showed that the moon was lower on the horizon, and the witness’s access—that is, his ability to see the murder—was called into question.

Vested Interest or Bias

In HUMINT, analysts occasionally encounter the “professional source” who sells information to as many bidders as possible and has an incentive to make the information as interesting as possible. Even the densest sources quickly realize that more interesting information gets them more money.

An intelligence organization faces a problem in using its own parent organization’s (or country’s) test and evaluation results: Many have been contaminated. Some of the test results are fabricated; some contain distortions or omit key points. An honestly conducted, objective test may be a rarity. Several reasons for this problem exist. Tests are sometimes conducted to prove or disprove a preconceived notion and thus unconsciously are slanted. Some results are fabricated because they would show the vulnerability or the ineffectiveness of a system and because procurement decisions often depend on the test outcomes.

Although the majority of contaminated cases probably are never discovered, history provides many examples of this issue.

In examining any test or evaluation results, begin by asking two questions:

  • Did the testing organization have a major stake in the outcome (such as the threat that a program would be canceled due to negative test results or the possibility that it would profit from positive results)?
  • Did the reported outcome support the organization’s position or interests?

If the answer to both questions is yes, be wary of accepting the validity of the test. In the pharmaceutical testing industry, for example, tests have been fraudulently conducted or the results skewed to support the regulatory approval of the pharmaceutical.

A very different type of bias can occur when collection is focused on a particular issue. This bias comes from the fact that, when you look for something in the intelligence business, you may find what you are looking for, whether or not it’s there. In looking at suspected Iraqi chemical facilities prior to 2003, analysts concluded from imagery reporting that the level of activity had increased at the facilities. But the appearance of an increase in activity may simply have been a result of an increase in imagery collection.

David Schum and Jon Morris have published a detailed treatise on human sources in intelligence analysis. They pose a set of twenty-five questions divided into four categories: source competence, veracity, objectivity, and observational sensitivity. Their questions cover in more explicit detail the three questions posed in this section about competence, access, and vested interest.

Evaluating the Communications Channel

A second basic rule of weighing evidence is to look at the communications channel through which the evidence arrives.

The accuracy of a message through any communications system decreases with the length of the link or the number of intermediate nodes.
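A back-of-the-envelope illustration: if each intermediate node relays the message’s meaning intact with some probability, end-to-end fidelity decays multiplicatively with the number of hops. The probability used here is purely illustrative.

```python
# If each hop preserves the message's meaning with probability p,
# end-to-end fidelity after n hops is roughly p**n.
p = 0.95
for hops in (1, 3, 5, 10):
    print(f"{hops:2d} hops: ~{p**hops:.0%} chance the meaning survives intact")
```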

Large and complex systems tend to have more entropy. The result is often cited as “poor communication” problems in large organizations.

In the business intelligence world, analysts recognize the importance of the communications channel by using the differentiating terms primary sources for firsthand information, acquired through discussions or other interaction directly with a human source, and secondary sources for information learned through an intermediary, a publication, or online. This division does not consider the many gradations of reliability, and national intelligence organizations commonly do not use the primary/secondary source division. Some national intelligence collection organizations use the term collateral to refer to intelligence gained from other collectors, but it does not have the same meaning as the terms primary and secondary as used in business intelligence.

It’s not unheard of (though fortunately not common) for the raw intelligence to be misinterpreted or misanalyzed as it passes through the chain. Organizational or personal biases can shape the interpretation and analysis, especially of literal intelligence. It’s also possible for such biases to shape the analysis of nonliteral intelligence, but that is a more difficult product for all-source analysts to challenge, as noted earlier.

Entropy has another effect in intelligence. An intelligence assertion that “X is a possibility” very often, over time and through diverse communications channels, can become “X may be true,” then “X probably is the case,” and eventually “X is a fact,” without a shred of new evidence to support the assertion. In intelligence, we refer to this as the “creeping validity” problem.

 

 

Evaluating the Credentials of Evidence

The major credentials of evidence, as noted earlier, are credibility, reliability, and inferential force. Credibility refers to the extent to which we can believe something. Reliability means consistency or replicability. Inferential force means that the evidence carries weight, or has value, in supporting a conclusion.

 

 

U.S. government intelligence organizations have established a set of definitions to distinguish levels of credibility of intelligence:

  • Fact. Verified information, something known to exist or to have happened.
  • Direct information. The content of reports, research, and reflection on an intelligence issue that helps to evaluate the likelihood that something is factual and thereby reduces uncertainty. This is information that can be considered factual because of the nature of the source (imagery, signal intercepts, and similar observations).
  • Indirect information. Information that may or may not be factual because of some doubt about the source’s reliability, the source’s lack of direct access, or the complex (non-concrete) character of the contents (hearsay from clandestine sources, foreign government reports, or local media accounts).

In weighing evidence, the usual approach is to ask three questions that are embedded in the oath that witnesses take before giving testimony in U.S. courts:

  • Is it true?
  • Is it the whole truth?
  • Is it nothing but the truth? (Is it relevant or significant?)

 

Is It True?

Is the evidence factual or opinion (someone else’s analysis)? If it is opinion, question its validity unless the source quotes evidence to support it.

How does it fit with other evidence? The relating of evidence—how it fits in—is best done in the synthesis phase. The data from different collection sources are most valuable when used together.

The synergistic effect of combining data from many sources both strengthens the conclusions and increases the analyst’s confidence in them.

 

 

 

  • HUMINT and OSINT are often melded together to give a more comprehensive picture of people, programs, products, facilities, and research specialties. This is excellent background information for interpreting data derived from COMINT and IMINT.
  • Data on environmental conditions during weapons tests, acquired through specialized technical collection, can be used with ELINT and COMINT data obtained during the same test event to evaluate the capabilities of the opponent’s sensor systems.
  • Identification of research institutes and their key scientists and researchers can be initially made through HUMINT, COMINT, or OSINT. Once the organization or individual has been identified by one intelligence collector, the other ones can often provide extensive additional information.
  • Successful analysis of COMINT data may require correlating raw COMINT data with external information such as ELINT and IMINT, or with knowledge of operational or technical practices.

Is It the Whole Truth?

When asking this question, it is time to do source analysis.

An incomplete picture can mislead as much as an outright lie.

 

 

 

Is It Nothing but the Truth?

It is worthwhile at this point to distinguish between data and evidence. Data become evidence only when the data are relevant to the problem or issue at hand. The simple test of relevance is whether it affects the likelihood of a hypothesis about the target.

Does it help answer a question that has been asked?

Or does it help answer a question that should be asked?

The preliminary or initial guidance from customers seldom tells what they really need to know—an important reason to keep them in the loop through the target-centric process.

Doctors encounter difficulties when they must deal with a patient who has two pathologies simultaneously. Some of the symptoms are relevant to one pathology, some to the other. If the doctor tries to fit all of the symptoms into one diagnosis, he or she is apt to make the wrong call. This is a severe enough problem for doctors, who must deal with relatively few symptoms. It is a much worse problem for intelligence analysts, who typically deal with a large volume of data, most of which is irrelevant.

Pitfalls in Evaluating Evidence

Vividness Weighting

In general, the channel for communication of intelligence should be as short as possible; but when could a short channel become a problem? If the channel is too short, the result is vividness weighting—the phenomenon that evidence that is experienced directly is strongest (“seeing is believing”). Customers place the most weight on evidence that they collect themselves—a dangerous pitfall that senior executives fall into repeatedly and that makes them vulnerable to deception.

Michael Herman tells how Churchill, reading Field Marshal Erwin Rommel’s decrypted cables during World War II, concluded that the Germans were desperately short of supplies in North Africa. Basing his interpretation on this raw COMINT traffic, Churchill pressed his generals to take the offensive against Rommel. Churchill did not realize what his own intelligence analysts could have readily told him: Rommel consistently exaggerated his shortages in order to bolster his demands for supplies and reinforcements.

Statistics are the least persuasive form of evidence; abstract (general) text is next; concrete (specific, focused, exemplary) text is a more persuasive form still; and visual evidence, such as imagery or video, is the most persuasive.

Weighing Based on the Source

One of the most difficult traps for an analyst to avoid is that of weighing evidence based on its source.

Favoring the Most Recent Evidence

Analysts often give the most recently acquired evidence the most weight.

The freshest intelligence—crisp, clear, and the focus of the analyst’s attention—often gets more weight than the fuzzy and half-remembered (but possibly more important) information that has had to travel down the long lines of time. The analyst has to remember this tendency and compensate for it. It sometimes helps to go back to the original (older) intelligence and reread it to bring it more freshly to mind.

Favoring or Disfavoring the Unknown

It is hard to decide how much weight to give to answers when little or no information is available for or against each one.

Trusting Hearsay

The chief problem with much of HUMINT (not including documents) is that it is hearsay evidence; and as noted earlier, the judiciary long ago learned to distrust hearsay for good reasons, including the biases of the source and the collector. Sources may deliberately distort or misinform because they want to influence policy or increase their value to the collector.

Finally, and most important, people can be misinformed or lie. COMINT can only report what people say, not the truth about what they say. So intelligence analysts have to use hearsay, but they must also weigh it accordingly.

Unquestioning Reliance on Expert Opinions

Expert opinion is often used as a tool for analyzing data and making estimates. Any intelligence community must often rely on its nation’s leading scientists, economists, and political and social scientists for insights into foreign developments.

Outside experts often have issues with objectivity. With experts, an analyst gets not only their expertise but also their biases; some experts have axes to grind or egos that convince them there is only one right way to do things (their way).

British counterintelligence officer Peter Wright once noted that “on the big issues, the experts are very rarely right.”

Analysts should treat expert opinion as HUMINT and be wary when the expert makes extremely positive comments (“that foreign development is a stroke of genius!”) or extremely negative ones (“it can’t be done”).

Analysis Principle 7-3

Many experts, particularly scientists, are not mentally prepared to look for deception, as intelligence officers should be. It is simply not part of the expert’s training. A second problem, as noted earlier, is that experts often are quite able to deceive themselves without any help from opponents.

Varying the way expert opinion is used is one way to attempt to head off the problems cited here. Using a panel of experts to make analytic judgments is a common method of trying to reach conclusions or to sort through a complex array of interdisciplinary data.

Such panels have had mixed results. One former CIA office director observed that “advisory panels of eminent scientists are usually useless. The members are seldom willing to commit the time to studying the data to be of much help.”

The quality of the conclusions reached by such panels depends on several variables, including the panel’s

  • Expertise
  • Motivation to produce a quality product
  • Understanding of the problem area to be addressed
  • Effectiveness in working as a group

A major advantage of the target-centric approach is that it formalizes the process of obtaining independent opinions.

Both single-source and all-source analysts have to guard against falling into the trap of reaching conclusions too early.

Premature closure also has been described as “affirming conclusions,” based on the observation that people are inclined to verify or affirm their existing beliefs rather than modify or discredit those beliefs.

The primary danger of premature closure is not that one might make a bad assessment because the evidence is incomplete. Rather, the danger is that when a situation is changing quickly or when a major, unprecedented event occurs, the analyst will become trapped by the judgments already made. Chances increase that he or she will miss indications of change, and it becomes harder to revise an initial estimate.

The counterintelligence technique of deception thrives on this tendency to ignore evidence that would disprove an existing assumption.

Denial and deception succeed if one opponent can get the other to make a wrong initial estimate.

Combining Evidence

In almost all cases, intelligence analysis involves combining disparate types of evidence.

Analysts have to have methods for weighing the combined data to help them make qualitative judgments as to which conclusions the various data best support.

Convergent and Divergent Evidence

Two items of evidence are said to be conflicting or divergent if one item favors one conclusion and the other item favors a different conclusion.

Two items of evidence are contradictory if they say logically opposing things.

Redundant Evidence

Convergent evidence can also be redundant. To understand the concept of redundancy, it helps to understand its importance in communications theory.

Redundant, or duplicative, evidence can have corroborative redundancy or cumulative redundancy. In both types, the weight of the evidence piles up to reinforce a given conclusion. A simple example illustrates the difference.
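As a brief illustration (the scenario is hypothetical): corroborative redundancy is the same fact arriving through independent channels, while cumulative redundancy is different facts that each push toward the same conclusion.

    # Hypothetical illustration of two kinds of redundant evidence.
    # Conclusion under test: "the plant produces chemical agents."

    corroborative = [  # the same fact, reported through independent channels
        ("HUMINT source", "plant received precursor chemical X in May"),
        ("COMINT intercept", "plant received precursor chemical X in May"),
    ]

    cumulative = [     # different facts, each reinforcing the same conclusion
        ("IMINT report", "special air-filtration units installed on building 3"),
        ("OSINT item", "plant advertising for toxicology staff"),
    ]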

Formal Methods for Combining Evidence

The preceding sections describe some informal methods for combining evidence. It often is important to combine evidence and to demonstrate, by careful argument, the logical process of reaching a conclusion based on that evidence. The formal process of making that argument is called structured argumentation. Such formal structured argumentation approaches have been around at least since the seventeenth century.

Structured Argumentation

Structured argumentation is an analytic process that relies on a framework to make assumptions, reasoning, rationales, and evidence explicit and transparent. The process begins with breaking down and organizing a problem into parts so that each one can be examined systematically, as discussed in earlier chapters.

As analysts work through each part, they identify the data requirements, state their assumptions, define any terms or concepts, and collect and evaluate relevant information. Potential explanations or hypotheses are formulated and evaluated with empirical evidence, and information gaps are identified.

Formal graphical or numerical processes for combining evidence are time consuming to apply and are not widely used in intelligence analysis. They are usually reserved for cases in which the customer requires them because the issue is critically important, because the customer wants to examine the reasoning process, or because the exact probabilities associated with each alternative are important to the customer.

Wigmore’s Charting Method

John Henry Wigmore was the dean of the Northwestern University School of Law in the early 1900s and author of a ten-volume treatise commonly known as Wigmore on Evidence. In this treatise he defined some principles for rational inquiry into disputed facts and methods for rigorously analyzing and ordering possible inferences from those facts.

Wigmore argued that structured argumentation brings into the open and makes explicit the important steps in an argument, and thereby makes it easier to judge both their soundness and their probative value. One of the best ways to recognize any inherent tendencies one may have in making biased or illogical arguments is to go through the body of evidence using Wigmore’s method.

  • Different symbols are used to show varying kinds of evidence: explanatory, testimonial, circumstantial, corroborative, undisputed fact, and combinations.
  • Relationships between symbols (that is, between individual pieces of evidence) are indicated by their relative positions (for example, evidence tending to prove a fact is placed below the fact symbol).
  • The connections between symbols indicate the probative effect of their relationship and the degree of uncertainty about the relationship.
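As a concrete (and much simplified) sketch, a Wigmore-style chart can be represented as a tree of evidence nodes: evidence supporting a claim is placed “below” it, as in Wigmore’s diagrams. The node kinds echo the list above; the claims are hypothetical.

    from dataclasses import dataclass, field

    # Much-simplified sketch of a Wigmore-style evidence chart.
    @dataclass
    class Node:
        claim: str
        kind: str                  # e.g., "testimonial", "circumstantial"
        supports: list = field(default_factory=list)   # evidence "below" this node

    fact = Node("Facility produces nerve agent", kind="claim")
    fact.supports.append(Node("Defector statement on production", kind="testimonial"))
    fact.supports.append(Node("Imagery shows special ventilation", kind="circumstantial"))

    def dump(node: Node, depth: int = 0) -> None:
        print("  " * depth + f"[{node.kind}] {node.claim}")
        for child in node.supports:
            dump(child, depth + 1)

    dump(fact)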

Even proponents admit that Wigmore’s method is too time-consuming for most practical uses, especially in intelligence analysis, where the analyst typically has limited time.

Nevertheless, making Wigmore’s approach, or something like it, widely usable in intelligence analysis would be a major contribution.

Bayesian Techniques for Combining Evidence

By the early part of the eighteenth century, mathematicians had solved what is called the “forward probability” problem: When all of the facts about a situation are known, what is the probability of a given event happening?

Intelligence analysts find the inverse problem—inferring the underlying situation from the events it causes—of far more interest than the forward probability problem, because they often must make judgments about an underlying situation from observing the events that the situation causes. Thomas Bayes developed a formula for the answer that bears his name: Bayes’ rule.

One advantage claimed for Bayesian analysis is its ability to blend the subjective probability judgments of experts with historical frequencies and the latest sample evidence.
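As a minimal sketch of that blending (all probabilities are hypothetical): start from a prior drawn from expert judgment or historical frequency, then update it as each new piece of evidence arrives.

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), applied sequentially.
    # All probabilities below are hypothetical.

    def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
        """Return the posterior P(H|E) after one new piece of evidence E."""
        numerator = p_e_given_h * prior
        p_e = numerator + p_e_given_not_h * (1 - prior)
        return numerator / p_e

    # Prior from expert judgment that a test facility is weapons-related.
    belief = 0.30
    # Each tuple: (P(evidence | H), P(evidence | not H)).
    evidence = [(0.60, 0.20), (0.70, 0.40)]
    for p_h, p_not_h in evidence:
        belief = bayes_update(belief, p_h, p_not_h)
    print(f"posterior = {belief:.2f}")   # 0.30 -> 0.56 -> 0.69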

Bayesian analysis, however, seems difficult to teach. It is generally considered to be “advanced” statistics and, given the difficulty that many people (including intelligence analysts) have with elementary probabilistic and statistical techniques, such a solution seems to require expertise not currently resident in the intelligence community, or available only through expensive software solutions.

A Note about the Role of Information Technology

It may be impossible for new analysts today to appreciate the markedly different work environment that their counterparts faced 40 years ago. Incoming intelligence arrived at the analyst’s desk in hard copy, to be scanned, marked up, and placed in file drawers. Details about intelligence targets—installations, persons, and organizations—were often kept on 5” × 7” cards in card catalog boxes. Less tidy analysts “filed” their most interesting raw intelligence on their desktops and cabinet tops, sometimes in stacks over 2 feet high.

IT systems allow analysts to acquire raw intelligence material of interest (incoming classified cable traffic and open source) and to search, organize, and store it electronically. Such IT capabilities have been eagerly accepted and used by analysts because of their advantages in dealing with the information explosion.

A major consequence of this information explosion is that we must deal with what is called “big data” in collating and analyzing intelligence. Big data has been defined as “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

Analysts, inundated by the flood, have turned to IT tools for extracting meaning from the data. A wide range of such tools exists, including ones for visualizing the data and identifying patterns of intelligence interest, ones for conducting statistical analysis, and ones for running simulation models. Analysts with responsibility for counterterrorism, organized crime, counternarcotics, counterproliferation, or financial fraud can choose from commercially available tools such as Palantir, CrimeLink, Analyst’s Notebook, NetMap, Orion, or VisuaLinks to produce matrix and link diagrams, timeline charts, telephone toll charts, and similar pattern displays.

Tactical intelligence units, in both the military and law enforcement, find geospatial analysis tools to be essential.

Some intelligence agencies also have in-house tools that replicate these capabilities. Depending on the analyst’s specialty, some tools may be more relevant than others. All, though, have definite learning curves and their database structures are generally not compatible with each other. The result is that these tools are used less effectively than they might be, and the absence of a single standard tool hinders collaborative work across intelligence organizations.

Summary

In gathering information for synthesizing the target model, analysts should start by reviewing existing finished and raw intelligence. This provides a picture of the current target model. It is important to do a key assumptions check at this point: Do the premises that underlie existing conclusions about the target seem to be valid?

Next, the analyst must acquire and evaluate raw intelligence about the target, and fit it into the target model—a step often called collation. Raw intelligence is viewed and evaluated differently depending on whether it is literal or nonliteral. Literal sources include open source, COMINT, HUMINT, and cyber collection. Nonliteral sources involve several types of newer and highly focused collection techniques that depend heavily on processing, exploitation, and interpretation to turn the material into usable intelligence.

Once a model template has been selected for the target, it becomes necessary to fit the relevant information into the template. Fitting the information into the model template requires a three-step process:

  • Evaluating the source, by determining whether the source (a) is competent, that is, knowledgeable about the information being given; (b) had the access needed to get the information; and (c) had a vested interest or bias regarding the information provided.
  • Evaluating the communications channel through which the information arrived. Information that passes through many intermediate points becomes distorted. Processors and exploiters of collected information can also have a vested interest or bias.
  • Evaluating the credentials of the evidence itself. This involves evaluating (a) the credibility of evidence, based in part on the previously completed source and communications channel evaluations; (b) the reliability; and (c) the relevance of the evidence. Relevance is a particularly important evaluation step; it is too easy to fit evidence into the wrong target model.

As evidence is evaluated, it must be combined and incorporated into the target model. Multiple pieces of evidence can be convergent (favoring the same conclusion) or divergent (favoring different conclusions and leading to alternative target models). Convergent evidence can also be redundant, reinforcing a conclusion.

Tools to extract meaning from data, for example, by relationship, pattern, and geospatial analysis, are used by analysts where they add value that offsets the cost of “care and feeding” of the tool. Tools to support structured argumentation are available and can significantly improve the quality of the analytic product, but whether they will find serious use in intelligence analysis is still an open question.

Denial, Deception, and Signaling

There is nothing more deceptive than an obvious fact.

Sherlock Holmes, in “The Boscombe Valley Mystery”

In evaluating evidence and developing a target model, an analyst must constantly take into account the fact that evidence may have been deliberately shaped by an opponent.

Denial and deception are major weapons in the counterintelligence arsenal of a country or organization.


They may be the only weapons available for many countries to use against highly sophisticated technical intelligence.

At the opposite extreme, the opponent may intentionally shape what the analyst sees, not to mislead but rather to send a message or signal. It is important to be able to recognize signals and to understand their meaning.

Denial

Denial and deception come in many forms. Denial is somewhat more straightforward.

Deception

Deception techniques are limited only by our imagination. Passive deception might include using decoys or having the intelligence target emulate an activity that is not of intelligence interest—making a chemical or biological warfare plant look like a medical drug production facility, for example. Decoys that have been widely used in warfare include dummy ships, missiles, and tanks.

Active deception includes misinformation (false communications traffic, signals, stories, and documents), misleading activities, and double agents (agents who have been discovered and “turned” to work against their former employers), among others.

Illicit groups (for example, terrorists) conduct most of the deception that intelligence must deal with. Illicit arms traffickers (known as gray arms traffickers) and narcotics traffickers have developed an extensive set of deceptive techniques to evade international restrictions. They use intermediaries to hide financial transactions. They change ship names or aircraft call signs en route to mislead law enforcement officials. One airline changed its corporate structure and name overnight when its name became linked to illicit activities. Gray arms traffickers use front companies and false end-user certificates.

Defense against Denial and Deception: Protecting Intelligence Sources and Methods

In the intelligence business, it is axiomatic that if you need information, someone will try to keep it from you. And we have noted repeatedly that if an opponent can model a system, he can defeat it. So your best defense is to deny your opponent an understanding of your intelligence capabilities. Without such understanding, the opponent cannot effectively conduct D&D.

For small governments, and in the business intelligence world, protection of sources and methods is relatively straightforward. Selective dissemination of and tight controls on intelligence information are possible. But a major government has too many intelligence customers to justify such tight restrictions. Thus these bureaucracies have established an elaborate system to simultaneously protect and disseminate intelligence information. This protection system is loosely called compartmentation, because it puts information in “compartments” and restricts access to the compartments.

In the U.S. intelligence community, the intelligence product, sources, and methods are protected by the sensitive compartmented information (SCI) system. The SCI system uses an extensive set of compartments to protect sources and methods. Only the collectors and processors have access to many of the compartmented materials. Much of the product, however, is protected only by standard markings such as “Secret,” and access is granted to a wide range of people.

Open-source intelligence has little or no protection because the source material is unclassified. However, the techniques for exploiting open-source material, and the specific material of interest for exploitation, can tell an opponent much about an intelligence service’s targets. For this reason, intelligence agencies that translate open source often restrict its dissemination, using markings such as “Official Use Only.”

Higher Level Denial and Deception

A few straightforward examples of denial and deception were cited earlier. But sophisticated deception must follow a careful path: it has to be very subtle (too-obvious clues are likely to give away the deception) yet not so subtle that the opponent misses it. It is commonly used in HUMINT, but today it frequently requires multi-INT participation or a “swarm” attack to be effective. Increasingly, carefully planned and elaborate multi-INT D&D is being used by various countries. Such efforts even have been given a different name—perception management—that focuses on the end result that the effort is intended to achieve.

Perception management can be effective against an intelligence organization that, through hubris or bureaucratic politics, is reluctant to change its initial conclusions about a topic. If the opposing intelligence organization makes a wrong initial estimate, then long-term deception is much easier to pull off. If D&D are successful, the opposing organization faces an unlearning process: its predispositions and settled conclusions have to be discarded and replaced.

The best perception management results from highly selective targeting, intended to get a specific message to a specific person or organization. This requires knowledge of that person’s or organization’s preferences in intelligence—a difficult feat to accomplish, but the payoff of a successful perception management effort is very high. It can result in an opposing intelligence service making a miscall or developing a false sense of security. If you are armed with a well-developed model of the three elements of a foreign intelligence strategy—targets, operations, and linkages—an effective counterintelligence counterattack in the form of perception management or covert action is possible, as the following examples show.

The Farewell Dossier

Detailed knowledge of an opponent is the key to successful counterintelligence, as the “Farewell” operation shows. In 1980 the French internal security service Direction de la Surveillance du Territoire (DST) recruited a KGB lieutenant colonel, Vladimir I. Vetrov, codenamed “Farewell.” Vetrov gave the French some four thousand documents, detailing an extensive KGB effort to clandestinely acquire technical know-how from the West, primarily from the United States.

In 1981 French president François Mitterrand shared the source and the documents (which DST named “the Farewell Dossier”) with U.S. president Ronald Reagan.

In early 1982 the U.S. Department of Defense, the Federal Bureau of Investigation, and the CIA began developing a counterattack. Instead of simply improving U.S. defenses against the KGB efforts, the U.S. team used the KGB shopping list to feed back, through CIA-controlled channels, the items on the list—augmented with “improvements” that were designed to pass acceptance testing but would fail randomly in service. Flawed computer chips, turbines, and factory plans found their way into Soviet military and civilian factories and equipment. Misleading information on U.S. stealth technology and space defense flowed into the Soviet intelligence reporting. The resulting failures were a severe setback for major segments of Soviet industry. The most dramatic single event resulted when the United States provided gas pipeline management software that was installed in the trans-Siberian gas pipeline. The software had a feature that would, at some time, cause the pipeline pressure to build up to a level far above its fracture pressure. The result was the Soviet gas pipeline explosion of 1982, described as the “most monumental non-nuclear explosion and fire ever seen from space.”

Countering Denial and Deception

In recognizing possible deception, an analyst must first understand how deception works. Four fundamental factors have been identified as essential to deception: truth, denial, deceit, and misdirection.

Truth—All deception works within the context of what is true. Truth establishes a foundation of perceptions and beliefs that are accepted by an opponent and can then be exploited in deception. Supplying the opponent with real data establishes the credibility of future communications that the opponent then relies on.

Denial—It’s essential to deny the opponent access to some parts of the truth. Denial conceals aspects of what is true, such as your real intentions and capabilities. Denial often is used when no deception is intended; that is, the end objective is simply to deny knowledge. One can deny without intent to deceive, but not the converse.

Deceit—Successful deception requires the practice of deceit.

Misdirection—Deception depends on manipulating the opponent’s perceptions. You want to redirect the opponent away from the truth and toward a false perception. In operations, a feint is used to redirect the adversary’s attention away from where the real operation will occur.


The first three factors allow the deceiver to present the target with desirable, genuine data while reducing or eliminating signals that the target needs to form accurate perceptions. The fourth provides an attractive alternative that commands the target’s attention.

The effectiveness of hostile D&D is a direct reflection of the predictability of collection.

Collection Rules

The best way to defeat D&D is for all of the stakeholders in the target-centric approach to work closely together. The two basic rules for collection, described here, form a complementary set. One rule is intended to provide incentive for collectors to defeat D&D. The other rule suggests ways to defeat it.

The first rule is to establish an effective feedback mechanism.

Relevance of the product to intelligence questions is the correct measure of collection effectiveness, and analysts and customers—not collectors—determine relevance. The system must enforce a content-oriented evaluation of the product, because content is used to determine relevance.

The second rule is to make collection smarter and less predictable. There exist several tried-and-true tactics for doing so:

  • Don’t optimize systems for quality and quantity; optimize for content.
  • Apply sensors in new ways. Analysis groups often can help with new sensor approaches in their areas of responsibility.
  • Consider provocative techniques against D&D targets.

Probing an opponent’s system and watching the response is a useful tactic for learning more about the system. Even so, probing may have its own set of undesirable consequences: the Soviets would occasionally chase and shoot down the reconnaissance aircraft to discourage the probing practice.

  • Hit the collateral or inferential targets. If an opponent engages in D&D about a specific facility, then supporting facilities may allow inferences to be made or may expose the deception. Security measures around a facility and the nature and status of nearby communications, power, or transportation facilities may provide a more complete picture.
  • Finally, use deception to protect a collection capability.

The best weapon against D&D is to mislead or confuse opponents about intelligence capabilities, disrupt their warning programs, and discredit their intelligence services.


An analyst can often beat D&D simply by using several types of intelligence—HUMINT, COMINT, and so on—in combination, simultaneously, or successively. It is relatively easy to defeat one sensor or collection channel. It is more difficult to defeat all types of intelligence at the same time.

Increasingly, opponents can be expected to use “swarm” D&D, targeting several INTs in a coordinated effort like that used by the Soviets in the Cuban missile crisis and the Indian government in the Pokhran deception.

The Information Instrument

Analysts, whether single- or all-source, are the focal points for identifying D&D. In the types of conflicts that analysts now deal with, opponents have made effective use of a weapon that relies on deception: using both traditional media and social media to paint a misleading picture of their adversaries. Nongovernmental opponents (insurgents and terrorists) have made effective use of this information instrument.


The prevalence of media reporters in all conflicts, and the easy access to social media, have given the information instrument more utility. Media deception has been used repeatedly by opponents to portray U.S. and allied “atrocities” during military campaigns in Kosovo, Iraq, Afghanistan, and Syria.

Signaling

Signaling is the opposite of denial and deception. It is the process of deliberately sending a message, usually to an opposing intelligence service.

Its use depends on a good knowledge of how the opposing intelligence service obtains and analyzes knowledge. Recognizing and interpreting an opponent’s signals is one of the more difficult challenges an analyst must face. Depending on the situation, signals can be made verbally, by actions, by displays, or by very subtle nuances that depend on the context of the signal.

In negotiations, signals can be both verbal and nonverbal.

True signals often are used in place of open declarations, to provide information while preserving the right of deniability.

Analyzing signals requires examining the content of the signal and its context, timing, and source. Statements made to the press are quite different from statements made through diplomatic channels—the latter usually carry more weight.

Signaling between members of the same culture can be subtle, with high success rates of the signal being understood. Two U.S. corporate executives can signal to each other with confidence; they both understand the rules. A U.S. executive and an Indonesian executive would face far greater risks of misunderstanding each other’s signals. The cultural differences in signaling can be substantial. Cultures differ in their reliance on verbal and nonverbal signals to communicate their messages. The more people rely on nonverbal or indirect verbal signals and on context, the more complex the signaling becomes and the easier it is for an outsider to misread.

  • In July 1990 the U.S. State Department unintentionally sent several signals that Saddam Hussein apparently interpreted as a green light to attack Kuwait. State Department spokesperson Margaret Tutwiler said, “[W]e do not have any defense treaties with Kuwait. . . .” The next day, Ambassador April Glaspie told Saddam Hussein, “[W]e have no opinion on Arab-Arab conflicts like your border disagreement with Kuwait.” And two days before the invasion, Assistant Secretary of State John Kelly testified before the House Foreign Affairs Committee that there was no obligation on our part to come to the defense of Kuwait if it were attacked.


Analytic Tradecraft in a World of Denial and Deception

Writers often use the analogy that intelligence analysis is like the medical profession.

Analysts and doctors weigh evidence and reach conclusions in much the same fashion. In fact, intelligence analysis, like medicine, is a combination of art, tradecraft, and science. Different doctors can draw different conclusions from the same evidence, just as different analysts do.

But intelligence analysts have a different type of problem than doctors do. Scientific researchers and medical professionals do not routinely have to deal with denial and deception. Though patients may forget to tell them about certain symptoms, physicians typically don’t have an opponent who is trying to deny them knowledge. In medicine, once doctors have a process for treating a pathology, it will in most cases work as expected. The human body won’t develop countermeasures to the treatment. But in intelligence, your opponent may be able to identify the analysis process and counter it. If analysis becomes standardized, an opponent can predict how you will analyze the available intelligence, and then D&D become much easier to pull off.

One cannot establish a process and retain it indefinitely.

Intelligence analysis within the context of D&D is in fact analogous to being a professional poker player, especially in the games of Seven Card Stud or Texas Hold ’em. You have an opponent. Some of the opponent’s resources are in plain sight, some are hidden. You have to observe the opponent’s actions (bets, timing, facial expressions, all of which incorporate art and tradecraft) and do pattern analysis (using statistics and other tools of science).

Summary

In evaluating raw intelligence, analysts must constantly be aware of the possibility that they may be seeing material that was deliberately provided by the opposing side. Most targets of intelligence efforts practice some form of denial. Deception—providing false information—is less common than denial because it takes more effort to execute, and it can backfire.

Defense against D&D starts with your own denial of your intelligence capabilities to opposing intelligence services.

Where one intelligence service has extensive knowledge of another service’s sources and methods, more ambitious and elaborate D&D efforts are possible. Often called perception management, these involve developing a coordinated multi-INT campaign to get the opposing service to make a wrong initial estimate. Once this happens, the opposing service faces an unlearning process, which is difficult. A high level of detailed knowledge also allows for covert actions to disrupt and discredit the opposing service.

A collaborative target-centric process helps to stymie D&D by bringing together different perspectives from the customer, the collector, and the analyst. Collectors can be more effective in a D&D environment with the help of analysts. Working as a team, they can make more use of deceptive, unpredictable, and provocative collection methods that have proven effective in defeating D&D.

Intelligence analysis is a combination of art, tradecraft, and science. In large part, this is because analysts must constantly deal with denial and deception, and dealing with D&D is primarily a matter of artfully applying tradecraft.

Systems Modeling and Analysis

Believe what you yourself have tested and found to be reasonable.

Buddha

In chapter 3, we described the target as three things: as a complex system, as a network, and as having temporal and spatial attributes.

Any entity having the attributes of structure, function, and process can be described and analyzed as a system, as noted in previous chapters.

The basic principles apply in modeling political and economic systems as well. Systems analysis can be applied to analyze both existing systems and those under development.

A government can be considered a system and analyzed in much the same way—by creating structural, functional, and process models.

Analyzing an Existing System: The Mujahedeen Insurgency

A single weapon can be defeated, as in this case, by tactics. But the proper mix of antiair weaponry could not. The mix here included surface-to-air missiles (SA-7s, British Blowpipes, and Stinger missiles) and machine guns (Oerlikons and captured Soviet Dashika machine guns). The Soviet helicopter operators could defend against some of these, but not all simultaneously. SA-7s were vulnerable to flares; Blowpipes were not. The HINDs could stay out of range of the Dashikas, but then they would be at an effective range for the Oerlikons. Unable to know what they might be hit with, Soviet pilots were likely to avoid attacking or to rely on defensive maneuvers that would make them almost ineffective—which is exactly what happened.

Analyzing a Developmental System: Methodology

In intelligence, we also are concerned about modeling a system that is under development. The first step in modeling a developmental system, and particularly a future weapons system, is to identify the system(s) under development. Two approaches traditionally have been applied in weapons systems analysis, both based on reasoning paradigms drawn from the writings of philosophers: deductive and inductive.

  • The deductive approach to prediction is to postulate desirable objectives, in the eyes of the opponent; identify the system requirements; and then search the incoming intelligence for evidence of work on the weapons systems, subsystems, components, devices, and basic research and development (R&D) required to reach those objectives.
  • The opposite, an inductive or synthesis approach, is to begin by looking at the evidence of development work and then synthesize the advances in systems, subsystems, and devices that are likely to follow.

A number of writers in the intelligence field have argued that intelligence uses a different method of reasoning—abduction, which seeks to develop the best hypothesis or inference from a given body of evidence. Abduction is much like induction, but its stress is on integrating the analyst’s own thoughts and intuitions into the reasoning process.

The deductive approach can be described as starting from a hypothesis and using evidence to test the hypothesis. The inductive approach is described as evidence-based reasoning to develop a conclusion. Evidence-based reasoning is applied in a number of professions. In medicine, it is known as evidence-based practice—applying a combination of theory and empirical evidence to make medical decisions.

Both (or all three) approaches have advantages and drawbacks. In practice, though, deduction has some advantages over induction or abduction in identifying future systems development.

The problem arises when two or more systems are under development at the same time. Each system will have its own R&D process, and it can be very difficult to separate the processes out of the mass of incoming raw intelligence. This is the “multiple pathologies” problem: when two or more pathologies are present in a patient, the symptoms are mixed together, and diagnosing the separate illnesses becomes very difficult. Generally, the deductive technique works better for dealing with the multiple pathologies issue in future systems assessments.

Once a system has been identified as being in development, analysis proceeds to the second step: answering customers’ questions about it. These questions usually are about the system’s functional, process, and structural characteristics—that is, about performance, schedule, risk, and cost.

As the system comes closer to completion, a wider group of customers will want to know what specific targets the system has been designed against, in what circumstances it will be used, and what its effectiveness will be. These matters typically require analysis of the system’s performance, including its suitability for operating in its environment or in accomplishing the mission for which it has been designed. The schedule for completing development and fielding the system, as well as associated risks, also become important. In some cases, the cost of development and deployment will be of interest.

Performance

Performance analyses are done on a wide range of systems, varying from simple to highly complex multidisciplinary systems. Determining the performance of a narrowly defined system is straightforward. More challenging is assessing the performance of a complex system such as an air defense network or a narcotics distribution network. Most complex system performance analysis is now done by using simulation, a topic to which we will return.

Comparative Modeling

Comparative modeling is similar to benchmarking, but the focus is on analyzing the performance of one group’s system or product against an opponent’s.

Comparing your country’s or organization’s developments with those of an opponent can involve four distinct fact patterns. Each pattern poses challenges that the analyst must deal with.

In short, the possibilities can be described as follows:

  • We did it—they did it.
  • We did it—they didn’t do it.
  • We didn’t do it—they did it.
  • We didn’t do it—they didn’t do it.

There are many examples of the “we did it—they did it” sort of intelligence problem, especially in industries in which competitors typically develop similar products.

In the second case, “we did it—they didn’t do it,” the intelligence officer runs into a real problem: It is almost impossible to prove a negative in intelligence. The fact that no intelligence information exists about an opponent’s development cannot be used to show that no such development exists.

The third pattern, “we didn’t do it—they did it,” is the most dangerous type that we encounter. Here the intelligence officer has to overcome opposition from skeptics in his country, because he has no model to use for comparison.

The “we didn’t do it—they did it” case also presents analysts with an opportunity to go off in the wrong direction analytically.

This sort of transposition of cause and effect is not uncommon in human source reporting. Part of the skill required of an intelligence analyst is to avoid the trap of taking sources too literally. Occasionally, intelligence analysts must spend more time than they should on problems that are even more fantastic or improbable than that of the German engine killer.


Simulation

Performance simulation typically is a parametric, sensitivity, or “what if” type of analysis; that is, the analyst posits a relationship between two variables (parameters), runs a computer analysis, examines the results, changes the input constants, and runs the simulation again.
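A minimal sketch of such a parameter sweep, with a toy performance model and hypothetical numbers:

    import math

    # Toy "what if" run: sweep one parameter of a hypothetical air-defense
    # model, recompute the output, and inspect how the result changes.
    def intercept_probability(radar_range_km: float, missile_speed_mps: float) -> float:
        reaction_window_s = (radar_range_km * 1000) / missile_speed_mps
        # Assume (hypothetically) that 60 s of reaction time yields ~63%.
        return 1 - math.exp(-reaction_window_s / 60)

    for radar_range_km in (20, 40, 80, 160):   # the parameter sweep
        p = intercept_probability(radar_range_km, missile_speed_mps=600)
        print(f"radar range {radar_range_km:>3} km -> P(intercept) = {p:.2f}")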

Simulation cases like these also illustrate a common systems analysis problem: presenting the worst-case estimate. National security plans often are made on the basis of a systems estimate; out of fear that policymakers may become complacent, an analyst will tend to make the worst case that is reasonably possible.

The Mirror-Imaging Challenge

Both comparative modeling and simulation have to deal with the risks of mirror imaging. The opponent’s system or product (such as an airplane, a missile, a tank, or a supercomputer) may be designed to do different things or to serve a different market than expected.

The risk in all systems analysis is one of mirror imaging, which is much the same as the mirror-imaging problem in decision-making.

Unexpected Simplicity

In effect, the Soviets applied a version of Occam’s razor (choose the simplest explanation that fits the facts at hand) in their industrial practice. Because they were cautious in adopting new technology, they tended to keep everything as simple as possible. They liked straightforward, proven designs. When they copied a design, they simplified it in obvious ways and got rid of the extra features that the United States tends to put on its weapons systems. The Soviets made maintenance as simple as possible, because the hardware was going to be maintained by people who did not have extensive training.

In a comparison of Soviet and U.S. small jet engine technology, the U.S. model engine was found to have 2.5 times as much materials cost per pound of weight. It was smaller and lighter than the Soviet engine, of course, but it had 12 times as many maintenance hours per flight-hour as the Soviet model, and overall the Soviet engine had a life cycle cost half that of the U.S. engine. The ability to keep things simple was the Soviets’ primary advantage over the United States in technology, especially military technology.

Quantity May Replace Quality

U.S. analysts often underestimated the number of units that the Soviets would produce. The United States needed fewer units of a given system to perform a mission, since each unit had more flexibility, quality, and performance ability than its Soviet counterpart. The United States forgot a lesson that it had learned in World War II—U.S. Sherman tanks were inferior to the German Tiger tanks in combat, but the United States deployed a lot of Shermans and overwhelmed the Tigers with numbers.

Schedule

The intelligence customer’s primary concern about systems under development usually centers on performance, as discussed previously. Estimating the schedule, however, turns on the systems development process, which is one of the many types of processes we deal with in intelligence.

Process Models

The functions of any system are carried out by processes. The processes will be different for different systems. That’s true whether you are describing an organization, a weapons system, or an industrial system. Different types of organizations, for example—civil government, law enforcement, military, and commercial organizations—will have markedly different processes. Even similar types of organizations will have different processes, especially in different cultures.

Political, military, economic, and weapons systems analysts all use specialized process-analysis techniques.

Most processes and most process models have feedback loops. Feedback allows the system to be adaptive, that is, to adjust its inputs based on the output. Even simple systems such as a home heating/air conditioning system provide feedback via a thermostat. For complex systems, feedback is essential to prevent the process from producing undesirable output. Feedback is an important part of both synthesis and analysis.
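The thermostat example can be reduced to a few lines of code; the setpoint, gain, and heat-loss values below are hypothetical:

    # Minimal feedback loop: a thermostat adjusts heater output based on the
    # difference between the measured temperature and the setpoint.
    setpoint = 20.0       # desired room temperature (deg C)
    temperature = 15.0    # measured temperature
    for step in range(10):
        error = setpoint - temperature      # feedback: compare output to goal
        heater = max(0.0, 0.5 * error)      # proportional response to the error
        temperature += heater - 0.2         # heating minus steady heat loss
        print(f"step {step}: temperature = {temperature:.1f}")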

Development Process Models

In determining the schedule for a systems development, we concentrate on examining the development process and identifying the critical points in that process.

An example development process model is shown in Figure 9-2. In this display, the process nodes are separated by function into “swim lanes” to facilitate analysis.


The Program Cycle Model

Beginning with the system requirement and progressing to production, deployment, and operations, each phase bears unique indicators and opportunities for collection and synthesis/analysis. Customers of intelligence often want to know where a major system is in this life cycle.

Different types of systems may evolve through different versions of the cycle, and product development differs somewhat from systems development. It is therefore important for the analyst to first determine the specific names and functions of the cycle phases for the target country, industry, or company and then determine exactly where the target program is in that cycle. With that information, analytic techniques can be used to predict when the program might become operational or begin producing output.

It is important to know where a program is in the cycle in order to make accurate predictions.

A general rule of thumb is that the more phases in the program cycle, the longer the process will take, all other things being equal. Countries and organizations with large, stable bureaucracies typically have many phases, and the process, whatever it may be, takes that much longer.

Program Staffing

The duration of any stage of the cycle shown in the Generic Program Cycle is determined by the type of work involved and the number and expertise of workers assigned.

 

Fred Brooks, one of the premier figures in computer systems development, defined four types of projects in his book The Mythical Man-Month. Each type of project has a unique relationship between the number of workers needed (the project loading) and the time it takes to complete the effort.

The first type of project is a perfectly partitionable task—that is, one that can be completed in half the time by doubling the number of workers.

A second type of project involves the unpartitionable task. … The profile is referred to here as the “baby production curve,” because no matter how many women are assigned to the task, it takes nine months to produce a baby.

Most small projects fit the curve shown in the lower left of the figure, which is a combination of the first two curves. In this case a project can be partitioned into subtasks, but the time it takes for people working on different subtasks to communicate with one another will eventually balance out the time saved by adding workers, and the curve levels off.

Large projects tend to be dominated by communication. At some point, shown as the bottom point of the lower right curve, adding additional workers begins to slow the project because all workers have to spend more time in communication.
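These profiles can be captured in a toy model (the constants are hypothetical): completion time is the partitioned work divided among n workers, plus a communication term that grows with the number of worker pairs.

    # Toy Brooks-style schedule model: completion time with n workers.
    # work = total effort (person-months); comm = overhead per worker pair.
    def completion_time(work: float, n: int, comm: float) -> float:
        partitioned = work / n                  # perfectly partitionable share
        coordination = comm * n * (n - 1) / 2   # pairwise communication cost
        return partitioned + coordination

    for n in (2, 4, 8, 16, 32):
        t = completion_time(work=100, n=n, comm=0.05)
        print(f"{n:>2} workers -> {t:.1f} months")

With these constants the curve bottoms out around a dozen workers and then rises again—the communication-dominated regime described above.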

The Technology Factor

Technology is another important factor in any development schedule; and technology is neither available nor applied in the same way everywhere. An analyst in a technologically advanced country, such as the United States, tends to take for granted that certain equipment—test equipment, for example—will be readily available and will be of a certain quality.

There is also a definite schedule advantage to not being the first to develop a system. A country or organization that is not a leader in technology development has the advantage of learning from the leader’s mistakes, an advantage that includes keeping research and development costs low and avoiding wrong paths.

A basic rule of engineering is that you are halfway to a solution when you know that there is a solution, and you are three-quarters there when you know how a competitor solved the problem. It took much less time for the Soviets to develop atomic and hydrogen bombs than U.S. intelligence had predicted. The Soviets had no principles of impotence or doubts to slow them down.

Risk

Analysts often assume that the programs and projects they are evaluating will be completed on time and that the target system will work perfectly. They would seldom be so foolish in evaluating their own projects or the performance of their own organizations. Risk analysis needs to be done in any assessment of a target program.

It is typically difficult to do and, once done, difficult to get the customer to accept. But it is important to do because intelligence customers, like many analysts, also tend to assume that an opponent’s program will be executed perfectly.

One fairly simple but often overlooked approach to evaluating the probability of success is to examine the success rate of similar ventures.

Known risk areas can be readily identified from past experience and from discussions with technical experts who have been through similar projects. The risks fall into four major categories—programmatic, technical, production, and engineering. Analyzing potential problems requires identifying specific potential risks from each category. Some of these risks include the following:


  • Programmatic: funding, schedule, contract relationships, political issues
  • Technical: feasibility, survivability, system performance
  • Production: manufacturability, lead times, packaging, equipment
  • Engineering: reliability, maintainability, training, operations

Risk assessment quantifies risks and ranks them to establish those of most concern. A typical ranking is based on the risk factor, a mathematical combination of the probability of failure and the consequence of failure. This assessment requires a combination of expertise and software tools in a structured and consistent approach to ensure that all risk categories are considered and ranked.
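One common combination rule (an assumption here; practices vary) scores both terms between 0 and 1 and computes the risk factor as RF = Pf + Cf − Pf·Cf. A minimal ranking sketch with hypothetical values:

    # Hypothetical risk ranking. Each risk is scored for probability of
    # failure (pf) and consequence of failure (cf), both on a 0..1 scale.
    risks = {
        "guidance software schedule": (0.7, 0.4),   # programmatic
        "seeker feasibility":         (0.4, 0.9),   # technical
        "airframe lead times":        (0.3, 0.5),   # production
    }

    def risk_factor(pf: float, cf: float) -> float:
        return pf + cf - pf * cf   # assumed combination rule; stays in 0..1

    ranked = sorted(risks.items(), key=lambda kv: risk_factor(*kv[1]), reverse=True)
    for name, (pf, cf) in ranked:
        print(f"{name:28s} RF = {risk_factor(pf, cf):.2f}")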

Risk management defines alternative paths to minimize risk and sets criteria for initiating or terminating those activities. It includes identifying alternatives, options, and approaches to mitigation.

Examples are initiation of parallel developments (for example, funding two manufacturers to build a satellite, where only one satellite is needed), extensive development testing, addition of simulations to check performance predictions, design reviews by consultants, or focused management attention on specific elements of the program. A number of decision analysis tools are useful for risk management. The most widely used tool is the Program Evaluation and Review Technique (PERT) chart, which shows the interrelationships and dependencies among tasks in a program on a timeline.
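The scheduling logic behind a PERT chart is the critical path: the longest chain of dependent tasks, which sets the minimum program duration. A minimal sketch using the networkx library (the tasks and durations are hypothetical):

    import networkx as nx

    # Hypothetical development program: each edge's weight is the duration
    # (in months) of the activity leading to the next milestone.
    g = nx.DiGraph()
    g.add_weighted_edges_from([
        ("design", "prototype", 6),
        ("design", "tooling", 6),
        ("prototype", "testing", 9),
        ("tooling", "testing", 4),
        ("testing", "production", 5),
    ])

    critical_path = nx.dag_longest_path(g, weight="weight")
    duration = nx.dag_longest_path_length(g, weight="weight")
    print(critical_path, f"-> minimum duration: {duration} months")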

Cost

Systems analysis usually doesn’t focus heavily on cost estimates. The usual assumption is that costs will not keep the system from being completed. Sometimes, though, the costs are important because of their effect on the overall economy of a country.

Estimating the cost of a system usually starts with comparative modeling. That is, you begin with an estimate of what it would cost your organization or an industry in your country to build something. You multiply that number by a factor that accounts for the difference in costs of the target organization (and they will always be different).

When several system models are being considered, cost-utility analysis may be necessary. Cost-utility analysis is an important part of decision prediction. Many decision-making processes, especially those that require resource allocation, make use of cost-utility analysis. For an analyst assessing a foreign military’s decision whether to produce a new weapons system, it is a useful place to start. But the analyst must be sure to take “rationality” into account. As noted earlier, what is “rational” is different across cultures and from one individual to the next. It is important for the analyst to understand the logic of the decision maker—that is, how the opposing decision maker thinks about topics such as cost and utility.

In performing cost-utility analysis, the analyst must match cost figures to the same time horizon over which utility is being assessed. This is difficult when the horizon extends more than a few years. Life-cycle costs should be considered for new systems, and many new systems have life cycles in the tens of years.
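Matching costs to a long horizon usually means discounting future costs to present value, so that life-cycle cost and utility are compared on the same basis. A minimal sketch (the rate and cost figures are hypothetical):

    # Hypothetical life-cycle cost, discounted to present value.
    discount_rate = 0.05
    acquisition = 400.0          # year-0 acquisition cost ($M)
    operating = [30.0] * 20      # annual operating cost over a 20-year life ($M)

    present_value = acquisition + sum(
        cost / (1 + discount_rate) ** (year + 1)
        for year, cost in enumerate(operating)
    )
    print(f"life-cycle cost (present value): ${present_value:.0f}M")   # ~ $774M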

Operations Research

A number of specialized methodologies are used to do systems analysis. Operations research is one of the more widely used ones.

Operations research has a rigorous process for defining problems that can be usefully applied in intelligence. As one specialist in the discipline has noted, “It often occurs that the major contribution of the operations research worker is to decide what is the real problem.” Understanding the problem often requires understanding the environment and/or system in which an issue is embedded, and operations researchers do that well.

After defining the problem, the operations research process requires representing the system in mathematical form. That is, the operations researcher builds a computational model of the system and then manipulates or solves the model, using computers, to come up with an answer that approximates how the real-world system should function. Systems of interest in intelligence are characterized by uncertainty, so probability analysis is a commonly used approach.

Two widely used operations research techniques are linear programming and network analysis. They are used in many fields, such as network planning, reliability analysis, capacity planning, expansion capability determination, and quality control.

Linear Programming

Linear programming involves planning the efficient allocation of scarce resources, such as material, skilled workers, machines, money, and time.

Linear programs are simply systems of linear equations or inequalities that are solved in a manner that yields as its solution an optimum value—the best way to allocate limited resources, for example. The optimum value is based on a single goal statement (provided to the program in the form of what is called a linear objective function). Linear programming is often used in intelligence for estimating production rates, though it has applicability in a wide range of disciplines.
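A minimal sketch of such a production-rate problem, using scipy’s linprog (the factory numbers are hypothetical): estimate the output mix of two missile types given limited supplies of engines and guidance units.

    from scipy.optimize import linprog

    # Hypothetical production estimate: maximize 3*x1 + 2*x2 (relative value
    # of two missile types). linprog minimizes, so the objective is negated.
    c = [-3, -2]
    A_ub = [
        [2, 1],   # engines consumed per unit of type 1 and type 2
        [1, 1],   # guidance units consumed per unit
    ]
    b_ub = [100, 80]   # engines and guidance units available

    result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(result.x, -result.fun)   # optimal mix (20, 60), total value 180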

Network Analysis

In chapter 10 we’ll investigate the concept of network analysis as applied to relationships among entities. Network analysis in an operations research sense is not the same. Here, networks are interconnected paths over which things move. The things can be automobiles (in which case we are dealing with a network of roads), oil (with a pipeline system), electricity (with wiring diagrams or circuits), information signals (with communication systems), or people (with elevators or hallways).

In intelligence against networks, we frequently are concerned with things like maximum throughput of the system, the shortest (or cheapest) route between two or more locations, or bottlenecks in the system.
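The throughput and bottleneck questions map directly onto maximum-flow computations. A minimal sketch with the networkx library (the pipeline capacities are hypothetical):

    import networkx as nx

    # Hypothetical pipeline network; capacities in thousands of barrels/day.
    g = nx.DiGraph()
    g.add_edge("field", "pump_a", capacity=50)
    g.add_edge("field", "pump_b", capacity=30)
    g.add_edge("pump_a", "refinery", capacity=35)
    g.add_edge("pump_b", "refinery", capacity=30)

    flow_value, flows = nx.maximum_flow(g, "field", "refinery")
    # The pump-to-refinery links are the bottleneck: 35 + 30 = 65.
    print(f"maximum throughput: {flow_value}k bbl/day")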

Summary

Any entity having the attributes of structure, function, and process can be described and analyzed as a system. Systems analysis is used in intelligence extensively for assessing foreign weapons systems performance. But it also is used to model political, economic, infrastructure, and social systems.

Modeling the structure of a system can rely on an inductive, a deductive, or an abductive approach.

Functional assessments typically require analysis of a system’s performance. Comparative performance analysis is widely used in such assessments. Simulations are used to prepare more sophisticated predictions of a system’s performance.

Process analysis is important for assessing organizations and systems in general. Organizational processes vary by organization type and across cultures. Process analysis also is used to determine systems development schedules and in looking at the life cycle of a program. Program staffing and the technologies involved are other factors that shape development schedules.

10

Network Modeling and Analysis

Future conflicts will be fought more by networks than by hierarchies, and whoever masters the network form will gain major advantages.

John Arquilla and David Ronfeldt, RAND Corporation

In intelligence, we’re concerned with many types of networks: communications, social, organizational, and financial networks, to name just a few. The basic principles of modeling and analysis apply across most types of networks.

Intelligence has the job of providing an advantage in conflicts by reducing uncertainty.

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It has been used for years in the U.S. intelligence community against targets such as terrorist groups and narcotics traffickers. The netwar model of multidimensional conflict between opposing networks is more and more applicable to all intelligence, and network analysis is our tool for examining the opposing network.

First, a few definitions:

 

  • Network—that group of elements forming a unified whole, also known as a system
  • Node—an element of a system that represents a person, place, or physical thing
  • Cell—a subordinate organization formed around a specific process, capability, or activity within a designated larger organization
  • Link—a behavioral, physical, or functional relationship between nodes

 

Link Models

Link modeling has a long history; the Los Angeles Police Department reportedly first used it in the 1940s as a tool for assessing organized crime networks. Its primary purpose was to display relationships among people or between people and events. Link models demonstrated their value in discerning the complex and typically circuitous ties between entities.

Some types of link diagrams are referred to as horizontal relevance trees. Their essence is the graphical representation of (a) nodes and their connection patterns or (b) entities and relationships.

Most humans simply cannot assimilate all the information collected on a topic over the course of several years. Yet a typical goal of intelligence synthesis and analysis is to develop precise, reliable, and valid inferences (hypotheses, estimations, and conclusions) from the available data for use in strategic decision-making or operational planning. Link models directly support such inferences.

The primary purpose of link modeling is to facilitate the organization and presentation of data to assist the analytic process. A major part of many assessments is the analysis of relationships among people, organizations, locations, and things. Once the relationships have been created in a database system, they can be displayed and analyzed quickly in a link analysis program.

To be useful in intelligence analysis, the links should not only identify relationships among data items but also show the nature of their ties. A subject-verb-object display has been used in the intelligence community for several decades to show the nature of such ties, and it is sometimes used in link displays.

Quantitative and temporal (date stamping) relationships have also been used when the display software has a filtering capability. Filters allow the user to focus on connections of interest and can simplify by several orders of magnitude the data shown in a link display.

Link modeling has been almost completely replaced by network modeling, discussed next, because network modeling offers a number of advantages in dealing with complex networks.

Network Models

Most modeling and analysis in intelligence today focuses on networks.

Some Network Types

A target network can include friendly or allied entities.

It can include neutrals that your customer wishes to influence—either to become an ally or to remain neutral.

Social Networks

When intelligence analysts talk about network analysis, they often mean social network analysis (SNA). SNA involves identifying and assessing the relationships among people and groups—the nodes of the network. The links show relationships or transactions between nodes. So a social network model provides a visual display of relationships among people, and SNA provides a visual or mathematical analysis of the relationships. SNA is used to identify key people in an organization or social network and to model the flow of information within the network.

Organizational Networks

Management consultants often use SNA methodology with their business clients, referring to it as organizational network analysis. It is a method for looking at communication and social networks within a formal organization. Organizational network modeling is used to create statistical and graphical models of the people, tasks, groups, knowledge, and resources of organizations.

Commercial Networks

In competitive intelligence, network analysis tends to focus on networks where the nodes are organizations.

As Babson College professor and business analyst Liam Fahey noted, competition in many industries is now as much competition between networked enterprises as between individual firms.

Fahey has described several such networks and defined five principal types:

  • Vertical networks. Networks organized across the value chain; for example, 3M Corporation goes from mining raw materials to delivering finished products.
  • Technology networks. Alliances with technology sources that allow a firm to maintain technological superiority, such as the CISCO Systems network.
  • Development networks. Alliances focused on developing new products or processes, such as the multimedia entertainment venture DreamWorks SKG.
  • Ownership networks. Networks in which a dominant firm owns part or all of its suppliers, as do the Japanese keiretsu.
  • Political networks. Those focused on political or regulatory gains for their members; for example, the National Association of Manufacturers.

Hybrids of the five are possible, and in some cultures such as in the Middle East and Far East, families can be the basis for a type of hybrid business network.

 

Financial Networks

Financial networks tend to feature links among organizations, though individuals can be important nodes, as in the Abacha family funds-laundering case. These networks focus on topics such as credit relationships, financial exposures between banks, liquidity flows in the interbank payment system, and funds-laundering transactions. The relationships among financial institutions, and the relationships of financial institutions with other organizations and individuals, are best captured and analyzed with network modeling.

Global financial markets are interconnected and therefore amenable to large-scale modeling. Analysis of financial system networks helps economists to understand systemic risk and is key to preventing future financial crises.

Threat Networks

Military and law enforcement organizations define a specific type of network, called a threat network. These are networks that are opposed to friendly networks.

Such networks have been defined as being “comprised of people, processes, places, and material—components that are identifiable, targetable, and exploitable.”

A premise of threat network modeling is that all such networks have vulnerabilities that can be exploited. Intelligence must provide an understanding of how the network operates so that customers can identify actions to exploit the vulnerabilities.

Threat networks, no matter their type, can access political, military, economic, social, infrastructure, and information resources. They may connect to social structures in multiple ways (kinship, religion, former association, and history)—providing them with resources and support. They may make use of the global information networks, especially social media, to obtain recruits and funding and to conduct information operations to gain recognition and international support.

Other Network Views

Target networks can be a composite of the types described so far. That is, they can have social, organizational, commercial, and financial elements, and they can be threat networks. But target networks can be labeled another way. They generally take one of the following relationship forms:

  • Functional networks. These are formed for a specific purpose. Individuals and organizations in this network come together to undertake activities based primarily on the skills, expertise, or particular capabilities they offer. Commercial networks, crime syndicates, and insurgent groups all fall under this label.
  • Family and cultural networks. Some members or associates have familial bonds that may span generations. Or the network shares bonds due to a shared culture, language, religion, ideology, country of origin, and/or sense of identity. Friendship networks fall into this category, as do proximity networks—where the network has bonds due to geographic or proximity ties (such as time spent together in correctional institutions).
  • Virtual networks. These are a relatively new phenomenon. In these networks, participants seldom (possibly never) physically meet but work together through the Internet or some other means of communication. Networks involved in online fraud, theft, or funds laundering are usually virtual networks. Social media often are used to operate virtual networks.

Modeling the Network

Target networks can be modeled manually, or by using computer algorithms to automate the process. Using open-source and classified HUMINT or COMINT, an analyst typically goes through the following steps in manually creating a network model:

  • Understand the environment.

You should start by understanding the setting in which the network operates. That may require looking at all six of the PMESII factors that constitute the environment, and almost certainly at more than one of these factors. This approach applies to most networks of intelligence interest, again recognizing that “military” refers to that part of the network that applies force (usually physical force) to serve network interests. Street gangs and narcotics traffickers, for example, typically have enforcement arms.

  • Select or create a network template.

Pattern analysis, link analysis, and social network analysis are the foundational analytic methods that enable intelligence analysts to begin templating the target network. To begin with, are the networks centralized or decentralized? Are they regional or transnational? Are they virtual, familial, or functional? Are they a combination? This information provides a rough idea of their structure, their adaptability, and their resistance to disruption.

  • Populate the network.

If you don’t have a good idea what the network template looks like, you can apply a technique that is sometimes called “snowballing.” You begin with a few key members of the target network. Then add nodes and linkages based on the information these key members provide about others. Over time, COMINT and other collection sources (open source, HUMINT) allow the network to be fleshed out. You identify the nodes, name them, and determine the linkages among them. You also typically need to determine the nature of the link. For example, is it a familial link, a transactional link, or a hostile link?
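A minimal Python sketch of the snowballing step, using NetworkX; the member names, reports, and link types are all hypothetical:

```python
# Snowballing a target network model: begin with a few key members, then
# add nodes and typed links as new reporting names their contacts.
import networkx as nx

net = nx.Graph()
net.add_nodes_from(["A", "B"])            # key members identified first

# Each report names two entities and the nature of their tie
reports = [("A", "C", "familial"),
           ("B", "C", "transactional"),
           ("C", "D", "hostile")]

for member, contact, tie in reports:
    net.add_edge(member, contact, link_type=tie)   # flesh out the network

print(list(net.nodes), list(net.edges(data=True)))
```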

Computer-Assisted and Automated Modeling

Although manual modeling is still used, commercial network tools such as Analyst’s Notebook and Palantir are now available to help. One option for using these tools is to enter the data manually but to rely on the tool to create and manipulate the network model electronically.

Analyzing the Network

Analyzing a network involves answering the classic questions—who, what, where, when, how, and why—and placing the answers in a format that the customer can understand and act upon, what is known as “actionable intelligence.” Analysis of the network pattern can help identify the what, when, and where. Social network analysis typically identifies who. And nodal analysis can tell how and why.

Nodal Analysis

As noted throughout this book, nodes in a target network can include persons, places, objects, and organizations (which also could be treated as separate networks). Where the node is an organization, it may be appropriate to assess the role of the organization in the larger network—that is, to simply treat it as a node.

The usual purpose of nodal analysis is to identify the most critical nodes in a target network. This requires analyzing the properties of individual nodes, and how they affect or are affected by other nodes in the network. So the analyst must understand the behavior of many nodes and, where the nodes are organizations, the activities taking place within the nodes.

Social Network Analysis

Social network analysis, in which all of the network nodes are persons or groups, is widely used in the social sciences, especially in studies of organizational behavior. In intelligence, as noted earlier, we more frequently use target network analysis, in which almost anything can be a node.

 

To understand a social network, we need a full description of the social relationships in the network. Ideally, we would know about every relationship between each pair of actors in the network.

In summary, SNA is a tool for understanding the internal dynamics of a target network and how best to attack, exploit, or influence it. Instead of assuming that taking out the leader will disrupt the network, SNA helps to identify the distribution of power in the network and the influential nodes—those that can be removed or influenced to achieve a desired result. SNA also is used to describe how a network behaves and how its connectivity shapes its behavior.

Several analytic concepts that come along with SNA also apply to target network analysis. The most useful concepts are centrality and equivalence. These are used today in the analysis of intelligence problems related to terrorism, arms networks, and illegal narcotics organizations.

The extent to which an actor can reach others in the network is a major factor in determining the power that the actor wields. Three basic sources of this advantage are high degree, high closeness, and high betweenness.

Actors who have many network ties have greater opportunities because they have choices. Their rich set of choices makes them less dependent than those with fewer ties and hence more powerful.

The network centrality of the individuals removed will determine the extent to which the removal impedes continued operation of the activity. Thus centrality is an important ingredient (but by no means the only one) in considering the identification of network vulnerabilities.
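All three measures are implemented in standard SNA software. A minimal Python sketch using NetworkX’s implementations, run on an invented five-person network:

```python
import networkx as nx

G = nx.Graph([("A", "B"), ("A", "C"), ("A", "D"), ("D", "E")])

degree      = nx.degree_centrality(G)       # share of direct ties
closeness   = nx.closeness_centrality(G)    # how near a node is to all others
betweenness = nx.betweenness_centrality(G)  # how often a node bridges shortest paths

# Node A scores highest on all three measures, flagging it as a
# candidate critical node whose removal would most impede the network.
print(degree, closeness, betweenness)
```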

A second analytic concept that accompanies SNA is equivalence. The disruptive effectiveness of removing one individual or a set of individuals from a network (such as by making an arrest or hiring a key executive away from a business competitor) depends not only on the individual’s centrality but also on some notion of his uniqueness, that is, on whether or not he has equivalents.

The notion of equivalence is useful for strategic targeting and is tied closely to the concept of centrality. If nodes in the social network have a unique role (no equivalents), they will be harder to replace.

Network analysis literature offers a variety of concepts of equivalence. Three in particular are quite distinct and, between them, seem to capture most of the important ideas on the subject. The three concepts are substitutability, stochastic equivalence, and role equivalence. Each can be important in specific analysis and targeting applications.

Substitutability is easiest to understand; it can best be described as interchangeability. Two objects or persons in a category are substitutable if they have identical relationships with every other object in the category.

Individuals who have no network substitutes usually make the most worthwhile targets for removal.

Substitutability also has relevance to detecting the use of aliases. The use of an alias by a criminal will often show up in a network analysis as the presence of two or more substitutable individuals (who are in reality the same person with an alias). The interchangeability of the nodes actually indicates the interchangeability of the names.
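A minimal Python sketch of the substitutability test under the simple definition above (identical relationships with every other node); the network and the alias pair “Smith”/“Jones” are invented:

```python
import networkx as nx

G = nx.Graph([("Smith", "X"), ("Smith", "Y"),
              ("Jones", "X"), ("Jones", "Y")])

def substitutable(g, a, b):
    """True if a and b have identical relationships with every other node."""
    return set(g[a]) - {b} == set(g[b]) - {a}

# Two interchangeable nodes may in fact be one person using an alias.
print(substitutable(G, "Smith", "Jones"))   # True
```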

Stochastic equivalence is a slightly more sophisticated idea. Two network nodes are stochastically equivalent if the probabilities of their being linked to any other particular node are the same. Narcotics dealers working for one distribution organization could be seen as stochastically equivalent if they, as a group, all knew roughly 70 percent of the group, did not mix with dealers from any other organizations, and all received their narcotics from one source.

Role equivalence means that two individuals play the same role in different organizations, even if they have no common acquaintances at all. Substitutability implies role equivalence, but not the converse.

Stochastic equivalence and role equivalence are useful in creating generic models of target organizations and in targeting by analogy—for example, the explosives expert is analogous to the biological expert in planning collection, analyzing terrorist groups, or attacking them.

Organizational Network Analysis

Organizational network analysis is a well-developed discipline for analyzing organizational structure. The traditional hierarchical description of an organizational structure does not sufficiently portray entities and their relationships.

The typical organization also is a system that can be viewed (and analyzed) from the same three perspectives previously discussed: structure, function, and process.

Structure here refers to the components of the organization, especially people and their relationships; this chapter deals with that.

Function refers to the outcome or results produced and tends to focus on decision making.

Process describes the sequences of activities and the expertise needed to produce the results or outcome. Fahey, in his assessment of organizational infrastructure, described four perspectives: structure, systems, people, and decision-making processes. Whatever their names, all three (or four, following Fahey’s example) perspectives must be considered.

Depending on the goal, an analyst may need to assess the network’s mission, its power distribution, its human resources, and its decision-making processes. The analyst might ask questions such as, Where is control exercised? Which elements provide support services? Are their roles changing? Network analysis tools are valuable for this sort of analysis.

Threat Network Analysis

We want to develop a detailed understanding of how a threat network functions by identifying its constituent elements, learning how its internal processes work to carry out operations, and seeing how all of the network components interact.

Assessing threat networks requires, among other things, looking at the

  • Command-and-control structure. Threat networks can be decentralized, or flat. They can be centralized, or hierarchical. The structures will vary, but they are all designed to facilitate the attainment of the network’s goals and continued survival.
  • Closeness. This is a measure of the members’ shared objectives, kinship, ideology, religion, and personal relations that bond the network and facilitate recruiting new members.
  • Expertise. This includes the knowledge, skills, and abilities of group leaders and members.
  • Resources. These include weapons, money, social connections, and public support.
  • Adaptability. This is a measure of the network’s ability to learn and adjust behaviors and modify operations in response to opposing actions.
  • Sanctuary. These are locations where the network can safely conduct planning, training, and resupply.

Primary among these characteristics is the network’s ability to adapt over time, specifically to blend into the local population, to quickly replace losses of key personnel, and to recruit new members. The networks also tend to be difficult to penetrate because of their insular nature and the bonds that hold them together. They typically are organized into cells in a loose network where the loss of one cell does not seriously degrade the entire network.

To carry out its functions, a network must engage in activities that expose parts of it to countermeasures. Cells must communicate with one another and with the network’s leadership, exposing the network to discovery and mapping of its links.

Target Network Analysis

As we have said, in intelligence work we usually apply an extension of social network analysis that retains its basic concepts. So the techniques described earlier for SNA work for almost all target networks. But whereas all of the entities in SNA are people, again, in target network analysis they can be anything.

Automating the Analysis

Target network analysis has become one of the principal tools for dealing with complex systems, thanks to new, computer-based analytic methods. One tool that has been useful in assessing threat networks is the Organization Risk Analyzer (called *ORA) developed by the Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University. *ORA is able to group nodes and identify patterns of analytic significance. It has been used to identify key players, groups, and vulnerabilities, and to model network changes over space and time.

Intelligence analysis relies heavily on graphical techniques to represent the descriptions of target networks compactly. The underlying mathematical techniques allow us to use computers to store and manipulate the information quickly and more accurately than we could by hand.

Summary

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It is widely used in analysis disciplines. It is derived from link modeling, which organizes and presents raw intelligence in a visual form such that relationships among nodes (which can be people, places, things, organizations, or events) can be analyzed to extract finished intelligence.

We prefer to have network models created and updated automatically from raw intelligence data by software algorithms. Although some software tools exist for doing that, the analyst still must evaluate the sources and validate the results.

 

 

11

Geospatial and Temporal Modeling and Analysis

GEOINT is the professional practice of integrating and interpreting all forms of geospatial data to create historical and anticipatory intelligence products used for planning or that answer questions posed by decision-makers.

This definition incorporates the key ideas of an intelligence mission: all-source analysis and modeling in both space and time (from “historical and anticipatory”). These models are frequently used in analysis; insights about networks are often obtained by examining them in spatial and temporal ways.

  • During World War II, although the Germans maintained censorship as effectively as anyone else, they did publish their freight tariffs on all goods, including petroleum products. Working from those tariffs, a young U.S. Office of Strategic Services analyst, Walter Levy, conducted geospatial modeling based on the German railroad network to pinpoint the exact location of the refineries, which were subsequently targeted by allied bombers.

Static Geospatial Models

In the most general case, geospatial modeling is done in both space and time. But sometimes only a snapshot in time is needed.

Human Terrain Modeling

U.S. ground forces in Iraq and Afghanistan in the past few years have rediscovered and refined a type of static geospatial model that was used in the Vietnam War, though its use dates far back in history. Military forces now generally consider what they call “human terrain mapping” as an essential part of planning and conducting operations in populated areas.

In combating an insurgency, military forces have to develop a detailed model of the local situations that includes political, economic, and sociological information as well as military force information.

It involves acquiring the following details about each village and town:

  • The boundaries of each tribal area (with specific attention to where they adjoin or overlap)
  • Location and contact information for each sheik or village mukhtar and for government officials
  • Locations of mosques, schools, and markets
  • Patterns of activity such as movement into and out of the area; waking, sleeping, and shopping habits
  • Nearest locations and checkpoints of security forces
  • Economic driving forces including occupation and livelihood of inhabitants; employment and unemployment levels
  • Anti-coalition presence and activities
  • Access to essential services such as fuel, water, emergency care, and fire response
  • Particular local population concerns and issues

Human terrain mapping, or more correctly human terrain modeling, is an old intelligence technique. One of the earliest recorded examples appears in the Book of Numbers, when Moses sent spies into Canaan with specific instructions about the land and its inhabitants.

Though Moses’s HUMINT mission failed because of poor analysis by the spies, it remains an excellent example of specific collection tasking as well as of the history of human terrain mapping.

1919 Paris Peace Conference

In 1917 President Woodrow Wilson established a study group to prepare materials for peace negotiations that would conclude World War I. He eventually tapped geographer Isaiah Bowman to head a group of 150 academics to prepare the study. It covered the languages, ethnicities, resources, and historical boundaries of Europe. With support from the American Geographical Society, Bowman directed the production of over three hundred maps per week during January 1919.

The Tools of Human Terrain Modeling

Today, human terrain modeling is used extensively to support military operations in Syria, Iraq, and Afghanistan. Many tools have been developed to create and analyze such models. The ability to do human terrain mapping and other types of geospatial modeling has been greatly expanded and popularized by Google Earth and by Microsoft’s Virtual Earth. These geospatial modeling tools provide multiple layers of information.

This unclassified online material has a number of intelligence applications. For intelligence analysts, it permits planning HUMINT and COMINT operations. For military forces, it supports precise targeting. For terrorists, it facilitates planning of attacks.

Temporal Models

Pure temporal models are used less frequently than the dynamic geospatial models discussed next, because we typically want to observe activity in both space and time—sometimes over very short times. Timing shapes the consequences of planned events.

There are a number of different temporal model types; this chapter touches on two of them—timelines and pattern-of-life modeling and analysis.

Timelines

An opponent’s strategy often becomes apparent only when seemingly disparate events are placed on a timeline.

Event-time patterns tell analysts a great deal; they allow analysts to infer relationships among events and to examine trends. Activity patterns of a target network, for example, are useful in determining the best time to collect intelligence. An example is a plot of total telephone use over twenty-four hours—the plot peaks about 11 a.m., which is the most likely time for a person to be on the telephone.
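A minimal Python sketch of this kind of event-time pattern analysis, bucketing invented time-stamped intercepts by hour to find the activity peak:

```python
from collections import Counter
from datetime import datetime

events = ["2024-05-01 10:42", "2024-05-01 11:05",
          "2024-05-02 11:15", "2024-05-02 14:30"]

# Count events per hour of day, then pick the busiest hour
by_hour = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in events)
peak = max(by_hour, key=by_hour.get)   # 11: the likeliest hour to collect
print(by_hour, peak)
```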

Pattern-of-Life Modeling and Analysis

Pattern-of-life (POL) analysis is a method of modeling and understanding the behavior of a single person or group by establishing a recurrent pattern of actions over time in a given situation. It has similarities to the concept of activity-based intelligence, discussed later in this chapter.

 

Dynamic Geospatial Models

A dynamic variant of the geospatial model is the space-time model. Many activities, such as the movement of a satellite, a vehicle, a ship, or an aircraft, can best be shown spatially—as can population movements. A combination of geographic and time synthesis and analysis can show movement patterns, such as those of people or of ships at sea.

Dynamic geospatial modeling and analysis has been described using a number of terms. Three that are commonly used in intelligence are described in this section: movement intelligence, activity-based intelligence, and geographic profiling. Though they are similar, each has a somewhat different meaning. Dynamic modeling is also applied in understanding intelligence enigmas.

Movement Intelligence

Intelligence practitioners sometimes describe space-time models as movement intelligence, or “MOVINT,” as if it were a collection “INT” rather than a target model. The name “movement intelligence” for a specialized intelligence product dates roughly to the wide use of two sensors for area surveillance.

One was the moving target indicator (MTI) capability for synthetic aperture radars. The other was the deployment of video cameras on intelligence collection platforms. MOVINT has been defined as “an intelligence gathering method by which images (IMINT), non-imaging products (MASINT), and signals (SIGINT) produce a movement history of objects of interest.”

Activity-Based Intelligence

Activity-based intelligence, or ABI, has been defined as “a discipline of intelligence where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, population, or area of interest.”

ABI is a form of situational awareness that focuses on interactions over time. It has three characteristics:

  • Raw intelligence information is constantly collected on activities in a given region and stored in a database for later metadata searches.
  • It employs the concept of “sequence neutrality,” meaning that material is collected without advance knowledge of whether it will be useful for any intelligence purpose.
  • It also relies on “data neutrality,” meaning that any source of intelligence may contribute; in fact, open source may be the most valuable.

ABI therefore is a variant of the target-centric approach, focused on the activity of a target (person, object, or group) within a specified target area. So it includes both spatial and temporal dimensions. At a higher level of complexity, it can include network relationships as well.

Though the term ABI is of recent origin and is tied to the development of surveillance methods for collecting intelligence, the concept of solving intelligence problems by monitoring activity over time has been applied for decades. It has been the primary tool for dealing with geographic profiling and intelligence enigmas.

Geographic Profiling

Geographic profiling is a term used in law enforcement for geospatial modeling, specifically a space-time model, that supports serial violent crime or sexual crime investigations. Such crimes, when committed by strangers, are difficult to solve. Their investigation can produce hundreds of tips and suspects, resulting in the problem of information overload.

Intelligence Enigmas

Geospatial modeling and analysis frequently must deal with unidentified facilities, objects, and activities. These are often referred to by the term intelligence enigmas. For such targets, a single image—a snapshot in time—is insufficient.

Summary

One of the most powerful combination models is the geospatial model, which combines all sources of intelligence into a visual picture (often on a map) of a situation. One of the oldest of analytic products, geospatial modeling today is the product of all-source analysis that can incorporate OSINT, IMINT, HUMINT, COMINT, and advanced technical collection methods.

Many GEOINT models are dynamic; they show temporal changes. This combination of geospatial and temporal models is perhaps the single most important trend in GEOINT. Dynamic GEOINT models are used to observe how a situation develops over time and to extrapolate future developments.

 

Part II

The Estimative Process

12

Predictive Analysis

“Your problem is that you are not able to see things before they happen.”

Wotan to Fricka, in Wagner’s opera Die Walküre

Describing a past event is not intelligence analysis; it is reciting history. The highest form of intelligence analysis requires structured thinking that results in an estimate of what is likely to happen.

True intelligence analysis is always predictive.

 

The value of a model of possible futures is in the insights that it produces. Those insights prepare customers to deal with the future as it unfolds. The analyst’s contribution lies in the assessment of the forces that will shape future events and the state of the target model. If an analyst accurately assesses the forces, she has served the intelligence customer well, even if the prediction derived from that assessment turns out to be wrong.

Policymaking customers tend to be skeptical of predictive analysis unless they do it themselves. They believe that their own opinions about the future are at least as good as those of intelligence analysts. So when an analyst offers an estimate without a compelling supporting argument, he or she should not be surprised if the policymaker ignores it.

By contrast, policymakers and executives will accept and make use of predictive analysis if it is well reasoned, and if they can follow the analyst’s logical development. This implies that we apply a formal methodology, one that the customer can understand, so that he or she can see the basis for the conclusions drawn.

Former national security adviser Brent Scowcroft observed, “What intelligence estimates do for the policymaker is to remind him what forces are at work, what the trends are, and what are some of the possibilities that he has to consider.” Any intelligence assessment that does these things will be readily accepted.

Introduction to Predictive Analysis

Intelligence can usually deal with near-term developments. Extrapolation—the act of making predictions based solely on past observations—serves us reasonably well in the short term for situations that involve established trends and normal individual or organizational behaviors.

Adding to the difficulty, intelligence estimates can also affect the future that they predict. Often, the estimates are acted on by policymakers—sometimes on both sides.

The first step in making any estimate is to consider the phenomena that are involved, in order to determine whether prediction is even possible.

Convergent and Divergent Phenomena

In examining trends and possible future events, the key distinction is this: Convergent phenomena make prediction possible; divergent phenomena frustrate it.

A basic question to ask at the outset of any predictive attempt is, Does the principle of causation apply? That is, are the phenomena we are to examine and prepare estimates about governed by the laws of cause and effect?

A good example of a divergent phenomenon in intelligence is the coup d’état. Policymakers often complain that their intelligence organizations have failed to warn of coups. But a coup event is conspiratorial in nature, limited to a handful of people, and dependent on the preservation of secrecy for its success.

If a foreign intelligence service knows of the event, then secrecy has been compromised and the coup is almost certain to fail—the country’s internal security services will probably forestall it. The conditions that encourage a coup attempt can be assessed and the coup likelihood estimated by using probability theory, but the timing and likelihood of success are not “predictable.”

The Estimative Approach

The target-centric approach to prediction follows an analytic pattern long established in the sciences, in organizational planning, and in systems synthesis and analysis.

 

The synthesis and analysis process discussed in this chapter and the next is derived from an estimative approach that has been formalized in several professional disciplines. In management theory, the approach has several names, one of which is the Kepner-Tregoe Rational Management Process. In engineering, the formalization is called the Kalman Filter. In the social sciences, it is called the Box-Jenkins method. Although there are differences among them, all are techniques for combining complex data to create estimates. They all require combining data to estimate an entity’s present state and evaluating the forces acting on the entity to predict its future state.

This concept—to identify the forces acting on an entity, to identify likely future forces, and to predict the likely changes in old and new forces over time, along with some indicator of confidence in these judgments—is the key to successful estimation. It takes into account redundant and conflicting data as well as the analyst’s confidence in these data.

The key is to start from the present target model (and preferably, also with a past target model) and move to one of the future models, using an analysis of the forces involved as a basis. Other texts on estimative analysis describe these forces as issues, trends, factors, or drivers. All those terms have the same meaning: They are the entities that shape the future.

The methodology relies on three predictive mechanisms: extrapolation, projection, and forecasting. Those components and the general approach are defined here; later in the chapter, we delve deeper into “how-to” details of each mechanism.

An extrapolation assumes that these forces do not change between the present and future states, a projection assumes they do change, and a forecast assumes they change and that new forces are added.

The analysis follows these steps:

  1. Determine at least one past state and the present state of the entity. In intelligence, this entity is the target model, and it can be a model of almost anything—a terrorist organization, a government, a clandestine trade network, an industry, a technology, or a ballistic missile.
  2. Determine the forces that acted on the entity to bring it to its present state.

These same forces, acting unchanged, would result in the future state shown as an extrapolation (Scenario 1).

  3. To make a projection, estimate the changes in existing forces that are likely to occur. In the figure, a decrease in one of the existing forces (Force 1) is shown as causing a projected future state that is different from the extrapolation (Scenario 2).
  4. To make a forecast, start from either the extrapolation or the projection and then identify the new forces that may act on the entity, and incorporate their effect. In the figure, one new force is shown as coming to bear, resulting in a forecast future state that differs from both the extrapolated and the projected future states (Scenario 3).
  5. Determine the likely future state of the entity based on an assessment of the forces. Strong and certain forces are weighed most heavily in this prediction. Weak forces, and those in which the analyst lacks confidence (high uncertainty about the nature or effect of the force), are weighed least.

The process is iterative.

In this figure, we are concerned with a target (technology, system, person, organization, country, situation, industry, or some combination) that changes over time. We want to describe or characterize the entity at some future point.

The basic analytic paradigm is to create a model of the past and present state of the target, followed by alternative models of its possible future states, usually created in scenario form.

A CIA assessment of Mikhail Gorbachev’s economic reforms in 1985–1987 correctly estimated that his proposed reforms risked “confusion, economic disruption, and worker discontent” that could embolden potential rivals to his power.17 This projection was based on assessing the changing forces in Soviet society along with the inertial forces that would resist change.

The process we’ve illustrated in these examples has many names—force field analysis and system dynamics are two.

For forecasting, the analyst must identify new forces that are likely to come into play. Most of the chapters that follow focus on identifying and measuring these forces.

An analyst can (wrongly) shape the outcome by concentrating on some forces and ignoring or downplaying the significance of others.

Force Analysis According to Sun Tzu

Factor or force analysis is an ancient predictive technique. Successful generals have practiced it in warfare for thousands of years, and one of its earliest known proponents was Sun Tzu. He described the art of war as being controlled by five factors, or forces, all of which must be taken into account in predicting the outcome of an engagement. He called the five factors Moral Law, Heaven, Earth, the Commander, and Method and Discipline. In modern terms, the five would be called social, environmental, geospatial, leadership, and organizational factors.

The simplest approach to both projection and forecasting is to do it qualitatively. That is, an analyst who is an expert in the subject area begins the process by answering the following questions:

  1. What forces have affected this entity (organization, situation, industry, technical area) over the past several years?19
  2. Which five or six forces had more impact than others?
  3. What forces are expected to affect this entity over the next several years?
  4. Which five or six forces are likely to have more impact than others?
  5. What are the fundamental differences between the answers to questions two and four?
  6. What are the implications of these differences for the entity being analyzed?

The answers to those questions shape the changes in direction of the extrapolation… At more sophisticated levels of qualitative synthesis and analysis, the analyst might examine adaptive forces (feedback forces) and their changes over time.

High-Impact/Low-Probability Analysis

Projections and forecasts focus on the most likely outcomes. But customers also need to be aware of the unlikely outcomes that could have severe adverse effects on their interests.

 

The CIA’s tradecraft manual describes the analytic process as follows:

  • Define the high-impact outcome clearly. This definition will justify examining what most analysts believe to be a very unlikely development.
  • Devise one or more plausible explanations for or “pathways” to the low-probability outcome. This should be as precise as possible, as it can help identify possible indicators for later monitoring.
  • Insert possible triggers or changes in momentum if appropriate. These can be natural disasters, sudden health problems of key leaders, or new economic or political shocks that might have occurred historically or in other parts of the world.
  • Brainstorm with analysts having a broad set of experiences to aid the development of plausible but unpredictable triggers of sudden change.
  • Identify for each pathway a set of indicators or “observables” that would help you anticipate that events were beginning to play out this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

The product of high-impact/low-probability analysis is a type of scenario called a demonstration scenario…

Two important types of bias can exist in predictive analysis: pattern, or confirmation, bias—looking for evidence that confirms rather than rejects a hypothesis; and heuristic bias—using inappropriate guidelines or rules to make predictions.

Two points are worth noting at the beginning of the discussion:

  • One must make careful use of the tools in synthesizing the model, as some will fail when applied to prediction. Expert opinion, for example, is often used in creating a target model; but experts’ biases, egos, and narrow focuses can interfere with their predictions. (A useful exercise for the skeptic is to look at trade press or technical journal predictions that were made more than ten years ago that turned out to be way off base. Stock market predictions and popular science magazine predictions of automobile designs are particularly entertaining.)
  • Time constraints work against the analyst’s ability to consistently employ the most elaborate predictive techniques. Veteran analysts tend to use analytic techniques that are relatively fast and intuitive. They can view scenario development, red teams (teams formed to take the opponent’s perspective in planning or assessments), competing hypotheses, and alternative analysis as being too time-consuming to use in ordinary circumstances. An analyst has to guard against using just extrapolation because it is the fastest and easiest to do. But it is possible to use shortcut versions of many predictive techniques and sometimes the situation calls for that. This chapter and the following one contain some examples of shortcuts.

Extrapolation

An extrapolation is a statement, based only on past observations, of what is expected to happen. Extrapolation is the most conservative method of prediction. In its simplest form, an extrapolation, using historical performance as the basis, extends a linear curve on a graph to show future direction.

Extrapolation also makes use of correlation and regression techniques. Correlation is a measure of the degree of association between two or more sets of data, or a measure of the degree to which two variables are related. Regression is a technique for predicting the value of some unknown variable based only on information about the current values of other variables. Regression makes use of both the degree of association among variables and the mathematical function that is determined to best describe the relationships among variables.
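A minimal Python sketch of extrapolation using correlation and regression, with NumPy; the production figures are invented:

```python
import numpy as np

years      = np.array([2019, 2020, 2021, 2022])
production = np.array([100, 110, 125, 133])          # past observations

r = np.corrcoef(years, production)[0, 1]             # degree of association
slope, intercept = np.polyfit(years, production, 1)  # least-squares trend line
estimate_2023 = slope * 2023 + intercept             # extend the line forward
print(round(r, 3), round(estimate_2023, 1))
```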

For example, a frequently cited correlation: the more bureaucracy and red tape involved in doing business, the more corruption is likely in the country.

Projection

Before moving on to projection and forecasting, let’s reinforce the differentiation from extrapolation. An extrapolation is a simple assertion about what a future scenario will look like. In contrast, a projection or a forecast is a probabilistic statement about some future scenario.

Projection is more reliable than extrapolation. It predicts a range of likely futures based on the assumption that forces that have operated in the past will change, whereas extrapolation assumes the forces do not change.

Projection makes use of two major analytic techniques. One technique, force analysis, was discussed earlier in this chapter. After a qualitative force analysis has been completed, the next technique is to apply probabilistic reasoning to it. Probabilistic reasoning is a systematic attempt to make subjective estimates of probabilities more explicit and consistent. It can be used at any of several levels of complexity (each successive level of sophistication adds new capability and completeness). But even the simplest level of generating alternatives, discussed next, helps to prevent premature closure and adds structure to complicated problems.

Generating Alternatives

The first step to probabilistic reasoning is no more complicated than stating formally that more than one outcome is possible. One can generate alternatives simply by listing all possible outcomes to the issue under consideration. Remember that the possible outcomes can be defined as alternative scenarios.

The mere act of generating a complete, detailed list often provides a useful perspective on a problem.

Influence Trees or Diagrams

A list of alternative outcomes is the first step. A simple projection might not go beyond this level. But for more rigorous analysis, the next step typically is to identify the things that influence the possible outcomes and indicate the interrelationship of these influences. This process is frequently done by using an influence tree.

Let’s assume that an analyst wants to assess the outcome of an ongoing African insurgency movement. There are three obvious possible outcomes: The insurgency will be crushed, the insurgency will succeed, or there will be a continuing stalemate. Other outcomes may be possible, but we can assume that they are so unlikely as not to be worth including. The three outcomes for the influence diagram are as follows:

  • Regime wins
  • Insurgency wins
  • Stalemate

The analyst now describes those forces that will influence the assessment of the relative likelihoods of each outcome. For instance, the insurgency’s success may depend on whether economic conditions improve, remain the same, or become worse during the next year. It also may depend on the success of a new government poverty relief program. The assumptions about these “driver” events are often described as linchpin premises in U.S. intelligence practice, and these assumptions need to be made explicit.

Having established the uncertain events that influence the outcome, the analyst proceeds to the first stage of an influence tree.

The thought process that is invoked when generating the list of influencing events and their outcomes can be useful in several ways. It helps identify and document factors that are relevant to judging whether an alternative outcome is likely to occur.

The audit trail is particularly useful in showing colleagues what the analyst’s thinking has been, especially if he desires help in upgrading the diagram with things that may have been overlooked. Software packages for creating influence trees allow the inclusion of notes that create an audit trail.

In the process of generating the alternative lists, the analyst must address the issue of whether the event (or outcome) being listed actually will make a difference in his assessment of the relative likelihood of the outcomes of any of the events being listed.

For instance, in the economics example, if the analyst knew that it would make no difference to the success of the insurgency whether economic conditions improved or remained the same, then there would be no need to differentiate these as two separate outcomes. The analyst should instead simplify the diagram.

The second question, having to do with additional influences not yet shown on the diagram, allows the analyst to extend this pictorial representation of influences to whatever level of detail is considered necessary. Note, however, that the analyst should avoid adding unneeded layers of detail.

Probabilistic reasoning is used to evaluate outcome scenarios.

This influence tree approach to evaluating possible outcomes is more convincing to customers than would be an unsupported analytic judgment about the prospects for the insurgency. Human beings tend to do poorly at such complex assessments when they are approached in a totally unaided, subjective manner; that is, by the analyst mentally combining the force assessments in an unstructured way.
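A minimal Python sketch of the probabilistic reasoning behind such an influence tree, using the insurgency example; every probability here is an invented subjective estimate, not a figure from the text:

```python
# Invented subjective probabilities for the economic "driver" event
econ = {"improves": 0.2, "unchanged": 0.5, "worsens": 0.3}

# P(outcome | economic condition): the analyst's subjective estimates
outcome_given_econ = {
    "improves":  {"regime wins": 0.6, "insurgency wins": 0.1, "stalemate": 0.3},
    "unchanged": {"regime wins": 0.4, "insurgency wins": 0.2, "stalemate": 0.4},
    "worsens":   {"regime wins": 0.2, "insurgency wins": 0.5, "stalemate": 0.3},
}

# Total probability of each outcome, summing across the tree's branches
outcomes = {
    o: sum(econ[e] * branch[o] for e, branch in outcome_given_econ.items())
    for o in ("regime wins", "insurgency wins", "stalemate")
}
print(outcomes)  # {'regime wins': 0.38, 'insurgency wins': 0.27, 'stalemate': 0.35}
```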

Influence Nets

Influence net modeling is an alternative to the influence tree.

To create an influence net, the analyst defines influence nodes, which depict events that are part of cause-effect relationships within the target model. The analyst also creates “influence links” between cause and effect that graphically illustrate the causal relation between the connected pair of events.

The influence can be either positive (supporting a given decision) or negative (decreasing the likelihood of the decision), as identified by the link “terminator.” The terminator is either an arrowhead (positive influence) or a filled circle (negative influence). The resulting graphical illustration is called the “influence net topology.”

 

Making Probability Estimates

Probabilistic projection is used to predict the probability of future events for some time-dependent random process… A number of these probabilistic techniques are used in industry for projection.

Two techniques that we use in intelligence analysis are as follows:

  • Point and interval estimation. This method attempts to describe the probability of outcomes for a single event. An example would be a country’s economic growth rate, and the event of concern might be an economic depression (the point where the growth rate drops below a certain level).
  • Monte Carlo simulation. This method simulates all or part of a process by running a sequence of events repeatedly, with random combinations of values, until sufficient statistical material is accumulated to determine the probability distribution of the outcome (see the sketch after this list).
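As promised above, a minimal Monte Carlo sketch in Python for the point-estimation example; the growth-rate distribution and the depression threshold are invented:

```python
import random

def simulate_growth():
    # Growth rate modeled as a normal variate: 2% mean, 3% std deviation
    return random.gauss(0.02, 0.03)

TRIALS, THRESHOLD = 100_000, -0.04      # "depression" if growth < -4%
hits = sum(simulate_growth() < THRESHOLD for _ in range(TRIALS))
print(hits / TRIALS)                    # estimated probability, about 0.023
```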

Most of the predictive problems we deal with in intelligence use subjective probability estimates. We routinely use subjective estimates of probabilities in dealing with broad issues for which no objective estimate is feasible.

Sensitivity Analysis

When a probability estimate is made, it is usually worthwhile to conduct a sensitivity analysis on the result. For example, the occurrence of false alarms in a security system can be evaluated as a probabilistic process.
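A minimal Python sketch of such a sensitivity analysis: vary one input (a per-check false-alarm rate) and observe how strongly the output responds. All figures are invented:

```python
sensors, checks_per_day = 50, 24        # hypothetical security system

for rate in (0.001, 0.002, 0.005, 0.010):       # per-check false-alarm rate
    expected = sensors * checks_per_day * rate  # expected false alarms per day
    print(f"rate={rate:.3f} -> {expected:.1f} false alarms per day")
```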

Forecasting

Projections often work out better than extrapolations over the medium term. But even the best-prepared projections often seem very conservative when compared to reality years later. New political, economic, social, technological, or military developments will create results that were not foreseen even by experts in a field.

Forecasting uses many of the same tools that projection relies on—force analysis and probabilistic reasoning, for example. But it presents a more demanding intellectual challenge because of the difficulty of identifying and assessing the effect of new forces.

The development of alternative futures is essential for effective strategic decision-making. Since there is no single predictable future, customers need to formulate strategy within the context of alternative future states of the target. To this end, it is necessary to develop a model that will make it possible to show systematically the interrelationships of the individually forecast trends and events.

A forecast is not a blueprint of the future, and it typically starts from extrapolations or projections. Forecasters then must expand their scope to admit and juggle many additional forces or factors. They must examine key technologies and developments that are far afield but that nevertheless affect the subject of the forecast.

The Nonlinear Approach to Forecasting

Obviously, a forecasting methodology requires analytic tools or principles. But for any forecasting methodology to be successful, analysts who have significant understanding of many PMESII factors and the ability to think about issues in a nonlinear fashion are also required.

Futuristic thinking examines deeper forces and flows across many disciplines that have their own order and pattern. In predictive analysis, we may seem to wander about, making only halting progress toward the solution. This nonlinear process is not a flaw; rather it is the mark of a natural learning process when dealing with complex and nonlinear matters.

The sort of person who can do such multidisciplinary analysis of what is likely to happen in the future has a broad understanding of the principles that cause a physical phenomenon, a chemical reaction, or a social reaction to occur. People who are multidisciplinary in their knowledge and thinking can pull together concepts from several fields and assess political, economic, and social, as well as technical, factors. Such breadth of understanding recognizes the similarity of principles and the underlying forces that make them work. It might also be called “applied common sense,” but unfortunately it is not very common. Analysts instead tend to specialize, because in-depth expertise is highly valued by both intelligence management and the intelligence customer.

The failure to do multidisciplinary analysis is often tied closely to mindset.

Techniques and Analytic Tools of Forecasting

Forecasting is based on a number of assumptions, among them the following:

  • The future cannot be predicted, but by taking explicit account of uncertainty, one can make probabilistic forecasts.
  • Forecasts must take into account possible future developments in such areas as organizational changes, demography, lifestyles, technology, economics, and regulation.

For policymakers and executives, the aim of defining alternative futures is to try to determine how to create a better future than the one that would materialize if we merely keep doing what we’re currently doing. Intelligence analysis contributes to this definition of alternative futures, with emphasis on the likely actions of others—allies, neutrals, and opponents.

Forecasting starts through examination of the changing political, military, economic, and social environments.

We first select issues or concerns that require attention. These issues and concerns have component forces that can be identified using a variant of the strategies-to-task methodology.

If the forecast is done well, these scenarios stimulate the customer of intelligence—the executive—to make decisions that are appropriate for each scenario. The purpose is to help the customer make a set of decisions that will work in as many scenarios as possible.

Evaluating Forecasts

Forecasts are judged on the following criteria:

  • Clarity. Can the customer understand the forecast and the forces involved? Is it clear enough to be useful?
  • Credibility. Do the results make sense to the customer? Do they appear valid on the basis of common sense?
  • Plausibility. Are the results consistent with what the customer knows about the world outside the scenario and how this world really works or is likely to work in the future?
  • Relevance. To what extent will the forecasts affect the successful achievement of the customer’s mission?
  • Urgency. To what extent do the forecasts indicate that, if action is required, time is of the essence in developing and implementing the necessary changes?
  • Comparative advantage. To what extent do the results provide a basis for customer decision-making, compared with other sources available to the customer?
  • Technical quality. Was the process that produced the forecasts technically sound? Are the alternative forecasts internally consistent?

 

A “good” forecast is one that meets all or most of these criteria. A “bad” forecast is one that does not. The analyst has to make clear to customers that forecasts are transitory and need constant adjustment to be helpful in guiding thought and action.
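These criteria lend themselves to a simple scoring checklist. The sketch below (Python) assumes a 1-to-5 scale, equal weights, and an arbitrary threshold; all three are illustrative choices, not anything the text prescribes:

    CRITERIA = ["clarity", "credibility", "plausibility", "relevance",
                "urgency", "comparative advantage", "technical quality"]

    def evaluate_forecast(scores, threshold=3.5):
        """Average 1-to-5 scores across the criteria; a forecast meeting
        'all or most' of them scores above the threshold."""
        missing = [c for c in CRITERIA if c not in scores]
        if missing:
            raise ValueError("unscored criteria: %s" % missing)
        mean = sum(scores[c] for c in CRITERIA) / len(CRITERIA)
        return "good" if mean >= threshold else "needs revision"

    print(evaluate_forecast({c: 4 for c in CRITERIA}))   # -> good

Because forecasts are transitory, such a score is a snapshot; the same checklist would be reapplied as the forecast is adjusted.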

Customers typically have a number of complaints about forecasts. Common complaints are that the forecast is obvious; it states nothing new; it is too optimistic, pessimistic, or naïve; or it is not credible because it overlooks obvious trends, events, causes, or consequences. Such objections are actually desirable; they help to improve the product. There are a number of appropriate responses to these objections: If something important is missing, add it. If something unimportant is included, get rid of it. If the forecast seems either obvious or counterintuitive, probe the underlying logic and revise the forecast as necessary.

Summary

Intelligence analysis, to be useful, must be predictive. Some events or future states of a target are predictable because they are driven by convergent phenomena. Some are not predictable because they are driven by divergent phenomena.

The analysis product—a demonstration scenario—describes how such a development might plausibly start and identifies its consequences. This provides indicators that can be monitored to warn that the improbable event is actually happening.

For analysts predicting systems developments as many as five years into the future, extrapolations work reasonably well; for those looking five to fifteen years into the future, projections usually fare better.

13 Estimative Forces

Estimating is what you do when you don’t know.

The factors or forces that have to be considered in estimation—primarily PMESII factors—vary from one intelligence problem to another. I do not attempt to catalog them in this book; there are too many. But an important aspect of critical thinking, discussed earlier, is thinking about the underlying forces that shape the future. This chapter deals with some of those forces.

The CIA’s tradecraft manual describes an analytic methodology that is appropriate for identifying and assessing forces. Called “outside-in” thinking, it has the objective of identifying the critical external factors that could influence how a given situation will develop. According to the tradecraft manual, analysts should develop a generic description of the problem or the phenomenon under study. Then, analysts should take the following steps (see the code sketch after this list):

  • List all the key forces (social, technological, economic, environmental, and political) that could have an impact on the topic, but over which one can exert little influence (e.g., globalization, social stress, the Internet, or the global economy).
  • Focus next on key factors over which an actor or policymaker can exert some influence. In the business world this might be the market size, customers, the competition, suppliers or partners; in the government domain it might include the policy actions or the behavior of allies or adversaries.
  • Assess how each of these forces could affect the analytic problem.
  • Determine whether these forces actually do have an impact on the particular issue based on the available evidence.
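As a rough illustration, the outside-in workflow above can be captured in a small data model. The class and field names below are hypothetical conveniences, not terminology from the tradecraft manual; steps three and four correspond to filling in the assessed_impact and supported_by_evidence fields:

    from dataclasses import dataclass, field

    @dataclass
    class Force:
        name: str
        category: str            # social/technological/economic/environmental/political
        influenceable: bool      # can an actor or policymaker exert influence?
        assessed_impact: str = "unassessed"     # step 3: how it affects the problem
        supported_by_evidence: bool = False     # step 4: confirmed by evidence?

    @dataclass
    class OutsideInAnalysis:
        problem: str                             # generic problem description
        forces: list = field(default_factory=list)

        def key_external_forces(self):
            # Step 1: forces over which one can exert little influence.
            return [f for f in self.forces if not f.influenceable]

        def actionable_factors(self):
            # Step 2: factors an actor or policymaker can influence.
            return [f for f in self.forces if f.influenceable]

    analysis = OutsideInAnalysis(problem="regional instability")
    analysis.forces.append(Force("globalization", "economic", influenceable=False))
    analysis.forces.append(Force("policy toward allies", "political", influenceable=True))
    print([f.name for f in analysis.key_external_forces()])   # -> ['globalization']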

 

Political and military factors are often the focus of attention in assessing the likely outcome of conflicts. But the other factors can turn out to be dominant. In the developing conflict between the United States and Japan in 1941, Japan had a military edge in the Pacific. But the United States had a substantial edge in these factors:

  • Political. The United States could call on a substantial set of allies. Japan had Germany and Italy.
  • Economic. Japan lacked the natural resources that the United States and its allies controlled.
  • Social. The United States had almost twice the population of Japan. Japan initially had an edge in the solidarity of its population in support of the government, but that edge was matched within the United States after Pearl Harbor.
  • Infrastructure. The U.S. manufacturing capability far exceeded that of Japan and would be decisive in a prolonged conflict (as many Japanese military leaders foresaw).
  • Information. The prewar information edge favored Japan, which had more control of its news media, while a segment of the U.S. media strongly opposed involvement in war. That edge also evaporated after December 7, 1941.

Inertia

One force that has broad implications is inertia, the tendency to stay on course and resist change.

It has been observed that: “Historical inertia is easily underrated . . . the historical forces molding the outlook of Americans, Russians, and Chinese for centuries before the words capitalism and communism were invented are easy still to overlook.”

Opposition to change is a common reason for organizations’ coming to rest. Opposition to technology in general, for example, is an inertial matter; it results from a desire of both workers and managers to preserve society as it is, including its institutions and traditions.

A common manifestation of the law of inertia is the “not-invented-here,” or NIH, factor, in which the organization opposes pressures for change from the outside.

But all societies resist change to a certain extent. The societies that succeed seem able to adapt while preserving that part of their heritage that is useful or relevant.

From an analyst’s point of view, inertia is an important force in prediction. Established factories will continue to produce what they know how to produce. In the automobile industry, it is no great challenge to predict that next year’s autos will look much like this year’s. A naval power will continue to build ships for some time even if a large navy ceases to be useful.
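Inertia also explains why extrapolation serves well for near-term forecasts, as noted in the summary above. A minimal Python sketch of persistence-plus-trend extrapolation, with invented figures:

    def extrapolate(series, steps=1):
        """Naive trend extrapolation: inertia suggests the recent trend
        continues. Plausible mainly for near-term forecasts."""
        if len(series) < 2:
            return [series[-1]] * steps   # pure persistence
        trend = series[-1] - series[-2]
        return [series[-1] + trend * (i + 1) for i in range(steps)]

    # e.g., annual production figures (invented numbers)
    print(extrapolate([100, 104, 108], steps=3))   # -> [112, 116, 120]

The further out the forecast, the more the divergent phenomena discussed earlier erode this assumption, which is why projections tend to fare better over five to fifteen years.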

Countervailing Forces

All forces are likely to have countervailing or resistive forces that must be considered.

The principle is summarized well by another of Newton’s laws of physics: For every action there is an equal and opposite reaction.

Applications of this principle are found in all organizations and groups, commercial, national, and civilizational. As Samuel P. Huntington noted, “[W]e know who we are . . . often only when we know who we are against.”

A predictive analysis will always be incomplete unless it identifies and assesses opposing forces. All forces eventually meet counterforces. An effort to expand free trade inevitably arouses protectionist reactions. One country’s expansion of its military strength always causes its neighbors to react in some fashion.

 

Counterforces need not be of the same nature as the force they are countering. A prudent organization is not likely to play to its opponent’s strengths. Today’s threats to U.S. national security are asymmetric; that is, there is little threat of a conventional force-on-force engagement by an opposing military, but there is a threat of an unconventional yet lethal attack by a loosely organized terrorist group, as the events of September 11, 2001, and more recently the Boston Marathon bombing, demonstrated. Asymmetric counterforces are common in industry as well. Industrial organizations try to achieve cost asymmetry by using defensive tactics that have a large favorable cost differential between their organization and that of an opponent.

Contamination

Contamination is the degradation of any of the six factors—political, military, economic, social, infrastructure, or information (PMESII factors)—through an infection-like process. Corruption is a form of political and social contamination. Money laundering and counterfeiting are forms of economic contamination. The result of propaganda is information contamination.

Contamination phenomena can be found throughout organizations as well as in the scientific and technical disciplines. Once such an infection starts, it is almost impossible to eradicate.

Contamination phenomena have analogies in the social sciences, organization theory, and folklore.

At some point in organizations, contamination can become so thorough that only drastic measures will help—such as shutting down the glycerin plant or rebuilding the microwave tube plant. Predictive intelligence has to consider the extent of such social contamination in organizations, because contamination is a strong restraining force on an organization’s ability to deal with change.

The effects of social contamination are hard to measure, but they are often highly visible.

The contamination phenomenon has an interesting analogy in the use of euphemism in language. It is well known that if a word has or develops negative associations, it will be replaced by a succession of euphemisms. Such words have a half-life, or decay rate, that is shorter as the word association becomes more negative. In older English, the word stink meant “to smell.” The problem is that most of the strong impressions we get from scents are unpleasant ones; so each word for olfactory senses becomes contaminated over time and must be replaced. Smell has a generally unpleasant connotation now.

The renaming of a program or project is a good signal that the program or project is in trouble—especially in Washington, D.C., but the same rule holds in any culture.

Synergy

Predictive intelligence analysis almost always requires multidisciplinary understanding. Therefore, it is essential that the analysis organization’s professional development program cultivate a professional staff that can understand a broad range of concepts and function in a multidisciplinary environment. One of the most basic concepts is that of synergy: The whole can be more than the sum of its parts due to interactions among the parts. Synergy is therefore, in some respects, the opposite of the countervailing forces discussed earlier.

Synergy is not really a force or factor as much as a way of thinking about how forces or factors interact. Synergy can result from cooperative efforts and alliances among organizations (synergy on a large scale).

Netwar is an application of synergy.

In electronic warfare, it is now well known that a weapons system may be unaffected by a single countermeasure; however, it may be degraded by a combination of countermeasures, each of which fails individually to defeat it. The same principle applies in a wide range of systems and technology developments: The combination may be much greater than the sum of the components taken individually.

Synergy is the foundation of the “swarm” approach that military forces have applied for centuries—the coordinated application of overwhelming force.

In planning a business strategy against a competitive threat, a company will often put in place several actions that, each taken alone, would not succeed. But the combination can be very effective. As a simple example, a company might use several tactics to cut sales of a competitor’s new product: start rumors of its own improved product release, circulate reports on the defects or expected obsolescence of the competitor’s product, raise buyers’ costs of switching from its own to the competitor’s product, and tie up suppliers by using exclusive contracts. Each action, taken separately, might have little impact, but the synergy—the “swarm” effect of the actions taken in combination—might shatter the competitor’s market.

Feedback

In examining any complex system, it is important for the analyst to evaluate the system’s feedback mechanism. Feedback is the mechanism whereby the system adapts—that is, learns and changes itself. The following discussion provides more detail about how feedback works to change a system.

Many of the techniques for prediction depend on the assumption that the process being analyzed can be described, using systems theory, as a closed-loop system. Under the mathematical theory of such systems, feedback is a controlling force in which the output is compared with the objective or standard, and the input process is corrected as necessary to bring the output toward a desired state.

The feedback function therefore determines the behavior of the total system over time. A complex system usually contains many feedback loops, not just one.
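A minimal Python sketch of such a closed-loop system, assuming a simple proportional correction (the gain value and step count are arbitrary):

    def closed_loop(objective, state=0.0, gain=0.5, steps=10):
        """Minimal closed-loop system: each cycle compares the output with
        the objective and corrects the input to move the output toward it."""
        history = []
        for _ in range(steps):
            error = objective - state    # compare output with the standard
            state += gain * error        # corrective feedback on the input
            history.append(round(state, 3))
        return history

    print(closed_loop(objective=1.0))
    # state converges toward the objective: [0.5, 0.75, 0.875, ...]

Real systems are messier, with many interacting loops, but the compare-and-correct structure is the essential feature the analyst is looking for.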

Notes on Methods and Motives: Exploring Links between Transnational Organized Crime & International Terrorism

In preparation for the work on this report, we reviewed a significant body of academic research on the structure and behavior of organized crime and terrorist groups. By examining how other scholars have approached the issues of organized crime or terrorism, we were able to refine our methodology. This novel approach combines a framework drawn from intelligence analysis with the tenets of a methodological approach devised by the criminologist Donald Cressey, who uses the metaphor of an archeological dig to systematize a search for information on organized crime. All the data and examples used to populate the model have been verified, and our findings have been validated through the rigorous application of case study methods.

While experts broadly accept no single definition of organized crime, a review of the numerous definitions offered identifies several central themes.8 There is consensus that at least two perpetrators are involved, but there is a variety of views about whether organized crime is typically organized as a hierarchy or as a network.

Organized crime is a continuing enterprise, and so does not include conspiracies that perpetrate single crimes and then go their separate ways. Furthermore, the overarching goals of organized crime groups are profit and power. Groups seek a balance between maximizing profits and minimizing their own risk, while striving for control by menacing certain businesses. Violence, or the threat of violence, is used to enforce obligations and maintain hegemony over rackets and enterprises such as extortion and narcotics smuggling. Corruption is a means of reducing the criminals’ own risk, maintaining control and making profits.

Few definitions challenge the common view of organized crime as a ‘parallel government’ that seeks power at the expense of the state but retains patriotic or nationalistic ties to the state. This report takes up that challenge by illustrating the rise of a new class of criminal groups with little or no national allegiance. These criminals are ready to provide services for terrorists, as has been observed in European prisons.10

We prefer the definition offered by the UN Convention Against Transnational Organized Crime, which defines an organized crime group as “a structured group [that is not randomly formed for the immediate commission of an offense] of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more serious crimes or offences [punishable by a deprivation of liberty of at least four years] established in accordance with this Convention, in order to obtain, directly or indirectly, a financial or other material benefit.”
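The elements of this definition are concrete enough to express as a simple predicate. The Python sketch below is an illustrative simplification, not a legal test, and the field names are our own:

    from dataclasses import dataclass

    @dataclass
    class Group:
        members: int
        duration_months: int
        randomly_formed: bool
        max_offense_penalty_years: int
        seeks_material_benefit: bool

    def is_organized_crime_group(g):
        """Checks the elements of the UN Convention definition quoted above
        (an illustrative simplification, not a legal determination)."""
        return (g.members >= 3                          # "three or more persons"
                and not g.randomly_formed               # "structured group"
                and g.duration_months > 0               # "existing for a period of time"
                and g.max_offense_penalty_years >= 4    # "serious crimes"
                and g.seeks_material_benefit)           # "financial or other material benefit"

    print(is_organized_crime_group(Group(5, 24, False, 10, True)))   # -> True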

We prefer the notion of a number of shadow economies, in the same way that macroeconomists use the global economy, comprising markets, sectors and national economies, as their basic unit of reference.

Terrorism scholar Bruce Hoffman has offered a comprehensive and useful definition of terrorism as the deliberate creation and exploitation of fear through violence or the threat of violence in the pursuit of political change.15 Hoffman’s definition offers precise terms of reference while remaining comprehensive; he further notes that terrorism is ‘political in aims and motives,’ ‘violent,’ ‘designed to have far-reaching psychological repercussions beyond the immediate victim or target,’ and ‘conducted by an organization with an identifiable chain of command or conspiratorial cell structure.’ These elements encompass acts of terrorism committed by many different types of groups, including criminal ones, yet they clearly circumscribe which violent acts qualify as terrorism. The Hoffman definition can therefore be applied to both groups and activities, a crucial distinction for the methodology we propose in this report.

Early identification of terror-crime cooperation occurred in the 1980s and focused naturally on narcoterrorism, a phrase coined by Peru’s President Belaunde Terry to describe the terrorist attacks against anti-narcotics police in Peru.

The links between narcotics trafficking and terror groups exist in many regions of the world, but it is difficult to make generalizations about the terror-crime nexus.

International relations theorists have also produced a group of scholarly works that examine organized crime and terrorism (i.e., agents or processes) as objects of investigation for their paradigms. While in some cases the frames of reference international relations scholars employed proved too general for the purposes of this report, the team found that these works illuminated environmental and behavioral aspects of the interaction.

2.3 Data collection

Much of the information in the report that follows was taken from open sources, including government reports, private and academic journal articles, court documents and media accounts.

To ensure accuracy in the collection of data, we adopted standards and methods to form criteria for accepting data from open sources. In order to improve accuracy and reduce bias, we attempted to corroborate every piece of data collected from one secondary source with data from a further source that was independent of the original source — that is, the second source did not quote the first source. Second, particularly when using media sources, we checked subsequent reporting by the same publication to find out whether the subject was described in the same way as before. Third, we sought a more heterogeneous data set by examining foreign-language documents from non-U.S. sources. We also obtained primary-source materials, such as declassified intelligence reports from the Republic of Georgia, that helped to clarify and confirm the data found in secondary sources.

Since all these meetings were confidential, it was agreed in all cases that the information given was not for attribution by name.

For each of these studies, researchers traveled to the regions a number of times to collect information. Their work was combined with relevant secondary sources to produce detailed case studies presented later in the report. The format of the case studies followed the tenets outlined by Robert Yin, who proposes that case studies offer an advantage to researchers who present data illustrating complex relationships – such as the link between organized crime and terror.

2.4 Research goals

This project aimed to discover whether terrorist and organized crime groups borrow one another’s methods or actively cooperate, by what means they do so, and how investigators and analysts can locate and assess crime-terror interactions. This led to an examination of why this overlap or interaction takes place. Are the benefits merely logistical, or do both sides derive long-term gains, such as undermining the capacity of the state to detect and curtail their activities?

We developed a new approach, preparation of the investigation environment (PIE), by adapting a long-held military practice called intelligence preparation of the battlespace (IPB). The IPB method anticipates enemy locations and movements in order to obtain the best position for a commander’s limited battlefield resources and troops. The goal of PIE is similar to that of IPB—to provide investigators and analysts a strategic and discursive analytical method to identify areas ripe for locating terror and crime interactions, confirm their existence, and then assess the ramifications of these collaborations. The PIE approach provides twelve watch points within which investigators and analysts can identify those areas most likely to contain crime-terror interactions.

The PIE methodology was designed with the investigator and analyst in mind, and thus PIE demonstrates how to establish investigations in a way that expends resources most fruitfully. It also shows how analysts’ insights can help practitioners identify problems and organize their investigations more effectively.

2.5 Research challenges

Our first challenge in investigating the links between organized crime and terrorism was to obtain enough data to provide an accurate portrayal of that relationship. Given the secrecy of all criminal organizations, many traditional methods of quantitative and qualitative research were not viable. Nonetheless, we conducted numerous interviews and obtained identified statements from investigators and policy officials. Records of legal proceedings, criminal records, and terrorist incident reports were also important data sources.

The strategy underlying the collection of data was to focus on the sources of interaction wherever they were located (e.g., developing countries and urban areas), rather than on instances of interaction in developed countries like the September 11th or the Madrid bombing investigations. In so doing, the project team hoped to avoid characterizing the problem “from out there.”

All three case studies highlight patterns of association that are particularly visible, frequent, and of lengthy duration. Because the conflict regions in the case studies also contribute to crime in the United States, our view was that these models were needed to perceive patterns of association that are less visible in other environments. A further element in the selection of these regions was practical: in each one, researchers affiliated with the project had access to reliable sources with first-hand knowledge of the subject matter. Our hypothesis was that relations would be easiest to detect in societies so corrupted, and with such limited enforcement, that the phenomena are more open to analysis and disclosure than in environments where they are more covert.

  3. A new analytical approach: PIE

Investigators seeking to detect a terrorist activity before an incident takes place are overwhelmed by data.

A counterterrorist analyst at the Central Intelligence Agency took this further, noting that the discovery of crime-terror interactions was often the accidental result of analysis on a specific terror group, and thus rarely was connected to the criminal patterns of other terror groups.

IPB is an attractive basis for analyzing the behavior of criminal and terrorist groups because it focuses on evidence about their operational behavior as well as the environment in which they operate. This evidence is plentiful: communications, financial transactions, organizational forms and behavioral patterns can all be analyzed using a form of IPB.

The project team has devised a methodology based on IPB, which we have termed preparation of the investigation environment, or PIE. We define PIE as a concept in which investigators and analysts organize existing data to identify areas of high potential for collaboration between terrorists and organized criminals in order to focus next on developing specific cases of crime-terror interaction—thereby generating further intelligence for the development of early warning on planned terrorist activity.

While IPB is chiefly a method of eliminating data that is not likely to be relevant, our PIE method also provides positive indicators about where relevant evidence should be sought.

3.1 The theoretical basis for the PIE Method

Donald Cressey’s famous study of organized crime in the U.S., with its analogy of an archeological dig, was the starting point for our model of crime-terror cooperation.35 As Cressey describes it, archeologists first examine documentary sources and develop a map of what is known. That map allows the investigator to focus on those areas that are not known—that is, the archeologist uses the map to decide where to dig. The map also serves as a context within which artifacts discovered during the dig can be evaluated for their significance. For example, discovery of a bowl at a certain depth and location can provide information to the investigator concerning the date of an encampment and who established it.

The U.S. Department of Defense defines IPB as an analytical methodology employed to reduce uncertainties concerning the enemy, environment, and terrain for all types of operations. Intelligence preparation of the battlespace builds an extensive database for each potential area in which a unit may be required to operate. The database is then analyzed in detail to determine the impact of the enemy, environment, and terrain on operations, and the results are presented in graphic form.36 Alongside Cressey’s approach, IPB was selected as a second basis of our methodological approach.

Territory outside the control of the central state is a favored locale for crime-terror interactions: failed or failing states, poorly regulated or border regions (especially those surrounding the intersection of multiple borders), and parts of otherwise viable states where law and order is absent or compromised, including urban quarters populated by diaspora communities and penal institutions.

3.2 Implementing PIE as an investigative tool

Organized crime and terrorist groups have significant differences in their organizational form, culture, and goals. Bruce Hoffman notes that terrorist organizations can be further categorized based on their organizational ideology.

In converting IPB to PIE, we defined a series of watch points based on organizational form, goals, culture and other aspects to ensure PIE is flexible enough to compare a transnational criminal syndicate or a traditional crime hierarchy with an ethno-nationalist terrorist faction or an apocalyptic terror group.

The standard operating procedures and means by which military units are expected to achieve their battle plan are called doctrine, which is normally spelled out in great detail in manuals and training regimens. The doctrine of an opposing force thus is an important part of an IPB analysis. Such information is equally important to PIE, but it is rarely found in manuals, nor is it as highly developed as military doctrine.

Once the organizational forms, terrain and behavior of criminal and terrorist groups were defined at this level of detail, we settled on 12 watch points to cover the three components of PIE. For example, the watch point entitled organizational goals examines what the goals of organized crime and terror groups can tell investigators about potential collaboration or overlap between the two.

Investigators using PIE will collect evidence systematically through the investigation of watch points and analyze the data through its application to one or more indicators. That in turn will enable them to build a case for making timely predictions about crime-terror cooperation or overlap. Conversely, PIE also provides a mechanism for ruling out such links.

The indicators are designed to reduce the fundamental uncertainty associated with seemingly disparate or unrelated pieces of information. They also serve as a way of constructing probable cause, with evidence triggering indicators.

Although some watch points may generate ambiguous indicators of interaction between terror and crime, providing investigators and analysts with negative evidence of collusion between criminals and terrorists also has the practical benefit of steering scarce resources toward higher pay-off areas for detecting cooperation between the groups.

3.3 PIE composition: Watch points and indicators

The first step of PIE is to identify those areas where terror-crime collaborations are most likely to occur. To prepare this environment, PIE asks investigators and analysts to engage in three preliminary analyses: first, to map where particular criminal and terrorist groups are likely to be operating, both in physical geographic terms and in traditional and electronic information media; second, to develop typologies for the behavior patterns of the groups and, when possible, their broader networks (often represented chronologically as a timeline); and third, to detail the organizations of specific crime and terror groups and, as feasible, their networks.

The geographical areas where terrorists and criminals are highly likely to be cooperating are known in IPB parlance as named areas of interest, or localities that are highly likely to support military operations. In PIE they are referred to as watch points.

A critical function of PIE is to set sensible priorities for analysts.

The second step of a PIE analysis concentrates on the watch points to identify named areas of interaction where overlaps between crime and terror groups are most likely. The PIE method expresses areas of interest geographically but remains focused on the overlap between terrorism and organized crime.

The three preliminary analyses mentioned above are deconstructed into watch points, which are broad categories of potential crime-terror interactions.

The use of PIE leads to the early detection of named areas of interest through the analysis of watch points, providing investigators the means of concentrating their focus on terror-crime interactions and thereby enhancing their ability to detect possible terrorist planning.

The third and final step is the collection and analysis of information that indicates organizational, operational, or other nodes whereby criminals and terrorists appear to interact. While watch points are broad categories, they are composed of specific indicators of how organized criminals and terrorists might cooperate. These specific patterns of behavior help to confirm or deny that a watch point is applicable.

If several indicators are present, or if the indicators are particularly clear, this bolsters the evidence that a particular type of terror-crime interaction is present. No single indicator is likely to provide ‘smoking gun’ evidence of a link, although examples of this have occasionally arisen. Instead, PIE is a holistic approach that collects evidence systematically in order to make timely predictions of an affiliation, or not, between specific criminal and terrorist groups.
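The aggregation logic can be sketched briefly. In the Python illustration below, the indicator names and the three-hit threshold are assumptions made for the example, not values taken from the report:

    def assess_watch_point(indicators, min_hits=3):
        """Count triggered indicators for one watch point. Several hits
        together bolster the case for a terror-crime interaction; few or
        no hits help rule one out and free up investigative resources."""
        hits = [name for name, triggered in indicators.items() if triggered]
        verdict = "probable interaction" if len(hits) >= min_hits else "weak evidence"
        return verdict, hits

    verdict, hits = assess_watch_point({
        "shared document forger": True,
        "common money-laundering channel": True,
        "overlapping couriers": True,
        "joint front company": False,
    })
    print(verdict, hits)   # -> probable interaction, three triggered indicators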

For policy analysts and planners, indicators reduce the sampling risk that is unavoidable for anyone collecting seemingly disparate and unrelated pieces of evidence. For investigators, indicators serve as a means of constructing probable cause. Indeed, even negative evidence of interaction has the practical benefit of helping investigators and analysts manage their scarce resources more efficiently.

3.4 The PIE approach in practice: Two Cases

The process began with the collection of relevant information (scanning) that was then placed into the larger context of watch points and indicators (codification) in order to produce the aforementioned analytical insights (abstraction).

 

Each case will describe how the TraCCC team shared (diffusion) its findings in order to obtain validation and to have an impact on practitioners fighting terrorism and/or organized crime.

3.4.1 The Georgia Case

In 2003-4, TraCCC used the PIE approach in what became one of the largest money laundering cases ever successfully prosecuted. The PIE method helped close down a major international vehicle for money laundering. The ability to organize the financial records of a major money launderer enabled the construction of a network map that revealed linkages among major criminal groups whose relationships had not previously been acknowledged.

Some of the information most pertinent to Georgia included, but was not limited to, the following:

  1. Corrupt Georgian officials held high law enforcement positions prior to the Rose Revolution and maintained ties to crime and terror groups that allowed them to operate with impunity;
  2. Similar patterns of violence were found among organized crime and terrorist groups operating in Georgia;
  3. Numerous banks, corrupt officials and other providers of illicit goods and services assisted both organized crime and terrorists; and
  4. Regions of the country supported criminal infrastructures useful to organized crime and terrorists alike, including Abkhazia, Ajaria and Ossetia.

Combined with numerous other pieces of information and placed into the PIE watch point structure, the resulting analysis triggered a sufficient number of indicators to suggest that further analysis was warranted to try to locate a crime-terror interaction.

 

The second step of the PIE analysis was to examine information within the watch points for connections that would suggest patterns of interaction between specific crime and terror groups. These points of interaction are identified in the Black Sea case study, but the most successful identification came from analysis of the watch point that specifically examined the financial environment facilitating the link between crime and terrorism.

The TraCCC team began its investigation within this watch point by identifying the sectors of the Georgian economy that were most conducive to economic crime and money laundering. This included such sectors as energy, railroads and banking. All of these sectors were found to be highly criminalized.

Only researchers with knowledge of the economic climate, the nature of the business community, and the banking sector could determine that investigative resources needed to be concentrated on the “G” bank. Knowing this terrain, the newly established financial investigative unit of the Central Bank focused its efforts on the “G” bank. A six-month analysis of the G bank and its transactions enabled the development of a massive network analysis that facilitated prosecution in Georgia and may lead to prosecutions in major financial centers that were previously unable to address some crime groups, at least one of which was linked to a terrorist group.

Using PIE allowed a major intelligence breakthrough.

First, it located a large facilitator of dirty money. Second, the approach was able to map fundamental connections between crime and terror groups. Third, the analysis highlighted the enormous role that purely “dirty banks” housed in countries with small economies can provide as a service for transnational crime and even terrorism.

While specific details must remain sealed in deference to ongoing legal proceedings, to date the PIE analysis has grown into investigations in Switzerland, as well as others in the US and Georgia.

The PIE approach favors the construction and prosecution of viable cases.

The PIE approach is a platform for starting and later focusing investigations. When coupled with investigative techniques like network analysis, the PIE approach supports the construction and eventual prosecution of cases against organized crime and terrorist suspects.

3.4.2 Russian Closed Cities

In early 2005, a US government agency asked TraCCC to identify how terrorists are potentially trying to take advantage of organized crime groups and corruption to obtain fissile material in a specific region of Russia—one that is home to a number of sensitive weapons facilities located in so-called “closed cities.” The project team assembled a wealth of information concerning the presence and activities of both criminal and terror groups in the region in question, but was left with the question of how best to organize the data and develop significant conclusions.

The project’s information supported connections in 11 watch points, including:

  • A vast increase in the prevalence of violence in the region, especially in economic sectors with close ties to organized crime;
  • Commercial ties in the drug trade between crime groups in the region and Islamic terror groups formerly located in Afghanistan;
  • Rampant corruption in all levels of the regional government and law enforcement mechanisms, rendering portions of the region nearly ungovernable;
  • The presence of numerous regional and transnational crime groups as well as recruiters for Islamic groups on terrorist watch lists;

Employment of the watch points prompted creative leads to important connections that were not readily apparent until placed into the larger context of the PIE analytical framework. Specifically, the analysis might not have included evidence of trust links and cultural ties between crime and terror groups had the PIE approach not explained their utility.

When the TraCCC team applied PIE to the closed cities case, it found that using these technologies reduced the time spent analyzing data while improving the analytical rigor of the task. For example, structured queries of databases and online search engines provided information quickly. Likewise, network mapping improved analytical rigor by codifying the links between numerous actors (e.g., crime groups, terror groups, workers at weapons facilities and corrupt officials) in local, regional and transnational contexts.
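As a sketch of what the network-mapping step might look like, the following uses the open-source networkx library; the actors and relations are invented placeholders, not data from the case:

    import networkx as nx

    # Invented actors and relations, for illustration only.
    G = nx.Graph()
    G.add_edge("crime group A", "corrupt official X", relation="bribery")
    G.add_edge("crime group A", "facility worker W", relation="recruitment")
    G.add_edge("terror recruiter R", "facility worker W", relation="contact")
    G.add_edge("terror recruiter R", "crime group B", relation="drug trade")

    # Actors central to the network are candidates for focused investigation.
    for actor, score in sorted(nx.degree_centrality(G).items(),
                               key=lambda item: -item[1]):
        print("%-22s %.2f" % (actor, score))

Even this toy graph shows the benefit the report describes: a worker linked to both a crime group and a terror recruiter surfaces immediately once the links are codified.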

3.5 Emergent behavior and automation

The dynamic nature of crime and terror groups complicates the IPB to PIE transition. The spectrum of cooperation demonstrates that crime-terror intersections are emergent phenomena.

PIE must have feedback loops to cope with the emergent behavior of crime and terror groups.

When the project team spoke with analysts and investigators, the one deficiency they noted was their limited ability to conduct strategic intelligence analysis given their operational tempo.

  4. The terror-crime interaction spectrum

In formulating PIE, we recognized that crime and terrorist groups are more diverse in nature than military units. They may be networks or hierarchies, they have a variety of cultures rather than a disciplined code of behavior, and their goals are far less clear. Hoffman notes that terrorist groups can be further categorized based on their organizational ideology.

Other researchers have found significant evidence of interaction between terrorism and organized crime, often in support of the general observation that while their methods might converge, the basic motives of crime and terror groups would serve to keep them at arm’s length—thus the term “methods, not motives.”41 Indeed, the differences between the two are plentiful: terrorists pursue political or religious objectives through overt violence against civilians and military targets. They turn to crime for the money they need to survive and operate.

Criminal groups, on the other hand, are focused on making money. Any use of violence tends to be concealed, and is generally focused on tactical goals such as intimidating witnesses, eliminating competitors or obstructing investigators.

In a corrupt environment, the two groups find common cause.

Terrorists often find it expedient, even necessary, to deal with outsiders to get funding and logistical support for their operations. As such interactions are repeated over time, concerns arise that criminal and terrorist organizations will integrate and might even form new types of organizations.

Support for this point can be found in the seminal work of Sutherland, who has argued that the “intensity and duration” of an association with criminals makes an individual more likely to adopt criminal behavior. In conflict regions, where there is intensive interaction between criminals and terrorists, there is more shared behavior and a process of mutual learning that goes on.

The dynamic relationship between international terror and transnational crime has important strategic implications for the United States.

The result is a model known as the terror-crime interaction spectrum that depicts the relationship between terror and criminal groups and the different forms it takes.

Each form of interaction represents different, yet specific, threats, as well as opportunities for detection by law enforcement and intelligence agencies.

An interview with a retired member of the Chicago organized crime investigative unit revealed that it had investigated taxi companies and taxicab owners as cash-based money launderers. Logic suggests that terrorists may also be benefiting from the scheme. But this line of investigation was not pursued in the 9/11 investigations although two of the hijackers had worked as taxi drivers.

Within the spectrum, processes we refer to as activity appropriation, nexus, symbiotic relationship, hybrid, and transformation illustrate the different forms of interaction between a terrorist group and an organized crime group, as well as the behavior of a single group engaged in both terrorism and organized crime.

While activity appropriation does not represent organizational linkages between crime and terror groups, it does capture the merger of methods that were well-documented in section 2. Activity appropriation is one way that terrorists are exposed to organized crime activities and, as Chris Dishman has noted, can lead to a transformation of terror cells into organized crime groups.

Applying the Sutherland principle of differential association, these activities are likely to bring a terror group into regular contact with organized crime. As the terrorists attempt to acquire forged documents, launder money, or pay bribes, it is a natural step to draw on the support and expertise of the criminal group, which is likely to have more experience in these activities. This relationship is referred to here as a nexus.

Terrorists first engage in “do it yourself” organized crime and then turn to organized crime groups for specialized services like document forgery or money laundering.

In most cases a nexus involves the criminals providing goods and services to terrorists for payment, although it can work in both directions. A typically short-term relationship, a nexus does not imply that the criminals share the ideological views of the terrorists, merely that the transaction offers benefits to both sides.

After all, they have many needs in common: safe havens, false documentation, evasive tactics, and other strategies to lower the risk of being detected. In Latin America, transnational criminal gangs have employed terrorist groups to guard their drug processing plants. In Northern Ireland, terrorists have provided protection for human smuggling operations by the Chinese Triads.

If the nexus continues to benefit both sides over a period of time, the relationship will deepen. More members of both groups will cooperate, and the groups will create structures and procedures for their business transactions, transfer skills and/or share best practices. We refer to this closer, more sustained cooperation as a symbiotic relationship, and define it as a relationship of mutual benefit or dependence.

In the next stage, the two groups continue to cooperate over a long period and members of the organized crime group begin to share the ideological goals of the terrorists. They grow increasingly alike and finally they merge. That process results in a hybrid or dark network49 that has been memorably described as terrorist by day and criminal by night.50 Such an organization engages in criminal acts but also has a political agenda. Both the criminal and political ends are advanced by the use of violence and corruption.

These developments are not inevitable; rather, they result from a series of opportunities that can lead to the next stage of cooperation. It is important to recognize, however, that even once two groups have reached the hybrid stage, there is no reason per se to suspect that transformation will follow. Likewise, a group may persist with borrowed methods indefinitely without ever progressing to cooperation. In Italy and elsewhere, crime groups that also engaged in terrorism never found a terrorist partner and thus remained at the activity appropriation stage. Eventually they ended their terrorist activities and returned to the exclusive pursuit of organized crime.
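For analytic bookkeeping, the spectrum can be encoded as a set of ordered stages. The Python sketch below is our own illustration; the one-stage-at-a-time progression is a modeling convenience, and, as noted above, progression is never inevitable:

    from enum import IntEnum

    class InteractionStage(IntEnum):
        # Stages of the terror-crime interaction spectrum described above.
        ACTIVITY_APPROPRIATION = 1   # borrowing methods; no organizational link
        NEXUS = 2                    # short-term transactions for mutual benefit
        SYMBIOTIC_RELATIONSHIP = 3   # sustained cooperation, shared procedures
        HYBRID = 4                   # merged group with criminal and political aims
        TRANSFORMATION = 5           # one type of group becomes the other

    def possible_next_stages(current):
        """Progression is possible but never inevitable: a group may stall
        at any stage or abandon the interaction entirely."""
        if current < InteractionStage.TRANSFORMATION:
            return [InteractionStage(current + 1)]
        return []

    print(possible_next_stages(InteractionStage.NEXUS))
    # -> [<InteractionStage.SYMBIOTIC_RELATIONSHIP: 3>]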

Interestingly, the TraCCC team found no example where a terrorist group engaging in organized crime, either through activity appropriation or through an organizational linkage, came into conflict with a criminal group.51 Neither archival sources nor our interviews revealed such a conflict over “turf,” though logic would suggest that organized crime groups would react to such forms of competition.

The spectrum does not create exact models of the evolution of criminal-terrorist cooperation. Indeed, the evidence presented both here and in prior studies suggests that a single evolutionary path for crime-terror interactions does not exist. Environmental factors outside the control of either organization and the varied requirements of specific organized crime or terrorist groups are but two of the reasons that interactions appear more idiosyncratic than generalizable.

Using the PIE method, investigators and analysts can gain an understanding of the terror-crime intersection by analyzing evidence sourced from communications, financial transactions, organizational charts, and behavior. They can also apply the methodology to analyze watch points where the two entities may interact. Finally, using physical, electronic, and data surveillance, they can develop indicators showing where watch points translate into practice.

  5. The significance of terror-crime interactions in geographic terms

Some shared characteristics arose from examining this case. First, both neighborhoods shared similar diaspora compositions and a lack of effective or interested policing. Second, both terror cells had strong connections to the shadow economy.

The case demonstrated that each cell shared three factors—poor governance, a sense of ethnic separation amongst the cell (supported by the nature of the larger diaspora neighborhoods), and a tradition of organized crime.

U.S. intelligence and law enforcement are naturally inclined to focus on manifestations of organized crime and terrorism in their own country, but they would benefit from studying and assessing patterns and behavior of crime in other countries as well as areas of potential relevance to terrorism.

When turning to the situation overseas, one can differentiate between longstanding crime groups and their more recently formed counterparts according to their relationship to the state. With the exception of Colombia, rarely do large, established (i.e., “traditional”) crime organizations link with terrorists. These groups possess long-held financial interests that would suffer should the structures of the state and the international financial community come to be undermined. Through corruption and movement into the lawful economy, these groups minimize the risk of prosecution and therefore do not fear the power of state institutions.

Developing countries with weak economies, a lack of social structures, many desperate, hungry people, and a history of unstable government are relatively likely to provide ideological and economic foundations for both organized crime and terrorism within their borders, and relatively unlikely to have much capacity to combat either of them. Conflict zones have traditionally provided tremendous opportunities for smuggling and corruption and reduced oversight capacities, as regulatory and enforcement structures become almost solely directed at military targets. They are therefore especially vulnerable both to serious organized crime and to violent activity directed at civilian populations for political goals, as well as to cooperation between those engaging in pure criminal activities and those engaging in politically motivated violence.

Post-conflict zones are also likely to spawn such cooperation, as such areas often retain weak enforcement capacity for some time following an end to formal hostilities.

These patterns of criminal behavior and organization can arise in areas as diverse as conflict zones overseas (patterns that tend to replicate once participants arrive in the U.S.) and neighborhoods in U.S. cities. The problematic combination of poor governance, ethnic separation from the larger society, and a tradition of (frequently international) criminal activity is the primary concern behind this broad taxonomy of geographic locales for crime-terror interaction.

  6. Watch points and indicators

Taking the evidence of cooperation between organized crime and terrorism, we have generated 12 specific areas of interaction, which we refer to as watch points. In turn these watch points are subdivided into a number of indicators that point out where interaction between terror and crime may be taking place.

These watch points cover a variety of habits and operating modes of organized crime and terrorist groups.

We have organized our watch points into three categories: environmental, organizational, and behavioral. Each of the following sections details one of the twelve watch points.

 

Watch Point 1: Open activities in the legitimate economy

Watch Point 2: Shared illicit nodes

Watch Point 3: Communications

Watch Point 4: Use of information technology (IT)

Watch Point 5: Violence

Watch Point 6: Use of corruption

Watch Point 7: Financial transactions & money laundering

Watch Point 8: Organizational structures

Watch Point 9: Organizational goals

Watch Point 10: Culture

Watch Point 11: Popular support

Watch Point 12: Trust

 

6.1 Watch Point 1: Open activities in the legitimate economy

The many indicators of possible links include habits of travel, the use of mail and courier services, and the operation of fronts.

Organized crime and terror may be associated with subterfuge and secrecy, but both types of group engage legitimate society quite openly for particular political purposes. Yet in the first instance, criminal groups are likely to leave greater “traces” than terrorist groups, especially when they operate in societies with functioning governments.

Terrorist groups usually seek to make common cause with segments of society that will support their goals, particularly the very poor and the disadvantaged. Terrorists usually champion repressed or disenfranchised ethnic and religious minorities, describing their terrorist activities as mechanisms to pressure the government for greater autonomy and freedom, even independence, for these minorities… They openly take responsibility for their attacks, but their operational mechanisms are generally kept secret, and any ongoing contacts they may have with legitimate organizations are carefully hidden.

Criminal groups, like terrorists, may have political goals. For example, such groups may seek to strengthen their legitimacy through donating some of their profits to charity. Colombian drug traffickers are generous in their support of schools and local sports teams.5

Criminals of all types could scarcely carry out criminal activities, maintain their cover, and manage their money flows without doing legal transactions with legitimate businesses.

Travel: Frequent use of passenger carriers and shipping companies is a potential indicator of illicit activity. Clues can be gleaned from almost any pattern of travel that can be identified as such.

Mail and courier services: Indicators of interaction are present in the tracking information on international shipments of goods, which also generate customs records. Large shipments require bills of lading and other documentation. Analysis of such transactions, cross-referenced with information in crime databases, can identify links between organized crime and terrorist groups (a toy sketch follows below).

Fronts: A shared front company or mutual connections to legitimate businesses are clearly also indicators of interaction.
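The cross-referencing described under mail and courier services can be illustrated with a toy Python example (all records invented; a real implementation would query customs and law-enforcement databases rather than in-memory lists):

    # Invented shipment records for illustration only.
    shipments = [
        {"shipper": "Alfa Trading LLC", "consignee": "Beta Imports",
         "route": "Odesa-Istanbul"},
        {"shipper": "Gamma Export", "consignee": "Delta Goods",
         "route": "Tbilisi-Dubai"},
    ]
    crime_linked = {"Alfa Trading LLC"}     # entities tied to organized crime
    terror_linked = {"Beta Imports"}        # entities tied to terror groups

    for record in shipments:
        # A shipment joining a crime-linked shipper to a terror-linked
        # consignee is a candidate indicator for this watch point.
        if record["shipper"] in crime_linked and record["consignee"] in terror_linked:
            print("possible crime-terror link:", record)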

Watch Point 2: Shared illicit nodes

 

The significance of overt operations by criminal groups should not be overstated. Transnational crime and terror groups alike carry out their operations for the most part with illegal and undercover methods. There are many similarities in these tactics. Both organized criminals and terrorists need forged passports, driver’s licenses, and other fraudulent documents. Dishonest accountants and bankers help criminals launder money and commit fraud. Arms and explosives, training camps and safe houses are other goods and services that terrorists obtain illicitly.

Fraudulent Documents. Groups of both types may use the same sources of false documents, or the same techniques, indicating cooperation or overlap. A criminal group often develops an expertise in false document production as a business, expanding production and building a customer base.

 

Some of the 9/11 hijackers fraudulently obtained legitimate driver’s licenses through a fraud ring based at an office of the DMV in the Virginia suburbs of Washington, DC. According to an INS investigator, this ring was under investigation well before the 9/11 attacks, but there was insufficient political will inside the INS to take the case further.

Arms Suppliers. Both terror and organized crime might use the same supplier, or the same distinctive method of doing business, such as bartering weapons or drugs. In 2001 the Basque terror group ETA contracted with factions of the Italian Camorra to obtain missile launchers and ammunition in return for narcotics.

Financial experts. Bankers and financial professionals who assist organized crime might also have terrorist affiliations. The methods of money laundering long used by narcotics traffickers and other organized crime have now been adopted by some terrorist groups.

 

Drug Traffickers. Drug trafficking is the single largest source of revenues for international organized crime. Substantial criminal groups often maintain well-established smuggling routes to distribute drugs. Such an infrastructure would be valuable to terrorists who purchased weapons of mass destruction and needed to transport them.

 

Other Criminal Enterprises. An increasing number of criminal enterprises outside of narcotics smuggling are serving the financial or logistical ends of terror groups and thus serve as nodes of interaction. For example, piracy on the high seas, a growing threat to maritime commerce, often depends on the collusion of port authorities, which are controlled in many cases by organized crime.

These relationships are particularly characteristic of developed countries with effective law enforcement, where criminals obviously need to be more cautious and often restrict their operations to covert activity. In conflict zones, however, criminals of all types feel even less restraint about flaunting their illegal nature, since there is little chance of being detected or apprehended.

Watch Point 3: Communications

 

The Internet, mobile phones and satellite communications enable criminals and terrorists to communicate globally in a relatively secure fashion. FARC, in concert with Colombian drug cartels, offered training on how to set up narcotics trafficking businesses, using secure websites and email to handle registration.

Such scenarios are neither hypothetical nor anecdotal. Interviews with an analyst at the US Drug Enforcement Administration revealed that narcotics cartels were increasingly using encryption in their digital communications. The analyst further stated that the same groups were frequently turning to information technology experts to provide encryption to help secure their communications.

Nodes of interaction therefore include:

  • Technical overlap: Examples exist where organized crime groups opened their illegal communications systems to any paying customer, thus providing a service to other criminals and terrorists among others. For example, a recent investigation found clandestine telephone exchanges in the Tri-Border region of South America that were connected to Jihadist networks. Most were located in Brazil, since calls between Middle Eastern countries and Brazil would elicit less suspicion and thus less chance of electronic eavesdropping.
  • Personnel overlap: Crime and terror groups may recruit the same high-tech specialists to their cause. Given their ability to encrypt messages, criminals of all kinds may rely on outsiders to carry the message. Smuggling networks all have operatives who can act as couriers, and terrorists have networks of sympathizers in ethnic diasporas who can also help.

Watch Point 4: Use of information technology (IT)

 

Organized crime has devised IT-based fraud schemes such as online gambling, securities fraud, and pirating of intellectual property. Such schemes appeal to terror groups, too, particularly given the relative anonymity that digital transactions offer. Investigators of the Bali disco bombing of 2002 found that the laptop computer of the ringleader, Imam Samudra, contained a primer he authored on how to use online fraud to finance operations. Evidence of terror groups’ involvement in such schemes is a significant set of indicators of cooperation or overlap.

Indicators of possible cooperation or nodes of interaction include:

  • Fundraising: Online fraud schemes and other uses of IT for illicit gain are already well established among organized crime groups, and terrorists are following suit. Such IT-assisted criminal activities serve as another node of overlap for crime and terror groups, and thus expand the area of observation beyond the brick-and-mortar realm into cyberspace: investigators should now expect to find evidence of collaboration on the Internet or in email as much as through telephone calls or postal services.

  • Use of technical experts: While no evidence exists that criminals and terrorists have directly cooperated to conduct cybercrime or cyberterrorism, they are often served by the same technical experts.

Watch Point 5: Violence

 

Violence is not so much a tactic of terrorists as their defining characteristic. These acts of violence are designed to obtain publicity for the cause, to create a climate of fear, or to provoke political repression, which they hope will undermine the legitimacy of the authorities. Terrorist attacks are deliberately highly visible in order to enhance their impact on the public consciousness. Indiscriminate violence against innocent civilians is therefore more readily ascribed to terrorism.

To date, however, no examples exist of terrorists engaging criminal groups to commit violent acts on their behalf.

A more significant challenge lies in trying to discern generalities about organized crime's patterns of violence. Categorizing such patterns according to their scope or their visibility is suspect: in the past, crime groups have used violence selectively and quietly to achieve some goals, but broadly and loudly to achieve others. Nor can one categorize organized crime's violence according to its goals, as social, political, and economic considerations often overlap in any given attack or campaign.

Violence is therefore an important watch point that may not yield specific indicators of crime-terror interaction per se but can serve to frame the likelihood that an area might support terror-crime interaction.

Watch Point 6: Use of corruption

 

Both terrorists and organized criminals bribe government officials to undermine the work of law enforcement and regulation. Corrupt officials assist criminals by exerting pressure on businesses that refuse to cooperate with organized crime groups, or by providing passports for terrorists. The methods of corruption are diverse on both sides and include payments, the provision of illegal goods, the use of compromising information to extort cooperation, and outright infiltration of a government agency or other target.

Many studies have demonstrated that organized crime groups often evolve in places where the state cannot guarantee law and order or provide basic health care, education, and social services. The absence of effective law enforcement combines with rampant corruption to make well-organized criminals nearly invulnerable.

Colombia may be the only example of a conflict zone where a major transnational crime group with very large profits is directly and openly connected to terrorists. The interaction between the FARC and ELN terror groups and the drug syndicates provides crucial financial resources for the guerillas to operate against the Colombian state – and against each other. This is facilitated by pervasive corruption, from top government officials to local police. Corruption has served as the foundation for the growth of both the narcotics cartels and the insurgent/terrorist groups.

In the search for indicators, it would be simplistic to look for a high level of corruption, particularly in conflict zones. Instead, we should pose a series of questions:

  • Cooperation Are terrorist and criminal groups working together to minimize cost and maximize leverage from corrupt individuals and institutions?
  • Division of labor Are terrorist and criminal groups purposefully corrupting the areas they have most contact with? In the case of crime groups, that would be law enforcement and the judiciary; in the case of terrorists, the intelligence and security services.
  • Autonomy Are corruption campaigns carried out by one or both groups completely independent of the other?

These indicators can be applied to analyze a number of potential targets of corruption. Personnel who can provide protection or services are frequent targets; examples include law enforcement, the judiciary, border guards, politicians and elites, internal security agents, and consular officials. Economic aid and foreign direct investment are also targeted by criminals and terrorists as sources of funds that they can access by means of corruption.

 

Watch Point 7: Financial transactions & money laundering

 

Despite the different purposes behind their respective uses of financial institutions (organized crime seeks to turn illicit funds into licit funds; terrorists seek to move licit funds to use them for illicit ends), the two types of group tend to share a common infrastructure for carrying out their financial activities. Both need reliable means of moving and laundering money in many different jurisdictions, and as a result both use similar methods to move money internationally. Both use charities and front groups as a cover for money flows.

Possible indicators include:

  • Shared methods of money laundering
  • Mutual use of known front companies and banks, as well as financial experts.

Watch Point 8: Organizational structures

 

The traditional model of organized crime used by U.S. law enforcement is that of the Sicilian Mafia – a hierarchical, conservative organization embedded in the traditional social structures of southern Italy… Among today's organized crime groups, the Sicilian Mafia is more the exception than the rule.

Most organized crime now operates not as a hierarchy but as a decentralized, loose-knit network – a crucial similarity to terror groups. Networks offer better security, make intelligence-gathering more efficient, and cover geographic distances and span diverse memberships more effectively.

Membership dynamics Both terror and organized crime groups – with the exception of the Sicilian Mafia and other traditional crime groups (e.g., the Yakuza) – are made up of members with loose, relatively short-term affiliations to each other and even to the group itself. Such members can readily be recruited by other groups. By this route, criminals have become terrorists.

Scope of organization Terror groups need to make constant efforts to attract and recruit new members. Obvious attempts to attract individuals from crime groups are a clear indication of cooperation. An intercepted phone conversation in May 2004 of a suspected terrorist named Rabei Osman Sayed Ahmed revealed his recruitment tactics: “You should also know that I have met other brothers, that slowly I have created with a few things. First, they were drug pushers, criminals, I introduced them to the faith and now they are the first ones who ask when the moment of the jihad will be…”

Need to buy, wish to sell Often the business transactions between the two sides operate in both directions. Terrorist groups are not just customers for the services of organized crime, but often act as suppliers, too. Arms supply by terrorists is particularly marked in certain conflict zones. Thus, any criminal group found to be supplying outsiders with goods or services should be investigated for its client base too.

The Islamic radical cell that planned the Madrid train bombings of 2004 was required to support itself financially through business ventures despite its initial funding by Al Qaeda. Investigators who discovered the cell's money laundering were able to learn more about the terrorists' other activities as well.

Watch Point 9: Organizational goals

 

In theory, their different goals are what set terrorists apart from the perpetrators of organized crime. Terrorist groups are most often associated with political ends, such as change in leadership regimes or the establishment of an autonomous territory for a subnational group. Even millenarian and apocalyptic terrorist groups, such as the science-fiction mystics of Aum Shinrikyo, often include some political objectives. Organized crime, on the other hand, is almost always focused on personal enrichment.

By cataloging the different – and shifting – goals of terror and organized crime groups, we can develop indicators of convergence or divergence. This will help identify shared aspirations or areas where these aims might bring the two sides into conflict. On this basis, investigators can ask what conditions might prompt either side to adopt new goals or to fall back to basic goals, such as self-preservation.

  • Long view or short-termism
  • Affiliations of protagonists

 

Watch Point 10: Culture

 

Both terror and criminal groups use ideologies to maintain their internal identity and provide external justifications for their activities. Religious terror groups adopt and may alter the teachings of religious scholars to suggest divine support for their cause, while Italian, Chinese, Japanese, and other organized crime groups use religious and cultural themes to win public acceptance. Both types use ritual and tradition to construct and maintain their identity. Tattoos, songs, language, and codes of conduct are symbolic to both.

Religious affiliations, strong nationalist sentiments and strong roots in the local community are often characteristics that cause organized criminals to shun any affiliation with terrorists. Conversely, the absence of such ties means that criminals have fewer constraints keeping them from linking with terrorists.

In any organization, culture connects and strengthens ties between members. For networks, cultural features can also serve as a bridge to other networks.

  • Religion Many criminal and terrorist groups feature religion prominently.
  • Nationalism Ethno-nationalist insurgencies and criminal groups with deep historical roots are particularly likely to play the nationalist card.
  • Society Many criminal and terrorist networks adapt cultural aspects of the local and regional societies in which they operate to include local tacit knowledge, as contained in narrative traditions. Manuel Castells notes the attachment of drug traffickers to their country, and to their regions of origin. “They were/are deeply rooted in their cultures, traditions, and regional societies. …they have also revived local cultures, rebuilt rural life, strongly affirmed their religious feeling, and their beliefs in local saints and miracles, supported musical folklore (and were rewarded with laudatory songs from Colombian bards)…”

Watch Point 11: Popular support

 

Both organized crime and terrorist groups engage legitimate society in furtherance of their own agendas. In conflict zones, this may be done quite openly, while under the rule of law they are obliged to do so covertly. One way of doing so is to pay lip service to the interests of certain ethnic groups or social classes. Organized crime is particularly likely to appeal to disadvantaged people or people in certain professions through paternalistic actions that make it a surrogate for the state. For instance, the Japanese Yakuza crime groups provided much-needed assistance to the citizens of Kobe after the serious 1995 earthquake there. Russian organized crime habitually supports cultural groups and sports troupes.

 

Both crime and terror groups derive crucial power and prestige from the support of their members and of some segment of the public at large. This may reflect enlightened self-interest, when people see that the criminals are acting on their behalf and improving their well-being and personal security. But it is equally likely that people are simply afraid to resist a violent criminal group in their neighborhood.

This quest for popular support and common cause suggests various indicators:

  • Sources Terror groups seek and sometimes obtain the assistance of organized crime based on the perceived worthiness of the terrorist cause, or because of their common cause against state authorities or other sources of opposition. In testimony before the U.S. House Committee on International Relations, Interpol Secretary General Ronald Noble made this point. One of his examples was that Lebanese syndicates in South America send funds to Hezbollah.
  • Means Groups that cooperate may have shared activities for gaining popular support such as political parties, labor movements, and the provision of social services.
  • Places In conflict zones where the government has lost authority to criminal groups, social welfare and public order might be maintained by the criminal groups that hold power.

 

Watch Point 12: Trust

Like business corporations, terrorist and organized crime groups must attract and retain talented, dedicated, and loyal personnel. Such personnel are at an even greater premium than in the legitimate economy because criminals cannot recruit openly. A further challenge is that law enforcement and intelligence services are constantly trying to infiltrate and dismantle criminal networks. Members' allegiance to any such group is constantly tested and demonstrated through rituals such as initiation rites…

We propose three forms of trust in this context, using as a basis Newell and Swan's model for interpersonal trust within commercial and academic groups.94

  • Companion trust, based on goodwill or personal friendships… In this context, indicators of terror-crime interaction would be when members of the two groups use personal bonds based on family, tribe, and religion to cement their working relationship. Efforts to recruit known associates of the other group, or to recruit in common pools such as diasporas, would be another indicator.
  • Competence trust, which Newell and Swan define as the degree to which one person depends upon another to perform the expected task.
  • Commitment or contract trust, where all actors understand the practical importance of their role in completing the task at hand.

7. Case studies

7.1. The Tri-Border Area of Paraguay, Brazil, and Argentina

Chinese Triads such as the Fuk Ching, Big Circle Boys, and Flying Dragons are well established in Ciudad del Este (CDE), Paraguay's hub city in the Tri-Border Area, and are believed to be the main force behind organized crime there.

CDE is also a center of operations for several terrorist groups, including Al Qaeda, Hezbollah, Islamic Jihad, Gamaa Islamiya, and FARC.

Watch points

Crime and terrorism in the Tri-Border Area interact seamlessly, making it difficult to draw a clean line between the types of persons and groups involved in each of these two activities. There is no doubt, however, that the social and economic conditions allow groups that are originally criminal in nature and groups whose primary purpose is terrorism to function and interact freely.

Organizational structure

Evidence from CDE suggests that some of the local structures used by both groups are highly likely to overlap. There is no indication, however, of any significant organizational overlap between the criminal and terrorist groups. Their cooperation, when it exists, is ad hoc and without any formal or lasting agreements, i.e., activity appropriation and nexus forms only.

Organizational goals

In this region, the short-term goals of criminals and terrorists converge. Both benefit from easy border crossings and the networks necessary to raise funds.

Culture Cultural affinities between criminal and terrorist groups in the Tri-Border Area include shared ethnicities, languages and religions.

It emerged that 400 to 1,000 kilograms of cocaine may have been shipped monthly through the Tri-Border Area on its way to Sao Paulo and thence to the Middle East and Europe.

Numerous arrests revealed the strong ties between entrepreneurs in CDE and criminal and potentially terrorist groups. From the evidence in CDE it seems that the two phenomena operate in rather separate cultural realities, focusing their operations within their own ethnic groups. Yet culture does not appear to be a major hindrance to cooperation between organized crime and terrorists.

Illicit activities and subterfuge

The evidence in CDE suggests that terrorists see it as logical and cost-effective to use the skills, contacts, communications and smuggling routes of established criminal networks rather than trying to gain the requisite experience and knowledge themselves. Likewise, terrorists appear to recognize that striking out on their own would risk turf conflicts with established criminal groups.

There is a clear link between Hong Kong-based criminal groups that specialize in large-scale trafficking of counterfeit products such as music albums and software, and the Hezbollah cells active in the Tri-Border Area. Within their supplier-customer relationship, the Hong Kong crime groups smuggle contraband goods into the region and deliver them to Hezbollah operatives, who in turn profit from their sale. The proceeds are then used to fund the terrorist groups.

Open activities in the legitimate economy

The knowledge and skills potential of CDE is tremendous. While no specific examples connect terrorist and criminal groups through the purchase of legal goods and services, the likelihood of such connections is high, given how thoroughly the CDE economy is saturated with organized crime.

Support or sustaining activities

The Tri-Border Area has an unusually large and efficient transport infrastructure, which naturally assists organized crime. In turn, the many criminals and terrorists operating under cover require a sophisticated and reliable document forgery industry. The ease with which forged documents can be obtained in CDE is an indicator of cooperation between terrorists and criminals.

Brazilian intelligence services have evidence that Osama bin Laden visited CDE in 1995 and met with the members of the Arab community in the city’s mosque to talk about his experience as a mujahadeen fighter in the Afghan war against the Soviet Union.

Use of violence

Contract murder in CDE costs as little as one thousand dollars, and the frequent violence in CDE is directed at business people who refuse to bend to extortion by terror groups. Ussein Mohamed Taiyen, president of the CDE Chamber of Commerce, was one such victim—murdered because he refused to pay the tax.

Financial transactions and money laundering

In 2000, money laundering in the Tri-Border Area was estimated at 12 billion U.S. dollars annually.

As many as 261 million U.S. dollars annually has been raised in Tri-Border Area and sent overseas to fund the terrorist activities of Hezbollah, Hamas, and Islamic Jihad.

Use of corruption

Most of the illegal activities in the Tri-Border Area bear the hallmark of corruption. In combination with the generally low effectiveness of state institutions, especially in Paraguay, and high level of corruption in that country, CDE appears to be a perfect environment for the logistical operations of both terrorists and organized criminals.

Even the few bona fide anti-corruption attempts made by the Paraguayan government have been undermined by this pervasive corruption; one example is the attempted crackdown on the Chinese criminal groups in CDE. The Consul General of Taiwan in CDE, Jorge Ho, stated that the Chinese groups were successful in bribing Paraguayan judges, effectively neutralizing law enforcement moves against the criminals.122

The other watch points described earlier – including fundraising and the use of information technology – can also be illustrated with similar indicators of possible cooperation between terror and organized crime groups.

In sum, for the investigator or analyst seeking examples of perfect conditions for such cooperation, the Tri-Border Area is an obvious choice.

7.2. Crime and terrorism in the Black Sea region

Illicit or veiled operations Cigarette, drug, and arms smuggling have been major sources of financing for all the terrorist groups in the region.

Cigarette and alcohol smuggling has fueled the Kurdish-Turkish conflict as well as the terrorist violence in both the Abkhaz and Ossetian conflicts.

From the very beginning, the Chechen separatist movement had close ties with the Chechen crime rings in Russia, mainly operating in Moscow. These crime groups provided, and some still provide, financial support for the insurgents.

8. Conclusion and recommendations

The many examples in this report of cooperation between terrorism and organized crime make clear that the links between these two potent threats to national and global security are widespread, dynamic, and dangerous. It is only rational to consider the possibility that an effective organized crime group may have a connection with terrorists that has gone unnoticed so far.

Our key conclusion is that crime is not a peripheral issue when it comes to investigating possible terrorist activity. Efforts to analyze the phenomenon of terrorism without considering the crime component undermine all counter-terrorist activities, including those aimed at protecting sites containing weapons of mass destruction.

Yet the staffs of intelligence and law enforcement agencies in the United States are already overwhelmed. Their common complaint is that they do not have the time to analyze the evidence they possess, or to eliminate unnecessary avenues of investigation. The problem is not so much a dearth of data as the lack of suitable tools to evaluate that data and make optimal decisions about when, and how, to investigate further.

Scrutiny and analysis of the interaction between terrorism and organized crime will become a matter of routine best practice. Awareness of the different forms this interaction takes, and the dynamic relationship between them, will become the basis for crime investigations, particularly for terrorism cases.

In conclusion, our overarching recommendation is that crime analysis must be central to understanding the patterns of terrorist behavior and cannot be viewed as a peripheral issue.

For policy analysts:

  1. More detailed analysis of the operation of illicit economies where criminals and terrorists interact would improve understanding of how organized crime operates, and how it cooperates with terrorists. Domestically, more detailed analysis of the businesses where illicit transactions are most common would help investigation of organized crime – and its affiliations. More focus on the illicit activities within closed ethnic communities in urban centers and in prisons in developed countries would prove useful in addressing potential threats.
  2. Corruption overseas, which so often facilitates organized crime and terrorism, should be elevated to a U.S. national security concern with an operational focus. After all, many jihadists are recruited because they are disgusted with the corrupt governments in their home countries. Corruption has also directly facilitated criminal acts: Chechen suicide bombers, for example, bribed airport personnel to board aircraft in Moscow.
  3. Analysts must study patterns of organized crime-terrorism interaction abroad as guidance for what may be observed subsequently in the United States.
  4. Intelligence and law enforcement agencies need more analysts with the expertise to understand the motivations and methods of criminal and terrorist groups around the globe, and with the linguistic and other skills to collect and analyze sufficient data.

For investigators:

  1. The separation of criminals and terrorists is not always as clear cut as many investigators believe. Criminal and terrorist groups are often indistinguishable in conflict zones and in prisons.
  2. The hierarchical structure and conservative habits of the Sicilian Mafia no longer serve as an appropriate model for organized crime investigations. Most organized crime groups now operate as loose networked affiliations. In this respect they have more in common with terrorist groups.
  3. The PIE method provides a series of indicators that can result in superior profiles and higher- quality risk analysis for law enforcement agencies both in the United States and abroad. The approach can be refined with sensitive or classified information.
  4. Greater cooperation between the military and the FBI would allow useful sharing of intelligence, such as the substantial knowledge on crime and illicit transactions gleaned by the counterintelligence branch of the U.S. military that is involved in conflict regions where terror-crime interaction is most profound.
  5. Law enforcement personnel must develop stronger working relationships with the business sector. In the past, there has been too little cognizance of possible terrorist-organized crime interaction among the clients of private-sector corporations and banks. Law enforcement must pursue evidence of criminal affiliations with high-status individuals and business professionals, who are often facilitators of terrorist financing and money laundering. In the spirit of public-private partnerships, corporations and banks should be placed under an obligation to watch for indications of organized crime or terrorist activity by their clients and business associates. Furthermore, they should attempt to analyze what they discover and to pass on their assessment to law enforcement.
  6. Law enforcement personnel posted overseas by federal agencies such as the DEA, the Department of Justice, the Department of Homeland Security, and the State Department’s Bureau of International Narcotics and Law Enforcement should be tasked with helping to develop a better picture of the geography of organized crime and its most salient features (i.e., the watch points of the PIE approach). This should be used to assist analysts in studying patterns of crime behavior that put American interests at risk overseas and alert law enforcement to crime patterns that may shortly appear in the U.S.
  7. Training for law enforcement officers at the federal, state, and local levels in identifying authentic and forged passports, visas, and other documents required for residency in the U.S. would eliminate a major shortcoming in investigations of criminal networks.

A.1 Defining the PIE Analytical Process

In order to begin identifying the tools to support the analytical process, the process of analysis itself first had to be captured. The TraCCC team adopted Max Boisot's (2003) I-Space as a representation for describing the analytical process. As Figure A-1 illustrates, I-Space provides a three-dimensional representation of the cognitive steps that constitute analysis in general and the utilization of the PIE methodology in particular. The analytical process is reduced to a series of logical steps, with one step feeding the next until the process starts anew. The steps are:

  1. Scanning
  2. Codification
  3. Abstraction
  4. Diffusion
  5. Validation
  6. Impacting

Over time, repeated iterations of these steps result in more and more PIE indicators being identified, more information being gathered, more analytical product being generated, and more recommendations being made. Boisot’s I-Space is described below in terms of law enforcement and intelligence analytical processes.

A.1.1. Scanning

The analytical process begins with scanning, which Boisot defines as the process of identifying threats and opportunities in generally available but often fuzzy data. For example, investigators often scan available news sources, organizational data sources (e.g., intelligence reports) and other information feeds to identify patterns or pieces of information that are of interest. Sometimes this scanning is performed with a clear objective in mind (e.g., set up through profiles to identify key players). From a tools perspective, scanning with a focus on a specific entity like a person or a thing is called a subject-based query. At other times, an investigator is simply reviewing incoming sources for pieces of a puzzle that is not well understood at that moment. From a tools perspective, scanning with a focus on activities like money laundering or drug trafficking is called a pattern-based query. For this type of query, a specific subject is not the target, but rather a sequence of actors/activities that form a pattern of interest.

Many of the tools described herein focus on either:

o Helping an investigator build models for these patterns and then comparing those models against the data to find ‘matches’, or

o Supporting automated knowledge discovery where general rules about interesting patterns are hypothesized and then an automated algorithm is employed to search through large amounts of data based on those rules.

The choice between subject-based and pattern-based queries depends on several factors including the availability of expertise, the size of the data source to be scanned, the amount of time available and, of course, how well the subject is understood and anticipated. For example, subject-based queries are by nature more tightly focused and thus are often best conducted through keyword or Boolean searches, such as a Google search containing the string “Bin Laden” or “Abu Mussab al-Zarqawi.” Pattern-based queries, on the other hand, support a relationship/discovery process, such as an iterative series of Google searches starting at ‘with all of the words’ terrorist, financing, charity, and hawala, proceeding through ‘without the words’ Hezbollah and Iran and culminating in ‘with the exact phrase’ Al Qaeda Wahabi charities. Regardless of which is employed, the results provide new insights into the problem space. The construction, employment, evaluation, and validation of results from these various types of scanning techniques will provide a focus for our tool exploration.
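
The iterative query just described can be made concrete with a short sketch. The helper below and its parameter names are illustrative only, not part of any tool discussed in this report; it simply composes a search string in the three stages the example walks through.

    def build_query(all_words=(), none_words=(), exact_phrase=None):
        """Compose a search-engine query string from structured criteria."""
        parts = [" ".join(all_words)]                 # 'with all of the words'
        parts += ["-" + w for w in none_words]        # 'without the words'
        if exact_phrase:
            parts.append('"%s"' % exact_phrase)       # 'with the exact phrase'
        return " ".join(p for p in parts if p)

    # The iterative refinement described above, step by step:
    q1 = build_query(all_words=["terrorist", "financing", "charity", "hawala"])
    q2 = build_query(all_words=["terrorist", "financing", "charity", "hawala"],
                     none_words=["Hezbollah", "Iran"])
    q3 = build_query(all_words=["terrorist", "financing", "charity", "hawala"],
                     none_words=["Hezbollah", "Iran"],
                     exact_phrase="Al Qaeda Wahabi charities")
    print(q1, q2, q3, sep="\n")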

A.1.2. Codification

In order for the insights that result from scanning to be of use to the investigator, they must be placed into the context of the questions that the investigator is attempting to answer. This context provides structure through a codification process that turns disconnected patterns into coherent thoughts that can be more easily communicated to the community. The development of indicators is an example of this codification. Building up network maps from entities and their relationships is another example that could support indicator development. Some important tools will be described that support this codification step.

A.1.3. Abstraction

During the abstraction phase, investigators generalize the application of newly codified insights to a wider range of situations, moving from the specific examples identified during scanning and codification towards a more abstract model of the discovery (e.g., one that explains a large pattern of behavior or predicts future activities). Indicators are placed into the larger context of the behaviors that are being monitored. Tools that support the generation and maintenance of models underpinning this abstraction process will be key to making manageable the analysis of an overwhelming number of possibilities and an unlimited amount of information.

A.1.4. Diffusion

Many of the intelligence failures cited in the 9/11 Report were due to the fact that information and ideas were not shared. This was due to a variety of reasons, not the least of which were political. Technology also built barriers to cooperation, however. Information can only be shared if one of two conditions is met. Either the sender and receiver must share a context (a common language, background, understanding of the problem) or the information must be coded and abstracted (see steps 2 and 3 above) to extract it from the personal context of the sender to one that is generally understood by the larger community. Once this is done, the newly created insights of one investigator can be shared with investigators in sister groups.

The technology for the diffusion itself is available through any number of sources ranging from repositories where investigators can share information to real-time on-line cooperation. Tools that take advantage of this technology include distributed databases, peer-to-peer cooperation environments and real-time meeting software (e.g., shared whiteboards).

A.1.5. Validation

In this step of the process, the hypotheses that have been formed and shared are now validated over time, either by a direct match of the data against the hypotheses (i.e., through automation) or by working towards a consensus within the analytical community. Some hypotheses will be rejected, while others will be retained and ranked according to probability of occurrence. In either case, tools are needed to help make this match and form this consensus.

A.1.6. Impacting

Simply validating a set of hypotheses is not enough. If the intelligence gathering community stops at that point, the result is a classified CNN feed to the policy makers and practitioners. The results of steps 1 through 5 must be mapped against the opposing landscape of terrorism and transnational crime in order to understand how the information impacts the decisions that must be taken. In this final step, investigators work to articulate how the information/hypotheses they are building impact the overall environment and make recommendations on actions (e.g., probes) that might be taken to clarify that environment. The consequences of the actions taken as a result of the impacting phase are then identified during the scanning phase and the cycle begins again.

A.1.7. An Example of the PIE Analytical Approach

While section 4 provided some real-life examples of the PIE approach in action, a retrodictive analysis of terror-crime cooperation in the extraction, smuggling, and sale of conflict diamonds provides a grounding example of Boisot's six-step analytical process. Diamonds from West Africa had been a source of funding for various factions in the Lebanese civil war since the 1980s. Beginning in the late 1990s, intelligence, law enforcement, regulatory, non-governmental, and press reports suggested that individuals linked to transnational criminal smuggling and Middle Eastern terrorist groups were involved in Liberia's illegal diamond trade. We would expect to see the following from an investigator assigned to track terrorist financing:

  1. Scanning: During this step investigators could have assembled fragmentary reports to reveal crude patterns that indicated terror-crime interaction in a specific region (West Africa), involving two countries (Liberia and Sierra Leone) and trade in illegal diamonds.
  2. Codification: Based on patterns derived from scanning, investigators could have codified the terror- crime interaction by developing explicit network maps that showed linkages between Russian arms dealers, Russian and South American organized crime groups, Sierra Leone insurgents, the government of Liberia, Al Qaeda, Hezbollah, Lebanese and Belgian diamond merchants, and banks in Cyprus, Switzerland, and the U.S.
  3. Abstraction: The network map developed via codification is essentially static at this point. Utilizing social network analysis techniques, investigators could have abstracted this basic knowledge to gain a dynamic understanding of the conflict diamond network. A calculation of degree, betweenness, and closeness centrality of the conflict diamond network (a minimal sketch of these calculations follows this list) would have revealed those individuals with the most connections within the network, those who were the links between various subgroups within the network, and those with the shortest paths to reach all of the network participants. These calculations would have revealed that all the terrorist links in the conflict diamond network flowed through Ibrahim Bah, a Libyan-trained Senegalese who had fought with the mujahadeen in Afghanistan and whom Charles Taylor, then President of Liberia, had entrusted to handle the majority of his diamond deals. Bah arranged for terrorist operatives to buy all diamonds possible from the RUF, the Charles Taylor-supported rebel army that controlled much of neighboring civil-war-torn Sierra Leone. The same calculations would have delineated Taylor and his entourage as the key link to transnational criminals in the network, and the link between Bah and Taylor as the essential mode of terror-crime interaction for the purchase and sale of conflict diamonds.
  4. Diffusion: Disseminating the results of the first three analytical steps could have alerted investigators in other domestic and foreign law enforcement and intelligence agencies to the emergent terror-crime nexus involving conflict diamonds in West Africa. Collaboration between the various security services at this juncture could have revealed Al Qaeda's move into commodities such as diamonds, gold, tanzanite, emeralds, and sapphires in the wake of the Clinton Administration's freezing of 240 million dollars belonging to Al Qaeda and the Taliban in Western banks in the aftermath of the August 1998 attacks on the U.S. embassies in Kenya and Tanzania. In particular, diffusion of the parameters of the conflict diamond network could have allowed investigators to tie Al Qaeda fundraising activities to a Belgian bank account that contained approximately 20 million dollars of profits from conflict diamonds.
  5. Validation: Having linked Al Qaeda, Hezbollah, and multiple organized crime groups to the trade in conflict diamonds smuggled into Europe from Sierra Leone via Liberia, investigators would have been able to draw operational implications from the evidence amassed in the previous steps of the analytical process. For example, Al Qaeda diamond purchasing behavior changed markedly. Prior to July 2001 Al Qaeda operatives sought to buy low in Africa and sell high in Europe so as to maximize profit. Around July they shifted to a strategy of buying all the diamonds they could and offering the highest prices required to secure the stones. Investigators could have contrasted these buying patterns and hypothesized that Al Qaeda was anticipating events which would disrupt other stores of value, such as financial instruments, as well as bring more scrutiny of Al Qaeda financing in general.
  6. Impacting: In the wake of the 9/11 attacks, the hypothesis that Al Qaeda engaged in asset shifting prior to those strikes similar to that undertaken in 1999 has gained significant validity. During this final step in the analytical process, investigators could have created a watch point involving a terror-crime nexus associated with conflict diamonds in West Africa, and generated the following indicators for use in future investigations:
  • Financial movements and expenditures as attack precursors;
  • Money as a link between known and unknown nodes;
  • Changes in the predominant patterns of financial activity;
  • Criminal activities of a terrorist cell for direct or indirect operational support;
  • Surge in suspicious activity reports.
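
As a concrete illustration of the centrality calculations referenced in step 3 above, here is a minimal sketch using the open-source networkx library. The nodes are a simplified subset of the network described in step 2, and the edges are illustrative only, not a complete reconstruction of the case.

    import networkx as nx

    # Edges are an illustrative subset of the conflict diamond network.
    G = nx.Graph()
    G.add_edges_from([
        ("Al Qaeda", "Ibrahim Bah"),
        ("Hezbollah", "Ibrahim Bah"),
        ("Ibrahim Bah", "RUF"),
        ("Ibrahim Bah", "Charles Taylor"),
        ("Charles Taylor", "Russian arms dealers"),
        ("Charles Taylor", "Lebanese diamond merchants"),
        ("Lebanese diamond merchants", "Belgian banks"),
    ])

    degree = nx.degree_centrality(G)            # most connections
    betweenness = nx.betweenness_centrality(G)  # bridges between subgroups
    closeness = nx.closeness_centrality(G)      # shortest reach to all others

    for node in sorted(G, key=lambda n: -betweenness[n]):
        print("%-28s deg=%.2f btw=%.2f cls=%.2f"
              % (node, degree[node], betweenness[node], closeness[node]))

Even on this toy graph, Bah and Taylor top the betweenness ranking, which is the formal counterpart of the narrative conclusion drawn in step 3.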

A.2. The tool space

The key to successful tool application is understanding what type of tool is needed for the task at hand. In order to better characterize the tools for this study, we have divided the tool space into three dimensions:

  • An abstraction dimension: This continuum focuses on tools that support the movement of concepts from the concrete to the abstract. Building models is an excellent example of moving concrete, narrow concepts to a level of abstraction that can be used by investigators to make sense of the past and predict the future.
  • A codification dimension: This continuum attaches labels to concepts that are recognized and accepted by the analytical community to provide a common context for grounding models. One end of the spectrum is the local labels that individual investigators assign and perhaps only that they understand. The other end of the spectrum is the community-accepted labels (e.g., commonly accepted definitions that will be understood by the broader analytical community). As we saw earlier, concepts must be defined in community-recognizable labels before the community can begin to cooperate on those concepts.
  • The number of actors: This last continuum speaks in terms of the number of actors who are involved with a given concept within a certain time frame. Actors could include individual people, groups, and even automated software agents. Understanding the number of actors involved with the analysis will play a key role in determining what type of tool needs to be employed.

Although they may appear to be performing the same function, abstraction and codification are not the same. An investigator could build a set of models (moving from concrete to abstract concepts) but not take the step of changing his or her local labels. The result would be an abstracted model of use to the single investigator, but not to a community working from a different context. For example, one investigator could model a credit card theft ring as a petty crime network under the loose control of a traditional organized crime family, while another investigator could model the same group as a terrorist logistics support cell.

The analytical process described above can now be mapped into the three-dimensional tool space, represented graphically in Figure A-1. So, for example, scanning (step 1) is placed in the portion of the tool space that represents an individual working in concrete terms without those terms being highly codified (e.g., queries). Validation (step 5), on the other hand, requires the cooperation of a larger group working with abstract, highly codified concepts.

A.2.1. Scanning tools

Investigators responsible for constructing and monitoring a set of indicators could begin by scanning available data sources – including classified databases, unclassified archives, news archives, and internet sites – for information related to the indicators of interest. As can be seen from exhibit 6, all scanning tools will need to support requirements dictated by where these tools fall within the tool space. Scanning tools should focus on:

  • How to support an individual investigator as opposed to the collective analytical community. Investigators, for the most part, will not be performing these scanning functions as a collaborative effort;
  • Uncoded concepts: since the investigator is scanning for information that is directly related to a specific context (e.g., money laundering), he or she will need to be intimately familiar with the terms that are local (uncoded) to that context;
  • Concrete concepts or, in this case, specific examples of people, groups, and circumstances within the investigator’s local context. In other words, if the investigator attempts to generalize at this stage, much could be missed.

Using these criteria as a background, and leveraging state-of-the-art definitions for data mining, scanning tools fall into two basic categories:

  • Tools that support subject-based queries are used by investigators when they are searching for specific information about people, groups, places, events, etc.; and
  • Tools that support pattern-based queries are used by investigators who are interested less in specific individuals than in identifying patterns of activity.

This section briefly describes the functionality in general, as well as providing specific tool examples, to support both of these critical types of scanning.

A.2.1.1. Subject-based queries

Subject-based queries are the easiest to perform and the most popular. Examples of tools that are used to support subject-based queries are Boolean search tools for databases and internet search engines.

Several functionalities should be evaluated when selecting subject-based query tools. First, they should be easy to use and intuitive: investigators should not be faced with a bewildering array of ‘ifs’, ‘ands’, and ‘ors’, but should be presented with a query interface that matches their cognitive view of searching the data; the ideal is a natural language interface for constructing queries. Another benefit is a graphical interface wherever possible – for example, one that allows the investigator to define subjects of interest and then uses overlapping circles to indicate the interdependencies among the search terms. Furthermore, query interfaces should support synonyms, have an ability to ‘learn’ from the investigator based on specific interests, and create an archive of queries so that the investigator can return and repeat them. Finally, they should provide a profiling capability that alerts the investigator when new information is found on the subject.
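
Two of these conveniences – synonym expansion and a saved-query archive – can be sketched in a few lines. The class, method names, and synonym table below are hypothetical, invented for illustration rather than drawn from any product discussed in this section.

    SYNONYMS = {"hawala": ["hundi", "informal value transfer"]}

    class QueryArchive:
        """Toy archive of saved, synonym-expanded subject-based queries."""

        def __init__(self):
            self.saved = {}                   # query name -> search terms

        def expand(self, terms):
            # Add known synonyms so variant terminology is not missed.
            out = list(terms)
            for t in terms:
                out.extend(SYNONYMS.get(t.lower(), []))
            return out

        def save(self, name, terms):
            self.saved[name] = self.expand(terms)

        def rerun(self, name, documents):
            # Return documents mentioning any term of the saved query.
            terms = self.saved[name]
            return [d for d in documents
                    if any(t.lower() in d.lower() for t in terms)]

    archive = QueryArchive()
    archive.save("terror financing", ["hawala", "charity"])
    print(archive.rerun("terror financing",
                        ["A hundi broker moved the funds overseas."]))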

Subject-based query tools fall into three categories: queries against databases, internet searches, and customized search tools. Examples of tools for each of these categories include:

  • Queries from news archives: All major news groups provide web-based interfaces that support queries against their on-line data sources. Most allow you to select the subject, enter keywords, specify date ranges, and so on. Examples include the New York Times (at http://www.nytimes.com/ref/membercenter/nytarchive.html) and the Washington Post (at http://pqasb.pqarchiver.com/washingtonpost/search.html). Most of these sources allow you to read through the current issue, but charge a subscription for retrieving articles from past issues.
  • Queries from on-line references: There are a host of on-line references now available for query that range from the Encyclopedia Britannica (at http://www.eb.com/) to the CIA’s World Factbook (at http://www.cia.gov/cia/publications/factbook/). A complete list of such references is impossible to include, but the search capabilities provided by each are clear examples of subject-based queries.
  • Search engines: Just as with queries against databases, there are a host of commercial search engines available for free-format internet searching. The most popular is Google, which combines a technique called citation indexing with web crawlers that constantly search out and index new web pages. Google broke the mold of free-format text searching by not focusing on exact matches between the search terms and the retrieved information. Rather, Google assumes that the most popular pages (the ones that are referenced the most often) that include your search terms will be the pages of greatest interest to you. The commercial version of Google is available free of charge on the internet, and organizations can also purchase a version of Google for indexing pages on an intranet. Google also works in many languages. More information about Google as a business solution can be found at http://www.google.com/services/. Although the current version of Google supports many of the requirements for subject-based queries, its focus is quick search and it does not support sophisticated query interfaces, natural language queries, synonyms, or a managed query environment where queries can be saved. There are now numerous software packages available that provide this level of support, many of them as add-on packages to existing applications.

o Name Search®: This software enables applications to find, identify and match information. Specifically, Name Search finds and matches records based on personal and corporate names, social security numbers, street addresses and phone numbers even when those records have variations due to phonetics, missing words, noise words, nicknames, prefixes, keyboard errors or sequence variations. Name Search claims that searches using their rule-based matching algorithms are faster and more accurate than those based only on Soundex or similar techniques. Soundex, developed by Odell and Russell, uses codes based on the sound of each letter to translate a string into a canonical form of at most four characters, preserving the first letter.

Name Search also supports foreign languages, technical data, medical information, and other specialized information. Other problem-specific packages take advantage of the Name Search functionality through an Application Programming Interface (API) (i.e., Name Search is bundled). An example is ISTwatch. See http://www.search-software.com/.
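
For illustration, the sketch below implements the classic Odell and Russell Soundex coding described above. It is a teaching aid only; Name Search's rule-based matching is, as just noted, more sophisticated than plain Soundex.

    # Map consonants to Soundex digit codes; vowels, H, W, and Y get none.
    CODES = {}
    for digit, letters in enumerate(
            ["BFPV", "CGJKQSXZ", "DT", "L", "MN", "R"], start=1):
        for ch in letters:
            CODES[ch] = str(digit)

    def soundex(name):
        """Return the four-character code, preserving the first letter."""
        name = name.upper()
        result = name[0]
        prev = CODES.get(name[0], "")
        for ch in name[1:]:
            code = CODES.get(ch, "")
            if code and code != prev:
                result += code
            if ch not in "HW":     # H and W do not separate duplicate codes
                prev = code
        return (result + "000")[:4]

    # Phonetically similar spellings collapse to the same canonical form:
    print(soundex("Mohammed"), soundex("Muhamad"))   # both print M530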

o ISTwatch©: ISTwatch is a software component suite that was designed specifically to search and match individuals against the Office of Foreign Assets Control’s (OFAC’s) Specially Designated Nationals list and other denied parties lists. These include the FBI’s Most Wanted, Canada’s OSFI terrorist lists, the Bank of England’s consolidated lists and Financial Action Task Force data on money-laundering countries. See http://www.intelligentsearch.com/ofac_software/index.html.

All these tools are packages designed to be included in an application. A final set of subject-based query tools focuses on customized search environments. These are tools that have been customized to perform a particular task or operate within a particular context. One example is WebFountain.

o WebFountain: IBM’s WebFountain began as a research project focused on extending subject-based query techniques beyond free format text to target money-laundering activities identified through web sources. The WebFountain project, a product of IBM’s Almaden research facility in California, used advanced natural language processing technologies to analyze the entire internet – the search covered 256 terabytes of data in the process of matching a structured list of people who were indicted for money laundering activities in the past with unstructured information on the internet. If a suspicious transaction is identified and the internet analysis finds a relationship between the person attempting the transaction and someone on the list, then an alert is issued. WebFountain has now been turned into a commercially available IBM product. Robert Carlson, IBM WebFountain vice president, describes the current content set as over 1 petabyte in storage with over three billion pages indexed, two billion stored, and the ability to mine 20 million pages a day. The commercial system also works across multiple languages. Carlson stated in 2003 that it would cover 21 languages by the end of 2004 [Quint, 2003]. See: http://www.almaden.ibm.com/webfountain

o Memex: Memex is a suite of tools that was created specifically for law enforcement and national security groups. The focus of these tools is to provide integrated search capabilities against both structured (i.e., databases) and unstructured (i.e., documents) data sources. Memex also provides a graphical representation of the process the investigator is following, structuring the subject-based queries. Memex’s marketing literature states that over 30 percent of the intelligence user population of the UK uses Memex. Customers include the Metropolitan Police Service (MPS), whose Memex network includes over 90 dedicated intelligence servers providing access to over 30,000 officers; the U.S. Department of Defense; and numerous U.S. intelligence agencies, drug intelligence groups and law enforcement agencies. See http://www.memex.com/index.shtml.

A.2.1.2. Pattern queries

Pattern-based queries focus on supporting automated knowledge discovery (1) where the exact subject of interest is not known in advance and (2) where what is of interest is a pattern of activity emerging over time. In order for pattern queries to be formed, the investigator must hypothesize about the patterns in advance and then use tools to confirm or deny these hypotheses. This approach is useful when there is expertise available to make reasonable guesses with respect to the potential patterns. Conversely, when that expertise is not available or the potential patterns are unknown due to extenuating circumstances (e.g., new patterns are emerging too quickly for investigators to formulate hypotheses), then investigators can automate the construction of candidate patterns by formulating a set of rules that describe how potentially interesting, emerging patterns might appear. In either case, tools can help support the production and execution of the pattern queries. The degree of automation is dependent upon the expertise available and the dynamics of the situation being investigated.
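
The hypothesize-then-confirm cycle just described can be sketched as a rule run against data. In the toy example below, the hypothesized pattern (several just-under-threshold cash deposits followed by an overseas wire), the field names, and the thresholds are all invented for illustration; real tools operate against far larger and messier data sets.

    transactions = [
        {"account": "X", "type": "deposit", "amount": 9500},
        {"account": "X", "type": "deposit", "amount": 9800},
        {"account": "X", "type": "wire", "amount": 19000, "dest": "overseas"},
    ]

    def matches_structuring(txns, cash_cap=10000, min_deposits=2):
        """True if several under-threshold deposits accompany an overseas wire."""
        deposits = [t for t in txns
                    if t["type"] == "deposit" and t["amount"] < cash_cap]
        wires = [t for t in txns
                 if t["type"] == "wire" and t.get("dest") == "overseas"]
        return len(deposits) >= min_deposits and bool(wires)

    print(matches_structuring(transactions))   # True -> hypothesis confirmed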

As indicated earlier, pattern-based query tools fall into two general categories: those that support investigators in the construction of patterns based on their expertise, then run those patterns against large data sets, and those that allow the investigator to build rules about patterns of interest and, again, run those rules against large data sets.

Examples of tools for each of these categories include:

  1. Megaputer (PolyAnalyst 4.6): This tool falls into the first category of pattern-based query tools, helping the investigator hypothesize patterns and explore the data based on those hypotheses. PolyAnalyst is a tool that supports a particular type of pattern-based query called Online Analytical Processing (OLAP), a popular analytical approach for large amounts of quantitative data. Using PolyAnalyst, the investigator defines dimensions of interest to be considered in text exploration and then displays the results of the analysis across various combinations of these dimensions. For example, an investigator could search for mujahideen who had trained at the same Al Qaeda camp in the 1990s and who had links to Pakistani Intelligence as well as opium growers and smuggling networks into Europe. See http://www.megaputer.com/.
  2. Autonomy Suite: Autonomy’s search capabilities fall into the second category of pattern-based query tools. Autonomy has combined technologies that employ adaptive pattern-matching techniques with Bayesian inference and Claude Shannon’s principles of information theory. Autonomy identifies the patterns that naturally occur in text, based on the usage and frequency of words or terms that correspond to specific ideas or concepts as defined by the investigator. Based on the preponderance of one pattern over another in a piece of unstructured information, Autonomy calculates the probability that a document in question is about a subject of interest [Autonomy, 2002]. See http://www.autonomy.com/content/home/
  3. Fraud Investigator Enterprise: The Fraud Investigator Enterprise Similarity Search Engine (SSE) from InfoGlide Software is another example of the second category of pattern search tools. SSE uses analytic techniques that dissect data values, looking for and quantifying partial matches in addition to exact matches. SSE scores and orders search results based upon a user-defined data model. See http://www.infoglide.com/composite/ProductsF_2_1.htm
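
The partial-match scoring idea behind a similarity search engine can be sketched as follows. Python's standard-library difflib stands in for SSE's proprietary analytics, and the records, field weights, and names are invented for demonstration; this is not InfoGlide's algorithm.

    from difflib import SequenceMatcher

    def similarity(query, record, weights):
        """Weighted average of per-field similarity ratios (1.0 = exact)."""
        total = sum(w * SequenceMatcher(None, query.get(f, ""),
                                        record.get(f, "")).ratio()
                    for f, w in weights.items())
        return total / sum(weights.values())

    records = [
        {"name": "Juan Peres", "city": "Ciudad del Este"},
        {"name": "John Price", "city": "Asuncion"},
    ]
    query = {"name": "Juan Perez", "city": "Ciudad del Este"}
    weights = {"name": 0.7, "city": 0.3}       # the user-defined data model

    for r in sorted(records, key=lambda r: -similarity(query, r, weights)):
        print(round(similarity(query, r, weights), 2), r["name"])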

Although an evaluation of data sources available for scanning is beyond the scope of this paper, one will serve as an example of the information available. It is hypothesized in this report that tools could be developed to support the search and analysis of Short Message Service (SMS) traffic for confirmation of PIE indicators. Often referred to as ‘text messaging’ in the U.S., SMS is an integrated message service that lets GSM cellular subscribers send and receive data using their handsets. A single short message can be up to 160 characters of text in length – words, numbers, or punctuation symbols. SMS is a store and forward service; this means that messages are not sent directly to the recipient but via a network SMS Center. This enables messages to be delivered to the recipient if their phone is not switched on or if they are out of a coverage area at the time the message was sent. This process, called asynchronous messaging, operates in much the same way as email. Confirmation of message delivery is another feature, meaning the sender can receive a return message notifying them whether the short message has been delivered or not. SMS messages can be sent to and received from any GSM phone, providing the recipient’s network supports text messaging. Text messaging is available to all mobile users and provides both consumers and business people with a discreet way of sending and receiving information.

Over 15 billion SMS text messages were sent around the globe in January 2001. Tools taking advantage of the messages stored in an SMS Center could:

  • Perform searches of the text messages for keywords or phrases,
  • Analyze SMS traffic patterns, and
  • Search for people of interest in the Home Location Register (HLR) database that maintains information about the subscription profile of the mobile phone and also about the routing information for the subscriber.
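
The first of these capabilities, keyword search over stored messages, is easy to sketch. The message store, field names, and sample numbers below are hypothetical; real SMS Center integration and HLR lookups are far more involved.

    from dataclasses import dataclass

    @dataclass
    class ShortMessage:
        sender: str
        recipient: str
        text: str                     # up to 160 characters

    def keyword_scan(messages, keywords):
        """Yield stored messages whose text contains a watch-listed keyword."""
        lowered = [k.lower() for k in keywords]
        for m in messages:
            if any(k in m.text.lower() for k in lowered):
                yield m

    store = [ShortMessage("+595-xxx", "+55-xxx", "transfer via hawala tonight")]
    for hit in keyword_scan(store, ["hawala", "wire"]):
        print(hit.sender, "->", hit.recipient, ":", hit.text)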

A.2.2. Codification tools

As can be seen from exhibit 6, all codification tools will need to support requirements dictated by where these tools fall within the tool space. Codification tools should focus on:

  • Supporting individual investigators (or at most a small group of investigators) in making sense of the information discovered during the scanning process.
  • Moving the terms with which the information is referenced from a localized organizational context (uncoded, e.g., hawala banking) to a more global context (codified, e.g., informal value storage and transfer operations).
  • Moving that information from specific, concrete examples towards more abstract terms that could support identification of concepts and patterns across multiple situations, thus providing a larger context for the concepts being explored.

Using these criteria as a background, the codification tools reviewed fall into two major categories:

  1. Tools that help investigators label concepts and cluster different concepts into terms that are recognizable and used by the larger analytical community; and
  2. Tools that use this information to build up network maps identifying entities, relationships, missions, etc.

This section briefly describes codification functionality in general, as well as providing specific tool examples, to support both of these types of codification.

A.2.2.1. Labeling and clustering

The first step in codification is to map the context-specific terms used by individual investigators to a taxonomy of terms that are commonly accepted in a wider analytical context. This is done by labeling individual terms and by clustering related terms and renaming them according to a community-accepted taxonomy.
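As a minimal illustration of this mapping step, the sketch below codifies local terms against a small taxonomy. The taxonomy entries are invented for illustration and are not drawn from any official standard.

    # Illustrative mapping from local, investigator-specific terms to
    # community-accepted taxonomy labels; entries are invented.
    TAXONOMY = {
        "hawala banking": "informal value storage and transfer operations",
        "hundi": "informal value storage and transfer operations",
        "smurfing": "structured deposit laundering",
    }

    def codify(local_terms):
        """Cluster local terms under codified labels, flagging unknowns."""
        codified, unmapped = {}, []
        for term in local_terms:
            label = TAXONOMY.get(term.lower())
            if label:
                codified.setdefault(label, []).append(term)
            else:
                unmapped.append(term)
        return codified, unmapped

    codified, unmapped = codify(["Hawala banking", "hundi", "trade invoicing"])
    print(codified)   # local terms clustered under shared labels
    print(unmapped)   # candidates for extending the taxonomy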

In general, labeling and clustering tools should:

  • Support the capture of taxonomies being developed by the broader analytical community;
  • Allow the easy mapping of local terms to these broader terms; and
  • Support the clustering process, either by providing algorithms that calculate the similarity between concepts or by providing tools that enable collaborative, consensus-driven construction of clustered concepts.

Labeling and clustering functionality is typically embedded in applications supporting analytical processes rather than provided in stand-alone tools.

Two examples of such products include:

COPLINK® – COPLINK began as a research project at the University of Arizona and has now grown into a commercially available application from Knowledge Computing Corporation (KCC). It is focused on providing tools for organizing vast quantities of structured and seemingly unrelated information in the law enforcement arena. See COPLINK’s commercial website at http://www.knowledgecc.com/index.htm and its academic website at the University of Arizona at http://ai.bpa.arizona.edu/COPLINK/.

Megaputer (PolyAnalyst 4.6) – In addition to supporting pattern queries, PolyAnalyst provides a means for creating, importing, and managing taxonomies, which can be useful in the codification step, and it carries out automated categorization of text records against existing taxonomies.

A.2.2.2. Network mapping

Terrorists have a vested interest in concealing their relationships; they often emit confusing or intentionally misleading information, and they operate much of the time in self-contained cells that are difficult to penetrate. Criminal networks are also notoriously difficult to map, and the mapping more often happens after a crime has been committed than before. What is needed are tools and approaches that support the mapping of networks to represent agents (e.g., people, groups), environments, behaviors, and the relationships among all of these.

A large number of research efforts and some commercial products have been created to automate aspects of network mapping in general and link analysis specifically. In the past, however, these tools have provided only marginal utility in understanding either criminal or terrorist behavior (as opposed to espionage networks, for which this type of tool was initially developed). Often the linkages constructed by such tools are impossible to disentangle since all links have the same importance. PIE holds the potential to focus link analysis tools by clearly delineating watch points and allowing investigators to differentiate, characterize and prioritize links within an asymmetric threat network. This section focuses on the requirements dictated by PIE and some candidate tools that might be used in the PIE context.

In general, network mapping tools should:

  • Support the representation of people, groups, and the links between them within the PIE indicator framework;
  • Sustain flexibility for mapping different network structures;
  • Differentiate, characterize and prioritize links within an asymmetric threat network;
  • Focus on organizational structures to determine what kinds of network structures they use;
  • Provide a graphical interface that supports analysis;
  • Access and associate evidence with an investigator’s data sources.

Within the PIE context, investigators can use network mapping tools to identify the flows of information and authority within different network forms such as chains, hub-and-spoke structures, fully matrixed networks, and various hybrids of these three basic forms. A small sketch of this kind of structural analysis follows.
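The sketch below builds a toy network mixing hub-and-spoke and chain forms and ranks nodes by betweenness centrality, one common way of surfacing brokers. It assumes the third-party networkx package is available; the node names are fictitious.

    import networkx as nx  # assumes the networkx package is installed

    # Toy network mixing hub-and-spoke and chain forms; names fictitious.
    g = nx.Graph()
    g.add_edges_from([("facilitator", peer) for peer in
                      ("courier_1", "courier_2", "financier", "forger")])
    g.add_edges_from([("financier", "launderer"),
                      ("launderer", "front_company")])

    # Betweenness centrality highlights brokers that bridge otherwise
    # separate parts of the network.
    for node, score in sorted(nx.betweenness_centrality(g).items(),
                              key=lambda kv: -kv[1]):
        print(f"{node:15s} {score:.2f}")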
Examples of network mapping tools that are available commercially include:

Analyst Notebook®: A PC-based package from i2 that supports network mapping/link analysis via network, timeline, and transaction analysis. Analyst Notebook allows an investigator to capture link information between people, groups, activities, and other entities of interest in a visual format convenient for identifying relationships, dependencies, and trends. It facilitates this capture by providing a variety of tools to review and integrate information from a number of data sources, and it allows the investigator to connect the graphical icons representing entities to the original data sources, supporting a drill-down feature. Other useful features include the ability to: 1) automatically order and depict sequences of events even when exact date and time data are unknown, and 2) use background visuals such as maps, floor plans, or watermarks to place chart information in context or label it for security purposes. See http://www.i2.co.uk/Products/Analysts_Notebook/default.asp. Even though i2 Analyst Notebook is widely used by intelligence community, counterterrorism, and law enforcement investigators for constructing network maps, interviews with investigators indicate that it is more useful as a visual aid for briefings than for performing the analysis itself. Although some investigators indicated that they use it as an analytical tool, most seem to perform the analysis either with another tool or by hand, and then enter the results into Analyst Notebook to generate a graphic for a report or briefing. Finally, few tools are available within Analyst Notebook to automatically differentiate, characterize, and prioritize links within an asymmetric threat network.

Patterntracer TCA: Patterntracer Telephone Call Analysis (TCA) is an add-on tool for the Analyst Notebook intended to help identify patterns in telephone billing data. Patterntracer TCA automatically finds repeating call patterns in telephone billing data and graphically displays them using network and timeline charts. See http://www.i2.co.uk/Products/Analysts_Workstation/default.asp

Memex: Memex has already been discussed in the context of subject-based query tools. In addition to supporting such queries, however, Memex also provides a tool that supports automated link analysis on unstructured data and presents the results in graphical form.

Megaputer (PolyAnalyst 4.6): In addition to supporting pattern-based queries, PolyAnalyst provides a primitive form of link analysis by displaying the relationships in its results visually.

A.2.3. Abstraction tools

As can be seen from exhibit 6, all abstraction tools will need to support requirements dictated by where these tools fall within the tool space. Abstraction tools should focus on:

  • Functionalities that help individual investigators (or a small group of investigators) build abstract models;
  • Options to help share these models; the models should therefore be expressed using terms that will be recognized by the larger community (i.e., codified as opposed to uncoded);
  • Highly abstract notions that encourage examination of concepts across networks, groups, and time.

The product of these tools should be hypotheses or models that can be shared with the community to support information exchange and encourage dialogue, and that can eventually be validated against real-world data and by other experts. This section provides some examples of useful functionality that should be included in tools to support the abstraction process.

A.2.3.1. Structured argumentation tools

Structured argumentation is a methodology for capturing the analytical reasoning process applied to a specific analytic task as a series of alternative constructs, or hypotheses, each represented by a set of hierarchical indicators and associated evidence. Structured argumentation tools should:

  • Capture multiple, competing hypotheses of multi-dimensional indicators at both summary and detailed levels of granularity;
  • Develop and archive indicators and supporting evidence;
  • Monitor ongoing activities and assess the implications of new evidence;
  • Provide graphical visualizations of arguments and associated evidence;
  • Encourage a careful analysis by reminding the investigator of the full spectrum of indicators to be considered;
  • Ease argument comprehension by allowing the investigator to move along the component lines of reasoning to discover the basis and rationale of others’ arguments;
  • Invite and facilitate argument comparison by framing arguments within common structures; and
  • Support collaborative development and reuse of models among a community of investigators.

Within the PIE context, investigators can use structured argumentation tools to assess, for example, a terrorist group’s ability to weaponize biological materials or the parameters of a transnational criminal organization’s money laundering methodology.

Examples of structured argumentation tools that are available commercially include:

Structured Evidential Argument System (SEAS) from SRI International was initially applied to the problem of early warning for project management, and more recently to the problem of early crisis warning for the U.S. intelligence and policy communities. SEAS is based on the concept of a structured argument, which is a hierarchically organized set of questions (i.e., a tree structure). These are multiple-choice questions, with the different answers corresponding to discrete points or subintervals along a continuous scale, with one end of the scale representing strong support for a particular type of opportunity or threat and the other end representing strong refutation. Leaf nodes represent primitive questions, and internal nodes represent derivative questions. The links represent support relationships among the questions. A derivative question is supported by all the derivative and primitive questions below it. SEAS arguments move concepts from their concrete, local representations into a global context that supports PIE indicator construction. See http://www.ai.sri.com/~seas/.
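As a minimal illustration of the SEAS-style structure described above, the sketch below builds a small tree of questions whose leaf answers sit on a scale from -2 (strong refutation) to +2 (strong support), with derivative questions summarizing their children. The questions, answers, and the simple averaging rule are all invented for illustration; SEAS itself supports richer fusion methods.

    from dataclasses import dataclass, field

    @dataclass
    class Question:
        """A node in a structured argument tree."""
        text: str
        answer: float | None = None   # set only on primitive (leaf) nodes; Python 3.10+ syntax
        children: list["Question"] = field(default_factory=list)

        def score(self) -> float:
            if not self.children:     # primitive question: return its answer
                return self.answer or 0.0
            # Derivative question: here, a simple mean of its supports.
            return sum(c.score() for c in self.children) / len(self.children)

    argument = Question("Is the group pursuing weaponization?", children=[
        Question("Has it acquired dual-use equipment?", answer=1.0),
        Question("Has it recruited relevant specialists?", answer=2.0),
        Question("Is there evidence of testing?", answer=-1.0),
    ])
    print(f"derivative score: {argument.score():+.2f}")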

A.2.3.2. Modeling

By capturing information about a situation (e.g., the actors, possible actions, influences on those actions) in a model, users can define a set of initial conditions, match these against the model, and use the results to support analysis and prediction. This process can be performed manually or, if the model is complex, with an automated tool or simulator.

Utilizing modeling tools, investigators can systematically examine aspects of terror-crime interaction. Process models in particular can reveal linkages between the two groups and allow investigators to map these linkages to locations on the terror-crime interaction spectrum. Process models capture the dynamics of networks in a series of functional and temporal steps. Depending on the process being modeled, these steps must be conducted either sequentially or simultaneously in order for the process to execute as designed. For example, delivery of cocaine from South America to the U.S. can be modeled as a process that moves sequentially from the growth and harvesting of coca leaves through refinement into cocaine and then transshipment via intermediate countries to U.S. distribution points. Some of these steps are sequential (e.g., certain chemicals must be acquired and laboratories established before the coca leaves can be processed in bulk) and some can be conducted simultaneously (e.g., multiple smuggling routes can be utilized at the same time). A minimal dependency-graph sketch of this pipeline follows.
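The sketch below represents the pipeline as a dependency graph and uses the Python standard library’s topological sorter to show which steps must run sequentially and which can run simultaneously. The step names and dependencies are illustrative only.

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Illustrative process model of the cocaine pipeline described above;
    # each step maps to the set of steps that must complete first.
    process = {
        "acquire_chemicals": set(),
        "establish_labs": set(),
        "harvest_leaves": set(),
        "refine_cocaine": {"acquire_chemicals", "establish_labs",
                           "harvest_leaves"},
        "route_caribbean": {"refine_cocaine"},   # parallel smuggling routes
        "route_pacific": {"refine_cocaine"},
        "distribute_us": {"route_caribbean", "route_pacific"},
    }

    ts = TopologicalSorter(process)
    ts.prepare()
    while ts.is_active():
        ready = list(ts.get_ready())
        print("can run simultaneously:", sorted(ready))
        ts.done(*ready)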

Corruption, modeled as a process, should reveal useful indicators of cooperation between organized crime and terrorism. For example, one way to generate and validate indicators of terror-crime interaction is to place cases of corrupt government officials or private-sector individuals in an organizational network construct built with a process model, and then determine whether they serve as a common link between terrorist and criminal networks via an intent model with attached evidence. An intent model is a type of process model constructed by reverse engineering a specific end-state, such as the ability to move goods and people into and out of a country without interference from law enforcement agencies.

This end-state is reached by bribing certain key officials in groups that supply border guards, provide legitimate import-export documents (e.g., end-user certificates), monitor immigration flows, etc.

Depending on organizational details, a bribery campaign can proceed sequentially or simultaneously through various offices and individuals. This type of model allows analysts to ‘follow the money’ through a corruption network and link payments to officials with illicit sources. The model can be set up to reveal payments to officials that can be linked to both criminal and terrorist involvement (perhaps via individuals or small groups with known links to both types of network).
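The ‘common link’ test just described can be illustrated in a few lines of code. The payment ledger and affiliations below are fabricated; a real implementation would draw on evidence attached to the process model.

    # Fabricated payment ledger and source affiliations, for illustration.
    payments = [
        {"official": "border_captain", "source": "smuggling_ring"},
        {"official": "border_captain", "source": "cell_financier"},
        {"official": "customs_clerk",  "source": "smuggling_ring"},
    ]
    AFFILIATION = {"smuggling_ring": "criminal", "cell_financier": "terrorist"}

    # Collect, per official, the kinds of networks that paid them.
    links = {}
    for p in payments:
        links.setdefault(p["official"], set()).add(AFFILIATION[p["source"]])

    # Officials paid by both kinds of network are candidate indicators of
    # terror-crime cooperation.
    bridges = [o for o, kinds in links.items()
               if kinds >= {"criminal", "terrorist"}]
    print("possible terror-crime bridges:", bridges)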

Thus investigators can use a process model as a repository for numerous disparate data items that, taken together, reveal common patterns of corruption or sources of payments that can serve as indicators of cooperation between organized crime and terrorism. Using these tools, investigators can explore multiple data dimensions by dynamically manipulating several elements of analysis:

  • Criminal and/or terrorist priorities, intent and factor attributes;
  • Characterization and importance of direct evidence;
  • Graphical representations and other multi-dimensional data visualization approaches.

There have been a large number of models built over the last several years focusing on counterterrorism and criminal activities. Some of the most promising are models that support agent-based execution of complex adaptive environments and that are used for intelligence analysis and training. Some of the most sophisticated are now being developed to support the generation of more realistic environments and interactions for the commercial gaming market.

In general, modeling tools should:

  • Capture and present reasoning from evidence to conclusion;
  • Enable comparison of information across situation, time, and groups;
  • Provide a framework for challenging assumptions and exploring alternative hypotheses;
  • Facilitate information sharing and cooperation by representing hypotheses and analytical judgment, not just facts;
  • Incorporate the first principle of analysis—problem decomposition;
  • Track ongoing and evolving situations, collect analysis, and enable users to discover information and critical data relationships;
  • Make rigorous option space analysis possible in a distributed electronic context;
  • Warn users of potential cognitive bias inherent in analysis.

Although there are too many of these tools to list in this report, good examples of some that would be useful to support PIE include:

NETEST: This model estimates the size and shape of covert networks given multiple sources with omissions and errors. NETEST makes use of Bayesian updating techniques, communications theory and social network theory [Dombroski, 2002].
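As a toy illustration of Bayesian updating over a network’s properties, the sketch below revises a distribution over candidate network sizes as informant reports arrive. It is in the spirit of the NETEST description, not a reproduction of Dombroski and Carley’s model; the prior, noise model, and reports are all invented.

    from math import exp

    # Candidate network sizes with a uniform prior; invented for illustration.
    sizes = range(10, 51)
    posterior = {n: 1.0 / len(sizes) for n in sizes}
    reports = [22, 30, 25]          # fabricated informant size estimates

    def likelihood(report, n, spread=5.0):
        """Unnormalized Gaussian-style noise model around the true size."""
        return exp(-((report - n) / spread) ** 2)

    # Bayes' rule: multiply in each report's likelihood, then renormalize.
    for r in reports:
        posterior = {n: p * likelihood(r, n) for n, p in posterior.items()}
    total = sum(posterior.values())
    posterior = {n: p / total for n, p in posterior.items()}

    best = max(posterior, key=posterior.get)
    print(f"most probable size: {best} (p={posterior[best]:.3f})")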

The Modeling, Virtual Environments and Simulation (MOVES) Institute at the Naval Postgraduate School in Monterey, California, is using a model of cognition formulated by Aaron T. Beck to build models capturing the characteristics of people willing to employ violence [Beck, 2002].

BIOWAR: This is a city scale multi-agent model of weaponized bioterrorist attacks for intelligence and training. At present the model is running with 100,000 agents (this number will be increased). All agents have real social networks and the model contains real city data (hospitals, schools, etc.). Agents are as realistic as possible and contain a cognitive model [Carley, 2003a].

All of the models reviewed had similar capabilities:

  • Capture of the characteristics of entities – people, places, groups, etc.;
  • Capture of the relationships between entities at a level of detail that supports programmatic construction of processes, situations, actions, etc.; these are usually “is a” and “part of” representations of object-oriented taxonomies, influence relationships, time relationships, and the like;
  • The ability to represent this information in a format that supports using the model in simulations (the next section describes simulation tools in common use for running these types of models);
  • User interfaces for defining the models, the best being graphical interfaces that allow the user to define entities and their relationships through intuitive visual displays. For example, if the model involves defining networks or influences between entities, graphical displays with the ability to create connections and perform drag-and-drop actions become important.

A.2.4. Diffusion tools

As can be seen from exhibit 6, all diffusion tools will need to support requirements dictated by where these tools fall within the tool space. Diffusion tools should focus on:

  • Moving information from an individual or small group of investigators to the collective community;
  • Providing abstract concepts that are easily understood in a global context with little worry that the terms will be misinterpreted;
  • Supporting the representation of abstract concepts and encouraging dialogues about those concepts.

In general diffusion tools should:

  • Provide a shared environment that investigators can access on the internet;
  • Support the ability for everyone to upload abstract concepts and their supporting evidence (e.g., documents);
  • Allow the person uploading information to attach annotations and keywords;
  • Support searches of concept repositories;
  • Be simple to set up and use.

Within the PIE context, investigators could use diffusion tools to:

  • Employ a collaborative environment to exchange information, results of analysis, hypotheses, models, etc.;
  • Utilize collaborative environments set up between law enforcement groups and counterterrorism groups to exchange information on a continual and near real-time basis.

Examples of diffusion tools run from one end of the cooperation/dissemination spectrum to the other. One of the simplest to use is:

  • AskSam: The AskSam Web Publisher is an extension of the standalone AskSam application that has been used by the analytical community for many years. Its capabilities include: 1) sharing documents with others who have access to the local network, 2) giving anyone with access to the network access to the AskSam archive without the need for an expensive license, and 3) advanced searching, including the addition of keywords that supports a group’s codification process (see step 2 in exhibit 6 of our analytical process). See http://www.asksam.com/.

There are some significant disadvantages to using AskSam as a cooperation environment. For example, each document included has to be ‘published’. The assumption is that there are only one or two people primarily responsible for posting documents and these people control all documents that are made available, a poor assumption for an analytical community where all are potential publishers of concepts. The result is expensive licenses for publishers. Finally, there is no web-based service for AskSam, requiring each organization to host its own AskSam server.

There are two leading commercial tools for cooperation now available and widely used. Which tool is chosen for a task depends on the scope of the task and the number of users.

  • Groove: Virtual office software that allows small teams of people to work together securely over a network on a constrained problem. Groove capabilities include: 1) the ability for investigators to set up a shared space, invite people to join, and give them permission to post documents to a document repository (i.e., file sharing); 2) security, including encryption that protects content (e.g., uploads and downloads of documents) and communications (e.g., email and text messaging); 3) the ability to work across firewalls without a Virtual Private Network (VPN), which improves speed and makes the shared space accessible from outside an intranet; 4) the ability to work offline and then synchronize when coming back online; and 5) add-in tools to support cooperation, such as calendars, email, text- and voice-based instant messaging, and project management.

Although Groove satisfies most of the basic requirements listed for this category, there are several drawbacks to using it for large projects. For example, there is no free-format search for text documents, and investigators cannot add their own keyword categories or attributes to the stored documents. This limits Groove’s usefulness as an information exchange archive. In addition, Groove uses a fat-client, peer-to-peer architecture. This means that all participants are required to purchase a license and to download and install Groove on their individual machines. It also means that Groove requires high bandwidth for the information exchange portion of the peer-to-peer updates. See http://www.groove.net/default.cfm?pagename=Workspace.

  • SharePoint: Allows teams of people to work together on documents, tasks, contacts, events, and other information. SharePoint capabilities include: 1) text document loading and sharing, 2) free-format search capability, 3) cooperation tools including instant messaging, email and a group calendar, and 4) security with individual and group level access control. The TraCCC team employed SharePoint for this project to facilitate distributed research and document generation. See http://www.microsoft.com/sharepoint/.

SharePoint has many of the same features as Groove, but there are fundamental underlying differences. SharePoint’s architecture is server based, with the client running in a web browser. One advantage of this approach is that each investigator is not required to download a personal version on a machine (Groove requires 60-80MB of space on each machine). In fact, an investigator can access the SharePoint space from any machine (e.g., at an airport). The disadvantage is that the investigator does not have a local copy of the SharePoint information and is unable to work offline. With Groove, an investigator can work offline and then resynchronize with the remaining members of the group when the network once again becomes available. Finally, since peer-to-peer updates are not taking place, SharePoint does not necessarily require high-speed internet access, except perhaps when an investigator uploads large documents.

Another significant difference between SharePoint and Groove is the search function. In Groove, the search capability is limited to information typed into Groove directly, not documents attached to a Groove archive. SharePoint supports not only document searches but also allows the community of investigators to set up their own keyword categories to help with the codification of shared documents (again, see step 2 in exhibit 6). It should be noted, however, that SharePoint only supports searches of Microsoft document formats (e.g., Word, PowerPoint) and not ‘foreign’ formats such as PDF, which is not surprising given that SharePoint is a Microsoft tool.

SharePoint and Groove are commercially available cooperation solutions. There are also a wide variety of customized cooperation environments now appearing on the market. For example:

  • WAVE Enterprise Information Integration System – Modus Operandi’s Wide Area Virtual Environment (WAVE) provides tools to support real-time enterprise information integration, cooperation, and performance management. WAVE capabilities include: 1) collaborative workspaces for team-based information sharing, 2) security for controlled sharing of information, 3) an extensible enterprise knowledge model that organizes and manages all enterprise knowledge assets, 4) dynamic integration of legacy data sources and commercial off-the-shelf (COTS) tools, 5) document version control, 6) cooperation tools, including discussions, issues, action items, search, and reports, and 7) performance metrics. WAVE is not a COTS solution, however; an organization must work with Modus Operandi services to set up a custom environment. The main disadvantages of this approach, relative to Groove or SharePoint, are cost and the difficulty of sharing information across groups. See http://www.modusoperandi.com/wave.htm.

Finally, many of the tools previously discussed have add-ons available for extending their functionality to a group. For example:

  • iBase4: i2’s Analyst Notebook can be integrated with iBase4, an application that allows investigators to create multi-user databases for developing, updating, and sharing the source information being used to create network maps. It even includes security to restrict access or functionality by user, user group, and data field. It is not clear from the literature, but this functionality appears to be restricted to the source data and not to the sharing of network maps generated by investigators. See http://www.i2.co.uk/Products/iBase/default.asp.

The main disadvantage of iBase4 is its proprietary format. This limitation might be somewhat mitigated by coupling iBase4 with i2’s iBridge product, which creates a live connection between legacy databases, but there is no evidence in the literature that i2 has made this integration.

A.2.5. Validation tools

As can be seen from exhibit 6, all validation tools will need to support requirements dictated by where these tools fall within the tool space. Validation tools should focus on:

  • Providing a community context for validating the concepts put forward by the individual participants in the community;
  • Continuing to work within a codified realm in order to facilitate communication between different groups articulating different perspectives;
  • Matching abstract concepts against real world data (or expert opinion) to determine the validity of the concepts being put forward.

Using these criteria as background, one of the most useful classes of tools available for validation is simulation. This section briefly describes simulation functionality in general, as well as providing specific tool examples, to support simulations that ‘kick the tires’ of the abstract concepts.

Following are some key capabilities that any simulation tool must possess:

  • Ability to ingest the model information constructed in the previous steps of the analytical process;
  • Access to a data source for information that might be required by the model during execution;
  • Means for users to define the initial conditions against which the model will be run;
  • Ability to “step through” the model execution, examining variables and resetting variable values in mid-execution (a feature of the more useful simulators);
  • Ability to print out step-by-step interim execution results and final results;
  • Ability to change the initial conditions and compare the results against prior runs.

Although there are many simulation tools available, following are brief descriptions of some of the most promising:

  • Online iLink: An optional application for i2’s Analyst Notebook that supports dynamic update of Analyst Notebook information from online data sources. Once a connection is made with an online source (e.g., LexisNexis™ or D&B®), Analyst Notebook uses this connection to automatically check for any updated information and propagates those updates throughout to support validation of the network map information. See http://www.i2inc.com.

One apparent drawback of this plug-in is that Online iLink appears to require the online data provider to deploy i2’s visualization technology.

  • NETEST: A research project from Carnegie Mellon University that is developing tools combining multi-agent technology with hierarchical Bayesian inference models and biased-net models to produce accurate posterior representations of terrorist networks. Bayesian inference models produce representations of a network’s structure and informant accuracy by combining prior network and accuracy data with informant perceptions of a network. Biased net theory examines and captures the biases that may exist in a specific network or set of networks. Using NETEST, an investigator can estimate a network’s size, determine its membership and structure, determine areas of the network where data is missing, perform cost/benefit analysis of additional information, assess group level capabilities embedded in the network, and pose “what if” scenarios to destabilize a network and predict its evolution over time [Dombroski, 2002].

  • Recursive Porous Agent Simulation Toolkit (Repast): A good example of the free, open-source toolkits available for creating agent-based simulations. Begun by the University of Chicago’s social sciences research community and later maintained by groups such as Argonne National Laboratory, Repast is now managed by the non-profit volunteer Repast Organization for Architecture and Development (ROAD). Some of Repast’s features include: 1) a variety of agent templates and examples (the toolkit nevertheless gives users complete flexibility in specifying the properties and behaviors of agents); 2) a fully concurrent discrete event scheduler supporting both sequential and parallel discrete event operations; 3) built-in simulation results logging and graphing tools; 4) an automated Monte Carlo simulation framework; 5) the ability to dynamically access and modify agent properties, agent behavioral equations, and model properties at run time; 6) libraries for genetic algorithms, neural networks, random number generation, and specialized mathematics; and 7) built-in systems dynamics modeling.

More to the point for this investigation, Repast has social network modeling support tools. The Repast website claims that “Repast is at the moment the most suitable simulation framework for the applied modeling of social interventions based on theories and data” [Tobias, 2003]. See http://repast.sourceforge.net/.
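For readers unfamiliar with agent-based simulation, the sketch below shows the bare skeleton such toolkits elaborate: agents with state, a scheduler that steps them, and simple result logging. It is a toy rumor-diffusion loop written for illustration, not Repast code.

    import random

    random.seed(1)  # fixed seed so the illustrative run is repeatable

    class Agent:
        """A minimal agent with one piece of state."""
        def __init__(self, name):
            self.name = name
            self.alerted = False

        def step(self, others):
            # Rumor-style diffusion: an alerted agent may alert a peer.
            if self.alerted and others:
                random.choice(others).alerted = True

    agents = [Agent(f"a{i}") for i in range(10)]
    agents[0].alerted = True

    for tick in range(5):
        for agent in agents:
            agent.step([a for a in agents if a is not agent])
        print(f"tick {tick}: {sum(a.alerted for a in agents)} agents alerted")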

A.2.6. Impacting tools

As can be seen from exhibit 6, all impacting tools will need to support requirements dictated by where these tools fall within the tool space. Impacting tools should focus on:

  • Helping law enforcement and intelligence practitioners understand the implications of their validated models. For example, what portions of the terror-crime interaction spectrum are relevant in various parts of the world, and what is the likely evolutionary path of this phenomenon in each specific geographic area?

  • Support for translating abstracted knowledge into more concrete local execution strategies. The information flows feeding the scanning process, for example, should be updated based on the results of mapping local events and individuals to the terror-crime interaction spectrum. Watch points and their associated indicators should be reviewed, updated, and modified. Probes can be constructed to clarify remaining uncertainties in specific situations or locations.

The following general requirements have been identified for impacting tools:

  • Probe management software to help law enforcement investigators and intelligence community analysts plan probes against known and suspected transnational threat entities, monitor their execution, map their impact, and analyze the resultant changes to network structure and operations.
  • Situational assessment software that supports transnational threat monitoring and projection.
  • Data fusion and visualization algorithms that portray investigators’ current understanding of the nature and extent of terror-crime interaction and allow investigators to focus scarce collection and analytical resources on the most threatening regions and networks.

Impacting tools are only just beginning to exit the laboratory, and none of them can be considered ready for operational deployment. This type of functionality, however, is being actively pursued within the U.S. governmental and academic research communities. An example of an impacting tool currently under development is described below:

DyNet – A multi-agent network system designed specifically for assessing destabilization strategies on dynamic networks. A knowledge network (e.g., a hypothesized network resulting from Steps 1 through 5 of Boisot’s I-Space-driven analytical process) is given to DyNet as input. In this case, a knowledge network is defined as an individual’s knowledge about who they know, what resources they have, and what task(s) they are performing. The goal of an investigator using DyNet is to build stable, high-performance, adaptive networks and to conduct what-if analysis to identify successful strategies for destabilizing those networks. Investigators can run sensitivity tests examining how differences in the structure of the covert network would affect its overall ability to respond to probes and attacks on constituent nodes [Carley, 2003b]. See the DyNet website hosted by Carnegie Mellon University at http://www.casos.cs.cmu.edu/projects/DyNet/.
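A what-if destabilization probe of the kind DyNet automates can be sketched in a few lines: remove a candidate node and measure how the network’s connectivity degrades. The network below is fictitious and the connectivity metric deliberately crude; this illustrates the idea, not DyNet’s algorithms. It assumes the third-party networkx package.

    import networkx as nx  # assumes the networkx package is installed

    # Fictitious covert network.
    g = nx.Graph([("leader", "lt_1"), ("leader", "lt_2"),
                  ("lt_1", "cell_a"), ("lt_1", "cell_b"),
                  ("lt_2", "cell_c"), ("cell_b", "cell_c")])

    def largest_component(graph):
        """Size of the largest connected component, a crude cohesion metric."""
        return max(len(c) for c in nx.connected_components(graph))

    baseline = largest_component(g)
    for node in list(g.nodes):
        probe = g.copy()
        probe.remove_node(node)
        print(f"remove {node:8s}: largest component "
              f"{largest_component(probe)}/{baseline}")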

A.3. Overall tool requirements

This section provides a high-level overview of requirements that apply across all PIE tools:

  • Ease of use: It must be easy to put information into the system and to get information out of it. The key to the successful use of many of these tools is the quality of the information put into them. User interfaces have to be easy to use, context based, intuitive, and customizable; otherwise, investigators soon conclude that the “care and feeding” of the tool does not justify the end product.
  • Reasonable response time: The response time of the tool needs to match the context. If the tool is being used in an operational setting, the ability to retrieve results can be time-critical–perhaps a matter of minutes. In other cases, results may not be time-critical and days can be taken to generate results.
  • Training: Some tools, especially those that have not been released as commercial products, may not have substantial training materials and classes available. When making a decision regarding tool selection, the availability and accessibility of training may be critical.

  • Ability to integrate with enterprise resources: In many cases the utility of a tool will depend on its ability to access and integrate information from the overall enterprise in which the investigator is working. Special-purpose tools that require re-keying of information or labor-intensive format conversions should be carefully evaluated to determine the manpower required to support such functions.

  • Support for integration with other tools: Tools that have standard interfaces will act as force multipliers in the overall analytical toolbox. At a minimum, tools should have some sort of a developer’s kit that allows the creation of an API. In the best case, a tool would support some generally accepted integration standard such as web services.
  • Security: Different situations will dictate different security requirements, but in almost all cases some form of security is required; for example, different access levels for different user populations. The ability to track and audit transactions, linking them back to their sources, will also be necessary in many cases.
  • Customizable: Augmenting usability, most tools will need to support some level of customizability (e.g., customizable reporting templates).
  • Labeling of information: Information that is being gathered and stored will need to be labeled (e.g., for level of sensitivity or credibility).
  • Familiar to the current user base: One characteristic in favor of any tool selected is how well the current user base has accepted it. There could be a great deal of benefit to upgrading existing tools that are already familiar to the users.
  • Heavy emphasis on visualization: To the greatest extent possible, tools should provide the investigator with the ability to display different aspects of the results in a visual manner.
  • Support for cooperation: In many cases, the strength of the analysis is dependent on leveraging cross-disciplinary expertise. Most tools will need to support some sort of cooperation.

A.4. Bibliography and Further Reading

Autonomy Technology White Paper, Ref: [WP TECH] 07.02. This and other documents about Autonomy may be downloaded after registration from http://www.autonomy.com/content/downloads/.

Beck, Aaron T., “Prisoners of Hate,” Behaviour Research and Therapy, 40, 2002: 209-216. A copy of this article may be found at http://mail.med.upenn.edu/~abeck/prisoners.pdf. Also see Dr. Beck’s website at http://mail.med.upenn.edu/~abeck/ and the MOVES Institute at http://www.movesinstitute.org/.

Boisot, Max and Ron Sanchez, “The Codification-Diffusion-Abstraction Curve in the I-Space,” Economic Organization and Nexus of Rules: Emergence and the Theory of the Firm, working paper, Universitat Oberta de Catalunya, Barcelona, Spain, May 2003.

Carley, K. M., D. Fridsma, E. Casman, N. Altman, J. Chang, B. Kaminsky, D. Nave, and A. Yahja, “BioWar: Scalable Multi-Agent Social and Epidemiological Simulation of Bioterrorism Events,” in Proceedings of the NAACSOS Conference, 2003. This document may be found at http://www.casos.ece.cmu.edu/casos_working_paper/carley_2003_biowar.pdf.

Carley, Kathleen M., et al., “Destabilizing Dynamic Covert Networks,” in Proceedings of the 8th International Command and Control Research and Technology Symposium, National Defense War College, Washington, DC, 2003. This document may be found at http://www.casos.ece.cmu.edu/resources_others/a2c2_carley_2003_destabilizing.pdf.

Collier, N., T. Howe, and M. North, “Onward and Upward: The Transition to Repast 2.0,” in Proceedings of the First Annual North American Association for Computational Social and Organizational Science Conference, Electronic Proceedings, Pittsburgh, PA, June 2003. Also, read about Repast 3.0 at the Repast website: http://repast.sourceforge.net/index.html.

DeRosa, Mary, “Data Mining and Data Analysis for Counterterrorism,” CSIS Report, March 2004. This document may be purchased at http://csis.zoovy.com/product/0892064439.

Dombroski, M. and K. Carley, “NETEST: Estimating a Terrorist Network’s Structure,” Computational and Mathematical Organization Theory, 8(3), October 2002: 235-241. Also see http://www.casos.ece.cmu.edu/conference2003/student_paper/Dombroski.pdf.

Farah, Douglas, Blood from Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.

Hall, P. and G. Dowling, “Approximate string matching,” Computing Surveys, 12(4), 1980: 381-402. For more information on phonetic string matching see http://www.cs.rmit.edu.au/~jz/fulltext/sigir96.pdf. A good summary of the inherent limitations of Soundex may be found at http://www.las-inc.com/soundex/?source=gsx.

Lowrance, J.D., I.W. Harrison, and A.C. Rodriguez, “Structured Argumentation for Analysis,” in Proceedings of the 12th International Conference on Systems Research, Informatics, and Cybernetics, August 2000.

Quint, Barbara, “IBM’s WebFountain Launched – The Next Big Thing?” September 22, 2003, from the Information Today, Inc. website at http://www.infotoday.com/newsbreaks/nb030922-1.shtml. Also see IBM’s WebFountain website at http://www.almaden.ibm.com/webfountain/ and the WebFountain Application Development Guide at http://www.almaden.ibm.com/webfountain/resources/sg247029.pdf.

Shannon, Claude, “A Mathematical Theory of Communication,” Bell System Technical Journal, (27), July and October 1948: 379-423 and 623-656.

Tobias, R. and C. Hofmann, “Evaluation of Free Java-libraries for Social-scientific Agent Based Simulation,” Journal of Artificial Societies and Social Simulation, 7(1), January 2003. Available at http://jasss.soc.surrey.ac.uk/7/1/6.html.

Notes on Structured Analytic Techniques for Intelligence Analysis

Selections from Structured Analytic Techniques for Intelligence Analysis by Richards J. Heuer, Jr. and Randolph H. Pherson.

In contrast to the bipolar dynamics of the Cold War, this new world is strewn with failing states, proliferation dangers, regional crises, rising powers, and dangerous nonstate actors—all at play against a backdrop of exponential change in fields as diverse as population and technology.

To be sure, there are still precious secrets that intelligence collection must uncover—things that are knowable and discoverable. But this world is equally rich in mysteries having to do more with the future direction of events and the intentions of key actors. Such things are rarely illuminated by a single piece of secret intelligence data; they are necessarily subjects for analysis.

Intelligence analysis differs from similar fields of intellectual endeavor: intelligence analysts must traverse a minefield of potential errors.

First, they typically must begin addressing their subjects where others have left off; in most cases the questions they get are about what happens next, not about what is known.

Second, they cannot be deterred by lack of evidence. As Heuer pointed out in his earlier work, the essence of the analysts’ challenge is having to deal with ambiguous situations in which information is never complete and arrives only incrementally—but with constant pressure to arrive at conclusions.

Third, analysts must frequently deal with an adversary that actively seeks to deny them the information they need and is often working hard to deceive them.

Finally, analysts, for all of these reasons, live with a high degree of risk—essentially the risk of being wrong and thereby contributing to ill-informed policy decisions.

The risks inherent in intelligence analysis can never be eliminated, but one way to minimize them is through more structured and disciplined thinking about thinking.

The key point is that all analysts should do something to test the conclusions they advance. To be sure, expert judgment and intuition have their place—and are often the foundational elements of sound analysis— but analysts are likely to minimize error to the degree they can make their underlying logic explicit in the ways these techniques demand.

Just as intelligence analysis has seldom been more important, the stakes in the policy process it informs have rarely been higher. Intelligence analysts these days therefore have a special calling, and they owe it to themselves and to those they serve to do everything possible to challenge their own thinking and to rigorously test their conclusions.

Preface: Origin and Purpose

 

Structured analysis involves a step-by-step process that externalizes an individual analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be shared, built on, and critiqued by others. When combined with the intuitive judgment of subject matter experts, such a structured and transparent process can significantly reduce the risk of analytic error.

Each step in a technique prompts relevant discussion and, typically, this generates more divergent information and more new ideas than any unstructured group process. The step-by-step process of structured analytic techniques structures the interaction among analysts in a small analytic group or team in a way that helps to avoid the multiple pitfalls and pathologies that often degrade group or team performance.

By defining the domain of structured analytic techniques, providing a manual for using and testing these techniques, and outlining procedures for evaluating and validating these techniques, this book lays the groundwork for continuing improvement of how analysis is done, both within the Intelligence Community and beyond.

Audience for This Book

 

This book is for practitioners, managers, teachers, and students of intelligence analysis and foreign affairs in both the public and private sectors. Managers, commanders, action officers, planners, and policymakers who depend upon input from analysts to help them achieve their goals should also find it useful. Academics who specialize in qualitative methods for dealing with unstructured data will be interested in this pathbreaking book as well.

 

Techniques such as Analysis of Competing Hypotheses, Key Assumptions Check, and Quadrant Crunching developed specifically for intelligence analysis are now being adapted for use in other fields. New techniques that the authors developed to fill gaps in what is currently available for intelligence analysis are being published for the first time in this book and have broad applicability.

Introduction and Overview

 

Analysis in the U.S. Intelligence Community is currently in a transitional stage, evolving from a mental activity done predominantly by a sole analyst to a collaborative team or group activity.

The driving forces behind this transition include the following:

  • The growing complexity of international issues and the consequent requirement for multidisciplinary input to most analytic products.

  • The need to share more information more quickly across organizational boundaries.
  • The dispersion of expertise, especially as the boundaries between analysts, collectors, and operators become blurred.
  • The need to identify and evaluate the validity of alternative mental models.

This transition is being enabled by advances in technology, such as the Intelligence Community’s Intellipedia and new A-Space collaborative network, “communities of interest,” the mushrooming growth of social networking practices among the upcoming generation of analysts, and the increasing use of structured analytic techniques that guide the interaction among analysts.

 

OUR VISION

 

Structured analysis is a mechanism by which internal thought processes are externalized in a systematic and transparent manner so that they can be shared, built on, and easily critiqued by others. Each technique leaves a trail that other analysts and managers can follow to see the basis for an analytic judgment.

This transparency also helps ensure that differences of opinion among analysts are heard and seriously considered early in the analytic process. Analysts have told us that this is one of the most valuable benefits of any structured technique.

Structured analysis helps analysts ensure that their analytic framework—the foundation upon which they form their analytic judgments—is as solid as possible. By helping break down a specific analytic problem into its component parts and specifying a step-by-step process for handling these parts, structured analytic techniques help to organize the amorphous mass of data with which most analysts must contend. This is the basis for the terms structured analysis and structured analytic techniques. Such techniques make an analyst’s thinking more open and available for review and critique than the traditional approach to analysis. It is this transparency that enables the effective communication at the working level that is essential for interoffice and interagency collaboration.

Structured analytic techniques in general, however, do form a methodology—a set of principles and procedures for qualitative analysis of the kinds of uncertainties that intelligence analysts must deal with on a daily basis.

There is, of course, no formula for always getting it right, but the use of structured techniques can reduce the frequency and severity of error. These techniques can help analysts mitigate the proven cognitive limitations, side-step some of the known analytic pitfalls, and explicitly confront the problems associated with unquestioned mental models (also known as mindsets). They help analysts think more rigorously about an analytic problem and ensure that preconceptions and assumptions are not taken for granted but are explicitly examined and tested.

Intelligence analysts, like humans in general, do not start with an empty mind. Whenever people try to make sense of events, they begin with some body of experience or knowledge that gives them a certain perspective or viewpoint which we are calling a mental model. Intelligence specialists who are expert in their field have well developed mental models.

If an analyst’s mindset is seen as the problem, one tends to blame the analyst for being inflexible or outdated in his or her thinking.

1.2 THE VALUE OF TEAM ANALYSIS

 

Our vision for the future of intelligence analysis dovetails with that of the Director of National Intelligence’s Vision 2015, in which intelligence analysis increasingly becomes a collaborative enterprise, with the focus of collaboration shifting “away from coordination of draft products toward regular discussion of data and hypotheses early in the research phase.”

 

Analysts have also found that use of a structured process helps to depersonalize arguments when there are differences of opinion. Fortunately, today’s technology and social networking programs make structured collaboration much easier than it has ever been in the past.

1.3 THE ANALYST’S TASK

 

We developed a taxonomy for a core group of fifty techniques that appear to be the most useful for the Intelligence Community, but also useful for those engaged in related analytic pursuits in academia, business, law enforcement, finance, and medicine. This list, however, is not static.

 

It is expected to increase or decrease as new techniques are identified and others are tested and found wanting. Some training programs may have a need to boil down their list of techniques to the essentials required for one particular type of analysis.

 

Willingness to share in a collaborative environment is also conditioned by the sensitivity of the information that one is working with.

 

1.4 HISTORY OF STRUCTURED ANALYTIC TECHNIQUES

 

The first use of the term “Structured Analytic Techniques” in the Intelligence Community was in 2005. However, the origin of the concept goes back to the 1980s, when the eminent teacher of intelligence analysis, Jack Davis, first began teaching and writing about what he called “alternative analysis.” The term referred to the evaluation of alternative explanations or hypotheses, better understanding of other cultures, and analyzing events from the other country’s point of view rather than by mirror imaging.

 

The techniques were organized into three categories: diagnostic techniques, contrarian techniques, and imagination techniques.

It proposes that most analysis be done in two phases: a divergent analysis or creative phase with broad participation by a social network using a wiki, followed by a convergent analysis phase and final report done by a small analytic team.

1.6 AGENDA FOR THE FUTURE

A principal theme of this book is that structured analytic techniques facilitate effective collaboration among analysts. These techniques guide the dialogue among analysts with common interests as they share evidence and alternative perspectives on the meaning and significance of this evidence. Just as these techniques provide structure to our individual thought processes, they also structure the interaction of analysts within a small team or group. Because structured techniques are designed to generate and evaluate divergent information and new ideas, they can help avoid the common pitfalls and pathologies that commonly beset other small group processes. In other words, structured analytic techniques are enablers of collaboration.

2 Building a Taxonomy

A taxonomy is a classification of all elements of the domain of information or knowledge. It defines a domain by identifying, naming, and categorizing all the various objects in this space. The objects are organized into related groups based on some factor common to each object in the group.

The word taxonomy comes from the Greek taxis meaning arrangement, division, or order and nomos meaning law.

 

Development of a taxonomy is an important step in organizing knowledge and furthering the development of any particular discipline.

 

“a taxonomy differentiates domains by specifying the scope of inquiry, codifying naming conventions, identifying areas of interest, helping to set research priorities, and often leading to new theories. Taxonomies are signposts, indicating what is known and what has yet to be discovered.”

 

To the best of our knowledge, a taxonomy of analytic methods for intelligence analysis has not previously been developed, although taxonomies have been developed to classify research methods used in forecasting, operations research, information systems, visualization tools, electronic commerce, knowledge elicitation, and cognitive task analysis.

 

After examining taxonomies of methods used in other fields, we found that there is no single right way to organize a taxonomy—only different ways that are more or less useful in achieving a specified goal. In this case, our goal is to gain a better understanding of the domain of structured analytic techniques, investigate how these techniques contribute to providing a better analytic product, and consider how they relate to the needs of analysts. The objective has been to identify various techniques that are currently available, identify or develop additional potentially useful techniques, and help analysts compare and select the best technique for solving any specific analytic problem. Standardization of terminology for structured analytic techniques will facilitate collaboration across agency boundaries during the use of these techniques.

 

 

2.1 FOUR CATEGORIES OF ANALYTIC METHODS

 

The taxonomy described here posits four functionally distinct methodological approaches to intelligence analysis. These approaches are distinguished by the nature of the analytic methods used, the type of quantification if any, the type of data that are available, and the type of training that is expected or required. Although each method is distinct, the borders between them can be blurry.

 

* Expert judgment: This is the traditional way most intelligence analysis has been done. When done well, expert judgment combines subject matter expertise with critical thinking. Evidentiary reasoning, historical method, case study method, and reasoning by analogy are included in the expert judgment category. The key characteristic that distinguishes expert judgment from structured analysis is that it is usually an individual effort in which the reasoning remains largely in the mind of the individual analyst until it is written down in a draft report. Training in this type of analysis is generally provided through postgraduate education, especially in the social sciences and liberal arts, and often along with some country or language expertise.

 

* Structured analysis: Each structured analytic technique involves a step-by-step process that externalizes the analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be reviewed, discussed, and critiqued piece by piece, or step by step. For this reason, structured analysis often becomes a collaborative effort in which the transparency of the analytic process exposes participating analysts to divergent or conflicting perspectives. This type of analysis is believed to mitigate the adverse impact on analysis of known cognitive limitations and pitfalls. Frequently used techniques include Structured Brainstorming, Scenarios, Indicators, Analysis of Competing Hypotheses, and Key Assumptions Check. Structured techniques can be used by analysts who have not been trained in statistics, advanced mathematics, or the hard sciences. For most analysts, training in structured analytic techniques is obtained only within the Intelligence Community.

 

* Quantitative methods using expert-generated data: Analysts often lack the empirical data needed to analyze an intelligence problem. In the absence of empirical data, many methods are designed to use quantitative data generated by expert opinion, especially subjective probability judgments. Special procedures are used to elicit these judgments. This category includes methods such as Bayesian inference, dynamic modeling, and simulation. Training in the use of these methods is provided through graduate education in fields such as mathematics, information science, operations research, business, or the sciences.

 

* Quantitative methods using empirical data: Quantifiable empirical data are so different from expert-generated data that the methods and types of problems the data are used to analyze are also quite different. Econometric modeling is one common example of this method. Empirical data are collected by various types of sensors and are used, for example, in analysis of weapons systems. Training is generally obtained through graduate education in statistics, economics, or the hard sciences.

 

 

2.2 TAXONOMY OF STRUCTURED ANALYTIC TECHNIQUES

Structured techniques have been used by Intelligence Community methodology specialists and some analysts in selected specialties for many years, but the broad and general use of these techniques by the average analyst is a relatively new approach to intelligence analysis. The driving forces behind the development and use of these techniques are:

(1) an increased appreciation of cognitive limitations and pitfalls that make intelligence analysis so difficult

(2) prominent intelligence failures that have prompted reexamination of how intelligence analysis is generated

(3) policy support and technical support for interagency collaboration from the Office of the Director of National Intelligence

(4) a desire by policymakers who receive analysis for greater transparency about how the conclusions were reached.

 

There are eight categories of structured analytic techniques, which are listed below:

Decomposition and Visualization (chapter 4)
Idea Generation (chapter 5)
Scenarios and Indicators (chapter 6)
Hypothesis Generation and Testing (chapter 7)
Assessment of Cause and Effect (chapter 8)
Challenge Analysis (chapter 9)
Conflict Management (chapter 10)
Decision Support (chapter 11)

 

3 Criteria for Selecting Structured Techniques

 

Techniques that require a major project of the kind usually contracted out to an outside expert or company are not included; several interesting techniques recommended to us were excluded for this reason. Also excluded are techniques that tend to be used exclusively for a single type of analysis, such as tactical military, law enforcement, or business consulting.

In this collection of techniques we build on work previously done in the Intelligence Community.

3.2 TECHNIQUES EVERY ANALYST SHOULD MASTER

 

The average intelligence analyst is not expected to know how to use every technique in this book. All analysts should, however, understand the functions performed by various types of techniques and recognize the analytic circumstances in which it is advisable to use them.

 

Structured Brainstorming: Perhaps the most commonly used technique, Structured Brainstorming is a simple exercise often employed at the beginning of an analytic project to elicit relevant information or insight from a small group of knowledgeable analysts. The group’s goal might be to identify a list of such things as relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, potential responses by an adversary or competitor to some action or situation, or, for law enforcement, potential suspects or avenues of investigation.

 

Cross-Impact Matrix: If the brainstorming identifies a list of relevant variables, driving forces, or key players, the next step should be to create a Cross-Impact Matrix and use it as an aid to help the group visualize and then discuss the relationship between each pair of variables, driving forces, or players. This is a learning exercise that enables a team or group to develop a common base of knowledge about, for example, each variable and how it relates to each other variable.
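To make the mechanics concrete, here is a minimal Python sketch of what a Cross-Impact Matrix captures; the variable names and the +/0/- influence codings are invented for illustration and are not drawn from the text:

```python
# Illustrative sketch of a Cross-Impact Matrix: for each ordered pair of
# variables, the group records a judged influence of one on the other.
# Variable names and codings below are hypothetical examples.
variables = ["political unrest", "food prices", "security crackdown"]

# influence[a][b] holds the group's judgment of how variable a affects b:
# "+" amplifies, "-" dampens, "0" no judged effect.
influence = {a: {b: "0" for b in variables if b != a} for a in variables}

influence["food prices"]["political unrest"] = "+"
influence["political unrest"]["security crackdown"] = "+"
influence["security crackdown"]["political unrest"] = "-"

# Print the matrix so the group can discuss each cell in turn.
print(" " * 22 + "".join(f"{v[:18]:>20}" for v in variables))
for a in variables:
    row = "".join(f"{influence[a].get(b, '.'):>20}" for b in variables)
    print(f"{a[:20]:<22}{row}")
```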

 

Key Assumptions Check: Requires analysts to explicitly list and question the most important working assumptions underlying their analysis. Any explanation of current events or estimate of future developments requires the interpretation of incomplete, ambiguous, or potentially deceptive evidence. To fill in the gaps, analysts typically make assumptions about such things as another country’s intentions or capabilities, the way governmental processes usually work in that country, the relative strength of political forces, the trustworthiness or accuracy of key sources, the validity of previous analyses on the same subject, or the presence or absence of relevant changes in the context in which the activity is occurring.

 

Indicators: Indicators are observable or potentially observable actions or events that are monitored to detect or evaluate change over time. For example, indicators might be used to measure changes toward an undesirable condition such as political instability, a humanitarian crisis, or an impending attack. They can also point toward a desirable condition such as economic or democratic reform. The special value of indicators is that they create an awareness that prepares an analyst’s mind to recognize the earliest signs of significant change that might otherwise be overlooked. Developing an effective set of indicators is more difficult than it might seem. The Indicator Validator helps analysts assess the diagnosticity of their indicators.

 

Analysis of Competing Hypotheses: This technique requires analysts to start with a full set of plausible hypotheses rather than with a single most likely hypothesis. Analysts then take each item of evidence, one at a time, and judge its consistency or inconsistency with each hypothesis. The idea is to refute hypotheses rather than confirm them. The most likely hypothesis is the one with the least evidence against it, not the most evidence for it. This process applies a key element of scientific method to intelligence analysis.
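As a minimal illustration of this refutation-oriented scoring logic (the hypotheses, evidence items, and consistency judgments below are all invented), the count of evidence against each hypothesis can be sketched in a few lines of Python:

```python
# Illustrative ACH scoring sketch: hypotheses are ranked by the amount of
# evidence judged inconsistent with them, not by supporting evidence.
# The hypotheses, evidence items, and C/I judgments are hypothetical.
hypotheses = ["H1: routine exercise", "H2: preparation for attack"]

# For each item of evidence, a judgment against each hypothesis:
# "C" = consistent, "I" = inconsistent.
matrix = {
    "troop movements near border": {"H1: routine exercise": "C",
                                    "H2: preparation for attack": "C"},
    "leave canceled for officers": {"H1: routine exercise": "I",
                                    "H2: preparation for attack": "C"},
    "no logistics buildup":        {"H1: routine exercise": "C",
                                    "H2: preparation for attack": "I"},
}

# Count inconsistencies per hypothesis; the lowest count is most likely.
inconsistency = {h: sum(1 for row in matrix.values() if row[h] == "I")
                 for h in hypotheses}
for h, score in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{h}: {score} item(s) of evidence against")
```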

 

Premortem Analysis and Structured Self-Critique: These two easy-to-use techniques enable a small team of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions. Premortem Analysis uses a form of reframing, in which restating the question or problem from another perspective enables one to see it in a different way and come up with different answers.

 

With Structured Self-Critique, analysts respond to a list of questions about a variety of factors, including sources of uncertainty, analytic processes that were used, critical assumptions, diagnosticity of evidence, information gaps, and the potential for deception. Rigorous use of both of these techniques can help prevent a future need for a postmortem.

 

What If? Analysis: One imagines that an unexpected event has happened and then, with the benefit of “hindsight,” analyzes how it could have happened and considers the potential consequences. This type of exercise creates an awareness that prepares the analyst’s mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency.

 

3.3 COMMON ERRORS IN SELECTING TECHNIQUES

 

The value and accuracy of an analytic product depends in part upon selection of the most appropriate technique or combination of techniques for doing the analysis… Lacking effective guidance, analysts are vulnerable to various influences:

 

  • College or graduate-school recipe: Analysts are inclined to use the tools they learned in college or graduate school whether or not those tools are well suited to the different context of intelligence analysis.
  • Tool rut: Analysts are inclined to use whatever tool they already know or have readily available. Psychologist Abraham Maslow observed that “if the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.”
  • Convenience shopping: The analyst, guided by the evidence that happens to be available, uses a method appropriate for that evidence, rather than seeking out the evidence that is really needed to address the intelligence issue. In other words, the evidence may sometimes drive the technique selection instead of the analytic need driving the evidence collection.
  • Time constraints: Analysts can easily be overwhelmed by their in-boxes and the myriad tasks they have to perform in addition to their analytic workload. The temptation is to avoid techniques that would “take too much time.”

 

3.4 ONE PROJECT, MULTIPLE TECHNIQUES

 

Multiple techniques can also be used to check the accuracy and increase the confidence in an analytic conclusion. Research shows that forecasting accuracy is increased by combining “forecasts derived from methods that differ substantially and draw from different sources of information.”

 

3.5 STRUCTURED TECHNIQUE SELECTION GUIDE

Analysts must be able, with minimal effort, to identify and learn how to use those techniques that best meet their needs and fit their styles.

 

4 Decomposition and Visualization

 

Two common approaches for coping with the limited capacity of working memory are decomposition—that is, breaking down the problem or issue into its component parts so that each part can be considered separately—and visualization—placing all the parts on paper or on a computer screen in some organized manner designed to facilitate understanding of how the various parts interrelate.

 

Any technique that gets a complex thought process out of the analyst’s head and onto paper or the computer screen can be helpful. The use of even a simple technique such as a checklist can be extremely productive.

 

Analysis is breaking information down into its component parts. Anything that has parts also has a structure that relates these parts to each other. One of the first steps in doing analysis is to determine an appropriate structure for the analytic problem, so that one can then identify the various parts and begin assembling information on them. Because there are many different kinds of analytic problems, there are also many different ways to structure analysis.

—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999).

 

Overview of Techniques

 

Getting Started Checklist, Customer Checklist, and Issue Redefinition are three techniques that can be combined to help analysts launch a new project. If an analyst can start off in the right direction and avoid having to change course later, a lot of time can be saved.

 

Chronologies and Timelines are used to organize data on events or actions. They are used whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps.

 

Sorting is a basic technique for organizing data in a manner that often yields new insights. Sorting is effective when information elements can be broken out into categories or subcategories for comparison by using a computer program, such as a spreadsheet.

 

Ranking, Scoring, and Prioritizing provide how-to guidance on three different ranking techniques—Ranked Voting, Paired Comparison, and Weighted Ranking. Combining an idea-generation technique such as Structured Brainstorming with a ranking technique is an effective way for an analyst to start a new project or to provide a foundation for interoffice or interagency collaboration. The idea-generation technique is used to develop lists of driving forces, variables to be considered, indicators, possible scenarios, important players, historical precedents, sources of information, questions to be answered, and so forth. Such lists are even more useful once they are ranked, scored, or prioritized to determine which items are most important, most useful, most likely, or should be at the top of the priority list.

 

Matrices are generic analytic tools for sorting and organizing data in a manner that facilitates comparison and analysis. They are used to analyze the relationships between any two sets of variables or the interrelationships among a single set of variables. A Matrix consists of a grid with as many cells as needed for whatever problem is being analyzed. Some analytic topics or problems that use a matrix occur so frequently that they are described in this book as separate techniques.

 

Network Analysis is used extensively by counterterrorism, counternarcotics, counterproliferation, law enforcement, and military analysts to identify and monitor individuals who may be involved in illegal activity. Social Network Analysis is used to map and analyze relationships among people, groups, organizations, computers, Web sites, and any other information processing entities.

 

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest.

 

Process Maps and Gantt Charts were developed for use in industry and the military, but they are also useful to intelligence analysts. Process Mapping is a technique for identifying and diagramming each step in a complex process; this includes event flow charts, activity flow charts, and commodity flow charts.

 

4.1 GETTING STARTED CHECKLIST

 

The Method

Analysts should answer several questions at the beginning of a new project. The following is our list of suggested starter questions, but there is no single best way to begin. Other lists can be equally effective.

 

  • What has prompted the need for the analysis? For example, was it a news report, a new intelligence report, a new development, a perception of change, or a customer request?
  • What is the key intelligence question that needs to be answered?
  • Why is this issue important, and how can analysis make a meaningful contribution?
  • Has your organization or any other organization ever answered this question or a similar question before, and, if so, what was said? To whom was this analysis delivered, and what has changed since that time?
  • Who are the principal customers? Are these customers’ needs well understood? If not, try to gain a better understanding of their needs and the style of reporting they like.
  • Are there other stakeholders who would have an interest in the answer to this question? Who might see the issue from a different perspective and prefer that a different question be answered? Consider meeting with others who see the question from a different perspective.
  • From your first impressions, what are all the possible answers to this question? For example, what alternative explanations or outcomes should be considered before making an analytic judgment on the issue?
  • Depending on responses to the previous questions, consider rewording the key intelligence question. Consider adding subordinate or supplemental questions.
  • Generate a list of potential sources or streams of reporting to be explored.
  • Reach out and tap the experience and expertise of analysts in other offices or organizations—both within and outside the government—who are knowledgeable on this topic. For example, call a meeting or conduct a virtual meeting to brainstorm relevant evidence and to develop a list of alternative hypotheses, driving forces, key indicators, or important players.

 

4.2 CUSTOMER CHECKLIST

 

The Customer Checklist helps an analyst tailor the product to the needs of the principal customer for the analysis. When used appropriately, it ensures that the product is of maximum possible value to this customer.

 

The Method

Before preparing an outline or drafting a paper, ask the following questions:
  • Who is the key person for whom the product is being developed?
  • Will this product answer the question the customer asked or the question the customer should be asking? If necessary, clarify this before proceeding.
  • What is the most important message to give this customer?
  • How is the customer expected to use this information?
  • How much time does the customer have to digest this product?
  • What format would convey the information most effectively?
  • Is it possible to capture the essence in one or a few key graphics?
  • What classification is most appropriate for this product? Is it necessary to consider publishing the paper at more than one classification level?
  • What is the customer’s level of tolerance for technical language? How much detail would the customer expect? Can the details be provided in appendices or backup papers, graphics, notes, or pages?
  • Will any structured analytic technique be used? If so, should it be flagged in the product?
  • Would the customer expect you to reach out to other experts within or outside the Intelligence Community to tap their expertise in drafting this paper? If this has been done, how has the contribution of other experts been flagged in the product? In a footnote? In a source list?
  • To whom or to what source might the customer turn for other views on this topic? What data or analysis might others provide that could influence how the customer reacts to what is being prepared in this product?

 

 

4.3 ISSUE REDEFINITION

 

 

Many analytic projects start with an issue statement. What is the issue, why is it an issue, and how will it be addressed? Issue Redefinition is a technique for experimenting with different ways to define an issue. This is important, because seemingly small differences in how an issue is defined can have significant effects on the direction of the research.

 

When to Use It

Using Issue Redefinition at the beginning of a project can get you started off on the right foot. It may also be used at any point during the analytic process when a new hypothesis or critical new evidence is introduced. Issue Redefinition is particularly helpful in preventing “mission creep,” which results when analysts unwittingly take the direction of analysis away from the core intelligence question or issue at hand, often as a result of the complexity of the problem or a perceived lack of information.

 

Value Added

Proper issue identification can save a great deal of time and effort by forestalling unnecessary research and analysis on a poorly stated issue. Issues are often poorly presented when they are:

 

  • Solution driven (Where are the weapons of mass destruction in Iraq?)
  • Assumption driven (When China launches rockets into Taiwan, will the Taiwanese government collapse?)
  • Too broad or ambiguous (What is the status of Russia’s air defense system?)
  • Too narrow or misdirected (Who is voting for President Hugo Chávez in the election?)

 

The Method

 

* Rephrase: Redefine the issue without losing the original meaning. Review the results to see if they provide a better foundation upon which to conduct the research and assessment to gain the best answer. Example: Rephrase the original question, “How much of a role does Aung San Suu Kyi play in the ongoing unrest in Burma?” as, “How active is the National League for Democracy, headed by Aung San Suu Kyi, in the antigovernment riots in Burma?”

 

* Ask why: Ask a series of “why” or “how” questions about the issue definition. After receiving the first response, ask “why” to do that or “how” to do it. Keep asking such questions until you are satisfied that the real problem has emerged. This process is especially effective in generating possible alternative answers.

 

* Broaden the focus: Instead of focusing on only one piece of the puzzle, step back and look at several pieces together. What is the issue connected to? Example: The original question, “How corrupt is the Pakistani president?” leads to the question, “How corrupt is the Pakistani government as a whole?”

 

* Narrow the focus: Can you break down the issue further? Take the question and ask about the components that make up the problem. Example: The original question, “Will the European Union ratify a new constitution?” can be broken down to, “How do individual member states view the new European Union constitution?”

 

* Redirect the focus: What outside forces impinge on this issue? Is deception involved? Example: The original question, “What are the terrorist threats against the U.S. homeland?” is revised to, “What opportunities are there to interdict terrorist plans?”

 

* Turn 180 degrees: Turn the issue on its head. Is the issue the one asked or the opposite of it? Example: The original question, “How much of the ground capability of China’s People’s Liberation Army would be involved in an initial assault on Taiwan?” is rephrased as, “How much of the ground capability of China’s People’s Liberation Army would not be involved in the initial Taiwan assault?”

 

Relationship to Other Techniques

 

Issue Redefinition is often used simultaneously with the Getting Started Checklist and the Customer Checklist. The technique is also known as Issue Development, Problem Restatement, and Reframing the Question.

 

4.4 CHRONOLOGIES AND TIMELINES

 

When to Use It

Chronologies and timelines aid in organizing events or actions. Whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps, these techniques can be useful. The events may or may not have a cause-and-effect relationship.

 

Value Added

Chronologies and timelines aid in the identification of patterns and correlations among events. These techniques also allow you to relate seemingly disconnected events to the big picture to highlight or identify significant changes or to assist in the discovery of trends, developing issues, or anomalies. They can serve as a catch-all for raw data when the meaning of the data has not yet been identified. Multiple-level timelines allow analysts to track concurrent events that may have an effect on each other. Although timelines may be developed at the onset of an analytic task to ascertain the context of the activity to be analyzed, timelines and chronologies also may be used in postmortem intelligence studies to break down the intelligence reporting, find the causes for intelligence failures, and highlight significant events after an intelligence surprise.

 

The Method

When researching the problem, ensure that the relevant information is listed with the date or order in which it occurred, and make sure the data are properly referenced. Then review the chronology or timeline by asking the following questions (a small illustrative sketch follows the list):

  • What are the temporal distances between key events? If “lengthy,” what caused the delay? Are there missing pieces of data that should be collected to fill those gaps?
  • Did the analyst overlook pieces of intelligence information that may have had an impact on or been related to the events?
  • Conversely, if events seem to have happened more rapidly than expected, or if not all events appear to be related, is it possible that the analyst has information related to multiple event timelines?
  • Does the timeline have all the critical events that are necessary for the outcome to occur?
  • When did the information become known to the analyst or a key player?
  • What are the intelligence gaps?
  • Are there any points along the timeline when the target is particularly vulnerable to U.S. intelligence collection activities or countermeasures?
  • What events outside this timeline could have influenced the activities?
  • If preparing a timeline, synopsize the data along a line, usually horizontal or vertical. Use the space on both sides of the line to highlight important analytic points. For example, place facts above the line and points of analysis or commentary below the line.
  • Alternatively, contrast the activities of different groups, organizations, or streams of information by placement above or below the line. If multiple actors are involved, you can use multiple lines, showing how and where they converge.
  • Look for relationships and patterns in the data connecting persons, places, organizations, and other activities. Identify gaps or unexplained time periods, and consider the implications of the absence of evidence. Prepare a summary chart detailing key events and key analytic points in an annotated timeline.
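The sketch promised above: a toy chronology in Python (all events and dates are invented) that sorts reporting into time order and flags unexplained gaps that might become collection targets:

```python
from datetime import date

# Illustrative chronology sketch: sort events by date, then flag long gaps
# that may point to missing reporting. All events and dates are invented.
events = [
    (date(2009, 3, 14), "shipment observed at port"),
    (date(2009, 1, 2),  "procurement contact reported"),
    (date(2009, 6, 30), "facility construction resumes"),
]

events.sort()  # chronological order
for d, description in events:
    print(d, "-", description)

GAP_DAYS = 60  # threshold for a gap worth examining
for (d1, _), (d2, _) in zip(events, events[1:]):
    gap = (d2 - d1).days
    if gap > GAP_DAYS:
        print(f"gap of {gap} days between {d1} and {d2}: collection target?")
```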

 

 

4.5 SORTING

 

When to Use It

Sorting is effective when information elements can be broken out into categories or subcategories for comparison with each other, most often by using a computer program, such as a spreadsheet. This technique is particularly effective during the initial data gathering and hypothesis generation phases of analysis, but you may also find sorting useful at other times.

Value Added

Sorting large amounts of data into relevant categories that are compared with each other can provide analysts with insights into trends, similarities, differences, or abnormalities of intelligence interest that otherwise would go unnoticed. When you are dealing with transactions data in particular (for example, communications intercepts or transfers of goods or money), it is very helpful to sort the data first.

 

The Method

Follow these steps:

* Review the categories of information to determine which category or combination of categories might show trends or an abnormality that would provide insight into the problem you are studying. Place the data into a spreadsheet or a database using as many fields (columns) as necessary to differentiate among the data types (dates, times, locations, people, activities, amounts, etc.). List each of the facts, pieces of information, or hypotheses involved in the problem that are relevant to your sorting schema. (Use paper, whiteboard, movable sticky notes, or other means for this.)

* Review the listed facts, information, or hypotheses in the database or spreadsheet to identify key fields that may allow you to uncover possible patterns or groupings. Those patterns or groupings then illustrate the schema categories and can be listed as header categories. For example, if an examination of terrorist activity shows that most attacks occur in hotels and restaurants but that the times of the attacks vary, “Location” is the main category, while “Date” and “Time” are secondary categories.

  • Group those items according to the sorting schema in the categories that were defined in step 1.
  • Choose a category and sort the data within that category. Look for any insights, trends, or oddities.

Good analysts notice trends; great analysts notice anomalies.

* Review (or ask others to review) the sorted facts, information, or hypotheses to see if there are alternative ways to sort them. List any alternative sorting schema for your problem. One of the most useful applications for this technique is to sort according to multiple schemas and examine results for correlations between data and categories. But remember that correlation is not the same as causation.
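A minimal sketch of this schema-based sorting, with invented incident records, showing a primary sort on Location and a secondary sort on Date and Time:

```python
# Illustrative sorting sketch: group invented "attack" records by location,
# then inspect each category for trends or anomalies.
from collections import defaultdict

records = [
    {"date": "2008-05-01", "time": "21:10", "location": "hotel"},
    {"date": "2008-06-12", "time": "08:45", "location": "restaurant"},
    {"date": "2008-07-03", "time": "22:30", "location": "hotel"},
    {"date": "2008-07-19", "time": "13:05", "location": "checkpoint"},
]

by_location = defaultdict(list)   # primary category: Location
for rec in records:
    by_location[rec["location"]].append(rec)

# Within each location, sort by date/time (secondary categories) and review.
for loc, recs in sorted(by_location.items(), key=lambda kv: -len(kv[1])):
    print(f"{loc}: {len(recs)} incident(s)")
    for rec in sorted(recs, key=lambda r: (r["date"], r["time"])):
        print(f"  {rec['date']} {rec['time']}")
```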

 

Origins of This Technique

Sorting is a long-established procedure for organizing data. The description here is from Defense Intelligence Agency training materials.

 

 

4.6 RANKING, SCORING, PRIORITIZING

 

When to Use It

 

A ranking technique is appropriate whenever there are too many items to rank easily just by looking at the list; the ranking has significant consequences and must be done as accurately as possible; or it is useful to aggregate the opinions of a group of analysts.

 

Value Added

 

Combining an idea-generation technique with a ranking technique is an excellent way for an analyst to start a new project or to provide a foundation for inter-office or interagency collaboration. An idea-generation technique is often used to develop lists of such things as driving forces, variables to be considered, or important players. Such lists are more useful once they are ranked, scored, or prioritized.

 

Ranked Voting

In a Ranked Voting exercise, members of the group individually rank each item in order according to the member’s preference or what the member regards as the item’s importance.
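The text does not prescribe an aggregation rule for combining members’ rankings; a common convention, assumed here purely for illustration, is to sum each member’s ranks and treat the lowest total as the winner:

```python
# Illustrative Ranked Voting sketch: each member ranks items 1 (best) to N.
# Summing ranks and taking the lowest total is one common aggregation rule
# (an assumption here, not prescribed by the text). Ballots are invented.
ballots = [
    {"A": 1, "B": 2, "C": 3},   # analyst 1's ranking
    {"A": 2, "B": 1, "C": 3},   # analyst 2's ranking
    {"A": 1, "B": 3, "C": 2},   # analyst 3's ranking
]

totals = {}
for ballot in ballots:
    for item, rank in ballot.items():
        totals[item] = totals.get(item, 0) + rank

for item, total in sorted(totals.items(), key=lambda kv: kv[1]):
    print(f"{item}: total rank {total} (lower is better)")
```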

 

Paired Comparison

Paired Comparison compares each item against every other item, and the analyst can assign a score to show how much more important or more preferable or more probable one item is than the others. This technique provides more than a simple ranking, as it shows the degree of importance or preference for each item. The list of items can then be ordered along a dimension, such as importance or preference, using an interval-type scale.

Follow these steps to use the technique:

  • List the items to be compared. Assign a letter to each item.
  • Create a table with the letters across the top and down the left side as in Figure 4.6a. The results of the comparison of each pair of items are marked in the cells of this table. Note the diagonal line of darker-colored cells. These cells are not used, as each item is never compared with itself. The cells below this diagonal line are not used because they would duplicate a comparison in the cells above the diagonal line. If you are working in a group, distribute a blank copy of this table to each participant.
  • Looking at the cells above the diagonal row of gray cells, compare the item in the row with the one in the column. For each cell, decide which of the two items is more important (or more preferable or more probable). Write the letter of the winner of this comparison in the cell, and score the degree of difference on a scale from 0 (no difference) to 3 (major difference) as in Figure 4.6a.
  • Consolidate the results by adding up the total of all the values for each of the items and put this number in the “Score” column. For example, in Figure 4.6a item B has one 3 in the first row plus one 2, and two 1s in the second row, for a score of 7.
  • Finally, it may be desirable to convert these values into a percentage of the total score. To do this, divide the score for each individual item by the total of all scores (20 in the example). Item B, with a score of 7, is ranked most important or most preferred: it received 35 percent (7 divided by 20) as compared with 25 percent for item D and only 5 percent each for items C and E, which received only one vote each. This example shows how Paired Comparison captures the degree of difference between each ranking.
  • To aggregate rankings received from a group of analysts, simply add the individual scores for each analyst.
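The arithmetic in these steps can be sketched in Python. The pairwise judgments below are invented but chosen to reproduce the worked example’s totals, with item B scoring 7 of 20 points (35 percent):

```python
# Illustrative Paired Comparison sketch: each cell above the diagonal
# records the winning item and a 0-3 degree-of-difference score.
# Judgments are hypothetical but consistent with the totals in the text.
judgments = {
    ("A", "B"): ("B", 3), ("A", "C"): ("A", 3), ("A", "D"): ("D", 2),
    ("A", "E"): ("A", 3), ("B", "C"): ("B", 2), ("B", "D"): ("B", 1),
    ("B", "E"): ("B", 1), ("C", "D"): ("D", 3), ("C", "E"): ("C", 1),
    ("D", "E"): ("E", 1),
}

scores = {item: 0 for item in "ABCDE"}
for winner, degree in judgments.values():
    scores[winner] += degree

total = sum(scores.values())  # 20 in this example
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score}  ({100 * score / total:.0f}%)")
# B: 7 (35%), A: 6 (30%), D: 5 (25%), C: 1 (5%), E: 1 (5%)
```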

 

Weighted Ranking

In Weighted Ranking, a specified set of criteria is used to rank items. The analyst creates a table with the items to be ranked listed across the top row and the criteria for ranking them listed down the far left column.

* Create a table with one column for each item. At the head of each column, write the name of an item or assign it a letter to save space.

* Add two more blank columns on the left side of this table. Count the number of selection criteria, and then adjust the table so that it has that number of rows plus three more, one at the top to list the items and two at the bottom to show the raw scores and percentages for each item. In the first column on the left side, starting with the second row, write in all the selection criteria down the left side of the table. There is some value in listing the criteria roughly in order of importance, but that is not critical. Leave the bottom two rows blank for the scores and percentages.

* Now work down the far left-hand column, assigning weights to the selection criteria based on their relative importance for judging the ranking of the items. Depending upon how many criteria there are, take either 10 points or 100 points and divide them among the selection criteria according to their judged relative importance in ranking the items. In other words, ask what percentage of the decision should be based on each criterion. Be sure that the weights for all the selection criteria combined add up to either 10 or 100, whichever is selected. Also be sure that all the criteria are phrased in such a way that a higher weight is more desirable.

  • Work across the rows to write the criterion weight in the left side of each cell.
  • Next, work across the matrix one row (selection criterion) at a time to evaluate the relative ability of each of the items to satisfy that selection criterion. Use a ten-point rating scale, where 1 = low and 10 = high, to rate each item separately. (Do not spread the ten points proportionately across all the items as was done to assign weights to the criteria.) Write this rating number after the criterion weight in the cell for each item.

 

* Again, work across the matrix one row at a time to multiply the criterion weight by the item rating for that criterion, and enter this number for each cell as shown in Figure 4.6b.

* Now add the columns for all the items. The result will be a ranking of the items from highest to lowest score. To gain a better understanding of the relative ranking of one item as compared with another, convert these raw scores to percentages. To do this, first add together all the scores in the “Totals” row to get a total number. Then divide the score for each item by this total score to get a percentage ranking for each item. All the percentages together must add up to 100 percent. In Figure 4.6b it is apparent that item B has the number one ranking (with 20.3 percent), while item E has the lowest (with 13.2 percent).
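A compact sketch of this weight-times-rating arithmetic, with invented items, criteria, weights, and ratings:

```python
# Illustrative Weighted Ranking sketch: criterion weights (summing to 100)
# are multiplied by each item's 1-10 rating; column totals are converted
# to percentages. Criteria, weights, and ratings below are invented.
criteria = {"feasibility": 50, "impact": 30, "cost": 20}  # weights sum to 100

ratings = {  # ratings[item][criterion] on a 1-10 scale
    "A": {"feasibility": 6, "impact": 7, "cost": 5},
    "B": {"feasibility": 9, "impact": 8, "cost": 6},
    "C": {"feasibility": 4, "impact": 6, "cost": 9},
}

totals = {item: sum(w * ratings[item][c] for c, w in criteria.items())
          for item in ratings}

grand_total = sum(totals.values())
for item, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{item}: raw score {score}, {100 * score / grand_total:.1f}%")
```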

 

4.7 MATRICES


A matrix is an analytic tool for sorting and organizing data in a manner that facilitates comparison and analysis. It consists of a simple grid with as many cells as needed for whatever problem is being analyzed.

 

When to Use It

Matrices are used to analyze the relationship between any two sets of variables or the interrelationships among a single set of variables. Among other things, they enable analysts to:

  • Compare one type of information with another.
  • Compare pieces of information of the same type.
  • Categorize information by type.
  • Identify patterns in the information.
  • Separate elements of a problem.

A matrix is such an easy and flexible tool to use that it should be one of the first tools analysts think of when dealing with a large body of data. One limiting factor in the use of matrices is that information must be organized along only two dimensions.

 

Value Added

Matrices provide a visual representation of a complex set of data. By presenting information visually, a matrix enables analysts to deal effectively with more data than they could manage by juggling various pieces of information in their head. The analytic problem is broken down to component parts so that each part (that is, each cell in the matrix) can be analyzed separately, while ideally maintaining the context of the problem as a whole.

 

The Method

A matrix is a tool that can be used in many different ways and for many different purposes. What matrices have in common is that each has a grid with sufficient columns and rows for you to enter two sets of data that you want to compare. Organize the category headings for each set of data in some logical sequence before entering the headings for one set of data in the top row and the headings for the other set in the far left column. Then enter the data in the appropriate cells.

 

4.8 NETWORK ANALYSIS

 

Network Analysis is the review, compilation, and interpretation of data to determine the presence of associations among individuals, groups, businesses, or other entities; the meaning of those associations to the people involved; and the degrees and ways in which those associations can be strengthened or weakened. It is the best method available to help analysts understand and identify opportunities to influence the behavior of a set of actors about whom information is sparse. In the fields of law enforcement and national security, information used in Network Analysis usually comes from informants or from physical or technical surveillance.

 

 

Analysis of networks is broken down into three stages, and analysts can stop at the stage that answers their questions.

* Network Charting is the process of and associated techniques for identifying people, groups, things, places, and events of interest (nodes) and drawing connecting lines (links) between them on the basis of various types of association. The product is often referred to as a Link Chart.

* Network Analysis is the process and techniques that take the chart and strive to make sense of the data represented by the chart by grouping associations (sorting) and identifying patterns in and among those groups.

* Social Network Analysis (SNA) is the mathematical measuring of variables related to the distance between nodes and the types of associations in order to derive even more meaning from the chart, especially about the degree and type of influence one node has on another.

When to Use It

Network Analysis is used extensively in law enforcement, counterterrorism analysis, and analysis of transnational issues such as narcotics and weapons proliferation to identify and monitor individuals who may be involved in illegal activity.

 


 

Value Added

Network Analysis has proved to be highly effective in helping analysts identify and understand patterns of organization, authority, communication, travel, financial transactions, or other interactions between people or groups that are not apparent from isolated pieces of information. It often identifies key leaders, information brokers, or sources of funding.

 

Potential Pitfalls

This method is extremely dependent upon having at least one good source of information. It is hard to know when information may be missing, and the boundaries of the network may be fuzzy and constantly changing, in which case it is difficult to determine whom to include. The constantly changing nature of networks over time can cause information to become outdated.

 

The Method

Analysis of networks attempts to answer the question “Who is related to whom and what is the nature of their relationship and role in the network?” The basic network analysis software identifies key nodes and shows the links between them. SNA software measures the frequency of flow between links and explores the significance of key attributes of the nodes. We know of no software that does the intermediate task of grouping nodes into meaningful clusters, though algorithms do exist and are used by individual analysts. In all cases, however, you must interpret what is represented, looking at the chart to see how it reflects organizational structure, modes of operation, and patterns of behavior.

 

Network charting usually involves the following steps.

  • Identify at least one reliable source or stream of data to serve as a beginning point. Identify, combine, or separate nodes within this reporting.
  • List each node in a database, association matrix, or software program.
  • Identify interactions among individuals or groups.
  • List interactions by type in a database, association matrix, or software program.
  • Identify each node and interaction by some criterion that is meaningful to your analysis. These criteria often include frequency of contact, type of contact, type of activity, and source of information.
  • Draw the connections between nodes—connect the dots—on a chart by hand, using a computer drawing tool, or using Network Analysis software.
  • Work out from the central nodes, adding links and nodes until you run out of information from the good sources.
  • Add nodes and links from other sources, constantly checking them against the information you already have. Follow all leads, whether they are people, groups, things, or events, and regardless of source. Make note of the sources.
  • Stop in these cases: when you run out of information, when all of the new links are dead ends, when all of the new links begin to turn in on each other like a spider web, or when you run out of time.
  • Update the chart and supporting documents regularly as new information becomes available, or as you have time.
  • Rearrange the nodes and links so that the links cross over each other as little as possible.
  • Cluster the nodes. Do this by looking for “dense” areas of the chart and relatively “empty” areas. Draw shapes around the dense areas. Use a variety of shapes, colors, and line styles to denote different types of clusters, your relative confidence in the cluster, or any other criterion you deem important.
  • Cluster the clusters, if you can, using the same method.
  • Label each cluster according to the common denominator among the nodes it contains. In doing this you will identify groups, events, activities, and/or key locations. If you have in mind a model for groups or activities, you may be able to identify gaps in the chart by what is or is not present that relates to the model.
  • Look for “cliques”—a group of nodes in which every node is connected to every other node, though not to many nodes outside the group. These groupings often look like stars or pentagons. In the intelligence world, they often turn out to be clandestine cells.
  • Look in the empty spaces for nodes or links that connect two clusters. Highlight these nodes with shapes or colors. These nodes are brokers, facilitators, leaders, advisers, media, or some other key connection that bears watching. They are also points where the network is susceptible to disruption.
  • Chart the flow of activities between nodes and clusters. You may want to use arrows and time stamps. Some software applications will allow you to display dynamically how the chart has changed over time. Analyze this flow. Does it always go in one direction or in multiple directions? Are the same or different nodes involved? How many different flows are there? What are the pathways? By asking these questions, you can often identify activities, including indications of preparation for offensive action and lines of authority. You can also use this knowledge to assess the resiliency of the network. If one node or pathway were removed, would there be alternatives already built in?
  • Continually update and revise as nodes or links change.
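As a toy illustration of the data behind a Link Chart (all names and associations are invented), even a plain adjacency structure supports the simplest SNA measure, degree, and hints at which nodes bridge clusters:

```python
# Illustrative network charting sketch: nodes and links are invented.
# Degree (number of links) is the simplest SNA measure; unusually
# well-connected nodes are candidate brokers or leaders worth watching.
from collections import defaultdict

links = [  # (node, node, type of association), all invented
    ("Amir", "Bashir", "phone"), ("Amir", "Celine", "travel"),
    ("Bashir", "Celine", "phone"), ("Celine", "Dmitri", "money"),
    ("Dmitri", "Elena", "phone"),
]

neighbors = defaultdict(set)
for a, b, _ in links:
    neighbors[a].add(b)
    neighbors[b].add(a)

# Rank nodes by degree; here Celine links two otherwise-separate clusters,
# which is the kind of broker position the steps above say to highlight.
for node, nbrs in sorted(neighbors.items(), key=lambda kv: -len(kv[1])):
    print(f"{node}: degree {len(nbrs)}, linked to {sorted(nbrs)}")
```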

 

 

4.9 MIND MAPS AND CONCEPT MAPS

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest. Such a diagram has two basic elements: the ideas that are judged relevant to whatever topic one is thinking about, and the lines that show and briefly describe the connections between these ideas.

Whenever you think about a problem, develop a plan, or consider making even a very simple decision, you are putting a series of thoughts together. That series of thoughts can be represented visually with words or images connected by lines that represent the nature of the relationship between them. Any thinking for any purpose, whether about a personal decision or analysis of an intelligence issue, can be diagrammed in this manner.

Mind Maps and Concept Maps can be used by an individual or a group to help sort out their own thinking and achieve a shared understanding of key concepts.

After having participated in this group process to define the problem, the group should be better able to identify what further research needs to be done and able to parcel out additional work among the best qualified members of the group. The group should also be better able to prepare a report that represents as fully as possible the collective wisdom of the group as a whole.

The Method

Start a Mind Map or Concept Map with a focal question that defines what is to be included. Then follow these steps:

  • Make a list of concepts that relate in some way to the focal question.
  • Starting with the first dozen or so concepts, sort them into groupings within the diagram space in some logical manner. These groups may be based on things they have in common or on their status as either direct or indirect causes of the matter being analyzed.
  • Begin making links between related concepts, starting with the most general concepts. Use lines with arrows to show the direction of the relationship. The arrows may go in either direction or in both directions.
  • Choose the most appropriate words for describing the nature of each relationship. The lines might be labeled with words such as “causes,” “influences,” “leads to,” “results in,” “is required by,” or “contributes to.” Selecting good linking phrases is often the most difficult step.
  • While building all the links between the concepts and the focal question, look for and enter crosslinks between concepts.
  • Don’t be surprised if, as the map develops, you discover that you are now diagramming a different focal question from the one you started with. This can be a good thing. The purpose of a focal question is not to lock down the topic but to get the process going.
  • Finally, reposition, refine, and expand the map structure as appropriate.
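One minimal way to represent the result of these steps (the focal question, concepts, and linking phrases below are all invented) is as a list of labeled, directed links:

```python
# Illustrative concept-map sketch: a map is just labeled, directed links
# between concepts. The focal question and concepts below are invented.
focal_question = "What is driving instability in country X?"

links = [  # (source concept, linking phrase, target concept)
    ("food price shocks", "contributes to", "urban unrest"),
    ("urban unrest", "leads to", "security crackdown"),
    ("security crackdown", "influences", "elite cohesion"),
    ("corruption", "sustains", "patronage networks"),
]

print(focal_question)
for src, phrase, dst in links:
    print(f"  {src} --[{phrase}]--> {dst}")
```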

Mind Mapping has only one main or central idea, and all other ideas branch off from it radially in all directions. The central idea is preferably shown as an image rather than in words, and images are used throughout the map. “Around the central word you draw the 5 or 10 main ideas that relate to that word. You then take each of those child words and again draw the 5 or 10 main ideas that relate to each of those words.” A Concept Map has a more flexible form. It can have multiple hubs and clusters. It can also be designed around a central idea, but it does not have to be and often is not designed that way. It does not normally use images. A Concept Map is usually shown as a network, although it too can be shown as a hierarchical structure like a Mind Map when that is appropriate. Concept Maps can be very complex and are often meant to be viewed on a large-format screen.

 

4.10 PROCESS MAPS AND GANTT CHARTS

Process Mapping is an umbrella term that covers a variety of procedures for identifying and depicting visually each step in a complex procedure. It includes flow charts of various types (Activity Flow Charts, Commodity Flow Charts, Causal Flow Charts), Relationship Maps, and Value Stream Maps commonly used to assess and plan improvements for business and industrial processes. A Gantt Chart is a specific type of Process Map that was developed to facilitate the planning, scheduling, and management of complex industrial projects.

When to Use It

Process Maps, including Gantt Charts, are used by intelligence analysts to track, understand, and monitor the progress of activities of intelligence interest being undertaken by a foreign government, a criminal or terrorist group, or any other nonstate actor. For example, a Process Map can be used to monitor progress in developing a new weapons system, preparations for a major military action, or the execution of any other major plan that involves a sequence of observable steps. It is often used to identify and describe the modus operandi of a criminal or terrorist group, including the preparatory steps that such a group typically takes prior to a major action.

Value Added

The process of constructing a Process Map or a Gantt Chart helps analysts think clearly about what someone else needs to do to complete a complex project.

When a complex plan or process is understood well enough to be diagrammed or charted, analysts can then answer questions such as the following: What are they doing? How far along are they? What do they still need to do? What resources will they need to do it? How much time do we have before they have this capability? Is there any vulnerable point in this process where they can be stopped or slowed down?

The Process Map or Gantt Chart is a visual aid for communicating this information to the customer. If sufficient information can be obtained, the analyst’s understanding of the process will lead to a set of indicators that can be used to monitor the status of an ongoing plan or project.

The Method

There is a substantial difference in appearance between a Process Map and a Gantt Chart. In a Process Map, the steps in the process are diagrammed sequentially with various symbols representing starting and end points, decisions, and actions connected with arrows. Diagrams can be created with readily available software such as Microsoft Visio.

Example

The Intelligence Community has considerable experience monitoring terrorist groups. This example describes how an analyst would go about creating a Gantt Chart of a generic terrorist attack-planning process (see Figure 4.10). The analyst starts by making a list of all the tasks that terrorists must complete, estimating the schedule for when each task will be started and finished, and determining what resources are needed for each task. Some tasks need to be completed in a sequence, with each task being more or less completed before the next activity can begin. These are called sequential, or linear, activities. Other activities are not dependent upon completion of any other tasks. These may be done at any time before or after a particular stage is reached. These are called nondependent, or parallel, tasks.

Note whether each terrorist task to be performed is sequential or parallel. It is this sequencing of dependent and nondependent activities that is critical in determining how long any particular project or process will take. The more activities that can be worked in parallel, the greater the chances of a project being completed on time. The more tasks that must be done sequentially, the greater the chances of a single bottleneck delaying the entire project.
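This sequencing logic can be illustrated with a small forward-pass calculation over an invented task list; the earliest finish of the longest dependent chain sets the overall timeline:

```python
# Illustrative sketch of why sequencing matters: a forward pass over
# invented tasks computes each task's earliest finish given dependencies.
# Sequential tasks stack; parallel (nondependent) tasks overlap.
tasks = {  # name: (duration in weeks, list of prerequisite tasks)
    "recruit team":     (4, []),
    "raise funds":      (6, []),            # parallel with recruiting
    "surveil target":   (3, ["recruit team"]),
    "acquire materiel": (5, ["raise funds"]),
    "rehearse":         (2, ["surveil target", "acquire materiel"]),
}

finish = {}
def earliest_finish(name):
    if name not in finish:
        duration, prereqs = tasks[name]
        start = max((earliest_finish(p) for p in prereqs), default=0)
        finish[name] = start + duration
    return finish[name]

for name in tasks:
    print(f"{name}: done by week {earliest_finish(name)}")
# The longest dependent chain (funds -> materiel -> rehearse = 13 weeks)
# sets the overall timeline, however fast the parallel tasks finish.
```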

Gantt Charts that map a generic process can also be used to track data about a more specific process as it is received. For example, information about a specific group’s activities could be layered in by using a different color or line type. Layering in the specific data allows an analyst to compare what is expected with the actual data. The chart can then be used to identify and narrow gaps or anomalies in the data and even to identify and challenge assumptions about what is expected or what is happening.

5 Idea Generation

New ideas, and the combination of old ideas in new ways, are essential elements of effective intelligence analysis. Some structured techniques are specifically intended for the purpose of eliciting or generating ideas at the very early stage of a project, and they are the topic of this chapter.

 

Structured Brainstorming is not a group of colleagues just sitting around talking about a problem. Rather, it is a group process that follows specific rules and procedures. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, in law enforcement, potential suspects or avenues of investigation. It requires little training, and is one of the most frequently used structured techniques in the Intelligence Community.

The wiki format—including the ability to upload documents and even hand-drawn graphics or photos—allows analysts to capture and track brainstorming ideas and return to them at a later date.

Nominal Group Technique, often abbreviated NGT, serves much the same function as Structured Brainstorming, but it uses a quite different approach. It is the preferred technique when there is a concern that a senior member or outspoken member of the group may dominate the meeting, that junior members may be reluctant to speak up, or that the meeting may lead to heated debate. Nominal Group Technique encourages equal participation by requiring participants to present ideas one at a time in round-robin fashion until all participants feel that they have run out of ideas.

Starbursting is a form of brainstorming that focuses on generating questions rather than answers. To help in defining the parameters of a research project, use Starbursting to identify the questions that need to be answered. Questions start with the words Who, What, When, Where, Why, and How.

Cross-Impact Matrix is a technique that can be used after any form of brainstorming session that identifies a list of variables relevant to a particular analytic project. The results of the brainstorming session are put into a matrix, which is used to guide a group discussion that systematically examines how each variable influences all other variables to which it is judged to be related in a particular problem context.

Morphological Analysis is useful for dealing with complex, nonquantifiable problems for which little data are available and the chances for surprise are significant. It is a generic method for systematically identifying and considering all possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. It helps prevent surprises in intelligence analysis by generating a large number of outcomes for any complex situation, thus reducing the chance that events will play out in a way that the analyst has not previously imagined and has not at least considered.
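A minimal sketch of the enumeration at the heart of the method, using invented dimensions for a hypothetical attack-planning problem; three dimensions of three values each already yield 27 candidate scenarios:

```python
# Illustrative Morphological Analysis sketch: enumerate every combination
# of values across the dimensions of a problem space. Dimensions and
# values below are invented for a hypothetical attack-planning problem.
from itertools import product

dimensions = {
    "actor":  ["lone individual", "small cell", "insider"],
    "method": ["explosive", "contamination", "cyber"],
    "target": ["urban reservoir", "treatment plant", "distribution pipes"],
}

combinations = list(product(*dimensions.values()))
print(f"{len(combinations)} candidate scenarios")   # 3 x 3 x 3 = 27
for combo in combinations[:5]:                      # show a sample
    print(dict(zip(dimensions, combo)))
```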

Quadrant Crunching is an application of Morphological Analysis that uses key assumptions and their opposites as a starting point for systematically generating a large number of alternative outcomes. For example, an analyst might use Quadrant Crunching to identify the many different ways that a terrorist might attack a water supply. The technique forces analysts to rethink an issue from a broad range of perspectives and systematically question all the assumptions that underlie their lead hypothesis.

5.1 STRUCTURED BRAINSTORMING

When to Use It

Structured Brainstorming is one of the most widely used analytic techniques. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, for law enforcement, potential suspects or avenues of investigation.

 

The Method

There are seven general rules to follow, and then a twelve-step process for Structured Brainstorming. Here are the rules:

  • Be specific about the purpose and the topic of the brainstorming session. Announce the topic beforehand, and ask participants to come to the session with some ideas or to forward them to the facilitator before the session.
  • New ideas are always encouraged. Never criticize an idea during the divergent (creative) phase of the process no matter how weird or unconventional or improbable it might sound. Instead, try to figure out how the idea might be applied to the task at hand.
  • Allow only one conversation at a time, and ensure that everyone has an opportunity to speak.
  • Allocate enough time to do the brainstorming correctly. It often takes one hour to set the rules of the game, get the group comfortable, and exhaust the conventional wisdom on the topic. Only then do truly creative ideas begin to emerge.
  • To avoid groupthink and stimulate divergent thinking, include one or more “outsiders” in the group— that is, astute thinkers who do not share the same body of knowledge or perspective as the other group members but do have some familiarity with the topic.
  • Write it down! Track the discussion by using a whiteboard, an easel, or sticky notes (see Figure 5.1).
  • Summarize the key findings at the end of the session. Ask the participants to write down the most important thing they learned on a 3 x 5 card as they depart the session. Then prepare a short summary and distribute it to the participants (who may add items to the list) and to others interested in the topic (including supervisors and those who could not attend), either by e-mail or, preferably, on a wiki.
Figure 5.1: Picture of Brainstorming

Follow these steps:
  • Pass out Post-it or “sticky” notes and Sharpie-type pens or markers to all participants.
  • Pose the problem or topic in terms of a “focal question.” Display this question in one sentence for all to see on a large easel or whiteboard.
  • Ask the group to write down responses to the question with a few key words that will fit on a Post-it.
  • When a response is written down, the participant is asked to read it out loud or to give it to the facilitator who will read it out loud. Sharpie-type pens are used so that people can easily see what is written on the Post-it notes later in the exercise.
  • Stick all the Post-its on a wall in the order in which they are called out. Treat all ideas the same. Encourage participants to build on one another’s ideas.
  • Usually there is an initial spurt of ideas followed by pauses as participants contemplate the question. After five or ten minutes there is often a long pause of a minute or so. This slowing down suggests that the group has “emptied the barrel of the obvious” and is now on the verge of coming up with some fresh insights and ideas. Do not talk during this pause even if the silence is uncomfortable.
  • After two or three long pauses, conclude this divergent thinking phase of the brainstorming session.
  • Ask all participants as a group to go up to the wall and rearrange the Post-its in some organized manner. This arrangement might be by affinity groups (groups that have some common characteristic), scenarios, a predetermined priority scale, or a time sequence. Participants are not allowed to talk during this process. Some Post-its may be moved several times, but they will gradually be clustered into logical groupings. Post-its may be copied if necessary to fit one idea into more than one group.
  • When all Post-its have been arranged, ask the group to select a word or phrase that best describes each grouping.
  • Look for Post-its that do not fit neatly into any of the groups. Consider whether such an outlier is useless noise or the germ of an idea that deserves further attention.
  • Assess what the group has accomplished. Have new ideas or concepts been identified, have key issues emerged, or are there areas that need more work or further brainstorming?
  • To identify the potentially most useful ideas, the facilitator or group leader should establish up to five criteria for judging the value or importance of the ideas. If so desired, use the Ranking, Scoring, Prioritizing technique, described in chapter 4, to vote on, rank, or prioritize the ideas.
  • Set the analytic priorities accordingly, and decide on a work plan for the next steps in the analysis.

Relationship to Other Techniques

As discussed under “When to Use It,” some form of brainstorming is commonly combined with a wide variety of other techniques.

Structured Brainstorming is also called Divergent/Convergent Thinking.

Origins of This Technique

Brainstorming was a creativity technique used by advertising agencies in the 1940s. It was popularized in a book by advertising manager Alex Osborn, Applied Imagination: Principles and Procedures of Creative Problem Solving. There are many versions of brainstorming. The description here is a combination of information from Randy Pherson, “Structured Brainstorming,” in Handbook of Analytic Tools and Techniques (Reston, Va.: Pherson Associates, LLC, 2008), and training materials from the CIA’s Sherman Kent School for Intelligence Analysis.

5.2 VIRTUAL BRAINSTORMING

Virtual Brainstorming is the same as Structured Brainstorming except that it is done online with participants who are geographically dispersed or unable to meet in person.

The Method

Virtual Brainstorming is usually a two-phase process. It begins with the divergent process of creating as many relevant ideas as possible. The second phase is a process of convergence, when the ideas are sorted into categories, weeded out, prioritized, or combined and molded into a conclusion or plan of action.

5.3 NOMINAL GROUP TECHNIQUE

Nominal Group Technique (NGT) is a process for generating and evaluating ideas. It is a form of brainstorming, but NGT has always had its own identity as a separate technique.

When to Use It

NGT prevents the domination of a discussion by a single person. Use it whenever there is concern that a senior officer or executive or an outspoken member of the group will control the direction of the meeting by speaking before anyone else.

The Method

An NGT session starts with the facilitator asking an open-ended question, such as, “What factors will influence …?” “How can we learn if …?” “In what circumstances might … happen?” “What should be included or not included in this research project?” The facilitator answers any questions about what is expected of participants and then gives participants five to ten minutes to work privately to jot down on note cards their initial ideas in response to the focal question. This part of the process is followed by these steps:

  • The facilitator calls on one person at a time to present one idea. As each idea is presented, the facilitator writes a summary description on a flip chart or whiteboard. This process continues in a round-robin fashion until all ideas have been exhausted.
  • When no new ideas are forthcoming, the facilitator initiates a group discussion to ensure that there is a common understanding of what each idea means. The facilitator asks about each idea, one at a time, in the order presented, but no argument for or against any idea is allowed. It is possible at this time to expand or combine ideas, but no change can be made to any idea without the approval of the original presenter of the idea.
  • Voting to rank or prioritize the ideas as discussed in chapter 4 is optional, depending upon the purpose of the meeting. When voting is done, it is usually by secret ballot, although various voting procedures may be used depending in part on the number of ideas and the number of participants. It usually works best to employ a ratio of one vote for every three ideas presented. For example, if the facilitator lists twelve ideas, each participant is allowed to cast four votes.
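
The vote-allocation arithmetic and the ballot tally are simple to mechanize. Below is a minimal sketch in Python; the function names and the sample ballots are illustrative only, not part of the published technique.

    import math
    from collections import Counter

    def votes_per_participant(num_ideas: int, ratio: int = 3) -> int:
        # One vote for every three ideas presented, rounded up.
        return math.ceil(num_ideas / ratio)

    def tally_secret_ballots(ballots: list[list[str]]) -> list[tuple[str, int]]:
        # Count every vote across all ballots and rank ideas by total.
        counts = Counter(idea for ballot in ballots for idea in ballot)
        return counts.most_common()

    print(votes_per_participant(12))  # twelve ideas -> each participant casts 4 votes
    print(tally_secret_ballots([["idea A", "idea C"], ["idea A", "idea B"]]))

For twelve ideas the first call returns four votes per participant, matching the example above.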

Origins of This Technique

Nominal Group Technique was developed by A. L. Delbecq and A. H. Van de Ven and first described in “A Group Process Model for Problem Identification and Program Planning,” Journal of Applied Behavioral Science.

5.4 STARBURSTING

Starbursting is a form of brainstorming that focuses on generating questions rather than eliciting ideas or answers. It uses the six questions commonly asked by journalists: Who? What? When? Where? Why? and How?

When to Use It

Use Starbursting to help define your research project. After deciding on the idea, topic, or issue to be analyzed, brainstorm to identify the questions that need to be answered by the research. Asking the right questions is a common prerequisite to finding the right answer.

Origin of This Technique

Starbursting is one of many techniques developed to stimulate creativity.

5.5 CROSS-IMPACT MATRIX

Cross-Impact Matrix helps analysts deal with complex problems when “everything is related to everything else.” By using this technique, analysts and decision makers can systematically examine how each factor in a particular context influences all other factors to which it appears to be related.

When to Use It

The Cross-Impact Matrix is useful early in a project when a group is still in a learning mode trying to sort out a complex situation.

The Method

Assemble a group of analysts knowledgeable on various aspects of the subject. The group brainstorms a list of variables or events that would likely have some effect on the issue being studied. The project coordinator then creates a matrix and puts the list of variables or events down the left side of the matrix and the same variables or events across the top.

The matrix is then used to consider and record the relationship between each variable or event and every other variable or event.
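
As a bookkeeping aid, the matrix can be kept in any spreadsheet or, as in the minimal Python sketch below, a nested dictionary. The variable names and the sample judgments are illustrative, not prescribed by the technique.

    # Brainstormed variables appear both down the left side and across the top.
    variables = ["Economy", "Insurgency", "Drug trade"]

    # matrix[a][b] records the group's judgment of how variable a influences b.
    matrix = {a: {b: None for b in variables if b != a} for a in variables}

    matrix["Drug trade"]["Insurgency"] = "enhancing: trafficking profits fund insurgents"
    matrix["Economy"]["Drug trade"] = "inhibiting: licit growth reduces recruitment"

    # Walk every ordered pair so that no relationship goes unexamined.
    for a in variables:
        for b in matrix[a]:
            print(f"{a} -> {b}: {matrix[a][b] or 'not yet discussed'}")

Printing the unexamined pairs is the point: the technique's value lies in forcing the group to consider every directed relationship, not just the obvious ones.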

5.6 MORPHOLOGICAL ANALYSIS

Morphological Analysis is a method for systematically structuring and examining all the possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. The basic idea is to identify a set of variables and then look at all the possible combinations of these variables.

For intelligence analysis, it helps prevent surprise by generating a large number of feasible outcomes for any complex situation. This exercise reduces the chance that events will play out in a way that the analyst has not previously imagined and considered.

When to Use It

Morphological Analysis is most useful for dealing with complex, nonquantifiable problems for which little information is available and the chances for surprise are great. It can be used, for example, to identify possible variations of a threat, possible ways a crisis might occur between two countries, possible ways a set of driving forces might interact, or the full range of potential outcomes in any ambiguous situation.

Although Morphological Analysis is typically used for looking ahead, it can also be used in an investigative context to identify the full set of possible explanations for some event.

Value Added

By generating a comprehensive list of possible outcomes, analysts are in a better position to identify and select those outcomes that seem most credible or that most deserve attention. This list helps analysts and decision makers focus on what actions need to be undertaken today to prepare for events that could occur in the future. They can then take the actions necessary to prevent or mitigate the effect of bad outcomes and help foster better outcomes. The technique can also sensitize analysts to low probability/high impact developments, or “nightmare scenarios,” which could have significant adverse implications for policy or the allocation of resources.

The product of Morphological Analysis is often a set of potential noteworthy scenarios, with indicators of each, plus the intelligence collection requirements for each scenario. Another benefit is that Morphological Analysis leaves a clear audit trail about how the judgments were reached.

The Method

Morphological Analysis works through two common principles of creativity techniques: decomposition and forced association. Start by defining a set of key parameters or dimensions of the problem, and then break down each of those dimensions further into relevant forms or states or values that the dimension can assume—as in the example described later in this section. Two dimensions can be visualized as a matrix and three dimensions as a cube. In more complicated cases, multiple linked matrices or cubes may be needed to break the problem down into all its parts.

The principle of forced association then requires that every element be paired with and considered in connection with every other element in the morphological space. How that is done depends upon the complexity of the case. In a simple case, each combination may be viewed as a potential scenario or problem solution and examined from the point of view of its possibility, practicability, effectiveness, or other criteria. In complex cases, there may be thousands of possible combinations and computer assistance is required. With or without computer assistance, it is often possible to quickly eliminate about 90 percent of the combinations as not physically possible, impracticable, or undeserving of attention. This narrowing-down process allows the analyst to concentrate only on those combinations that are within the realm of the possible and most worthy of attention.
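
The mechanics of decomposition and forced association are easy to picture in code. The sketch below, in Python, generates the full morphological space for a hypothetical infrastructure-attack problem and screens it with a stand-in feasibility rule; the dimensions, states, and screening rule are all illustrative assumptions, since in practice the screening judgment is the analyst's.

    from itertools import product

    # Decomposition: the problem's dimensions and the states each can assume.
    dimensions = {
        "actor":  ["insider", "lone outsider", "organized group"],
        "method": ["contamination", "physical sabotage", "cyber attack"],
        "timing": ["peak demand", "off hours"],
    }

    # Forced association: every combination of states is one cell of the
    # morphological space (3 x 3 x 2 = 18 candidate combinations here).
    combos = [dict(zip(dimensions, c)) for c in product(*dimensions.values())]

    def is_feasible(combo: dict) -> bool:
        # A stand-in for the analyst's judgment of what is physically
        # possible and practicable.
        return not (combo["actor"] == "lone outsider"
                    and combo["method"] == "cyber attack")

    survivors = [c for c in combos if is_feasible(c)]
    print(f"{len(survivors)} of {len(combos)} combinations kept for closer review")

With real problems the space can run to thousands of combinations, which is why the narrowing-down step, by hand or with computer assistance, matters.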

5.7 QUADRANT CRUNCHING

Quadrant Crunching helps analysts avoid surprise by examining multiple possible combinations of selected key variables. It also helps analysts to identify and systematically challenge assumptions, explore the implications of contrary assumptions, and discover “unknown unknowns.” By generating multiple possible outcomes for any situation, Quadrant Crunching reduces the chance that events could play out in a way that has not previously been at least imagined and considered. Training and practice are required before an analyst should use this technique, and an experienced facilitator is recommended.

The technique forces analysts to rethink an issue from many perspectives and systematically question assumptions that underlie their lead hypothesis. As a result, analysts can be more confident that they have considered a broad range of possible permutations for a particularly complex and ambiguous situation. In so doing, analysts are more likely to anticipate most of the ways a situation can develop (or terrorists might launch an attack) and to spot indicators that signal a specific scenario is starting to develop.

The Method

Quadrant Crunching is sometimes described as a Key Assumptions Check on steroids. It is most useful when there is a well-established lead hypothesis that can be articulated clearly.

Quadrant Crunching calls on the analyst to break down the lead hypothesis into its component parts, identifying the key assumptions that underlie the lead hypothesis, or dimensions that focus on Who, What, When, Where, Why, and How. Once the key dimensions of the lead hypothesis are articulated, the analyst generates at least two examples of contrary dimensions.

 

Relationship to Other Techniques

Quadrant Crunching is a specific application of a generic method called Morphological Analysis (described in this chapter). It draws on the results of the Key Assumptions Check and can contribute to Multiple Scenarios Generation. It can also be used to identify Indicators.

Origins of This Technique

The Quadrant Crunching technique was developed by Randy Pherson and Alan Schwartz to meet a specific analytic need. It was first published in Randy Pherson, Handbook of Analytic Tools and Techniques (Reston, Va.: Pherson Associates, LLC, 2008).

6 Scenarios and Indicators

In the complex, evolving, uncertain situations that intelligence analysts and decision makers must deal with, the future is not easily predictable. Some events are intrinsically of low predictability. The best the analyst can do is to identify the driving forces that may determine future outcomes and monitor those forces as they interact to become the future. Scenarios are a principal vehicle for doing this. Scenarios are plausible and provocative stories about how the future might unfold.

 

Scenarios Analysis provides a framework for considering multiple plausible futures. As Peter Schwartz, author of The Art of the Long View, has argued, “The future is plural.”1 Trying to divine or predict a single outcome often is a disservice to senior intelligence officials, decision makers, and other clients. Generating several scenarios (for example, those that are most likely, least likely, and most dangerous) helps focus attention on the key underlying forces and factors most likely to influence how a situation develops. Analysts can also use scenarios to examine assumptions and deliver useful warning messages when high impact/low probability scenarios are included in the exercise.

 

Identification and monitoring of indicators or signposts can provide early warning of the direction in which the future is heading, but these early signs are not obvious. The human mind tends to see what it expects to see and to overlook the unexpected. These indicators take on meaning only in the context of a specific scenario with which they have been identified. The prior identification of a scenario and associated indicators can create an awareness that prepares the mind to recognize early signs of significant change.

 

Change sometimes happens so gradually that analysts don’t notice it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they typically are slow to change their minds in response to new evidence. By going on the record in advance to specify what actions or events would be significant and might change their minds, analysts can avert this type of rationalization.

 

Another benefit of scenarios is that they provide an efficient mechanism for communicating complex ideas. A scenario is a set of complex ideas that can be described with a short label.

 

Overview of Techniques

 

 

Indicators are a classic technique used to seek early warning of some undesirable event. Indicators are often paired with scenarios to identify which of several possible scenarios is developing. They are also used to measure change toward an undesirable condition, such as political instability, or toward a desirable condition, such as economic reform. Use indicators whenever you need to track a specific situation to monitor, detect, or evaluate change over time.

 

Indicators Validator is a new tool that is useful for assessing the diagnostic power of an indicator. An indicator is most diagnostic when it clearly points to the likelihood of only one scenario or hypothesis and suggests that the others are unlikely. Too frequently indicators are of limited value, because they may be consistent with several different outcomes or hypotheses.

 

6.1 SCENARIOS ANALYSIS

 

Identification and analysis of scenarios helps to reduce uncertainties and manage risk. By postulating different scenarios analysts can identify the multiple ways in which a situation might evolve. This process can help decision makers develop plans to exploit whatever opportunities the future may hold or, conversely, to avoid risks. Monitoring of indicators keyed to various scenarios can provide early warnings of the direction in which the future may be heading.

 

When to Use It

Scenarios Analysis is most useful when a situation is complex or when the outcomes are too uncertain to trust a single prediction. When decision makers and analysts first come to grips with a new situation or challenge, there usually is a degree of uncertainty about how events will unfold.

 

Value Added

When analysts are thinking about scenarios, they are rehearsing the future so that decision makers can be prepared for whatever direction that future takes. Instead of trying to estimate the most likely outcome (and being wrong more often than not), scenarios provide a framework for considering multiple plausible futures.

 

Analysts have learned, from past experience, that involving decision makers in a scenarios exercise is an effective way to communicate the results of this technique and to sensitize them to important uncertainties. Most participants find the process of developing scenarios as useful as any written report or formal briefing. Those involved in the process often benefit in several ways. Analysis of scenarios can:

 

  • Suggest indicators to monitor for signs that a particular future is becoming more or less likely.
  • Help analysts and decision makers anticipate what would otherwise be surprising developments by forcing them to challenge assumptions and consider plausible “wild card” scenarios or discontinuous events.
  • Produce an analytic framework for calculating the costs, risks, and opportunities represented by different outcomes.
  • Provide a means of weighing multiple unknown or unknowable factors and presenting a set of plausible outcomes.
  • Bound a problem by identifying plausible combinations of uncertain factors.

 

When decision makers or analysts from different intelligence disciplines or organizational cultures are included on the team, new insights invariably emerge as new information and perspectives are introduced.

 

6.1.1 The Method: Simple Scenarios

Of the three scenario techniques described here, Simple Scenarios is the easiest one to use. It is the only one of the three that can be implemented by an analyst working alone rather than in a group or a team, and it is the only one for which a coach or a facilitator is not needed.

Here are the steps for using this technique:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Make a list of forces, factors, and events that are likely to influence the future.
  • Organize the forces, factors, and events that are related to each other into five to ten affinity groups that are expected to be the driving forces in how the focal issue will evolve.
  • Label each of these drivers and write a brief description of each. For example, one training exercise for this technique is to forecast the future of the fictional country of Caldonia by identifying and describing the six drivers listed below.
  • Generate a matrix, as shown in Figure 6.1.1, with the list of drivers down the left side. The columns of the matrix are used to describe scenarios. Each scenario is assigned a value for each driver: strong or positive (+), weak or negative (–), or blank if neutral or no change. (A simple tabulation of such a matrix is sketched after these steps.)

 

  • Government effectiveness: To what extent does the government exert control over all populated regions of the country and effectively deliver services?
  • Economy: Does the economy sustain a positive growth rate?
  • Civil society: Can nongovernmental and local institutions provide appropriate services and security to the population?
  • Insurgency: Does the insurgency pose a viable threat to the government? Is it able to extend its dominion over greater portions of the country?
  • Drug trade: Is there a robust drug-trafficking economy?
  • Foreign influence: Do foreign governments, international financial organizations, or nongovernmental organizations provide military or economic assistance to the government?
  • Generate at least four different scenarios—a best case, a worst case, a mainline case, and at least one other—by assigning different values (+, –, or blank) to each driver.
  • This is a good time to reconsider both drivers and scenarios. Is there a better way to conceptualize and describe the drivers? Are there important forces that have not been included? Look across the matrix to see the extent to which each driver discriminates among the scenarios. If a driver has the same value across all scenarios, it is not discriminating and should be deleted. To stimulate thinking about other possible scenarios, consider the key assumptions that were made in deciding on the most likely scenario. What if some of these assumptions turn out to be invalid? If they are invalid, how might that affect the outcome, and are such outcomes included within the available set of scenarios?
  • For each scenario, write a one-page story to describe what that future looks like and/or how it might come about. The story should illustrate the interplay of the drivers.
  • For each scenario, describe the implications for the decision maker.
  • Generate a list of indicators, or “observables,” for each scenario that would help you discover that events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.
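
The driver-by-scenario matrix lends itself to a simple tabulation, including the check for non-discriminating drivers described above. A minimal Python sketch follows; the driver names echo the Caldonia training example, and the scenario values are illustrative.

    # Rows are drivers, columns are scenarios; values are "+", "-", or "" (neutral).
    scenarios = {
        "Best case":  {"Government effectiveness": "+", "Economy": "+",
                       "Insurgency": "-", "Drug trade": "+"},
        "Worst case": {"Government effectiveness": "-", "Economy": "-",
                       "Insurgency": "+", "Drug trade": "+"},
        "Mainline":   {"Government effectiveness": "+", "Economy": "",
                       "Insurgency": "+", "Drug trade": "+"},
    }

    # A driver with the same value in every scenario does not discriminate
    # among them and should be deleted from the matrix.
    for driver in next(iter(scenarios.values())):
        values = {column[driver] for column in scenarios.values()}
        if len(values) == 1:
            print(f"'{driver}' does not discriminate among scenarios; delete it")

Here “Drug trade” carries the same value in all three scenarios, so the check flags it for deletion.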

6.1.2 The Method: Alternative Futures Analysis

Alternative Futures Analysis and Multiple Scenarios Generation differ from Simple Scenarios in that they are usually larger projects that rely on a group of experts, often including academics and decision makers. They use a more systematic process, and the assistance of a knowledgeable facilitator is very helpful.

The steps in the Alternative Futures Analysis process are:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • If possible, group these various forces, factors, or events to form two critical drivers that are expected to determine the future outcome. In the example on the future of Cuba (Figure 6.1.2), the two key drivers are Effectiveness of Government and Strength of Civil Society. If there are more than two critical drivers, do not use this technique. Use the Multiple Scenarios Generation technique, which can handle a larger number of scenarios.
  • As in the Cuba example, define the two ends of the spectrum for each driver.
  • Draw a 2 × 2 matrix. Label the two ends of the spectrum for each driver.
  • Note that the square is now divided into four quadrants. Each quadrant represents a scenario generated by a combination of the two drivers. Now give a name to each scenario, and write it in the relevant quadrant.
  • Generate a narrative story of how each hypothetical scenario might come into existence. Include a hypothetical chronology of key dates and events for each of the scenarios.
  • Describe the implications of each scenario should it be what actually develops.
  • Generate a list of indicators, or “observables,” for each scenario that would help determine whether events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.

Figure 6.1.2 Alternative Futures Analysis: Cuba

6.1.3 The Method: Multiple Scenarios Generation

Multiple Scenarios Generation is similar to Alternative Futures Analysis except that with this technique, you are not limited to two critical drivers generating four scenarios. By using multiple 2 × 2 matrices pairing every possible combination of multiple driving forces, you can create a very large number of possible scenarios. This is sometimes desirable to make sure nothing has been overlooked. Once generated, the scenarios can be screened quickly without detailed analysis of each one.
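
The pairing step is mechanical and can be sketched in a few lines of Python. The drivers below borrow from the Cuba example plus one illustrative addition; the point is only to show how every pair of drivers yields a 2 × 2 matrix whose quadrants are candidate scenarios.

    from itertools import combinations, product

    # Each driver is a spectrum defined by its two extremes.
    drivers = {
        "Government effectiveness": ("strong", "weak"),
        "Strength of civil society": ("robust", "fragile"),
        "Foreign influence": ("heavy", "light"),
    }

    # Pair every two drivers into a 2 x 2 matrix; each quadrant is a
    # candidate scenario to be screened quickly.
    for a, b in combinations(drivers, 2):
        print(f"Matrix: {a} x {b}")
        for end_a, end_b in product(drivers[a], drivers[b]):
            print(f"  candidate scenario: {a} = {end_a}, {b} = {end_b}")

Three drivers produce three matrices and twelve candidate scenarios; six drivers would produce fifteen matrices and sixty, which is why quick screening, rather than detailed analysis of each candidate, is part of the method.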

Once sensitized to these different scenarios, analysts are more likely to pay attention to outlying data that would suggest that events are playing out in a way not previously imagined.

Training and an experienced facilitator are needed to use this technique. Here are the basic steps:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • Define the two ends of the spectrum for each driver.
  • Pair the drivers in a series of 2 × 2 matrices.
  • Develop a story or two for each quadrant of each 2 × 2 matrix.
  • From all the scenarios generated, select those most deserving of attention because they illustrate compelling and challenging futures not yet being considered.
  • Develop indicators for each scenario that could be tracked to determine whether or not the scenario is developing.

 

6.2 INDICATORS

Indicators are observable phenomena that can be periodically reviewed to help track events, spot emerging trends, and warn of unanticipated changes. An indicators list is a pre-established set of observable or potentially observable actions, conditions, facts, or events whose simultaneous occurrence would argue strongly that a phenomenon is present or is very likely to occur. Indicators can be monitored to obtain tactical, operational, or strategic warnings of some future development that, if it were to occur, would have a major impact.

The identification and monitoring of indicators are fundamental tasks of intelligence analysis, as they are the principal means of avoiding surprise. They are often described as forward-looking or predictive indicators. In the law enforcement community indicators are also used to assess whether a target’s activities or behavior is consistent with an established pattern. These are often described as backward-looking or descriptive indicators.

When to Use It

Indicators provide an objective baseline for tracking events, instilling rigor into the analytic process, and enhancing the credibility of the final product. Descriptive indicators are best used to help the analyst assess whether there are sufficient grounds to believe that a specific action is taking place. They provide a systematic way to validate a hypothesis or help substantiate an emerging viewpoint.

In the private sector, indicators are used to track whether a new business strategy is working or whether a low-probability scenario is developing that offers new commercial opportunities.

Value Added

The human mind sometimes sees what it expects to see and can overlook the unexpected. Identification of indicators creates an awareness that prepares the mind to recognize early signs of significant change. Change often happens so gradually that analysts don’t see it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they can be reluctant to change their minds in response to new evidence. By specifying in advance the threshold for what actions or events would be significant and might cause them to change their minds, analysts can seek to avoid this type of rationalization.

Defining explicit criteria for tracking and judging the course of events makes the analytic process more visible and available for scrutiny by others, thus enhancing the credibility of analytic judgments. Including an indicators list in the finished product helps decision makers track future developments and builds a more concrete case for the analytic conclusions.

Preparation of a detailed indicator list by a group of knowledgeable analysts is usually a good learning experience for all participants. It can be a useful medium for an exchange of knowledge between analysts from different organizations or those with different types of expertise—for example, analysts who specialize in a particular country and those who are knowledgeable about a particular field, such as military mobilization, political instability, or economic development.

The indicator list becomes the basis for directing collection efforts and for routing relevant information to all interested parties. It can also serve as the basis for the analyst’s filing system to keep track of these indicators.

When analysts or decision makers are sharply divided over the interpretation of events (for example, how the war in Iraq or Afghanistan is progressing), over the guilt or innocence of a “person of interest,” or over the culpability of a counterintelligence suspect, indicators can help depersonalize the debate by shifting attention away from personal viewpoints to more objective criteria. Emotions often can be defused and substantive disagreements clarified if all parties agree in advance on a set of criteria that would demonstrate that developments are—or are not—moving in a particular direction or that a person’s behavior suggests that he or she is guilty as suspected or is indeed a spy.

Potential Pitfalls

The quality of indicators is critical, as poor indicators lead to analytic failure. For this reason, analysts must periodically review the validity and relevance of an indicators list.

The Method

The first step in using this technique is to create a list of indicators. (See Figure 6.2b for a sample indicators list.) The second step is to monitor these indicators regularly to detect signs of change. Developing the indicator list can range from a simple process to a sophisticated team effort.

For example, with minimum effort you could jot down a list of things you would expect to see if a particular situation were to develop as feared or foreseen. Or you could join with others to define multiple variables that would influence a situation and then rank the value of each variable based on incoming information about relevant events, activities, or official statements. In both cases, some form of brainstorming, hypothesis generation, or scenario development is often used to identify the indicators.

A good indicator must meet several criteria, including the following:

Observable and collectible. There must be some reasonable expectation that, if present, the indicator will be observed and reported by a reliable source. If an indicator is to monitor change over time, it must be collectible over time.
Valid. An indicator must be clearly relevant to the end state the analyst is trying to predict or assess, and it must be inconsistent with all or at least some of the alternative explanations or outcomes. It must accurately measure the concept or phenomenon at issue.
Reliable. Data collection must be consistent when comparable methods are used. Those observing and collecting data must observe the same things. Reliability requires precise definition of the indicators.
Stable. An indicator must be useful over time to allow comparisons and to track events. Ideally, the indicator should be observable early in the evolution of a development so that analysts and decision makers have time to react accordingly.
Unique. An indicator should measure only one thing and, in combination with other indicators, should point only to the phenomenon being studied. Valuable indicators are those that are not only consistent with a specified scenario or hypothesis but are also inconsistent with alternative scenarios or hypotheses. The Indicators Validator tool, described later in this chapter, can be used to check the diagnosticity of indicators.

Maintaining separate indicator lists for alternative scenarios or hypotheses is particularly useful when making a case that a certain event is unlikely to happen, as in What If? Analysis or High Impact/Low Probability Analysis.

After creating the indicator list or lists, you or the analytic team should regularly review incoming reporting and note any changes in the indicators. To the extent possible, you or the team should decide well in advance which critical indicators, if observed, will serve as early-warning decision points. In other words, if a certain indicator or set of indicators is observed, it will trigger a report advising of some modification in the intelligence appraisal of the situation.
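
The early-warning decision point amounts to a simple trigger rule that can be automated against incoming reporting. A minimal sketch follows, assuming indicators are tracked simply as observed or not observed; the indicator names are invented for illustration.

    # Indicators designated in advance as early-warning decision points.
    CRITICAL = {"reserve units mobilized", "national leadership relocates"}

    def check_for_trigger(observed: set[str]) -> bool:
        # True means a report revising the intelligence appraisal is due.
        hits = CRITICAL & observed
        if hits:
            print("Decision point reached; observed:", ", ".join(sorted(hits)))
        return bool(hits)

    check_for_trigger({"fuel stockpiling reported", "reserve units mobilized"})

Real indicator tracking would add the rating scales, confidence levels, and narrative descriptions listed below, but the trigger logic stays this simple.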

Techniques for increasing the sophistication and credibility of an indicator list include the following:

Establishing a scale for rating each indicator
Providing specific definitions of each indicator
Rating the indicators on a scheduled basis (e.g., monthly, quarterly, or annually)
Assigning a level of confidence to each rating
Providing a narrative description for each point on the rating scale, describing what one would expect to observe at that level
Listing the sources of information used in generating the rating

6.3 INDICATORS VALIDATOR

The Indicators Validator is a simple tool for assessing the diagnostic power of indicators.

When to Use It

The Indicators Validator is an essential tool to use when developing indicators for competing hypotheses or alternative scenarios. Once an analyst has developed a set of alternative scenarios or future worlds, the next step is to generate indicators for each scenario (or world) that would appear if that particular world were beginning to emerge. A critical question that is not often asked is whether a given indicator would appear only in the scenario to which it is assigned or also in one or more alternative scenarios. Indicators that could appear in several scenarios are not considered diagnostic, suggesting that they are not particularly useful in determining whether a specific scenario is emerging. The ideal indicator is highly consistent for the world to which it is assigned and highly inconsistent for all other worlds.

Value Added

Employing the Indicators Validator to identify and dismiss nondiagnostic indicators can significantly increase the credibility of an analysis. By applying the tool, analysts can rank order their indicators from most to least diagnostic and decide how far up the list they want to draw the line in selecting the indicators that will be used in the analysis. In some circumstances, analysts might discover that most or all the indicators for a given scenario have been eliminated because they are also consistent with other scenarios, forcing them to brainstorm a new and better set of indicators. If analysts find it difficult to generate independent lists of diagnostic indicators for two scenarios, it may be that the scenarios are not sufficiently dissimilar, suggesting that they should be combined.

The Method

The first step is to populate a matrix similar to that used for Analysis of Competing Hypotheses. This can be done manually or by using the Indicators Validator software. The matrix should list:

Alternative scenarios or worlds (or competing hypotheses) along the top of the matrix (as is done for hypotheses in Analysis of Competing Hypotheses)
Indicators that have already been generated for all the scenarios down the left side of the matrix (as is done with evidence in Analysis of Competing Hypotheses)

In each cell of the matrix, assess whether the indicator for that particular scenario is

 

Highly likely to appear
Likely to appear
Could appear
Unlikely to appear
Highly unlikely to appear

Once this process is complete, re-sort the indicators so that the most discriminating indicators are displayed at the top of the matrix and the least discriminating indicators at the bottom.

The most discriminating indicator is “Highly Likely” to emerge in one scenario and “Highly Unlikely” to emerge in all other scenarios.
The least discriminating indicator is “Highly Likely” to appear in all scenarios.
Most indicators will fall somewhere in between.

Indicators with the most “Highly Unlikely” and “Unlikely” ratings are the most discriminating and should be retained.
Indicators with few or no “Highly Unlikely” or “Unlikely” ratings should be eliminated.
Once nondiscriminating indicators have been eliminated, regroup the indicators under their assigned scenario. If most indicators for a particular scenario have been eliminated, develop new—and more diagnostic—indicators for that scenario.

Recheck the diagnostic value of any new indicators by applying the Indicators Validator to them as well.
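
The sorting logic can be approximated in a few lines. The sketch below scores each indicator by counting its “Unlikely” and “Highly Unlikely” ratings, per the rule of thumb above; the scenarios, indicators, and ratings are invented for illustration, and the actual Indicators Validator software may weight ratings differently.

    # How likely each indicator is to appear under each scenario.
    # HL/L/C/U/HU = highly likely, likely, could appear, unlikely, highly unlikely.
    ratings = {
        "troops mass on border": {"Invasion": "HL", "Posturing": "L", "Detente": "HU"},
        "hostile state media":   {"Invasion": "HL", "Posturing": "HL", "Detente": "C"},
        "ambassadors recalled":  {"Invasion": "HL", "Posturing": "U", "Detente": "HU"},
    }

    def discrimination(indicator: str) -> int:
        # More U/HU ratings means the indicator rules out more scenarios.
        return sum(r in ("U", "HU") for r in ratings[indicator].values())

    for indicator in sorted(ratings, key=discrimination, reverse=True):
        print(f"{indicator}: discrimination score {discrimination(indicator)}")

In this toy matrix “hostile state media” scores zero: it is consistent with every scenario, so it would be eliminated as nondiagnostic.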

 

7 Hypothesis Generation and Testing

Intelligence analysis will never achieve the accuracy and predictability of a true science, because the information with which analysts must work is typically incomplete, ambiguous, and potentially deceptive. Intelligence analysis can, however, benefit from some of the lessons of science and adapt some of the elements of scientific reasoning.

The scientific process involves observing, categorizing, formulating hypotheses, and then testing those hypotheses. Generating and testing hypotheses is a core function of intelligence analysis. A possible explanation of the past or a judgment about the future is a hypothesis that needs to be tested by collecting and presenting evidence.

The generation and testing of hypotheses is a skill, and its subtleties do not come naturally. It is a form of reasoning that people can learn to use for dealing with high-stakes situations. What does come naturally is drawing on our existing body of knowledge and experience (mental model) to make an intuitive judgment. In most circumstances in our daily lives, this is an efficient approach that works most of the time.

When one is facing a complex choice of options, the reliance on intuitive judgment risks following a practice called “satisficing,” a term coined by Nobel Prize winner Herbert Simon by combining the words satisfy and suffice.1 It means being satisfied with the first answer that seems adequate, as distinct from assessing multiple options to find the optimal or best answer. The “satisficer” who does seek out additional information may look only for information that supports this initial answer rather than looking more broadly at all the possibilities.

 

The truth of a hypothesis can never be proven beyond doubt by citing only evidence that is consistent with the hypothesis, because the same evidence may be and often is consistent with one or more other hypotheses. Science often proceeds by refuting or disconfirming hypotheses. A hypothesis that cannot be refuted should be taken just as seriously as a hypothesis that seems to have a lot of evidence in favor of it. A single item of evidence that is shown to be inconsistent with a hypothesis can be sufficient grounds for rejecting that hypothesis. The most tenable hypothesis is often the one with the least evidence against it.

Analysts often test hypotheses by using a form of reasoning known as abduction, which differs from the two better known forms of reasoning, deduction and induction. Abductive reasoning starts with a set of facts. One then develops hypotheses that, if true, would provide the best explanation for these facts. The most tenable hypothesis is the one that best explains the facts. Because of the uncertainties inherent to intelligence analysis, conclusive proof or refutation of hypotheses is the exception rather than the rule.

The Analysis of Competing Hypotheses (ACH) technique was developed by Richards Heuer specifically for use in intelligence analysis. It is the application to intelligence analysis of Karl Popper’s theory of science.2 Popper was one of the most influential philosophers of science of the twentieth century. He is known for, among other things, his position that scientific reasoning should start with multiple hypotheses and proceed by rejecting or eliminating hypotheses, while tentatively accepting only those hypotheses that cannot be refuted.

This chapter describes techniques that are intended to be used specifically for hypothesis generation.

 

Overview of Techniques

Hypothesis Generation is a category that includes three specific techniques—Simple Hypotheses, Multiple Hypotheses Generator, and Quadrant Hypothesis Generation. Simple Hypotheses is the easiest of the three, but it is not always the best selection. Use Multiple Hypotheses Generator to identify a large set of all possible hypotheses. Quadrant Hypothesis Generation is used to identify a set of hypotheses when there are just two driving forces that are expected to determine the outcome.

Diagnostic Reasoning applies hypothesis testing to the evaluation of significant new information. Such information is evaluated in the context of all plausible explanations of that information, not just in the context of the analyst’s well-established mental model. The use of Diagnostic Reasoning reduces the risk of surprise, as it ensures that an analyst will have given at least some consideration to alternative conclusions. Diagnostic Reasoning differs from the Analysis of Competing Hypotheses (ACH) technique in that it is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

Analysis of Competing Hypotheses requires the analyst to identify and then refute all reasonably possible hypotheses, which forces the analyst to recognize the full uncertainty inherent in most analytic situations. At the same time, the ACH software helps the analyst sort and manage evidence to identify paths for reducing that uncertainty.

Argument Mapping is a method that can be used to put a single hypothesis to a rigorous logical test. The structured visual representation of the arguments and evidence makes it easier to evaluate any analytic judgment. Argument Mapping is a logical follow on to an ACH analysis. It is a detailed presentation of the arguments for and against a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The successful application of Argument Mapping to the hypothesis favored by the ACH analysis would increase confidence in the results of both analyses.

Deception Detection is discussed in this chapter because the possibility of deception by a foreign intelligence service or other adversary organization is a distinctive type of hypothesis that analysts must frequently consider. The possibility of deception can be included as a hypothesis in any ACH analysis. Information identified through the Deception Detection technique can then be entered as evidence in the ACH matrix.

7.1 HYPOTHESIS GENERATION

In broad terms, a hypothesis is a potential explanation or conclusion that is to be tested by collecting and presenting evidence. It is a declarative statement that has not been established as true—an “educated guess” based on observation that needs to be supported or refuted by more observation or through experimentation.

A good hypothesis:

Is written as a definite statement, not as a question.
Is based on observations and knowledge.
Is testable and falsifiable.
Predicts the anticipated results clearly.

Contains a dependent and an independent variable. The dependent variable is the phenomenon being explained. The independent variable does the explaining.

When to Use It

Analysts should use some structured procedure to develop multiple hypotheses at the start of a project when:

The importance of the subject matter requires systematic analysis of all alternatives.
Many variables are involved in the analysis.
There is uncertainty about the outcome.
Analysts or decision makers hold competing views.

Value Added

Generating multiple hypotheses at the start of a project can help analysts avoid common analytic pitfalls such as these:

Coming to premature closure.
Being overly influenced by first impressions.
Selecting the first answer that appears “good enough.”
Focusing on a narrow range of alternatives representing marginal, not radical, change.
Opting for what elicits the most agreement or is desired by the boss.
Selecting a hypothesis only because it avoids a previous error or replicates a past success.

7.1.1 The Method: Simple Hypotheses

To use the Simple Hypotheses method, define the problem and determine how the hypotheses are expected to be used at the beginning of the project.

Gather together a diverse group to review the available evidence and explanations for the issue, activity, or behavior that you want to evaluate. In forming this diverse group, consider that you will need different types of expertise for different aspects of the problem, cultural expertise about the geographic area involved, different perspectives from various stakeholders, and different styles of thinking (left brain/right brain, male/female). Then:

Ask each member of the group to write down on a 3 × 5 card up to three alternative explanations or hypotheses. Prompt creative thinking by using the following:

Situational logic: Take into account all the known facts and an understanding of the underlying forces at work at that particular time and place.
Historical analogies: Consider examples of the same type of phenomenon.
Theory: Consider theories based on many examples of how a particular type of situation generally plays out.

Collect the cards and display the results on a whiteboard. Consolidate the list to avoid any duplication.
Employ additional group and individual brainstorming techniques to identify key forces and factors.
Aggregate the hypotheses into affinity groups and label each group.
Use problem restatement and consideration of the opposite to develop new ideas.

Update the list of alternative hypotheses. If the hypotheses will be used in ACH, strive to keep them mutually exclusive—that is, if one hypothesis is true all others must be false.
Have the group clarify each hypothesis by asking the journalist’s classic list of questions: Who, What, When, Where, Why, and How?

Select the most promising hypotheses for further exploration.

7.1.2 The Method: Multiple Hypotheses Generator

The Multiple Hypotheses Generator provides a structured mechanism for generating a wide array of hypotheses. Analysts often can brainstorm a useful set of hypotheses without such a tool, but the Hypotheses Generator may give greater confidence than other techniques that a critical alternative or an outlier has not been overlooked. To use this method:

Define the issue, activity, or behavior that is subject to examination. Do so by using the journalist’s classic list of Who, What, When, Where, Why, and How for explaining this issue, activity, or behavior.
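
The published description of the generator continues beyond what is excerpted here, but its combinatorial core can be pictured in a short sketch: assuming the generator works by permuting plausible alternative answers to each of the journalist's questions, every combination of answers is a candidate hypothesis to be screened. The questions chosen and the alternatives listed are illustrative.

    from itertools import product

    # Plausible alternative answers to a subset of the journalist's questions.
    alternatives = {
        "Who":  ["state actor", "criminal group", "insider"],
        "What": ["theft of data", "sabotage"],
        "Why":  ["profit", "coercion"],
    }

    # Every combination of answers is a candidate hypothesis (3 x 2 x 2 = 12).
    for who, what, why in product(*alternatives.values()):
        print(f"Candidate hypothesis: {who} committed {what} for {why}")

Most candidates will be discarded quickly; the value of the exercise is the outliers that would not have surfaced in unaided brainstorming.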

7.1.3 The Method: Quadrant Hypothesis Generation

Use the quadrant technique to identify a basic set of hypotheses when there are two easily identified key driving forces that will determine the outcome of an issue. The technique identifies four potential scenarios that represent the extreme conditions for each of the two major drivers. It spans the logical possibilities inherent in the relationship and interaction of the two driving forces, thereby generating options that analysts otherwise may overlook.

These are the steps for Quadrant Hypothesis Generation:

Identify the two main drivers by using techniques such as Structured Brainstorming or by surveying subject matter experts. A discussion to identify the two main drivers can be a useful exercise in itself.
Construct a 2 × 2 matrix using the two drivers.
Think of each driver as a continuum from one extreme to the other. Write the extremes of each of the drivers at the end of the vertical and horizontal axes.

Fill in each quadrant with the details of what the end state would be as shaped by the two drivers.
Develop signposts that show whether events are moving toward one of the hypotheses.
Use the signposts or indicators of change to develop intelligence collection strategies to determine the direction in which events are moving.

7.2 DIAGNOSTIC REASONING

Diagnostic Reasoning applies hypothesis testing to the evaluation of a new development, the assessment of a new item of intelligence, or the reliability of a source. It is different from the Analysis of Competing Hypotheses (ACH) technique in that Diagnostic Reasoning is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

When to Use It

Analysts should use Diagnostic Reasoning instead of making a snap intuitive judgment when assessing the meaning of a new development in their area of interest, or the significance or reliability of a new intelligence report. The use of this technique is especially important when the analyst’s intuitive interpretation of a new piece of evidence is that the new information confirms what the analyst was already thinking.

Value Added

Diagnostic Reasoning helps balance people’s natural tendency to interpret new information as consistent with their existing understanding of what is happening—that is, the analyst’s mental model. It is a common experience to discover that much of the evidence supporting what one believes is the most likely conclusion is really of limited value in confirming one’s existing view, because that same evidence is also consistent with alternative conclusions. One needs to evaluate new information in the context of all possible explanations of that information, not just in the context of a well-established mental model. The use of Diagnostic Reasoning reduces the element of surprise by ensuring that at least some consideration has been given to alternative conclusions.

The Method

Diagnostic Reasoning is a process by which you try to refute alternative judgments rather than confirm what you already believe to be true. Here are the steps to follow:

* When you receive a potentially significant item of information, make a mental note of what it seems to mean (i.e., an explanation of why something happened or what it portends for the future). Make a quick intuitive judgment based on your current mental model.

* Brainstorm, either alone or in a small group, the alternative judgments that another analyst with a different perspective might reasonably deem to have a chance of being accurate. Make a list of these alternatives.

* For each alternative, ask the following question: If this alternative were true or accurate, how likely is it that I would see this new information?

* Make a tentative judgment based on consideration of these alternatives. If the new information is equally likely with each of the alternatives, the information has no diagnostic value and can be ignored. If the information is clearly inconsistent with one or more alternatives, those alternatives might be ruled out. Following this mode of thinking for each of the alternatives, decide which alternatives need further attention and which can be dropped from consideration. (This test is sketched after these steps.)

* Proceed further by seeking evidence to refute the remaining alternatives rather than confirm them.
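
The diagnostic-value test referenced in these steps reduces to a simple comparison. A minimal sketch follows, with invented alternatives and likelihood judgments standing in for the analyst's assessments.

    # How likely the new item of information would be if each alternative
    # explanation were true (the analyst's judgment, rendered as labels).
    likelihood_given = {
        "routine exercise":    "likely",
        "covert mobilization": "likely",
        "deception operation": "unlikely",
    }

    if len(set(likelihood_given.values())) == 1:
        # Equally likely under every alternative: no diagnostic value.
        print("Information is nondiagnostic; it can be ignored")
    else:
        for alternative, likelihood in likelihood_given.items():
            if likelihood == "unlikely":
                print(f"Information argues against: {alternative}")

Here the new information helps rule out only the deception hypothesis; it cannot distinguish between the two alternatives with which it is equally consistent.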

7.3 ANALYSIS OF COMPETING HYPOTHESES

Analysis of Competing Hypotheses (ACH) is a technique that assists analysts in making judgments on issues that require careful weighing of alternative explanations or estimates. ACH involves identifying a set of mutually exclusive alternative explanations or outcomes (presented as hypotheses), assessing the consistency or inconsistency of each item of evidence with each hypothesis, and selecting the hypothesis that best fits the evidence. The idea behind this technique is to refute rather than to confirm each of the hypotheses. The most likely hypothesis is the one with the least evidence against it, not the one with the most evidence for it.
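
The heart of ACH is the evidence-versus-hypothesis matrix and the inconsistency count described in “The Method” below. The minimal Python sketch that follows uses a simple unweighted count, with “II” counted double; the hypotheses, evidence labels, and ratings are invented, and the ACH software described later adds weighting and sorting on top of this core.

    # "C" consistent, "CC" very consistent, "I" inconsistent,
    # "II" strongly inconsistent, "" not relevant.
    matrix = {
        "Evidence 1": {"H1": "C",  "H2": "I",  "H3": "C"},
        "Evidence 2": {"H1": "CC", "H2": "II", "H3": ""},
        "Evidence 3": {"H1": "I",  "H2": "C",  "H3": "C"},
    }

    def inconsistency_score(hypothesis: str) -> int:
        # Count "I" ratings, with "II" counted double.
        return sum({"I": 1, "II": 2}.get(row[hypothesis], 0)
                   for row in matrix.values())

    for h in sorted(("H1", "H2", "H3"), key=inconsistency_score):
        print(f"{h}: inconsistency score {inconsistency_score(h)}")
    # The hypothesis with the lowest score is tentatively the most likely.

Note that the score rewards the absence of contrary evidence, not the volume of supporting evidence, which is exactly the refutation logic the technique is built on.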

When to Use It

ACH is appropriate for almost any analysis where there are alternative explanations for what has happened, is happening, or is likely to happen. Use it when the judgment or decision is so important that you cannot afford to be wrong. Use it when your gut feelings are not good enough, and when you need a systematic approach to prevent being surprised by an unforeseen outcome. Use it on controversial issues when it is desirable to identify precise areas of disagreement and to leave an audit trail to show what evidence was considered and how different analysts arrived at their judgments.

ACH also is particularly helpful when an analyst must deal with the potential for denial and deception, as it was initially developed for that purpose.

Value Added

There are a number of different ways by which ACH helps analysts produce a better analytic product. These include the following:

* It prompts analysts to start by developing a full set of alternative hypotheses. This process reduces the risk of what is called “satisficing”—going with the first answer that comes to mind that seems to meet the need. It ensures that all reasonable alternatives are considered before the analyst gets locked into a preferred conclusion.

* It requires analysts to try to refute hypotheses rather than support a single hypothesis. The technique helps analysts overcome the tendency to search for or interpret new information in a way that confirms their preconceptions and avoids information and interpretations that contradict prior beliefs. A word of caution, however. ACH works this way only when the analyst approaches an issue with a relatively open mind. An analyst who is already committed to a belief in what the right answer is will often find a way to interpret the evidence as consistent with that belief. In other words, as an antidote to confirmation bias, ACH is similar to a flu shot. Taking the flu shot will usually keep you from getting the flu, but it won’t make you well if you already have the flu.

* It helps analysts to manage and sort evidence in analytically useful ways. It helps maintain a record of relevant evidence and tracks how that evidence relates to each hypothesis. It also enables analysts to sort data by type, date, and diagnosticity of the evidence.

* It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached than would otherwise be possible.

* It provides a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.

* It leaves a clear audit trail as to how the analysis was done.
As a tool for interoffice or interagency collaboration, ACH ensures that all analysts are working from the same database of evidence, arguments, and assumptions and that each member of the team has had an opportunity to express his or her view on how that information relates to the likelihood of each hypothesis. Users of ACH report that:

* The technique helps them gain a better understanding of the differences of opinion with other analysts or between analytic offices.

* Review of the ACH matrix provides a systematic basis for identification and discussion of differences between two or more analysts.

* Reference to the matrix helps depersonalize the argumentation when there are differences of opinion.

The Method

Simultaneous evaluation of multiple, competing hypotheses is difficult to do without some type of analytic aid. To retain three or five or seven hypotheses in working memory and note how each item of information fits into each hypothesis is beyond the capabilities of most people. It takes far greater mental agility than the common practice of seeking evidence to support a single hypothesis that is already believed to be the most likely answer. ACH can be accomplished, however, with the help of the following eight-step process:

* First, identify the hypotheses to be considered. Hypotheses should be mutually exclusive; that is, if one hypothesis is true, all others must be false. The list of hypotheses should include all reasonable possibilities. Include a deception hypothesis if that is appropriate. For each hypothesis, develop a brief scenario or “story” that explains how it might be true.

* Make a list of significant “evidence,” which for ACH means everything that is relevant to evaluating the hypotheses—including evidence, arguments, assumptions, and the absence of things one would expect to see if a hypothesis were true. It is important to include assumptions as well as factual evidence, because the matrix is intended to be an accurate reflection of the analyst’s thinking about the topic. If the analyst’s thinking is driven by assumptions rather than hard facts, this needs to become apparent so that the assumptions can be challenged. A classic example of absence of evidence is the Sherlock Holmes story of the dog that did not bark in the night. The failure of the dog to bark was persuasive evidence that the guilty party was not an outsider but an insider who was known to the dog.

* Analyze the diagnosticity of the evidence, arguments, and assumptions to identify which inputs are most influential in judging the relative likelihood of the hypotheses. Assess each input by working across the matrix. For each hypothesis, ask, “Is this input consistent with the hypothesis, inconsistent with the hypothesis, or is it not relevant?” If it is consistent, place a “C” in the box; if it is inconsistent, place an “I”; if it is not relevant to that hypothesis, leave the box blank. If a specific item of evidence, argument, or assumption is particularly compelling, place a double “CC” in the box; if it strongly undercuts the hypothesis, place a double “II.” When you are asking whether an input is consistent or inconsistent with a specific hypothesis, a common response is, “It all depends on….” That means the rating for the hypothesis will be based on an assumption—whatever assumption the rating “depends on.” You should write down all such assumptions. After completing the matrix, look for any pattern in those assumptions—that is, the same assumption being made when rating multiple items of evidence. After sorting the evidence for diagnosticity, note how many of the highly diagnostic inconsistency ratings are based on assumptions. Consider how much confidence you should have in those assumptions and then adjust the confidence in the ACH Inconsistency Scores accordingly. See Figure 7.3a for an example.

* Refine the matrix by reconsidering the hypotheses. Does it make sense to combine two hypotheses into one or to add a new hypothesis that was not considered at the start? If a new hypothesis is added, go back and evaluate all the evidence for this hypothesis. Additional evidence can be added at any time.

* Draw tentative conclusions about the relative likelihood of each hypothesis, basing your conclusions on an analysis of the diagnosticity of each item of evidence. The software calculates an inconsistency score based on the number of “I” or “II” ratings, or a weighted inconsistency score that also takes into account the weight assigned to each item of evidence. The hypothesis with the lowest inconsistency score is tentatively the most likely hypothesis; the one with the most inconsistencies is the least likely. (The scoring logic is illustrated in the sketch following this list.)

* Analyze the sensitivity of your tentative conclusion to a change in the interpretation of a few critical items of evidence. Do this by using the ACH software to sort the evidence by diagnosticity. This identifies the most diagnostic evidence that is driving your conclusion. See Figure 7.3b. Consider the consequences for your analysis if one or more of these critical items of evidence were wrong or deceptive or subject to a different interpretation. If a different interpretation would be sufficient to change your conclusion, go back and do everything that is reasonably possible to double check the accuracy of your interpretation.

* Report the conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one. State which items of evidence were the most diagnostic and how compelling a case they make in distinguishing the relative likelihood of the hypotheses.

* Identify indicators or milestones for future observation. Generate two lists: the first focusing on future events or what might be developed through additional research that would help prove the validity of your analytic judgment, and the second, a list of indicators that would suggest that your judgment is less likely to be correct. Monitor both lists on a regular basis, remaining alert to whether new information strengthens or weakens your case.
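To make the scoring step concrete, here is a minimal sketch of how inconsistency and weighted inconsistency scores can be computed. The hypotheses, evidence items, weights, and ratings are all invented for illustration, and the snippet is not the ACH software’s actual implementation.

```python
# Minimal sketch of ACH inconsistency scoring (hypothetical data).
# Ratings: "CC" (strongly consistent), "C", "" (not relevant), "I", "II".

hypotheses = ["H1", "H2", "H3"]
evidence = [
    # (description, analyst-assigned weight, {hypothesis: rating})
    ("Source A report", 2.0, {"H1": "C", "H2": "I", "H3": "II"}),
    ("Imagery of site", 1.0, {"H1": "CC", "H2": "I", "H3": "I"}),
    ("Absence of expected activity", 1.5, {"H1": "C", "H2": "II", "H3": ""}),
]

# Only inconsistent ratings count against a hypothesis; "II" counts double.
PENALTY = {"I": 1, "II": 2}

def inconsistency_scores(weighted=False):
    scores = {h: 0.0 for h in hypotheses}
    for _description, weight, ratings in evidence:
        for h, rating in ratings.items():
            scores[h] += PENALTY.get(rating, 0) * (weight if weighted else 1)
    return scores

print(inconsistency_scores())               # plain inconsistency scores
print(inconsistency_scores(weighted=True))  # weighted variant
# The hypothesis with the LOWEST score is tentatively the most likely.
```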

Potential Pitfalls

The inconsistency or weighted inconsistency scores generated by the ACH software for each hypothesis are not the product of a magic formula that tells you which hypothesis to believe in! The ACH software takes you through a systematic analytic process, and the computer makes the calculation, but the judgment that emerges is only as accurate as your selection and evaluation of the evidence to be considered.

Because it is more difficult to refute hypotheses than to find information that confirms a favored hypothesis, the generation and testing of alternative hypotheses will often increase rather than reduce the analyst’s level of uncertainty. Such uncertainty is frustrating, but it is usually an accurate reflection of the true situation. The ACH procedure has the offsetting advantage of focusing your attention on the few items of critical evidence that cause the uncertainty or which, if they were available, would alleviate it.

Assumptions or logical deductions omitted: If the scores in the matrix do not support what you believe is the most likely hypothesis, the matrix may be incomplete. Your thinking may be influenced by assumptions or logical deductions that have not been included in the list of evidence/arguments. If so, these should be included so that the matrix fully reflects everything that influences your judgment on this issue. It is important for all analysts to recognize the role that unstated or unquestioned (and sometimes unrecognized) assumptions play in their analysis. In political or military analysis, for example, conclusions may be driven by assumptions about another country’s capabilities or intentions.

Insufficient attention to less likely hypotheses: If you think the scoring gives undue credibility to one or more of the less likely hypotheses, it may be because you have not assembled the evidence needed to refute them. You may have devoted insufficient attention to obtaining such evidence, or the evidence may simply not be there.

Definitive evidence: There are occasions when intelligence collectors obtain information from a trusted and well-placed inside source. The ACH analysis can assign a “High” weight for Credibility, but this is probably not enough to reflect the conclusiveness of such evidence and the impact it should have on an analyst’s thinking about the hypotheses. In other words, in some circumstances one or two highly authoritative reports from a trusted source in a position to know may support one hypothesis so strongly that they refute all other hypotheses regardless of what other less reliable or less definitive evidence may show.

Unbalanced set of evidence: Evidence and arguments must be representative of the problem as a whole. If there is considerable evidence on a related but peripheral issue and comparatively few items of evidence on the core issue, the inconsistency or weighted inconsistency scores may be misleading.

Diminishing returns: As evidence accumulates, each new item of inconsistent evidence or argument has less impact on the inconsistency scores than does the earlier evidence. When you are evaluating change over time, it is desirable to delete the older evidence periodically or to partition the evidence and analyze the older and newer evidence separately.

Origins of This Technique

Richards Heuer originally developed the ACH technique as a method for dealing with a particularly difficult type of analytic problem at the CIA in the 1980s. It was first described publicly in his book Psychology of Intelligence Analysis.

7.4 ARGUMENT MAPPING

Argument Mapping is a technique that can be used to test a single hypothesis through logical reasoning. The process starts with a single hypothesis or tentative analytic judgment and then uses a box-and-arrow diagram to lay out visually the argumentation and evidence both for and against the hypothesis or analytic judgment.

When to Use It

When making an intuitive judgment, use Argument Mapping to test your own reasoning. Creating a visual map of your reasoning and the evidence that supports this reasoning helps you better understand the strengths, weaknesses, and gaps in your argument.

Argument Mapping and Analysis of Competing Hypotheses (ACH) are complementary techniques that work well either separately or together. Argument Mapping is a detailed presentation of the argument for a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The ideal is to use both.

Value Added

An Argument Map makes it easier for both analysts and recipients of the analysis to evaluate the soundness of any conclusion. It helps clarify and organize one’s thinking by showing the logical relationships between the various thoughts, both pro and con. An Argument Map also helps the analyst recognize assumptions and identify gaps in the available knowledge.

The Method

An Argument Map starts with a hypothesis—a single-sentence statement, judgment, or claim about which the analyst can, in subsequent statements, present general arguments and detailed evidence, both pro and con. Boxes with arguments are arrayed hierarchically below this statement, and these boxes are connected with arrows. The arrows signify that a statement in one box is a reason to believe, or not to believe, the statement in the box to which the arrow is pointing. Different types of boxes serve different functions in the reasoning process, and boxes use some combination of color-coding, icons, shapes, and labels so that one can quickly distinguish arguments supporting a hypothesis from arguments opposing it.
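Because an Argument Map is essentially a tree of claims connected by pro and con links, its skeleton can be sketched in a few lines of code. The example below uses invented claims and renders the map as an indented outline; real argument-mapping tools add the typed boxes, icons, and color-coding described above.

```python
# Minimal sketch of an Argument Map as a tree (invented claims).
# Each node is a claim; its children are reasons for ("pro") or against
# ("con") that claim, mirroring the arrows in a drawn map.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    claim: str
    stance: str = "hypothesis"   # "hypothesis", "pro", or "con"
    children: List["Node"] = field(default_factory=list)

def render(node: Node, depth: int = 0) -> None:
    """Print the map as an indented outline in place of boxes and arrows."""
    marker = {"hypothesis": "?", "pro": "+", "con": "-"}[node.stance]
    print("    " * depth + f"[{marker}] {node.claim}")
    for child in node.children:
        render(child, depth + 1)

hypothesis = Node("Country X will conduct a missile test this year", children=[
    Node("Increased activity observed at the test facility", "pro"),
    Node("Leadership has pledged a 'demonstration of strength'", "pro"),
    Node("Key propellant imports were interdicted last month", "con"),
])
render(hypothesis)
```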

7.5 DECEPTION DETECTION

Deception is an action intended by an adversary to influence the perceptions, decisions, or actions of another to the advantage of the deceiver. Deception Detection is a set of checklists that analysts can use to help them determine when to look for deception, discover whether deception actually is present, and figure out what to do to avoid being deceived. “The accurate perception of deception in counterintelligence analysis is extraordinarily difficult. If deception is done well, the analyst should not expect to see any evidence of it. If, on the other hand, it is expected, the analyst often will find evidence of deception even when it is not there.”4

When to Use It

Analysts should be concerned about the possibility of deception when:

  • The potential deceiver has a history of conducting deception.
  • Key information is received at a critical time, that is, when either the recipient or the potential deceiver has a great deal to gain or to lose.
  • Information is received from a source whose bona fides are questionable.
  • Analysis hinges on a single critical piece of information or reporting.
  • Accepting new information would require the analyst to alter a key assumption or key judgment.
  • Accepting the new information would cause the Intelligence Community, the U.S. government, or the client to expend or divert significant resources.
  • The potential deceiver may have a feedback channel that illuminates whether and how the deception information is being processed and to what effect.

Value Added

Most analysts know they cannot assume that everything that arrives in their inbox is valid, but few know how to factor such concerns effectively into their daily work practices. If an analyst accepts the possibility that some of the information received may be deliberately deceptive, this puts a significant cognitive burden on the analyst. All the evidence is open then to some question, and it becomes difficult to draw any valid inferences from the reporting. This fundamental dilemma can paralyze analysis unless practical tools are available to guide the analyst in determining when it is appropriate to worry about deception, how best to detect deception in the reporting, and what to do in the future to guard against being deceived.

The Method

Analysts should routinely consider the possibility that opponents are attempting to mislead them or to hide important information. The possibility of deception cannot be rejected simply because there is no evidence of it; if it is well done, one should not expect to see evidence of it.

Analysts have also found the following “rules of the road” helpful in dealing with deception.

  • Avoid over-reliance on a single source of information.
  • Seek and heed the opinions of those closest to the reporting.
  • Be suspicious of human sources or sub-sources who have not been met with personally or for whom it is unclear how or from whom they obtained the information.
  • Do not rely exclusively on what someone says (verbal intelligence); always look for material evidence (documents, pictures, an address or phone number that can be confirmed, or some other form of concrete, verifiable information).
  • Look for a pattern where on several occasions a source’s reporting initially appears correct but later turns out to be wrong and the source can offer a seemingly plausible, albeit weak, explanation for the discrepancy.
  • Generate and evaluate a full set of plausible hypotheses—including a deception hypothesis, if appropriate—at the outset of a project.
  • Know the limitations as well as the capabilities of the potential deceiver.

DECEPTION DETECTION CHECKLISTS


Motive, Opportunity, and Means (MOM):

Motive: What are the goals and motives of the potential deceiver?
Channels: What means are available to the potential deceiver to feed information to us?
Risks: What consequences would the adversary suffer if such a deception were revealed?
Costs: Would the potential deceiver need to sacrifice sensitive information to establish the credibility of the deception channel?
Feedback: Does the potential deceiver have a feedback mechanism to monitor the impact of the deception operation?


Past Opposition Practices (POP):

Does the adversary have a history of engaging in deception?
Does the current circumstance fit the pattern of past deceptions?
If not, are there other historical precedents?
If not, are there changed circumstances that would explain using this form of deception at this time?


Manipulability of Sources (MOSES):

Is the source vulnerable to control or manipulation by the potential deceiver?

What is the basis for judging the source to be reliable?
Does the source have direct access or only indirect access to the information?
How good is the source’s track record of reporting?


Evaluation of Evidence (EVE):

How accurate is the source’s reporting? Has the whole chain of evidence, including translations, been checked?
Does the critical evidence check out? Remember, the sub-source can be more critical than the source.

Does evidence from one source of reporting (e.g., human intelligence) conflict with that coming from another source (e.g., signals intelligence or open source reporting)?
Do other sources of information provide corroborating evidence?
Is any evidence one would expect to see noteworthy by its absence?

Relationship to Other Techniques

Analysts can combine Deception Detection with Analysis of Competing Hypotheses to assess the possibility of deception. The analyst explicitly includes deception as one of the hypotheses to be analyzed, and information identified through the MOM, POP, MOSES, and EVE checklists is then included as evidence in the ACH analysis.
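As an illustration of this combination, the earlier ACH scoring sketch can be extended so that deception becomes one more hypothesis and checklist findings become ordinary evidence rows. The entries are again invented for illustration.

```python
# Extending the earlier ACH sketch (hypothetical data): deception is added as
# a hypothesis, and MOM/POP checklist findings are rated like any evidence.
hypotheses.append("H4 (deception)")
evidence.append(("MOM: adversary has a feedback channel", 1.0,
                 {"H1": "", "H2": "", "H3": "", "H4 (deception)": "C"}))
evidence.append(("POP: no history of deception in similar cases", 1.0,
                 {"H1": "", "H2": "", "H3": "", "H4 (deception)": "I"}))
print(inconsistency_scores(weighted=True))  # deception now scored alongside
```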


8.0 Assessment of Cause and Effect

Attempts to explain the past and forecast the future are based on an understanding of cause and effect. Such understanding is difficult, because the kinds of variables and relationships studied by the intelligence analyst are, in most cases, not amenable to the kinds of empirical analysis and theory development that are common in academic research. The best the analyst can do is to make an informed judgment, but such judgments depend upon the analyst’s subject matter expertise and reasoning ability and are vulnerable to various cognitive pitfalls and fallacies of reasoning.


One of the most common causes of intelligence failures is mirror imaging, the unconscious assumption that other countries and their leaders will act as we would in similar circumstances. Another is the tendency to attribute the behavior of people, organizations, or governments to the nature of the actor and underestimate the influence of situational factors. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. This is known as the “fundamental attribution error.”

There is also a tendency to assume that the results of an opponent’s actions are what the opponent intended, and we are slow to accept the reality of simple mistakes, accidents, unintended consequences, coincidences, or small causes leading to large effects. Analysts often assume that there is a single cause and stop their search for an explanation when the first seemingly sufficient cause is found. Perceptions of causality are partly determined by where one’s attention is directed; as a result, information that is readily available, salient, or vivid is more likely to be perceived as causal than information that is not. Cognitive limitations and common errors in the perception of cause and effect are discussed in greater detail in Richards Heuer’s Psychology of Intelligence Analysis.


The Psychology of Intelligence Analysis describes three principal strategies that intelligence analysts use to make judgments to explain the cause of current events or forecast what might happen in the future:

* Situational logic: Making expert judgments based on the known facts and an understanding of the underlying forces at work at that particular time and place. When an analyst is working with incomplete, ambiguous, and possibly deceptive information, these expert judgments usually depend upon assumptions about capabilities, intent, or the normal workings of things in the country of concern. Key Assumptions Check, which is one of the most commonly used structured techniques, is described in this chapter.

* Comparison with historical situations: Combining an understanding of the facts of a specific situation with knowledge of what happened in similar situations in the past, drawn either from one’s personal experience or from historical events. This strategy involves the use of analogies. The Structured Analogies technique described in this chapter adds rigor and increased accuracy to this process.

* Applying theory: Basing judgments on the systematic study of many examples of the same phenomenon. Theories or models often based on empirical academic research are used to explain how and when certain types of events normally happen. Many academic models are too generalized to be applicable to the unique characteristics of most intelligence problems.

Overview of Techniques

Key Assumptions Check is one of the most important and frequently used techniques. Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, that influence how the evidence is interpreted.

Structured Analogies applies analytic rigor to reasoning by analogy. This technique requires that the analyst systematically compare the issue of concern with multiple potential analogies before selecting the one for which the circumstances are most similar to the issue of concern. It seems natural to use analogies when making decisions or forecasts as, by definition, they contain information about what has happened in similar situations in the past. People often recognize patterns and then consciously take actions that were successful in a previous experience or avoid actions that previously were unsuccessful. However, analysts need to avoid the strong tendency to fasten onto the first analogy that comes to mind and supports their prior view about an issue.

Role Playing, as described here, starts with the current situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them. Intelligence analysts frequently endeavor to forecast the behavior of a foreign leader, group, organization, or country. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is of limited value without significant cultural understanding of the country and people involved.

Outside-In Thinking broadens an analyst’s thinking about the forces that can influence a particular issue of concern. This technique requires the analyst to reach beyond his or her specialty area to consider broader social, organizational, economic, environmental, technological, political, and global forces or trends that can affect the topic being analyzed.

Policy Outcomes Forecasting Model is a theory-based procedure for estimating the potential for political change. Formal models play a limited role in political/strategic analysis, since analysts generally are concerned with what they perceive to be unique events, rather than with any need to search for general patterns in events. Conceptual models that tell an analyst how to think about a problem and help the analyst through that thought process can be useful for frequently recurring issues, such as forecasting policy outcomes or analysis of political instability. Models or simulations that use a mathematical algorithm to calculate a conclusion are outside the domain of structured analytic techniques that are the topic of this book.

Prediction Markets are speculative markets created for the purpose of making predictions about future events. Just as betting on a horse race sets the odds on which horse will win, betting that some future occurrence will or will not happen sets the estimated probability of that future occurrence. Although the use of this technique has been successful in the private sector, it may not be a workable method for the Intelligence Community.

8.1 KEY ASSUMPTIONS CHECK

Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, which influence how the evidence is interpreted.2 The Key Assumptions Check is a systematic effort to make explicit and question the assumptions (the mental model) that guide an analyst’s interpretation of evidence and reasoning about any particular problem. Such assumptions are usually necessary and unavoidable as a means of filling gaps in the incomplete, ambiguous, and sometimes deceptive information with which the analyst must work. They are driven by the analyst’s education, training, and experience, plus the organizational context in which the analyst works.

An organization really begins to learn when its most cherished assumptions are challenged by counterassumptions. Assumptions underpinning existing policies and procedures should therefore be unearthed, and alternative policies and procedures put forward based upon counterassumptions.

—Ian I. Mitroff and Richard O. Mason,

Creating a Dialectical Social Science: Concepts, Methods, and Models


When to Use It

Any explanation of current events or estimate of future developments requires the interpretation of evidence. If the available evidence is incomplete or ambiguous, this interpretation is influenced by assumptions about how things normally work in the country of interest. These assumptions should be made explicit early in the analytic process.

If a Key Assumptions Check is not done at the outset of a project, it can still prove extremely valuable if done during the coordination process or before conclusions are presented or delivered.

Value Added

Preparing a written list of one’s working assumptions at the beginning of any project helps the analyst:

  • Identify the specific assumptions that underpin the basic analytic line.
  • Achieve a better understanding of the fundamental dynamics at play.
  • Gain a better perspective and stimulate new thinking about the issue.
  • Discover hidden relationships and links between key factors.
  • Identify any developments that would cause an assumption to be abandoned.
  • Avoid surprise should new information render old assumptions invalid.

A sound understanding of the assumptions underlying an analytic judgment sets the limits for the confidence the analyst ought to have in making a judgment.

The Method

The process of conducting a Key Assumptions Check is relatively straightforward in concept but often challenging to put into practice. One challenge is that participating analysts must be open to the possibility that they could be wrong. It helps to involve in this process several well-regarded analysts who are generally familiar with the topic but have no prior commitment to any set of assumptions about the issue at hand. Keep in mind that many “key assumptions” turn out to be “key uncertainties.”

Here are the steps in conducting a Key Assumptions Check:

* Gather a small group of individuals who are working the issue along with a few “outsiders.” The primary analytic unit already is working from an established mental model, so the “outsiders” are needed to bring other perspectives.

* Ideally, participants should be asked to bring their list of assumptions when they come to the meeting. If this was not done, start the meeting with a silent brainstorming session. Ask each participant to write down several assumptions on 3 × 5 cards.

* Collect the cards and list the assumptions on a whiteboard for all to see.

* Elicit additional assumptions. Work from the prevailing analytic line back to the key arguments that support it. Use various devices to help prod participants’ thinking:

  • Ask the standard journalist questions. Who: Are we assuming that we know who all the key players are? What: Are we assuming that we know the goals of the key players? When: Are we assuming that conditions have not changed since our last report or that they will not change in the foreseeable future? Where: Are we assuming that we know where the real action is going to be? Why: Are we assuming that we understand the motives of the key players? How: Are we assuming that we know how they are going to do it?

* After identifying a full set of assumptions, go back and critically examine each assumption. Ask:

  • Why am I confident that this assumption is correct?
  • In what circumstances might this assumption be untrue?
  • Could it have been true in the past but no longer be true today?
  • How much confidence do I have that this assumption is valid?
  • If it turns out to be invalid, how much impact would this have on the analysis?

* Place each assumption in one of three categories:

  • Basically solid.
  • Correct with some caveats.
  • Unsupported or questionable—the “key uncertainties.”

* Refine the list, deleting those assumptions that do not hold up to scrutiny and adding new assumptions that emerge from the discussion. Above all, emphasize those assumptions that would, if wrong, lead to changing the analytic conclusions.
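One simple way to keep the resulting categories visible throughout a project is a small register like the sketch below. The entries and field names are invented for illustration; no particular format is prescribed.

```python
# Minimal sketch of a key assumptions register (hypothetical entries).
# "pivotal" flags assumptions that, if wrong, would change the conclusions.
assumptions = [
    {"text": "The regime retains control of its border guards",
     "category": "basically solid", "pivotal": False},
    {"text": "Leader Z wants to avoid open conflict",
     "category": "correct with caveats", "pivotal": True},
    {"text": "Opposition groups cannot coordinate",
     "category": "key uncertainty", "pivotal": True},
]

# Surface the items that most deserve challenge and further collection.
for a in assumptions:
    if a["pivotal"] or a["category"] == "key uncertainty":
        print(f"CHECK: {a['text']} ({a['category']})")
```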

There is a particularly noteworthy interaction between Key Assumptions Check and Analysis of Competing Hypotheses (ACH). Key assumptions need to be included as “evidence” in an ACH matrix to ensure that the matrix is an accurate reflection of the analyst’s thinking. And analysts frequently identify assumptions during the course of filling out an ACH matrix. This happens when an analyst assesses the consistency or inconsistency of an item of evidence with a hypothesis and concludes that this judgment is dependent upon something else—usually an assumption. Users of ACH should write down and keep track of the assumptions they make when evaluating evidence against the hypotheses.

8.2 STRUCTURED ANALOGIES

The Structured Analogies technique applies increased rigor to analogical reasoning by requiring that the issue of concern be compared systematically with multiple analogies rather than with a single analogy.

When to Use It

When one is making any analogy, it is important to think about more than just the similarities. It is also necessary to consider those conditions, qualities, or circumstances that are dissimilar between the two phenomena. This should be standard practice in all reasoning by analogy and especially in those cases when one cannot afford to be wrong.

We recommend that analysts considering the use of this technique read Richard D. Neustadt and Ernest R. May, “Unreasoning from Analogies,” chapter 4, in Thinking in Time: The Uses of History for Decision Makers (New York: Free Press, 1986). Also recommended is Giovanni Gavetti and Jan W. Rivkin, “How Strategists Really Think: Tapping the Power of Analogy,” Harvard Business Review (April 2005).

Value Added

Reasoning by analogy helps achieve understanding by reducing the unfamiliar to the familiar. In the absence of data required for a full understanding of the current situation, reasoning by analogy may be the only alternative.

The benefit of the Structured Analogies technique is that it avoids the tendency to fasten quickly on a single analogy and then focus only on evidence that supports the similarity of that analogy. Analysts should take into account the time required for this structured approach and may choose to use it only when the cost of being wrong is high.

The following is a step-by-step description of this technique.

* Describe the issue and the judgment or decision that needs to be made.

* Identify a group of experts who are familiar with the problem.

* Ask the group of experts to identify as many analogies as possible without focusing too strongly on how similar they are to the current situation. Various universities and international organizations maintain databases to facilitate this type of research. For example, the Massachusetts Institute of Technology (MIT) maintains its Cascon System for Analyzing International Conflict, a database of 85 post–World War II conflicts that are categorized and coded to facilitate their comparison with current conflicts of interest.

* Review the list of potential analogies and agree on which ones should be examined further.

* Develop a tentative list of categories for comparing the analogies to determine which analogy is closest to the issue in question. For example, the MIT conflict database codes each case according to the following broad categories as well as finer subcategories: previous or general relations between sides, great power and allied involvement, external relations generally, military-strategic, international organization (UN, legal, public opinion), ethnic (refugees, minorities), economic/resources, internal politics of the sides, communication and information, actions in disputed area.

* Write up an account of each selected analogy, with equal focus on those aspects of the analogy that are similar and those that are different. The task of writing accounts of all the analogies should be divided up among the experts. Each account can be posted on a wiki where each member of the group can read and comment on them.

* Review the tentative list of categories for comparing the analogous situations to make sure they are still appropriate. Then ask each expert to rate the similarity of each analogy to the issue of concern. The experts should do the rating in private using a scale from 0 to 10, where 0 = not at all similar, 5 = somewhat similar, and 10 = very similar.

* After combining the ratings to calculate an average rating for each analogy (as in the sketch below), discuss the results and make a forecast for the current issue of concern. This will usually be the same as the outcome of the most similar analogy. Alternatively, identify several possible outcomes, or scenarios, based on the diverse outcomes of analogous situations. Then use the analogous cases to identify drivers or policy actions that might influence the outcome of the current situation.
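A minimal sketch of the rating-and-averaging step follows, with invented analogies and ratings; the 0-to-10 scale matches the one described above.

```python
# Minimal sketch of the rating step (hypothetical analogies and ratings).
# Each of four experts privately rates each analogy's similarity, 0 to 10.
ratings = {
    "1956 Suez crisis":        [7, 6, 8, 5],
    "1982 Falklands conflict": [4, 5, 3, 4],
    "1999 Kargil conflict":    [8, 7, 9, 8],
}

averages = {case: sum(r) / len(r) for case, r in ratings.items()}
for case, avg in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{case}: {avg:.1f}")

best = max(averages, key=averages.get)
print("Closest analogy:", best)   # its outcome anchors the group's forecast
```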

8.3 ROLE PLAYING

In Role Playing, analysts assume the roles of the leaders who are the subject of their analysis and act out their responses to developments. This technique is also known as gaming, but we use the name Role Playing here to distinguish it from the more complex forms of military gaming. This technique is about simple Role Playing, in which the starting scenario is the current situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

When to Use It

Role Playing is often used to improve understanding of what might happen when two or more people, organizations, or countries interact, especially in conflict situations or negotiations. It shows how each side might react to statements or actions from the other side. Many years ago Richards Heuer participated in several Role Playing exercises, including one with analysts of the Soviet Union from throughout the Intelligence Community playing the role of Politburo members deciding on the successor to Soviet leader Leonid Brezhnev.

Role Playing has a desirable byproduct that might be part of the rationale for using this technique. It is a useful mechanism for bringing together people who, although they work on a common problem, may have little opportunity to meet and discuss their perspectives on this problem. A role-playing game may lead to the long-term benefits that come with mutual understanding and ongoing collaboration. To maximize this benefit, the organizer of the game should allow for participants to have informal time together.

Value Added

Role Playing is a good way to see a problem from another person’s perspective, to gain insight into how others think, or to gain insight into how other people might react to U.S. actions.

Role Playing is particularly useful for understanding the potential outcomes of a conflict situation. Parties to a conflict often act and react many times, and they can change as a result of their interactions. There is a body of research showing that experts using unaided judgment perform little better than chance in predicting the outcomes of such conflicts. Performance is improved significantly by the use of “simulated interaction” (Role Playing) to act out the conflicts.

Role Playing does not necessarily give a “right” answer, but it typically enables the players to see some things in a new light. Players become more conscious that “where you stand depends on where you sit.”

Potential Pitfalls

One limitation of Role Playing is the difficulty of generalizing from the game to the real world. Just because something happens in a role-playing game does not necessarily mean the future will turn out that way. This observation seems obvious, but it can actually be a problem. Because of the immediacy of the experience and the personal impression made by the simulation, the outcome may have a stronger impact on the participants’ thinking than is warranted by the known facts of the case. As we shall discuss, this response needs to be addressed in the after-action review.

When the technique is used for intelligence analysis, the goal is not an explicit prediction but better understanding of the situation and the possible outcomes. The method does not end with the conclusion of the Role Playing. There must be an after-action review of the key turning points and how the outcome might have been different if different choices had been made at key points in the game.

The Method

Most of the gaming done in the Department of Defense and in the academic world is rather elaborate, so it requires substantial preparatory work.

Whenever possible, a Role Playing game should be conducted off site with cell phones turned off. Being away from the office precludes interruptions and makes it easier for participants to imagine themselves in a different environment with a different set of obligations, interests, ambitions, fears, and historical memories.

The analyst who plans and organizes the game leads a control team. This team monitors time to keep the game on track, serves as the communication channel to pass messages between teams, leads the after-action review, and helps write the after-action report to summarize what happened and lessons learned. The control team also plays any role that becomes necessary but was not foreseen, for example, a United Nations mediator. If necessary to keep the game on track or lead it in a desired direction, the control team may introduce new events, such as a terrorist attack that inflames emotions or a new policy statement on the issue by the U.S. president.

After the game ends or on the following day, it is necessary to conduct an after-action review. If there is agreement that all participants played their roles well, there may be a natural tendency to assume that the outcome of the game is a reasonable forecast of what will eventually happen in real life.

8.4 RED HAT ANALYSIS

Intelligence analysts frequently endeavor to forecast the actions of an adversary or a competitor. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them, but this technique alone is of limited value without significant cultural understanding of the other country and people involved.


To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often little more than partially informed speculation. Too frequently, behavior of foreign leaders appears ‘irrational’ or ‘not in their own best interest.’ Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999).

When to Use It

The chances of a Red Hat Analysis being accurate are better when one is trying to foresee the behavior of a specific person who has the authority to make decisions. Authoritarian leaders as well as small, cohesive groups, such as terrorist cells, are obvious candidates for this type of analysis. The chances of making an accurate forecast about an adversary’s or a competitor’s decision are significantly lower when the decision is constrained by a legislature or influenced by conflicting interest groups.

Value Added

There is a great deal of truth to the maxim that “where you stand depends on where you sit.” Red Hat Analysis is a reframing technique that requires the analyst to adopt—and make decisions consonant with—the culture of a foreign leader, cohesive group, criminal, or competitor. This conscious effort to imagine the situation as the target perceives it helps the analyst gain a different and usually more accurate perspective on a problem or issue. Reframing the problem typically changes the analyst’s perspective from that of an analyst observing and forecasting an adversary’s behavior to that of a leader who must make a difficult decision within that operational culture.

This reframing process often introduces new and different stimuli that might not have been factored into a traditional analysis. For example, in a Red Hat exercise, participants might ask themselves these questions: “What are my supporters expecting from me?” “Do I really need to make this decision now?” “What are the consequences of making a wrong decision?” “How will the United States respond?”

Potential Pitfalls

Forecasting human decisions or the outcome of a complex organizational process is difficult in the best of circumstances. It is even more difficult when dealing with a foreign culture and significant gaps in the available information. Mirror imaging is hard to avoid because, in the absence of a thorough understanding of the foreign situation and culture, your own perceptions appear to be the only reasonable way to look at the problem.

A common error in our perceptions of the behavior of other people, organizations, or governments of all types is likely to be even more common when assessing the behavior of foreign leaders or groups. This is the tendency to attribute the behavior of people, organizations, or governments to the nature of the actor and to underestimate the influence of situational factors. This error is especially easy to make when one assumes that the actor has malevolent intentions but our understanding of the pressures on that actor is limited. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. We seldom see ourselves as bad people, but we often see malevolent intent in others. This is known to cognitive psychologists as the fundamental attribution error.

The Method

* Gather a group of experts with in-depth knowledge of the target, operating environment, and senior decision maker’s personality, motives, and style of thinking. If at all possible, try to include people who are well grounded in the adversary’s culture, who speak the same language, share the same ethnic background, or have lived extensively in the region.

* Present the experts with a situation or a stimulus and ask them to put themselves in the adversary’s or competitor’s shoes and simulate how they would respond.

* Emphasize the need to avoid mirror imaging. The question is not “What would you do if you were in their shoes?” but “How would this person or group in that particular culture and circumstance most likely think, behave, and respond to the stimulus?”

* If trying to foresee the actions of a group or an organization, consider using the Role Playing technique.

* In presenting the results, describe the alternatives that were considered and the rationale for selecting the path the person or group is most likely to take. Consider other less conventional means of presenting the results of your analysis, such as the following:

  • Describing a hypothetical conversation in which the leader and other players discuss the issue in the first person.
  • Drafting a document (set of instructions, military orders, policy paper, or directives) that the adversary or competitor would likely generate.

Relationship to Other Techniques

Red Hat Analysis differs from a Red Team Analysis in that it can be done or organized by any analyst who needs to understand or forecast foreign behavior and who has, or can gain access to, the required cultural expertise.

8.5 OUTSIDE-IN THINKING

Outside-In Thinking identifies the broad range of global, political, environmental, technological, economic, or social forces and trends that are outside the analyst’s area of expertise but that may profoundly affect the issue of concern. Many analysts tend to think from the inside out, focusing on the familiar factors in their specific area of responsibility.

When to Use It

This technique is most useful in the early stages of an analytic process when analysts need to identify all the critical factors that might explain an event or could influence how a particular situation will develop. It should be part of the standard process for any project that analyzes potential future outcomes, for this approach covers the broader environmental context from which surprises and unintended consequences often come.

Outside-In Thinking also is useful if a large database is being assembled and needs to be checked to ensure that no important field in the database architecture has been overlooked. In most cases, important categories of information (or database fields) are easily identifiable early on in a research effort, but invariably one or two additional fields emerge after an analyst or group of analysts is well into a project, forcing them to go back and review all previous files, recoding for that new entry. Typically, the overlooked fields are in the broader environment over which the analysts have little control. By applying Outside-In Thinking, analysts can better visualize the entire set of data fields early on in the research effort.

Value Added

Most analysts focus on familiar factors within their field of specialty, but we live in a complex, interrelated world where events in our little niche of that world are often affected by forces in the broader environment over which we have no control. The goal of Outside-In Thinking is to help analysts see the entire picture, not just the part of the picture with which they are already familiar.

Outside-In Thinking reduces the risk of missing important variables early in the analytic process. It encourages analysts to rethink a problem or an issue while employing a broader conceptual framework.

The Method

  • Generate a generic description of the problem or phenomenon to be studied.
  • Form a group to brainstorm the key forces and factors that could have an impact on the topic but over which the subject can exert little or no influence, such as globalization, the emergence of new technologies, historical precedent, and the growth of the Internet.
  • Employ the mnemonic STEEP +2 to trigger new ideas (Social, Technical, Economic, Environmental, Political plus Military and Psychological).
  • Move down a level of analysis and list the key factors about which some expertise is available.
  • Assess specifically how each of these forces and factors could have an impact on the problem.
  • Ascertain whether these forces and factors actually do have an impact on the issue at hand, basing your conclusion on the available evidence.
  • Generate new intelligence collection tasking to fill in information gaps.

Relationship to Other Techniques

Outside-In Thinking is essentially the same as a business analysis technique that goes by different acronyms, such as STEEP, STEEPLED, PEST, or PESTLE. For example, PEST is an acronym for Political, Economic, Social, and Technological, while STEEPLED also includes Legal, Ethical, and Demographic. All require the analysis of external factors that may have either a favorable or unfavorable influence on an organization.

8.6 POLICY OUTCOMES FORECASTING MODEL

The Policy Outcomes Forecasting Model structures the analysis of competing political forces in order to forecast the most likely political outcome and the potential for significant political change. The model was originally designed as a quantitative method using expert-generated data, not as a structured analytic technique. However, like many quantitative models, it can also be used simply as a conceptual model to guide how an expert analyst thinks about a complex issue.

When to Use It

The Policy Outcomes Forecasting Model has been used to analyze the following types of questions:

What policy is Country W likely to adopt toward its neighbor?
Is the U.S. military likely to lose its base in Country X?
How willing is Country Y to compromise in its dispute with Country X?
In what circumstances can the government of Country Z be brought down?

Use this model when you have substantial information available on the relevant actors (individual leaders or organizations), their positions on the issues, the importance of the issues to each actor, and the relative strength of each actor’s ability to support or oppose any specific policy. Judgments about the positions and the strengths and weaknesses of the various political forces can then be used to forecast what policies might be adopted and to assess the potential for political change.

Use of this model is limited to situations in which there is a single issue that will be decided by political bargaining and maneuvering, and in which the potential outcomes can be visualized on a continuous line.

Value Added

Like any model, Policy Outcomes Forecasting provides a systematic framework for generating and organizing information about an issue of concern. Once the basic analysis is done, it can be used to analyze the significance of changes in the position of any of the stakeholders. An analyst may also use the data to answer What If? questions such as the following:

Would a leader strengthen her position if she modified her stand on a contentious issue?
Would the military gain the upper hand if the current civilian leader were to die?
What would be the political consequences if a traditionally apolitical institution—such as the church or the military—became politicized?

An analyst or group of analysts can make an informed judgment about an outcome by explicitly identifying all the stakeholders in the outcome of an issue and then determining how close or far apart they are on the issue, how influential each one is, and how strongly each one feels about it. Assembling all this data in a graphic such as Figure 8.6 helps the analyst manage the complexity, share and discuss the information with other analysts, and present conclusions in an efficient and effective manner.

The Method

Define the problem in terms of a policy or leadership choice issue. The issue must vary along a single dimension so that options can be arrayed from one extreme to another in a way that makes sense within the country in which the decision will be made.

These alternative policies are rated on a scale from 0 to 100, with the position on the scale reflecting the distance or difference between the policies. For example, the options might range between two extremes—full nationalization of energy investment at the left end of the scale and private investment only at the right end. Note that the position of these policies on the horizontal scale captures the full range of the policy debate and reflects the estimated political distance or difference between each of the policies.

The next step is to identify all the actors, no matter how strong or weak, that will try to influence the policy outcome.

Each actor is then placed on the graphic in two dimensions: first, the actor’s position on the horizontal scale shows where the actor stands on the issue, and, second, the actor’s height above the scale is a measure of the relative amount of clout the actor has and is prepared to use to influence the outcome of the policy decision. To judge the relative height of each actor, identify the strongest actor and arbitrarily assign that actor a strength of 100. Assign proportionately lower values to other actors based on your judgment or gut feeling about how their strength and political clout compare with those of the actor assigned a strength of 100.

This graphic representation of the relevant variables is used as an aid in assessing and communicating to others the current status of the most influential forces on this issue and the potential impact of various changes in this status.

Origins of This Technique

The Policy Outcomes Forecasting Model described here is a simplified, nonquantitative version of a policy forecasting model developed by Bruce Bueno de Mesquita and described in his book The War Trap (New Haven: Yale University Press, 1981). It was further refined by Bueno de Mesquita et al., in Forecasting Political Events: The Future of Hong Kong (New Haven: Yale University Press, 1988).

In the 1980s, CIA analysts used this method with the implementing software to analyze scores of policy and political instability issues in more than thirty countries. Analysts used their subject expertise to assign numeric values to the variables. The simplest version of this methodology uses the positions of each actor, the relative strength of each actor, and the relative importance of the issue to each actor to calculate which actor’s or group’s position would get the most support if each policy position had to compete with every other policy position in a series of “pairwise” contests. In other words, the model finds the policy option around which a coalition will form that can defeat every other possible coalition in every possible contest between any two policy options (the “median voter” model). The model can also test how sensitive the policy forecast is to various changes in the relative strength of the actors or in their positions or in the importance each attaches to the issue.
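The pairwise-contest logic is simple enough to sketch. The actors, positions, strengths, and salience values below are invented for illustration, and the snippet is a simplified stand-in for the quantitative model, not its actual implementation.

```python
# Minimal sketch of the pairwise-contest ("median voter") logic.
# position: 0-100 policy scale; strength and salience: relative clout and
# how much the actor cares about this issue (all values hypothetical).
actors = [
    {"name": "Ruling party",  "position": 20, "strength": 100, "salience": 90},
    {"name": "Military",      "position": 40, "strength": 80,  "salience": 50},
    {"name": "Business bloc", "position": 85, "strength": 60,  "salience": 95},
    {"name": "Labor unions",  "position": 10, "strength": 40,  "salience": 80},
]
positions = sorted({a["position"] for a in actors})

def support(option, rival):
    """Weighted backing for `option` in a head-to-head contest with `rival`:
    each actor backs whichever option is closer to its own position."""
    return sum(a["strength"] * a["salience"] for a in actors
               if abs(a["position"] - option) < abs(a["position"] - rival))

# The forecast is the option that defeats every other option pairwise.
for option in positions:
    if all(support(option, rival) > support(rival, option)
           for rival in positions if rival != option):
        print("Forecast policy position:", option)
```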

A testing program at that time found that traditional analysis and analyses using the policy forces analysis software were both accurate in hitting the target about 90 percent of the time, but the software hit the bull’s-eye twice as often. Also, reports based on the policy forces software gave greater detail on the political dynamics leading to the policy outcome and were less vague in their forecasts than were traditional analyses.

8.7 PREDICTION MARKETS

Prediction Markets are speculative markets created solely for the purpose of allowing participants to make predictions in a particular area. Just as betting on a horse race sets the odds on which horse will win, supply and demand in the prediction market sets the estimated probability of some future occurrence. Two books, The Wisdom of Crowds by James Surowiecki and Infotopia by Cass Sunstein, have popularized the concept of Prediction Markets.
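For example, a contract that pays $1 if a specified event occurs and is trading at 62 cents implies a market estimate of roughly a 62 percent chance that the event will occur.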

We do not support the use of Prediction Markets for intelligence analysis for reasons that are discussed below. We have included Prediction Markets in this book because it is an established analytic technique and it has been suggested for use in the Intelligence Community.

The following arguments have been made against the use of Prediction Markets for intelligence analysis:

* Prediction Markets can be used only in situations that will have an unambiguous outcome, usually within a predictable time period. Such situations are commonplace in business and industry, though much less so in intelligence analysis.

* Prediction Markets do have a strong record of near-term forecasts, but intelligence analysts and their customers are likely to be uncomfortable with their predictions. No matter what the statistical record of accuracy with this technique might be, consumers of intelligence are unlikely to accept any forecast without understanding the rationale for the forecast and the qualifications of those who voted on it.

* If people in the crowd are offering their unsupported opinions, and not informed judgments, the utility of the prediction is questionable. Prediction Markets are more likely to be useful in dealing with commercial preferences or voting behavior and less accurate, for example, in predicting the next terrorist attack in the United States, a forecast that would require special expertise and knowledge.

* Like other financial markets, such as commodities futures markets, Prediction Markets are subject to liquidity problems and speculative attacks mounted in order to manipulate the results. Financially and politically interested parties may seek to manipulate the vote. The fewer the participants, the more vulnerable a market is.

* Ethical objections have been raised to the use of a Prediction Market for national security issues. The Defense Advanced Research Projects Agency (DARPA) proposed a Policy Analysis Market in 2003. It would have worked in a manner similar to the commodities market, and it would have allowed investors to earn profits by betting on the likelihood of such events as regime changes in the Middle East and the likelihood of terrorist attacks. The DARPA plan was attacked on grounds that “it was unethical and in bad taste to accept wagers on the fate of foreign leaders and a terrorist attack.” The project was canceled a day after it was announced. Although attacks on the DARPA plan in the media may have been overdone, there is a legitimate concern about government-sponsored betting on international events.

Relationship to Other Techniques

The Delphi Method is a more appropriate method for intelligence agencies to use to aggregate outside expert opinion; Delphi also has a broader applicability for other types of intelligence analysis.

9.0 Challenge Analysis

Challenge analysis encompasses a set of analytic techniques that have also been called contrarian analysis, alternative analysis, competitive analysis, red team analysis, and devil’s advocacy. What all of these have in common is the goal of challenging an established mental model or analytic consensus in order to broaden the range of possible explanations or estimates that are seriously considered. The fact that this same activity has been called by so many different names suggests there has been some conceptual diversity about how and why these techniques are being used and what might be accomplished by their use.

There is a broad recognition in the Intelligence Community that failure to question a consensus judgment, or a long-established mental model, has been a consistent feature of most significant intelligence failures. The postmortem analysis of virtually every major U.S. intelligence failure since Pearl Harbor has identified an analytic mental model (mindset) as a key factor contributing to the failure. The situation changed, but the analyst’s mental model did not keep pace with that change or did not recognize all the ramifications of the change.

This record of analytic failures has generated discussion about the “paradox of expertise.” The experts can be the last to recognize the reality and significance of change. For example, few experts on the Soviet Union foresaw its collapse, and the experts on Germany were the last to accept that Germany was going to be reunified. Going all the way back to the Korean War, experts on China were saying that China would not enter the war—until it did.

A mental model formed through education and experience serves an essential function; it is what enables the analyst to provide on a daily basis reasonably good intuitive assessments or estimates about what is happening or likely to happen.

The problem is that a mental model that has previously provided accurate assessments and estimates for many years can be slow to change. New information received incrementally over time is easily assimilated into one’s existing mental model, so the significance of gradual change over time is easily missed. It is human nature to see the future as a continuation of the past.

There is also another logical rationale for consistently challenging conventional wisdom. Former CIA Director Michael Hayden has stated that “our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we’re at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments.” The director went on to suggest that getting it right seven times out of ten might be a realistic expectation.

This chapter describes three types of challenge analysis techniques: self-critique, critique of others, and solicitation of critique by others.

Self-critique: Two techniques that help analysts challenge their own thinking are Premortem Analysis and Structured Self-Critique. These techniques can counteract the pressures for conformity or consensus that often suppress the expression of dissenting opinions in an analytic team or group. We adapted Premortem Analysis from business and applied it to intelligence analysis.

Critique of others: Analysts can use What If? Analysis or High Impact/Low Probability Analysis to tactfully question the conventional wisdom by making the best case for an alternative explanation or outcome.

Critique by others: Several techniques are available for seeking out critique by others. Devil’s Advocacy is a well-known example of that. The term “Red Team” is used to describe a group that is assigned to take an adversarial perspective. The Delphi Method is a structured process for eliciting opinions from a panel of outside experts.

Reframing Techniques

Three of the techniques in this chapter work by a process called reframing. A frame is any cognitive structure that guides the perception and interpretation of what one sees. A mental model of how things normally work can be thought of as a frame through which an analyst sees and interprets evidence. An individual or a group of people can change their frame of reference, and thus challenge their own thinking about a problem, simply by changing the questions they ask or changing the perspective from which they ask the questions. Analysts can use this reframing technique when they need to generate new ideas, when they want to see old ideas from a new perspective, or any other time when they sense a need for fresh thinking.

It is fairly easy to open the mind to think in different ways. The trick is to restate the question, task, or problem from a different perspective that activates a different set of synapses in the brain. Each of the three applications of reframing described in this chapter does this in a different way. Premortem Analysis asks analysts to imagine themselves at some future point in time, after having just learned that a previous analysis turned out to be completely wrong. The task then is to figure out how and why it might have gone wrong. What If? Analysis asks the analyst to imagine that some unlikely event has occurred, and then to explain how it could happen and the implications of the event. Structured Self-Critique asks a team of analysts to reverse its role from advocate to critic in order to explore potential weaknesses in the previous analysis. This change in role can empower analysts to express concerns about the consensus view that might previously have been suppressed. These techniques are generally more effective in a small group than with a single analyst. Their effectiveness depends in large measure on how fully and enthusiastically participants in the group embrace the imaginative or alternative role they are playing. Just going through the motions is of limited value.

Overview of Techniques

Premortem Analysis reduces the risk of analytic failure by identifying and analyzing a potential failure before it occurs. Imagine yourself several years in the future. You suddenly learn from an unimpeachable source that your estimate was wrong. Then imagine what could have happened to cause the estimate to be wrong. Looking back from the future to explain something that has happened is much easier than looking into the future to forecast what will happen, and this exercise helps identify problems one has not foreseen.

Structured Self-Critique is a procedure that a small team or group uses to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. Looking at the responses to these questions, the team reassesses its overall confidence in its own judgment.

What If? Analysis is an important technique for alerting decision makers to an event that could happen, or is already happening, even if it may seem unlikely at the time. It is a tactful way of suggesting to decision makers the possibility that they may be wrong. What If? Analysis serves a function similar to that of Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency. The analyst imagines that an event has occurred and then considers how the event could have unfolded.

High Impact/Low Probability Analysis is used to sensitize analysts and decision makers to the possibility that a low-probability event might actually happen and stimulate them to think about measures that could be taken to deal with the danger or to exploit the opportunity if it does occur. The analyst assumes the event has occurred, and then figures out how it could have happened and what the consequences might be.

Devil’s Advocacy is a technique in which a person who has been designated the Devil’s Advocate, usually by a responsible authority, makes the best possible case against a proposed analytic judgment, plan, or decision.

Red Team Analysis as described here is any project initiated by management to marshal the specialized substantive, cultural, or analytic skills required to challenge conventional wisdom about how an adversary or competitor thinks about an issue.

Delphi Method is a procedure for obtaining ideas, judgments, or forecasts electronically from a geographically dispersed panel of experts. It is a time-tested, extremely flexible procedure that can be used on any topic or issue for which expert judgment can contribute.

9.1 PREMORTEM ANALYSIS

The goal of a Premortem Analysis is to reduce the risk of surprise and the subsequent need for a postmortem investigation of what went wrong. It is an easy-to-use technique that enables a group of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions.

When to Use It

Premortem Analysis should be used by analysts who can devote a few hours to challenging their own analytic conclusions about the future to see where they might be wrong. It may be used by a single analyst but, like all structured analytic techniques, it is most effective when used in a small group.

In training exercises run by Gary Klein, the decision researcher from whose work in business we adapted this technique, trainees first formulated a plan of action and were then asked to imagine that it is several months or years into the future and that their plan has been implemented but has failed. They were then asked to describe how it might have failed, and, despite their original confidence in the plan, they could easily come up with multiple explanations for failure—reasons that were not identified when the plan was first proposed and developed.

Klein reported his trainees showed a “much higher level of candor” when evaluating their own plans after being exposed to the premortem exercise, as compared with other more passive attempts at getting them to self-critique their own plans.

Value Added

Briefly, there are two creative processes at work here. First, the questions are reframed, an exercise that typically elicits responses that are different from the original ones. Asking questions about the same topic, but from a different perspective, opens new pathways in the brain, as we noted in the introduction to this chapter. Second, the Premortem approach legitimizes dissent. For various reasons, many members of small groups suppress dissenting opinions, leading to premature consensus. In a Premortem Analysis, all analysts are asked to make a positive contribution to group goals by identifying weaknesses in the previous analysis.

Research has documented that an important cause of poor group decisions is the desire for consensus. This desire can lead to premature closure and agreement with majority views regardless of whether they are perceived as right or wrong. Attempts to improve group creativity and decision making often focus on ensuring that a wider range of information and opinions are presented to the group and given consideration.

In a candid newspaper column written long before he became CIA Director, Leon Panetta wrote that “an unofficial rule in the bureaucracy says that to ‘get along, go along.’ In other words, even when it is obvious that mistakes are being made, there is a hesitancy to report the failings for fear of retribution or embarrassment. That is true at every level, including advisers to the president. The result is a ‘don’t make waves’ mentality … that is just another fact of life you tolerate in a big bureaucracy.”

The Method

The best time to conduct a Premortem Analysis is shortly after a group has reached a conclusion on an action plan, but before any serious drafting of the report has been done. If the group members are not already familiar with the Premortem technique, the group leader, another group member, or a facilitator steps up and makes a statement along the lines of the following: “Okay, we now think we know the right answer, but we need to double-check this. To free up our minds to consider other possibilities, let’s imagine that we have made this judgment, our report has gone forward and been accepted, and now, x months or years later, we gain access to a crystal ball. Peering into this ball, we learn that our analysis was wrong, and things turned out very differently from the way we had expected. Now, working from that perspective in the future, let’s put our imaginations to work and brainstorm what could have possibly happened to cause our analysis to be so wrong.”

After all ideas are posted on the board and visible to all, the group discusses what it has learned by this exercise, and what action, if any, the group should take. This generation and initial discussion of ideas can often be accomplished in a single two-hour meeting, which is a small investment of time to undertake a systematic challenge to the group’s thinking.

 

9.2 STRUCTURED SELF-CRITIQUE

Structured Self-Critique is a systematic procedure that a small team or group can use to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. As it reviews responses to these questions, the team reassesses its overall confidence in its own judgment.

When to Use It

You can use Structured Self-Critique productively to look for weaknesses in any analytic explanation of events or estimate of the future. It is specifically recommended for use in the following ways:

  • As the next step after a Premortem Analysis raises unresolved questions about any estimated future outcome or event.
  • As a double check prior to the publication of any major product such as a National Intelligence Estimate.
  • As one approach to resolving conflicting opinions.

The Method

Start by re-emphasizing that all analysts in the group are now wearing a black hat. They are now critics, not advocates, and they will now be judged by their ability to find weaknesses in the previous analysis, not on the basis of their support for the previous analysis. Then work through the following topics or questions:

Sources of uncertainty: Identify the sources and types of uncertainty in order to set reasonable expectations for what the team might expect to achieve. Should one expect to find: (a) a single correct or most likely answer, (b) a most likely answer together with one or more alternatives that must also be considered, or (c) a number of possible explanations or scenarios for future development? To judge the uncertainty, answer these questions:

  • Is the question being analyzed a puzzle or a mystery? Puzzles have answers, and correct answers can be identified if enough pieces of the puzzle are found. A mystery has no single definitive answer; it depends upon the future interaction of many factors, some known and others unknown. Analysts can frame the boundaries of a mystery only “by identifying the critical factors and making an intuitive judgment about how they have interacted in the past and might interact in the future.”
  • How does the team rate the quality and timeliness of its evidence?
  • Are there a greater than usual number of assumptions because of insufficient evidence or the complexity of the situation?
  • Is the team dealing with a relatively stable situation or with a situation that is undergoing, or potentially about to undergo, significant change?

Analytic process: In the initial analysis, did the team do the following? Did it identify alternative hypotheses and seek out information on these hypotheses? Did it identify key assumptions? Did it seek a broad range of diverse opinions by including analysts from other offices, agencies, academia, or the private sector in the deliberations? If these steps were not taken, the odds of the team having a faulty or incomplete analysis are increased. Either consider doing some of these things now or lower the team’s level of confidence in its judgment.

Critical assumptions: Assuming that the team has already identified key assumptions, the next step is to identify the one or two assumptions that would have the greatest impact on the analytic judgment if they turned out to be wrong. In other words, if the assumption is wrong, the judgment will be wrong. How recent and well-documented is the evidence that supports each such assumption? Brainstorm circumstances that could cause each of these assumptions to be wrong, and assess the impact on the team’s analytic judgment if the assumption is wrong. Would the reversal of any of these assumptions support any alternative hypothesis? If the team has not previously identified key assumptions, it should do a Key Assumptions Check now.

Diagnostic evidence: Identify alternative hypotheses and the several most diagnostic items of evidence that enable the team to reject alternative hypotheses. For each item, brainstorm for any reasonable alternative interpretation of this evidence that could make it consistent with an alternative hypothesis. See Diagnostic Reasoning in chapter 7.

Information gaps: Are there gaps in the available information, or is some of the information so dated that it may no longer be valid? Is the absence of information readily explainable? How should absence of information affect the team’s confidence in its conclusions?

Missing evidence: Is there any evidence that one would expect to see in the regular flow of intelligence or open source reporting if the analytic judgment is correct, but that turns out not to be there?

Anomalous evidence: Is there any anomalous item of evidence that would have been important if it had been believed or if it could have been related to the issue of concern, but that was rejected as unimportant because it was not believed or its significance was not known? If so, try to imagine how this item might be a key clue to an emerging alternative hypothesis.

Changes in the broad environment: Driven by technology and globalization, the world as a whole seems to be experiencing social, technical, economic, environmental, and political changes at a faster rate than ever before in history. Might any of these changes play a role in what is happening or will happen? More broadly, what key forces, factors, or events could occur independently of the issue that is the subject of analysis that could have a significant impact on whether the analysis proves to be right or wrong?

Alternative decision models: If the analysis deals with decision making by a foreign government or nongovernmental organization (NGO), was the group’s judgment about foreign behavior based on a rational actor assumption? If so, consider the potential applicability of other decision models, specifically that the action was or will be the result of bargaining between political or bureaucratic forces, the result of standard organizational processes, or the whim of an authoritarian leader. If information for a more thorough analysis is lacking, consider the implications of that for confidence in the team’s judgment.

Cultural expertise: If the topic being analyzed involves a foreign or otherwise unfamiliar culture or subculture, does the team have or has it obtained cultural expertise on thought processes in that culture?

Deception: Does another country, NGO, or commercial competitor about which the team is making judgments have a motive, opportunity, or means for engaging in deception to influence U.S. policy or to change your behavior? Does this country, NGO, or competitor have a past history of engaging in denial, deception, or influence operations?

9.3 WHAT IF? ANALYSIS

What If? Analysis imagines that an unexpected event has occurred with potential major impact. Then, with the benefit of “hindsight,” the analyst figures out how this event could have come about and what the consequences might be.

When to Use It

This technique should be in every analyst’s toolkit. It is an important technique for alerting decision makers to an event that could happen, even if it may seem unlikely at the present time. What If? Analysis serves a function similar to Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable the decision maker to plan ahead for that contingency. It is most appropriate when two conditions are present:

  • A mental model is well ingrained within the analytic or the customer community that a certain event will not happen.
  • There is a perceived need for others to focus on the possibility that this event could actually happen and to consider the consequences if it does occur.

Value Added

Shifting the focus from asking whether an event will occur to imagining that it has occurred and then explaining how it might have happened opens the mind to think in different ways. What If? Analysis shifts the discussion from, “How likely is it?” to these questions:

  • How could it possibly come about?
  • What would be the impact?
  • Has the possibility of the event happening increased?

The technique also gives decision makers the following additional benefits:

  • A better sense of what they might be able to do today to prevent an untoward development from occurring, or what they might do today to leverage an opportunity for advancing their interests.
  • A list of specific indicators to monitor to help determine if the chances of a development actually occurring are increasing.

The What If? technique is a useful tool for exploring unanticipated or unlikely scenarios that are within the realm of possibility and that would have significant consequences should they come to pass.

9.4 HIGH IMPACT/LOW PROBABILITY ANALYSIS

High Impact/Low Probability Analysis provides decision makers with early warning that a seemingly unlikely event with major policy and resource repercussions might actually occur.

When to Use It

High Impact/Low Probability Analysis should be used when one wants to alert decision makers to the possibility that a seemingly long-shot development that would have a major policy or resource impact may be more likely than previously anticipated. Events that would have merited such treatment before they occurred include the reunification of Germany in 1990 and the collapse of the Soviet Union in 1991.

The more nuanced and concrete the analyst’s depiction of the plausible paths to danger, the easier it is for a decision maker to develop a package of policies to protect or advance vital U.S. interests.

Potential Pitfalls

Analysts need to be careful when communicating the likelihood of unlikely events. The meaning of the word “unlikely” can be interpreted as meaning anywhere from 1 percent to 25 percent probability, while “highly unlikely” may mean from 1 percent to 10 percent.

The Method

An effective High Impact/Low Probability Analysis involves these steps:

  • Clearly describe the unlikely event.
  • Define the high-impact consequences if this event occurs. Consider both the actual event and the secondary impacts of the event.
  • Identify any recent information or reporting suggesting that the likelihood of the unlikely event occurring may be increasing.
  • Postulate additional triggers that would propel events in this unlikely direction or factors that would greatly accelerate timetables, such as a botched government response, the rise of an energetic challenger, a major terrorist attack, or a surprise electoral outcome that benefits U.S. interests.
  • Develop one or more plausible pathways that would explain how this seemingly unlikely event could unfold. Focus on the specifics of what must happen at each stage of the process for the train of events to play out.
  • Generate a list of indicators that would help analysts and decision makers recognize that events were beginning to unfold in this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

Once the list of indicators has been developed, the analyst must periodically review the list. Such periodic reviews help analysts overcome prevailing mental models that the events being considered are too unlikely to merit serious attention.
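
As a minimal illustration of that bookkeeping, the sketch below tracks a list of indicators and flags when enough of them have been observed to warrant revisiting the original judgment. The indicator wording and the one-half trigger are invented for illustration; nothing in the technique prescribes them.

```python
# Hypothetical indicator list for a High Impact/Low Probability review.
# The indicators and the 0.5 threshold are illustrative assumptions.

indicators = {
    "opposition unifies behind a single leader": True,
    "security forces defect in provincial towns": False,
    "currency loses half its value in a month": True,
    "state media begin criticizing the leadership": False,
}

share_observed = sum(indicators.values()) / len(indicators)
print(f"{share_observed:.0%} of indicators observed")
if share_observed >= 0.5:
    print("threshold reached: revisit the 'unlikely' judgment")
```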

Relationship to Other Techniques

High Impact/Low Probability Analysis is sometimes confused with What If? Analysis. Both deal with low-probability or unlikely events. High Impact/Low Probability Analysis is primarily a vehicle for warning decision makers that recent, unanticipated developments suggest that an event previously deemed highly unlikely might actually occur. Based on recent evidence or information, it projects forward to discuss what could occur and the consequences if the event does occur. It challenges the conventional wisdom. What If? Analysis does not require new or anomalous information to serve as a trigger. It reframes the question by assuming that a surprise event has happened.

9.5 DEVIL’S ADVOCACY

Devil’s Advocacy is a process for critiquing a proposed analytic judgment, plan, or decision, usually by a single analyst not previously involved in the deliberations that led to the proposed judgment, plan, or decision.

The origins of devil’s advocacy “lie in a practice of the Roman Catholic Church in the early 16th century. When a person was proposed for beatification or canonization to sainthood, someone was assigned the role of critically examining the life and miracles attributed to that individual; his duty was to especially bring forward facts that were unfavorable to the candidate.”

When to Use It

Devil’s Advocacy is most effective when initiated by a manager as part of a strategy to ensure that alternative solutions are thoroughly considered. The following are examples of well-established uses of Devil’s Advocacy that are widely regarded as good management practices:

* Before making a decision, a policymaker or military commander asks for a Devil’s Advocate analysis of what could go wrong.

* An intelligence organization designates a senior manager as a Devil’s Advocate to oversee the process of reviewing and challenging selected assessments.

* A manager commissions a Devil’s Advocacy analysis when he or she is concerned about seemingly widespread unanimity on a critical issue throughout the Intelligence Community, or when the manager suspects that the mental model of analysts working an issue for a long time has become so deeply ingrained that they are unable to see the significance of recent changes.

Within the Intelligence Community, Devil’s Advocacy is sometimes defined as a form of self-critique… We do not support this approach for the following reasons:

* Calling such a technique Devil’s Advocacy is inconsistent with the historic concept of Devil’s Advocacy that calls for investigation by an independent outsider.

* Research shows that a person playing the role of a Devil’s Advocate, without actually believing it, is significantly less effective than a true believer and may even be counterproductive. Apparently, more attention and respect are accorded to someone with the courage to advance their own minority view than to someone who is known to be only playing a role. If group members see the Devil’s Advocacy as an analytic exercise they have to put up with, rather than the true belief of one of their members who is courageous enough to speak out, this exercise may actually enhance the majority’s original belief—“a smugness that may occur because one assumes one has considered alternatives though, in fact, there has been little serious reflection on other possibilities.” What the team learns from the Devil’s Advocate presentation may be only how to better defend the team’s own entrenched position.

* There are other forms of self-critique, especially Premortem Analysis and Structured Self-Critique as described in this chapter, which may be more effective in prompting even a cohesive, heterogeneous team to question their mental model and to analyze alternative perspectives.

9.6 RED TEAM ANALYSIS

The term “red team” or “red teaming” has several meanings. One definition is that red teaming is “the practice of viewing a problem from an adversary or competitor’s perspective.” This is how red teaming is commonly viewed by intelligence analysts.

When to Use It

Management should initiate a Red Team Analysis whenever there is a perceived need to challenge the conventional wisdom on an important issue or whenever the responsible line office is perceived as lacking the level of cultural expertise required to fully understand an adversary’s or competitor’s point of view.

Value Added

Red Team Analysis can help free analysts from their own well-developed mental model—their own sense of rationality, cultural norms, and personal values. When analyzing an adversary, the Red Team approach requires that an analyst change his or her frame of reference from that of an “observer” of the adversary or competitor, to that of an “actor” operating within the adversary’s cultural and political milieu. This reframing or role playing is particularly helpful when an analyst is trying to replicate the mental model of authoritarian leaders, terrorist cells, or non-Western groups that operate under very different codes of behavior or motivations than those to which most Americans are accustomed.

9.7 DELPHI METHOD

Delphi is a method for eliciting ideas, judgments, or forecasts from a group of experts who may be geographically dispersed. It is different from a survey in that there are two or more rounds of questioning.

After the first round of questions, a moderator distributes all the answers and explanations of the answers to all participants, often anonymously. The expert participants are then given an opportunity to modify or clarify their previous responses, if so desired, on the basis of what they have seen in the responses of the other participants. A second round of questions builds on the results of the first round, drills down into greater detail, or moves to a related topic. There is great flexibility in the nature and number of rounds of questions that might be asked.

Over the years, Delphi has been used in a wide variety of ways, and for an equally wide variety of purposes. Although many Delphi projects have focused on developing a consensus of expert judgment, a variant called Policy Delphi is based on the premise that the decision maker is not interested in having a group make a consensus decision, but rather in having the experts identify alternative policy options and present all the supporting evidence for and against each option. That is the rationale for including Delphi in this chapter on challenge analysis. It can be used to identify divergent opinions that may be worth exploring.

One group of Delphi scholars advises that the Delphi technique “can be used for nearly any problem involving forecasting, estimation, or decision making”—as long as the problem is not so complex or so new as to preclude the use of expert judgment. These Delphi advocates report using it for diverse purposes that range from “choosing between options for regional development, to predicting election outcomes, to deciding which applicants should be hired for academic positions, to predicting how many meals to order for a conference luncheon.”

Value Added

One of Sherman Kent’s “Principles of Intelligence Analysis,” which are taught at the CIA’s Sherman Kent School for Intelligence Analysis, is “Systematic Use of Outside Experts as a Check on In-House Blinders.” Consultation with relevant experts in academia, business, and nongovernmental organizations is also encouraged by Intelligence Community Directive No. 205, on Analytic Outreach, dated July 2008.

The Method

In a Delphi project, a moderator (analyst) sends a questionnaire to a panel of experts who may be in different locations. The experts respond to these questions and usually are asked to provide short explanations for their responses. The moderator collates the results from this first questionnaire and sends the collated responses back to all panel members, requesting them to reconsider their responses based on what they see and learn from the other experts’ responses and explanations. Panel members may also be asked to answer another set of questions.
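
The collation step lends itself to simple tooling. The sketch below reduces the moderator's role to gathering anonymous numeric estimates, feeding summary statistics back to the panel, and repeating. The expert names, estimates, and the choice of median-plus-range feedback are our illustrative assumptions, not part of the method itself.

```python
# Minimal sketch of the Delphi collation loop. Real panels also
# exchange short written rationales; only the numeric bookkeeping
# is shown here. All names and values are invented.

from statistics import median

def collate(estimates: dict[str, float]) -> dict[str, float]:
    """Summary fed back anonymously to the panel after each round."""
    values = sorted(estimates.values())
    return {"low": values[0], "median": median(values), "high": values[-1]}

# Round 1: each expert's probability estimate for some future event.
round_1 = {"expert_a": 0.20, "expert_b": 0.55, "expert_c": 0.40}
print(collate(round_1))  # {'low': 0.2, 'median': 0.4, 'high': 0.55}

# Round 2: experts revise or reaffirm after reading the feedback and
# one another's rationales; the spread typically narrows.
round_2 = {"expert_a": 0.30, "expert_b": 0.45, "expert_c": 0.40}
print(collate(round_2))  # {'low': 0.3, 'median': 0.4, 'high': 0.45}
```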

Examples

To show how Delphi can be used for intelligence analysis, we have developed three illustrative applications:

* Evaluation of another country’s policy options: The Delphi project manager or moderator identifies several policy options that a foreign country might choose. The moderator then asks a panel of experts on the country to rate the desirability and feasibility of each option, from the other country’s point of view, on five-point scales ranging from “Very Desirable” to “Very Undesirable” and from “Definitely Feasible” to “Definitely Infeasible.” Panel members are also asked to identify and assess any other policy options that ought to be considered and to identify the top two or three arguments or items of evidence that guided their judgments. A collation of all responses is sent back to the panel with a request for members to do one of the following: reconsider their position in view of others’ responses, provide further explanation of their judgments, or reaffirm their previous response. In a second round of questioning, it may be desirable to list key arguments and items of evidence and ask the panel to rate them on their validity and their importance, again from the other country’s perspective.

* Analysis of Alternative Hypotheses: A panel of outside experts is asked to estimate the probability of each hypothesis in a set of mutually exclusive hypotheses where the probabilities must add up to 100 percent. This could be done as a stand-alone project or to double-check an already completed Analysis of Competing Hypotheses (ACH) analysis (chapter 7). If two analyses using different analysts and different methods arrive at the same conclusion, this is grounds for a significant increase in confidence in the conclusion. If the analyses disagree, that may also be useful to know as one can then seek to understand the rationale for the different judgments. One possible way to aggregate such a panel’s responses is sketched after this list.

* Warning analysis or monitoring a situation over time: An analyst asks a panel of experts to estimate the probability of a future event. This might be either a single event for which the analyst is monitoring early warning indicators or a set of scenarios for which the analyst is monitoring milestones to determine the direction in which events seem to be moving.
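
For the Analysis of Alternative Hypotheses application above, the panel's responses must somehow be combined. The sketch below averages each hypothesis's probability across experts and renormalizes so the set still sums to 100 percent. The averaging rule, hypothesis labels, and numbers are all our illustrative assumptions, not prescribed by the Delphi Method.

```python
# One possible aggregation rule (ours, not prescribed by Delphi):
# average each hypothesis's probability across the panel, then
# renormalize so the set still sums to 100 percent. The hypothesis
# labels and all numbers are invented.

def aggregate(panel: list[dict[str, float]]) -> dict[str, float]:
    hypotheses = panel[0].keys()
    means = {h: sum(p[h] for p in panel) / len(panel) for h in hypotheses}
    total = sum(means.values())
    return {h: round(100 * m / total, 1) for h, m in means.items()}

panel = [
    {"H1: coup": 20, "H2: negotiated exit": 50, "H3: status quo": 30},
    {"H1: coup": 10, "H2: negotiated exit": 60, "H3: status quo": 30},
    {"H1: coup": 25, "H2: negotiated exit": 45, "H3: status quo": 30},
]
print(aggregate(panel))
# {'H1: coup': 18.3, 'H2: negotiated exit': 51.7, 'H3: status quo': 30.0}
```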

10 Conflict Management

Challenge analysis frequently leads to the identification and confrontation of opposing views. That is, after all, the purpose of challenge analysis, but two important questions are raised. First, how can confrontation be managed so that it becomes a learning experience rather than a battle between determined adversaries? Second, in an analysis of any topic with a high degree of uncertainty, how can one decide if one view is wrong or if both views have merit and need to be discussed in an analytic report?

The Intelligence Community’s procedure for dealing with differences of opinion has often been to force a consensus, water down the differences, or add a dissenting footnote to an estimate. Efforts are under way to move away from this practice, and we share the hopes of many in the community that this approach will become increasingly rare as members of the Intelligence Community embrace greater interagency collaboration early in the analytic process, rather than mandated coordination at the end of the process after all parties are locked into their positions. One of the principal benefits of using structured analytic techniques for interoffice and interagency collaboration is that these techniques identify differences of opinion early in the analytic process. This gives time for the differences to be at least understood, if not resolved, at the working level before management becomes involved.

If an analysis meets rigorous standards and conflicting views still remain, decision makers are best served by an analytic product that deals directly with the uncertainty rather than minimizing it or suppressing it. The greater the uncertainty, the more appropriate it is to go forward with a product that discusses the most likely assessment or estimate and gives one or more alternative possibilities. Factors to be considered when assessing the amount of uncertainty include the following:

* An estimate of the future generally has more uncertainty than an assessment of a past or current event.

* Mysteries, for which there are no knowable answers, are far more uncertain than puzzles, for which an answer does exist if one could only find it.

* The more assumptions that are made, the greater the uncertainty. Assumptions about intent or capability, and whether or not they have changed, are especially critical.

* Analysis of human behavior or decision making is far more uncertain than analysis of technical data.

* The behavior of a complex dynamic system is more uncertain than that of a simple system. The more variables and stakeholders involved in a system, the more difficult it is to foresee what might happen.

If the decision is to go forward with a discussion of alternative assessments or estimates, the next step might be to produce any of the following:

* A comparative analysis of opposing views in a single report. This calls for analysts to identify the sources and reasons for the uncertainty (e.g., assumptions, ambiguities, knowledge gaps), consider the implications of alternative assessments or estimates, determine what it would take to resolve the uncertainty, and suggest indicators for future monitoring that might provide early warning of which alternative is correct.

* An analysis of alternative scenarios as described in chapter 6.

* A What If? Analysis or High Impact/Low Probability Analysis as described in chapter 9.

* A report that is clearly identified as a “second opinion.”

Overview of Techniques

Adversarial Collaboration in essence is an agreement between opposing parties on how they will work together in an effort to resolve their differences, to gain a better understanding of how and why they differ, or, as often happens, to collaborate on a joint paper explaining the differences. Six approaches to implementing adversarial collaboration are described.

Structured Debate is a planned debate of opposing points of view on a specific issue in front of a “jury of peers,” senior analysts, or managers. As a first step, each side writes up its best possible argument for its position and passes this summation to the opposing side. The next step is an oral debate that focuses on refuting the other side’s arguments rather than further supporting one’s own. The goal is to elicit and compare the arguments against each side’s position. If neither argument can be refuted, perhaps both merit some consideration in the analytic report.

10.1 ADVERSARIAL COLLABORATION

Adversarial Collaboration is an agreement between opposing parties about how they will work together to resolve or at least gain a better understanding of their differences. Adversarial Collaboration is a relatively new concept championed by Daniel Kahneman, the psychologist who, along with Amos Tversky, initiated much of the research on cognitive biases described in Richards Heuer’s Psychology of Intelligence Analysis… He commented as follows on Adversarial Collaboration:

Adversarial collaboration involves a good-faith effort to conduct debates by carrying out joint research—in some cases there may be a need for an agreed arbiter to lead the project and collect the data. Because there is no expectation of the contestants reaching complete agreement at the end of the exercise, adversarial collaboration will usually lead to an unusual type of joint publication, in which disagreements are laid out as part of a jointly authored paper.

Kahneman’s approach to Adversarial Collaboration involves agreement on empirical tests for resolving a dispute and conducting those tests with the help of an impartial arbiter. A joint report describes the tests, states what has been learned that both sides agree on, and provides interpretations of the test results on which they disagree.

When to Use It

Adversarial Collaboration should be used only if both sides are open to discussion of an issue. If one side is fully locked into its position and has repeatedly rejected the other side’s arguments, this technique is unlikely to be successful. It is then more appropriate to use Structured Debate in which a decision is made by an independent arbiter after listening to both sides.

Value Added

Adversarial Collaboration can help opposing analysts see the merit of another group’s perspective. If successful, it will help both parties gain a better understanding of what assumptions or evidence is behind their opposing opinions on an issue and to explore the best way of dealing with these differences. Can one side be shown to be wrong, or should both positions be reflected in any report on the subject? Can there be agreement on indicators to show the direction in which events seem to be moving?

The Method

Six approaches to Adversarial Collaboration are described here. What they all have in common is the forced requirement to understand and address the other side’s position rather than simply dismiss it. Mutual understanding of the other side’s position is the bridge to productive collaboration. These six techniques are not mutually exclusive; in other words, one might use several of them for any specific project.

Key Assumptions Check:

Analysis of Competing Hypotheses:

Argument Mapping:

Mutual Understanding:

Joint Escalation:

The analysts should be required to prepare a joint statement describing the disagreement and to present it jointly to their superiors. This requires each analyst to understand and address, rather than simply dismiss, the other side’s position. It also ensures that managers have access to multiple perspectives on the conflict, its causes, and the various ways it might be resolved.

The Nosenko Approach: Yuriy Nosenko was a Soviet intelligence officer who defected to the United States in 1964. Whether he was a true defector or a Soviet plant was a subject of intense and emotional controversy within the CIA for more than a decade. In the minds of some, this historic case is still controversial.

The interesting point here is the ground rule that the team was instructed to follow. After reviewing the evidence, each officer identified those items of evidence thought to be of critical importance in making a judgment on Nosenko’s bona fides. Any item that one officer stipulated as critically important had to be addressed by the other two members.

It turned out that fourteen items were stipulated by at least one of the team members and had to be addressed by both of the others. Each officer prepared his own analysis, but they all had to address the same fourteen issues. Their report became known as the “Wise Men” report.

10.2 STRUCTURED DEBATE

A Structured Debate is a planned debate between analysts or analytic teams holding opposing points of view on a specific issue. It is conducted according to a set of rules before an audience, which may be a “jury of peers” or one or more senior analysts or managers.

When to Use It

Structured Debate is called for when there is a significant difference of opinion within or between analytic units or within the policymaking community, or when Adversarial Collaboration has been unsuccessful or is impractical, and it is necessary to make a choice between two opposing opinions or to go forward with a comparative analysis of both. A Structured Debate requires a significant commitment of analytic time and resources.

Value Added

In the method proposed here, each side presents its case in writing, and the written report is read by the other side and the audience prior to the debate. The oral debate then focuses on refuting the other side’s position. Glib and personable speakers can always make their arguments for a position sound persuasive. Effectively refuting the other side’s position is a very different ball game, however. The requirement to refute the other side’s position brings to the debate an important feature of the scientific method, that the most likely hypothesis is actually the one with the least evidence against it as well as good evidence for it.
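
That principle, that the strongest hypothesis is the one that survives attempts at refutation, can be reduced to a toy calculation, as in the sketch below. The hypotheses and evidence tallies are invented for illustration.

```python
# Toy illustration: rank hypotheses by how little credible evidence
# refutes them, not by how much supports them. Tallies are invented.

evidence_against = {
    "hypothesis 1": ["report 3", "report 7"],
    "hypothesis 2": [],
    "hypothesis 3": ["report 2"],
}

survivor = min(evidence_against, key=lambda h: len(evidence_against[h]))
print(survivor)  # hypothesis 2: nothing yet refutes it
```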

The Method

Start by defining the conflict to be debated. If possible, frame the conflict in terms of competing and mutually exclusive hypotheses. Ensure that all sides agree with the definition. Then follow these steps:

*  Identify individuals or teams to develop the best case that can be made for each hypothesis.

*  Each side writes up the best case for its point of view. This written argument must be structured with an explicit presentation of key assumptions, key pieces of evidence, and careful articulation of the logic behind the argument.

* The written arguments are exchanged with the opposing side, and the two sides are given time to develop counterarguments to refute the opposing side’s position.

* The debate phase is conducted in the presence of a jury of peers, senior analysts, or managers who will provide guidance after listening to the debate. If desired, there might also be an audience of interested observers.

* The debate starts with each side presenting a brief (maximum five minutes) summary of its argument for its position. The jury and the audience are expected to have read each side’s full argument.

* Each side then presents to the audience its rebuttal of the other side’s written position. The purpose here is to proceed in the oral arguments by systematically refuting alternative hypotheses rather than by presenting more evidence to support one’s own argument. This is the best way to evaluate the strengths of the opposing arguments.

* After each side has presented its rebuttal argument, the other side is given an opportunity to refute the rebuttal.

* The jury asks questions to clarify the debaters’ positions or gain additional insight needed to pass judgment on the debaters’ positions.

* The jury discusses the issue and passes judgment. The winner is the side that makes the best argument refuting the other side’s position, not the side that makes the best argument supporting its own position. The jury may also recommend possible next steps for further research or intelligence collection efforts. If neither side can refute the other’s arguments, it may be that both sides have a valid argument that should be represented in any subsequent analytic report.

Origins of This Technique

The history of debate goes back to the Socratic dialogues in ancient Greece and even before, and many different forms of debate have evolved since then. Richards Heuer formulated the idea of focusing the debate between intelligence analysts on refuting the other side’s argument rather than supporting one’s own argument.

 

11 Decision Support

Managers, commanders, planners, and other decision makers all make choices or tradeoffs among competing goals, values, or preferences. Because of limitations in human short-term memory, we usually cannot keep all the pros and cons of multiple options in mind at the same time. That causes us to focus first on one set of problems or opportunities and then another, a situation that often leads to vacillation or procrastination in making a firm decision. Some decision-support techniques help overcome this cognitive limitation by laying out all the options and interrelationships in graphic form so that analysts can test the results of alternative options while still keeping the problem as a whole in view. Other techniques help decision makers untangle the complexity of a situation or define the opportunities and constraints in the environment in which the choice needs to be made.

 

It is not the analyst’s job to make the choices or decide on the tradeoffs, but intelligence analysts can and should use decision-support techniques to provide timely support to managers, commanders, planners, and decision makers who do make these choices. The Director of National Intelligence’s Vision 2015 foresees intelligence driven by customer needs and a “shifting focus from today’s product-centric model toward a more interactive model that blurs the distinction between producer and consumer.”

Caution is in order, however, whenever one thinks of predicting or even explaining another person’s decision, regardless of whether the person is of similar background or not. People do not always act rationally in their own best interests. Their decisions are influenced by emotions and habits, as well as by what others might think and by values of which an outside observer may not be aware.

The same is true of organizations and governments. One of the most common analytic errors is the assumption that an organization or a government will act rationally, that is, in its own best interests. There are three major problems with this assumption:

* Even if the assumption is correct, the analysis may be wrong, because foreign organizations and governments typically see their own best interests quite differently from the way Americans see them.

* Organizations and governments do not always have a clear understanding of their own best interests. Governments in particular typically have a variety of conflicting interests.

* The assumption that organizations and governments commonly act rationally in their own best interests is not always true. All intelligence analysts seeking to understand the behavior of another country should be familiar with Graham Allison’s analysis of U.S. and Soviet decision making during the Cuban missile crisis. It describes three different models for how governments make decisions—bureaucratic bargaining processes and standard organizational procedures as well as the rational actor model.

Decision making and decision analysis are large and diverse fields of study and research. The decision-support techniques described in this chapter are only a small sample of what is available, but they do meet many of the basic requirements for intelligence analysis.

Overview of Techniques

Complexity Manager is a simplified approach to understanding complex systems—the kind of systems in which many variables are related to each other and may be changing over time. Government policy decisions are often aimed at changing a dynamically complex system. It is because of this dynamic complexity that many policies fail to meet their goals or have unforeseen and unintended consequences. Use Complexity Manager to assess the chances for success or failure of a new or proposed policy, identify opportunities for influencing the outcome of any situation, determine what would need to change in order to achieve a specified goal, or identify potential unintended consequences from the pursuit of a policy goal.

Decision Matrix is a simple but powerful device for making tradeoffs between conflicting goals or preferences. An analyst lists the decision options or possible choices, the criteria for judging the options, the weights assigned to each of these criteria, and an evaluation of the extent to which each option satisfies each of the criteria. This process will show the best choice—based on the values the analyst or a decision maker puts into the matrix. By studying the matrix, one can also analyze how the best choice would change if the values assigned to the selection criteria were changed or if the ability of an option to satisfy a specific criterion were changed. It is almost impossible for an analyst to keep track of these factors effectively without such a matrix, as one cannot keep all the pros and cons in working memory at the same time. A Decision Matrix helps the analyst see the whole picture.
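
A minimal sketch of that computation follows. The options, criteria, weights, and scores are invented, and the 1-to-5 scoring scale is simply one common convention, not a requirement of the technique.

```python
# Minimal Decision Matrix: weighted criteria scored for each option.
# All options, criteria, weights, and scores are illustrative.

weights = {"cost": 0.5, "speed": 0.3, "risk": 0.2}  # must sum to 1.0

options = {  # how well each option satisfies each criterion (1-5)
    "Option A": {"cost": 4, "speed": 2, "risk": 5},
    "Option B": {"cost": 3, "speed": 5, "risk": 2},
}

def score(ratings: dict[str, float]) -> float:
    return sum(weights[c] * r for c, r in ratings.items())

for name, ratings in options.items():
    print(name, round(score(ratings), 2))
# Option A 3.6, Option B 3.4. Raising the weight on speed to 0.5
# (and lowering cost to 0.3) flips the best choice to Option B:
# exactly the sensitivity question the matrix makes visible.
```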

Force Field Analysis is a technique that analysts can use to help a decision maker decide how to solve a problem or achieve a goal, or to determine whether it is possible to do so. The analyst identifies and assigns weights to the relative importance of all the factors or forces that are either a help or a hindrance in solving the problem or achieving the goal. After organizing all these factors in two lists, pro and con, with a weighted value for each factor, the analyst or decision maker is in a better position to recommend strategies that would be most effective in either strengthening the impact of the driving forces or reducing the impact of the restraining forces.
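
The corresponding bookkeeping is equally simple, as the sketch below shows. The forces and their weights are invented for illustration.

```python
# Minimal Force Field tally: weighted driving and restraining forces.
# Forces and weights (1-5) are illustrative assumptions.

driving = {"leadership support": 4, "public pressure": 3, "donor funding": 2}
restraining = {"bureaucratic inertia": 5, "legal obstacles": 2}

print("net balance:", sum(driving.values()) - sum(restraining.values()))
# 9 - 7 = 2, slightly favoring success

# The heaviest restraining force is the natural first target for a
# strategy aimed at weakening resistance.
print(max(restraining, key=restraining.get))  # bureaucratic inertia
```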

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of analysts and decision makers to jump to conclusions before conducting a full analysis of a problem, as often happens in group meetings. The first step is for the analyst or the project team to make lists of Pros and Cons. If the analyst or team is concerned that people are being unduly negative about an idea, he or she looks for ways to “Fix” the Cons, that is, to explain why the Cons are unimportant or even to transform them into Pros. If concerned that people are jumping on the bandwagon too quickly, the analyst tries to “Fault” the Pros by exploring how they could go wrong. The analyst can also do both, Fixing the Cons and Faulting the Pros. Of the various techniques described in this chapter, this one is probably the easiest and quickest to use.

SWOT Analysis is used to develop a plan or strategy for achieving a specified goal. (SWOT is an acronym for Strengths, Weaknesses, Opportunities, and Threats.) In using this technique, the analyst first lists the strengths and weaknesses in the organization’s ability to achieve a goal, and then lists opportunities and threats in the external environment that would either help or hinder the organization from reaching the goal.

11.1 COMPLEXITY MANAGER

Complexity Manager helps analysts and decision makers understand and anticipate changes in complex systems. As used here, the word complexity encompasses any distinctive set of interactions that are more complicated than even experienced intelligence analysts can think through solely in their head.

When to Use It

As a policy support tool, Complexity Manager can be used to assess the chances for success or failure of a new or proposed program or policy, and opportunities for influencing the outcome of any situation. It also can be used to identify what would have to change in order to achieve a specified goal, as well as unintended consequences from the pursuit of a policy goal.

Value Added

Complexity Manager can often improve an analyst’s understanding of a complex situation without the time delay and cost required to build a computer model and simulation. The steps in the Complexity Manager technique are the same as the initial steps required to build a computer model and simulation. These are identification of the relevant variables or actors, analysis of all the interactions between them, and assignment of rough weights or other values to each variable or interaction.

Scientists who specialize in the modeling and simulation of complex social systems report that “the earliest—and sometimes most significant—insights occur while reducing a problem to its most fundamental players, interactions, and basic rules of behavior,” and that “the frequency and importance of additional insights diminishes exponentially as a model is made increasingly complex.”

Complexity Manager does not itself provide analysts with answers. It enables analysts to find a best possible answer by organizing in a systematic manner the jumble of information about many relevant variables. It enables analysts to get a grip on the whole problem, not just one part of the problem at a time. Analysts can then apply their expertise in making an informed judgment about the problem. This structuring of the analyst’s thought process also provides the foundation for a well-organized report that clearly presents the rationale for each conclusion. This may also lead to some form of visual presentation, such as a Concept Map or Mind Map, or a causal or influence diagram.

The Method

Complexity Manager requires the analyst to proceed through eight specific steps:

  1. Define the problem: State the problem (plan, goal, outcome) to be analyzed, including the time period to be covered by the analysis.
  2. Identify and list relevant variables: Use one of the brainstorming techniques described in chapter 4 to identify the significant variables (factors, conditions, people, etc.) that may affect the situation of interest during the designated time period. Think broadly to include organizational or environmental constraints that are beyond anyone’s ability to control. If the goal is to estimate the status of one or more variables several years in the future, those variables should be at the top of the list. Group the other variables in some logical manner with the most important variables at the top of the list.
  3. Create a Cross-Impact Matrix: Create a matrix in which the number of rows and the number of columns are each equal to the number of variables plus one. Leaving the cell at the top left corner of the matrix blank, enter all the variables in the cells in the row across the top of the matrix and the same variables in the column down the left side. The matrix then has a cell for recording the nature of the relationship between all pairs of variables. This is called a Cross-Impact Matrix—a tool for assessing the two-way interaction between each pair of variables. Depending on the number of variables and the length of their names, it may be convenient to use the variables’ letter designations across the top of the matrix rather than the full names.
  4. Assess the interaction between each pair of variables: Use a diverse team of experts on the relevant topic to analyze the strength and direction of the interaction between each pair of variables, and enter the results in the relevant cells of the matrix. For each pair of variables, ask the question: Does this variable impact the paired variable in a manner that will increase or decrease the impact or influence of that variable?

There are two different ways one can record the nature and strength of impact that one variable has on another. Figure 11.1 uses plus and minus signs to show whether the variable being analyzed has a positive or negative impact on the paired variable. The size of the plus or minus sign signifies the strength of the impact on a three-point scale. The small plus or minus shows a weak impact, the medium size a medium impact, and the large size a strong impact. If the variable being analyzed has no impact on the paired variable, the cell is left empty. If a variable might change in a way that could reverse the direction of its impact, from positive to negative or vice versa, this is shown by using both a plus and a minus sign.

After rating each pair of variables, and before doing further analysis, consider pruning the matrix to eliminate variables that are unlikely to have a significant effect on the outcome. The relative significance of each variable can be measured by adding up the weighted values in its row and column (see the sketch after the method steps).

  5. Analyze direct impacts: Write several paragraphs about the impact of each variable, starting with variable A. Describe each variable in more detail if clarification is needed. Identify all the variables that impact on that variable with a rating of 2 or 3, and briefly explain the nature, direction, and, if appropriate, the timing of this impact. How strong is it and how certain is it? When might these impacts be observed? Will the impacts be felt only in certain conditions?
  6. Analyze loops and indirect impacts: The matrix shows only the direct impact of one variable on another. When you are analyzing the direct impacts variable by variable, there are several things to look for and make note of. One is feedback loops. For example, if variable A has a positive impact on variable B, and variable B also has a positive impact on variable A, this is a positive feedback loop. Or there may be a three-variable loop, from A to B to C and back to A. The variables in a loop gain strength from each other, and this boost may enhance their ability to influence other variables. Another thing to look for is circumstances where the causal relationship between variables A and B is necessary but not sufficient for something to happen. For example, variable A has the potential to influence variable B, and may even be trying to influence variable B, but it can do so effectively only if variable C is also present. In that case, variable C is an enabling variable and takes on greater significance than it ordinarily would have.

All variables are either static or dynamic. Static variables are expected to remain more or less unchanged during the period covered by the analysis. Dynamic variables are changing or have the potential to change. The analysis should focus on the dynamic variables as these are the sources of surprise in any complex system. Determining how these dynamic variables interact with other variables and with each other is critical to any forecast of future developments. Dynamic variables can be either predictable or unpredictable. Predictable change includes established trends or established policies that are in the process of being implemented. Unpredictable change may be a change in leadership or an unexpected change in policy or available resources.

  7. Draw conclusions: Using data about the individual variables assembled in Steps 5 and 6, draw conclusions about the system as a whole. What is the most likely outcome or what changes might be anticipated during the specified time period? What are the driving forces behind that outcome? What things could happen to cause a different outcome? What desirable or undesirable side effects should be anticipated? If you need help to sort out all the relationships, it may be useful to sketch out by hand a diagram showing all the causal relationships. A Concept Map (chapter 4) may be useful for this purpose. If a diagram is helpful during the analysis, it may also be helpful to the reader or customer to include such a diagram in the report.
  8. Conduct an opportunity analysis: When appropriate, analyze what actions could be taken to influence this system in a manner favorable to the primary customer of the analysis.
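To make the bookkeeping in Steps 3 through 6 concrete, here is a minimal sketch in Python of a Cross-Impact Matrix with rough numeric weights. The variables, the -3 to +3 weighting scale, and the simple significance and loop checks are illustrative assumptions, not part of the technique’s specification:

```python
# Minimal sketch of a Cross-Impact Matrix (hypothetical variables).
# Impacts are recorded on a numeric scale (+/-1 weak, +/-2 medium,
# +/-3 strong, 0 = no impact) instead of the sized plus/minus signs
# described for Figure 11.1.

variables = ["A: leadership stability", "B: economic growth", "C: foreign investment"]

# impact[i][j] = impact of variable i on variable j (diagonal unused)
impact = [
    [0,  2,  3],   # A moderately strengthens B, strongly strengthens C
    [1,  0,  2],   # B weakly strengthens A, moderately strengthens C
    [0, -1,  0],   # C slightly weakens B; no direct effect on A
]

n = len(variables)

# Pruning aid: sum absolute weights across each row (influence exerted)
# and down each column (influence received).
for i, name in enumerate(variables):
    exerted = sum(abs(impact[i][j]) for j in range(n) if j != i)
    received = sum(abs(impact[j][i]) for j in range(n) if j != i)
    print(f"{name}: exerts {exerted}, receives {received}")

# Step 6 aid: flag two-variable positive feedback loops.
for i in range(n):
    for j in range(i + 1, n):
        if impact[i][j] > 0 and impact[j][i] > 0:
            print(f"Positive feedback loop: {variables[i]} <-> {variables[j]}")
```

Variables whose row and column sums are both near zero are natural candidates for pruning before the direct-impact analysis in Step 5.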

Origins of This Technique

Complexity Manager was developed by Richards Heuer to fill an important gap in structured techniques that are available to the average analyst. It is a very simplified version of older quantitative modeling techniques, such as system dynamics.

11.2 DECISION MATRIX

Decision Matrix helps analysts identify the course of action that best achieves specified goals or preferences.

When to Use It

The Decision Matrix technique should be used when a decision maker has multiple options to choose from, multiple criteria for judging the desirability of each option, and/or needs to find the decision that maximizes a specific set of goals or preferences. For example, it can be used to help choose among various plans or strategies for improving intelligence analysis, to select one of several IT systems one is considering buying, to determine which of several job applicants is the right choice, or to consider any personal decision, such as what to do after retiring. A Decision Matrix is not applicable to most intelligence analysis, which typically deals with evidence and judgments rather than goals and preferences. It can be used, however, for supporting a decision maker’s consideration of alternative courses of action.
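The arithmetic behind a Decision Matrix is straightforward weighted scoring. The following is a minimal Python sketch; the options, criteria, weights, and 1-to-10 scores are hypothetical, and multiplying percentage weights by scores is one common convention rather than a prescribed formula:

```python
# Minimal sketch of a weighted Decision Matrix (all values hypothetical).

criteria = {"cost": 0.4, "capability": 0.4, "ease of adoption": 0.2}  # weights sum to 1.0

# scores[option][criterion] on a 1-10 scale
scores = {
    "System X": {"cost": 7, "capability": 5, "ease of adoption": 8},
    "System Y": {"cost": 4, "capability": 9, "ease of adoption": 7},
}

# Multiply each score by its criterion weight and total by option.
for option, ratings in scores.items():
    weighted = sum(weight * ratings[c] for c, weight in criteria.items())
    print(f"{option}: weighted score {weighted:.1f}")

# The option with the highest weighted total best satisfies the
# stated goals and preferences.
```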

11.3 FORCE FIELD ANALYSIS

Force Field Analysis is a simple technique for listing and assessing all the forces for and against a change, problem, or goal. Kurt Lewin, one of the fathers of modern social psychology, believed that all organizations are systems in which the present situation is a dynamic balance between forces driving for change and restraining forces. In order for any change to occur, the driving forces must exceed the restraining forces, and the relative strength of these forces is what this technique measures. This technique is based on Lewin’s theory.

The Method

* Define the problem, goal, or change clearly and concisely.

* Brainstorm to identify the main forces that will influence the issue. Consider such topics as needs, resources, costs, benefits, organizations, relationships, attitudes, traditions, interests, social and cultural trends, rules and regulations, policies, values, popular desires, and leadership to develop the full range of forces promoting and restraining the factors involved.

* Make one list showing the forces or people “driving” the change and a second list showing the forces or people “restraining” the change.

* Assign a value (the intensity score) to each driving or restraining force to indicate its strength. Assign the weakest intensity scores a value of 1 and the strongest a value of 5. The same intensity score can be assigned to more than one force if you consider the factors equal in strength. List the intensity scores in parentheses beside each item.

* Calculate a total score for each list to determine whether the driving or the restraining forces are dominant, as in the sketch after this list.

* Examine the two lists to determine if any of the driving forces balance out the restraining forces.

* Devise a manageable course of action to strengthen those forces that lead to the preferred outcome and weaken the forces that would hinder the desired outcome.
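The scoring described in the steps above reduces to simple sums, as in this minimal Python sketch; the forces and their 1-to-5 intensity scores are hypothetical:

```python
# Minimal sketch of Force Field Analysis scoring (hypothetical forces).
# Intensity scores run from 1 (weakest) to 5 (strongest).

driving = {"leadership support": 4, "budget pressure": 3, "staff enthusiasm": 3}
restraining = {"legacy processes": 4, "training costs": 2, "mid-level resistance": 3}

drive_total = sum(driving.values())
restrain_total = sum(restraining.values())

print(f"Driving: {drive_total}, Restraining: {restrain_total}")
if drive_total > restrain_total:
    print("Driving forces dominate; the change is favored.")
else:
    print("Restraining forces dominate; the change needs intervention to succeed.")
```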

11.4 PROS-CONS-FAULTS-AND-FIXES

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of a group of analysts and decision makers to jump to a conclusion before full analysis of the problem has been completed.

When to Use It

Making lists of pros and cons for any action is a very common approach to decision making. The “Faults” and “Fixes” are what is new in this strategy. Use this technique to make a quick appraisal of a new idea or a more systematic analysis of a choice between two options.

Value Added

It is unusual for a new idea to meet instant approval. What often happens in meetings is that a new idea is brought up, one or two people immediately explain why they don’t like it or believe it won’t work, and the idea is then dropped. On the other hand, there are occasions when just the opposite happens. A new idea is immediately welcomed, and a commitment to support it is made before the idea is critically evaluated. The Pros-Cons-Faults-and-Fixes technique helps to offset this human tendency toward jumping to conclusions.

The Method

Start by clearly defining the proposed action or choice. Then follow these steps:

* List the Pros in favor of the decision or choice. Think broadly and creatively and list as many benefits, advantages, or other positives as possible.

* List the Cons, or arguments against what is proposed. There are usually more Cons than Pros, as most humans are naturally critical. It is easier to think of arguments against a new idea than to imagine how the new idea might work. This is why it is often difficult to get careful consideration of a new idea.

* Review and consolidate the list. If two Pros are similar or overlapping, consider merging them to eliminate any redundancy. Do the same for any overlapping Cons.

* If the choice is between two clearly defined options, go through the previous steps for the second option. If there are more than two options, a technique such as Decision Matrix may be more appropriate than Pros-Cons-Faults-and-Fixes.

* At this point you must make a choice. If the goal is to challenge an initial judgment that the idea won’t work, take the Cons, one at a time, and see if they can be “Fixed.” That means trying to figure out a way to neutralize their adverse influence or even to convert them into Pros. This exercise is intended to counter any unnecessary or biased negativity about the idea. There are at least four ways an argument listed as a Con might be Fixed:


  • Propose a modification of the proposed action that would significantly lower the risk of the Con being a problem.
  • Identify a preventive measure that would significantly reduce the chances of the Con being a problem.
  • Do contingency planning that includes a change of course if certain indicators are observed.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Con is a problem.

* If the goal is to challenge an initial optimistic assumption that the idea will work and should be pursued, take the Pros, one at a time, and see if they can be “Faulted.” That means trying to figure out how the Pro might fail to materialize or might have undesirable consequences. This exercise is intended to counter any wishful thinking or unjustified optimism about the idea. There are at least three ways a Pro might be Faulted:

  • Identify a reason why the Pro would not work or why the benefit would not be received.
  • Identify an undesirable side effect that might accompany the benefit.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Pro will work or be beneficial.

A third option is to combine both approaches, to Fault the Pros and Fix the Cons.
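As a bookkeeping aid, the worksheet behind this technique can be represented as paired lists, one Fault per Pro and one Fix per Con. The Python sketch below illustrates the combined approach just described; the proposal and all entries are invented:

```python
# Minimal sketch of a Pros-Cons-Faults-and-Fixes worksheet
# (hypothetical proposal and entries).

proposal = "Adopt a shared wiki for interagency drafting"

pros = [  # (Pro, candidate Fault that challenges it)
    ("Wider participation", "Participation may be shallow without incentives"),
    ("Transparent audit trail", "Contributors may self-censor if edits are attributed"),
]
cons = [  # (Con, candidate Fix that neutralizes it)
    ("Security accreditation is slow", "Run accreditation review in parallel with a pilot"),
    ("Senior analysts may not adopt it", "Pair senior analysts with trained facilitators"),
]

print(f"Proposal: {proposal}")
for pro, fault in pros:
    print(f"  PRO: {pro}\n    FAULT: {fault}")
for con, fix in cons:
    print(f"  CON: {con}\n    FIX: {fix}")
```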

11.5 SWOT ANALYSIS

SWOT is commonly used by all types of organizations to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in any project or plan of action. The strengths and weaknesses are internal to the organization, while the opportunities and threats are characteristics of the external environment.
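Because the strengths and weaknesses are internal and the opportunities and threats are external, a SWOT worksheet is just two pairs of lists, as in this minimal Python sketch with wholly hypothetical entries:

```python
# Minimal sketch of a SWOT worksheet (hypothetical entries).
# Strengths and weaknesses are internal; opportunities and threats
# are characteristics of the external environment.

swot = {
    "internal": {
        "strengths":  ["deep subject-matter expertise", "strong customer relationships"],
        "weaknesses": ["limited language skills", "aging analytic toolset"],
    },
    "external": {
        "opportunities": ["new collection platforms", "interagency collaboration tools"],
        "threats":       ["budget cuts", "faster open-source competitors"],
    },
}

for environment, categories in swot.items():
    for category, items in categories.items():
        print(f"{environment}/{category}: {'; '.join(items)}")
```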

12 Practitioner’s Guide to Collaboration

Analysis in the U.S. Intelligence Community is now in a transitional stage from being predominantly a mental activity done by a solo analyst to becoming a collaborative or group activity.


The increasing use of structured analytic techniques is central to this transition. Many things change when the internal thought process of analysts can be externalized in a transparent manner so that ideas can be shared, built on, and easily critiqued by others.


This chapter is not intended to describe collaboration as it exists today. It is a visionary attempt to foresee how collaboration might be put into practice in the future when interagency collaboration is the norm and the younger generation of analysts has had even more time to imprint its social networking practices on the Intelligence Community.


12.1 SOCIAL NETWORKS AND ANALYTIC TEAMS

There are several ways to categorize teams and groups. When discussing the U.S. Intelligence Community, it seems most useful to deal with three types: the traditional analytic team, the special project team, and the social network.

* Traditional analytic team: This is the typical work team assigned to perform a specific task. It has a leader appointed by a manager or chosen by the team, and all members of the team are collectively accountable for the team’s product. The team may work jointly to develop the entire product or, as is commonly done for National Intelligence Estimates, each team member may be responsible for a specific section of the work.

The core analytic team, with participants usually working at the same agency, drafts a paper and sends it to other members of the government community for comment and coordination.

* Special project team: Such a team is usually formed to provide decision makers with near-real-time analytic support during a crisis or an ongoing operation. A crisis support task force or a field-deployed interagency intelligence team that supports a military operation exemplifies this type of team.

* Social networks: Experienced analysts have always had their own network of experts in their field or related fields with whom they consult from time to time and whom they may recruit to work with them on a specific analytic project. Social networks are critical to the analytic business. They do the day-to-day monitoring of events, produce routine products as needed, and may recommend the formation of a more formal analytic team to handle a specific project. The social network is the form of group activity that is now changing dramatically with the growing ease of cross-agency secure communications and the availability of social networking software.

The key problem that arises with social networks is the geographic distribution of their members. Even within the Washington, D.C., metropolitan area, distance is a factor that limits the frequency of face-to-face meetings.

Research on effective collaborative practices has shown that geographically distributed teams are most likely to succeed when they satisfy six key imperatives. Participants must

  • Know and trust each other; this usually requires that they meet face-to-face at least once.
  • Feel a personal need to engage the group in order to perform a critical task.
  • Derive mutual benefits from working together.
  • Connect with each other virtually on demand and easily add new members.
  • Perceive incentives for participating in the group, such as saving time, gaining new insights from interaction with other knowledgeable analysts, or increasing the impact of their contribution.
  • Share a common understanding of the problem with agreed lists of common terms and definitions.

12.2 DIVIDING THE WORK

The geographic distribution of the social network can also be managed effectively by dividing the analytic task into two parts: first, exploiting the strengths of the social network for divergent or creative analysis to identify ideas and gather information; and second, forming a small analytic team that employs convergent analysis to meld these ideas into an analytic product.


Structured analytic techniques and collaborative software work very well with this two-part approach to analysis. A series of basic techniques used for divergent analysis early in the analytic process works well for a geographically distributed social network communicating via a wiki.


A project leader informs a social network of an impending project and provides a tentative project description, target audience, scope, and process to be followed. The leader also gives the name of the wiki to be used and invites interested analysts knowledgeable in that area to participate. Any analyst with access to the secure network also has access to the wiki and is authorized to add information and ideas to it. Any or all of the following techniques, as well as others, may come into play during the divergent analysis phase as specified by the project leader:

  • Issue Redefinition as described in chapter 4.
  • Collaboration in sharing and processing data using other techniques such as timelines, sorting, networking, mapping, and charting as described in chapter 4.
  • Some form of brainstorming, as described in chapter 5, to generate a list of driving forces, variables, players, etc.
  • Ranking or prioritizing this list, as described in chapter 4.
  • Putting this list into a Cross-Impact Matrix, as described in chapter 5, and then discussing and recording in the wiki the relationship, if any, between each pair of driving forces, variables, or players in that matrix.
  • Developing a list of alternative explanations or outcomes (hypotheses) to be considered (chapter 7).
  • Developing a list of items of evidence available to be considered when evaluating these hypotheses (chapter 7).
  • Doing a Key Assumptions Check (chapter 8). This will be less effective when done on a wiki than in a face-to-face meeting, but it would be useful to learn the network’s thinking about key assumptions.

Most of these steps involve making lists, which can be done quite effectively with a wiki. Making such input via a wiki can be even more productive than a face-to-face meeting, because analysts have more time to think about and write up their thoughts and are able to look at their contribution over several days and make additions or changes as new ideas come to them.

The process should be overseen and guided by a project leader. In addition to providing a sound foundation for further analysis, this process enables the project leader to identify the best analysts to be included in the smaller team that conducts the second phase of the project—making analytic judgments and drafting the report. Team members should be selected to maximize the following criteria: level of expertise on the subject, level of interest in the outcome of the analysis, and diversity of opinions and thinking styles among the group.

12.3 COMMON PITFALLS WITH SMALL GROUPS

The use of structured analytic techniques frequently helps analysts avoid many of the common pitfalls of the small-group process.

Much research documents that the desire for consensus is an important cause of poor group decisions. Development of a group consensus is usually perceived as success, but, in reality, it is often indicative of failure. Premature consensus is one of the more common causes of suboptimal group performance. It leads to failure to identify or seriously consider alternatives, failure to examine the negative aspects of the preferred position, and failure to consider the consequences that might follow if the preferred position is wrong. This phenomenon is what is commonly called groupthink.

12.4 BENEFITING FROM DIVERSITY

Improvement of group performance requires an understanding of these problems and a conscientious effort to avoid or mitigate them. The literature on small-group performance is virtually unanimous in emphasizing that groups make better decisions when their members bring to the table a diverse set of ideas, opinions, and perspectives. What premature consensus, groupthink, and polarization all have in common is a failure to recognize assumptions and a failure to adequately identify and consider alternative points of view.

Briefly, then, the route to better analysis is to create small groups of analysts who are strongly encouraged by their leader to speak up and express a wide range of ideas, opinions, and perspectives. The use of structured analytic techniques generally ensures that this happens. These techniques guide the dialogue between analysts as they share evidence and alternative perspectives on the meaning and significance of the evidence. Each step in the technique prompts relevant discussion within the team, and such discussion can generate and evaluate substantially more divergent information and new ideas than can a group that does not use such a structured process.

12.5 ADVOCACY VS. OBJECTIVE INQUIRY

The desired diversity of opinion is, of course, a double-edged sword, as it can become a source of conflict that degrades group effectiveness.

In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and reduced team effectiveness. Advocates tend to examine evidence in a biased manner, accepting at face value information that seems to confirm their own point of view and subjecting any contrary evidence to highly critical evaluation. Advocacy is appropriate in a meeting of stakeholders that one is attending for the purpose of representing a specific interest. It is also “an effective method for making decisions in a courtroom when both sides are effectively represented, or in an election when the decision is made by a vote of the people.”

…many CIA and FBI analysts report that their preferred use of ACH is to gain a better understanding of the differences of opinion between them and other analysts or between analytic offices. The process of creating an ACH matrix requires identification of the evidence and arguments being used and determining how these are interpreted as either consistent or inconsistent with the various hypotheses.

Considerable research on virtual teaming shows that leadership effectiveness is a major factor in the success or failure of a virtual team. Although leadership usually is provided by a group’s appointed leader, it can also emerge as a more distributed peer process and is greatly aided by the use of a trained facilitator (see Figure 12.6). When face-to-face contact is limited, leaders, facilitators, and team members must compensate by paying more attention than they might otherwise devote to the following tasks:

  • Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.
  • Defining measurable objectives with milestones and timelines for achieving them.
  • Identifying clear and complementary roles and responsibilities.
  • Building relationships with and between team members and with stakeholders.
  • Agreeing on team norms and expected behaviors.
  • Defining conflict resolution procedures.
  • Developing specific communication protocols and practices.


13 Evaluation of Structured Analytic Techniques

13.1 ESTABLISHING FACE VALIDITY

The taxonomy of structured analytic techniques presents each category of structured technique in the context of how it is intended to mitigate or avoid a specific cognitive or group process problem. In other words, each structured analytic technique has face validity because there is a rational reason for expecting it to help mitigate or avoid a recognized problem that can occur when one is doing intelligence analysis. For example, a great deal of research in human cognition during the past sixty years shows the limits of working memory and suggests that one can manage a complex problem most effectively by breaking it down into smaller pieces.

Satisficing, or settling on the first answer that seems good enough, is a common analytic shortcut that people use in making everyday decisions when there are multiple possible answers. It saves a lot of time when you are making judgments or decisions of little consequence, but it is ill-advised when making judgments or decisions with significant consequences for national security.

The ACH process does not guarantee a correct judgment, but anecdotal evidence suggests that ACH does make a significant contribution to better analysis.

13.2 LIMITS OF EMPIRICAL TESTING

Findings from empirical experiments can be generalized to apply to intelligence analysis only if the test conditions match relevant conditions in which intelligence analysis is conducted. There are so many variables that can affect the research results that it is very difficult to control for all or even most of them. These variables include the purpose for which a technique is used, implementation procedures, context of the experiment, nature of the analytic task, differences in analytic experience and skill, and whether the analysis is done by a single analyst or as a group process. All of these variables affect the outcome of any experiment that ostensibly tests the utility of an analytic technique. In a number of readily available examples of research on structured analytic techniques, we identified serious questions about the applicability of the research findings to intelligence analysis.

Different Purpose or Goal

Many structured analytic techniques can be used for several different purposes, and research findings on the effectiveness of these techniques can be generalized and applied to the Intelligence Community only if the technique is used in the same way and for the same purpose as in the actual practice of the Intelligence Community. For example, Philip Tetlock, in his important book Expert Political Judgment, describes two experiments showing that scenario development may not be an effective technique. The experiments compared judgments on a political issue before and after the test subjects prepared scenarios in an effort to gain a better understanding of the issues. The experiments showed that the predictions by both experts and nonexperts were more accurate before generating the scenarios; in other words, the generation of scenarios actually reduced the accuracy of their predictions. Several experienced analysts have separately cited this finding as evidence that scenario development may not be a useful method for intelligence analysis.

However, Tetlock’s conclusions should not be generalized to apply to intelligence analysis, as those experiments tested scenarios as a predictive tool. The Intelligence Community does not use scenarios for prediction.

Different Implementation Procedures

There are specific procedures for implementing many structured techniques. If research on the effectiveness of a specific technique is to be applicable to intelligence analysis, the research should use the same implementing procedure(s) for that technique as those used by the Intelligence Community.

Different Environment

When evaluating the validity of a technique, it is necessary to control for the environment in which the technique is used. If this is not done, the research findings may not always apply to intelligence analysis.

This is by no means intended to suggest that techniques developed for use in other domains should not be used in intelligence analysis. On the contrary, other domains are a productive source of such techniques, but the best way to apply them to intelligence analysis needs to be carefully evaluated.

Misleading Test Scenario

Empirical testing of a structured analytic technique requires developing a realistic test scenario. The test group analyzes this scenario using the structured technique while the control group analyzes the scenario without the benefit of any such technique. The MITRE Corporation conducted an experiment to test the ability of the Analysis of Competing Hypotheses (ACH) technique to prevent confirmation bias. Confirmation bias is the tendency of people to seek information or assign greater weight to information that confirms what they already believe and to underweight or not seek information that supports an alternative belief.

Typically, intelligence analysts do not begin the process of attacking an intelligence problem by developing a full set of hypotheses. Richards Heuer, who developed the ACH methodology, has always believed that a principal benefit of ACH in mitigating confirmation bias is that it requires analysts to develop a full set of hypotheses before evaluating any of them.

Differences in Analytic Experience and Skill

Structured techniques differ in the skill level and amount of training required to implement them effectively.

When one is evaluating any technique, the level of skill and training required is an important variable. Any empirical testing needs to control for this variable, which suggests that testing of any medium- to high-skill technique should be done with current or former intelligence analysts, including analysts at different skill levels.

An analytic tool is not like a machine that works whenever it is turned on. It is a strategy for achieving a goal. Whether or not one reaches the goal depends in part upon the skill of the person executing the strategy.

Conclusion

Using empirical experiments to evaluate structured techniques is difficult because the outcome of any experiment is influenced by so many variables. Experiments conducted outside the Intelligence Community typically fail to replicate the important conditions that influence the outcome of analysis within the community.

13.3 A NEW APPROACH TO EVALUATION

There is a better way to evaluate structured analytic techniques. In this section we outline a new approach that is embedded in the reality of how analysis is actually done in the Intelligence Community. We then show how this approach might be applied to the analysis of three specific techniques.

Step 1 is to identify what we know, or think we know, about the benefits from using any particular structured technique. This is the face validity as described earlier in this chapter plus whatever analysts believe they have learned from frequent use of a technique. For example, we think we know that ACH provides several benefits that help produce a better intelligence product. A full analysis of ACH would consider each of the following potential benefits:

  • It requires analysts to start by developing a full set of alternative hypotheses. This reduces the risk of satisficing.
  • It enables analysts to manage and sort evidence in analytically useful ways.
  • It requires analysts to try to refute hypotheses rather than to support a single hypothesis. This process reduces confirmation bias and helps to ensure that all alternatives are fully considered.
  • It can help a small group of analysts identify new and divergent information as they fill out the matrix, and it depersonalizes the discussion when conflicting opinions are identified.
  • It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached.
  • It can provide a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.
  • It leaves a clear audit trail as to how the analysis was done.
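To make the first and third of these benefits concrete, here is a minimal Python sketch of an ACH matrix. The convention of marking evidence as consistent (C), inconsistent (I), or not applicable (N) and then counting inconsistencies follows common descriptions of the method, but the hypotheses, evidence, and marks here are invented:

```python
# Minimal sketch of an ACH matrix (hypothetical hypotheses and evidence).

hypotheses = ["H1: deliberate attack", "H2: accident", "H3: third-party provocation"]

# ratings[evidence] = C/I/N mark against each hypothesis, in order
ratings = {
    "E1: forces on alert beforehand":   ["C", "I", "C"],
    "E2: no claim of responsibility":   ["I", "C", "C"],
    "E3: intercepted planning chatter": ["C", "I", "I"],
}

# ACH seeks to refute hypotheses: the one with the most inconsistent
# evidence is the weakest, regardless of how much evidence is consistent.
for i, h in enumerate(hypotheses):
    inconsistencies = sum(1 for marks in ratings.values() if marks[i] == "I")
    print(f"{h}: {inconsistencies} inconsistent item(s)")
```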

Step 2 is to obtain evidence to test whether or not a technique actually provides the expected benefits. Acquisition of evidence for or against these benefits is not limited to the results of empirical experiments. It includes structured interviews of analysts, managers, and customers; observations of meetings of analysts as they use these techniques; and surveys as well as experiments.

Step 3 is to obtain evidence of whether or not these benefits actually lead to higher quality analysis. Quality of analysis is not limited to accuracy. Other measures of quality include clarity of presentation, transparency in how the conclusion was reached, and construction of an audit trail for subsequent review, all of which are benefits that might be gained, for example, by use of ACH. Evidence of higher quality might come from independent evaluation of quality standards or interviews of customers receiving the reports. Cost effectiveness, including cost in analyst time as well as money, is another criterion of interest. As stated previously in this book, we claim that the use of a structured technique often saves analysts time in the long run. That claim should also be subjected to empirical analysis.

Indicators Validator

The Indicators Validator described in chapter 6 is a new technique developed by Randy Pherson to test the power of a set of indicators to provide early warning of future developments, such as which of several potential scenarios seems to be developing. It uses a matrix similar to an ACH matrix with scenarios listed across the top and indicators down the left side. For each combination of indicator and scenario, the analyst rates on a five-point scale the likelihood that this indicator will or will not be seen if that scenario is developing. This rating measures the diagnostic value of each indicator, that is, its ability to diagnose which scenario is becoming most likely.

It is often found that indicators have little or no value because they are consistent with multiple scenarios. The explanation for this phenomenon is that when analysts are identifying indicators, they typically look for indicators that are consistent with the scenario they are concerned about identifying. They don’t think about the value of an indicator being diminished if it is also consistent with other hypotheses.
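A rough Python sketch of that logic follows. The scenarios, indicators, and ratings are hypothetical, and the max-minus-min spread used here as a diagnosticity score is a deliberate simplification of the five-point rating procedure described above:

```python
# Minimal sketch of the Indicators Validator idea (hypothetical data).
# Each indicator is rated 1 (highly unlikely) to 5 (highly likely) to be
# observed under each scenario; an indicator whose ratings barely vary
# across scenarios has little diagnostic value.

scenarios = ["S1: reform succeeds", "S2: muddle through", "S3: hardliner backlash"]

likelihood = {  # ratings per scenario, in order
    "opposition press shut down":  [1, 2, 5],
    "new foreign investment law":  [5, 3, 1],
    "routine military parade":     [4, 4, 4],  # likely everywhere -> non-diagnostic
}

for indicator, ratings in likelihood.items():
    spread = max(ratings) - min(ratings)  # crude diagnosticity measure
    label = "diagnostic" if spread >= 2 else "low diagnostic value"
    print(f"{indicator}: spread {spread} ({label})")
```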

The Indicators Validator was developed to meet a perceived need for analysts to better understand the requirements for a good indicator. Ideally, however, the need for this technique and its effectiveness should be tested before all analysts working with indicators are encouraged to use it. Such testing might be done as follows:

* Check the need for the new technique. Select a sample of intelligence reports that include an indicators list and apply the Indicators Validator to each indicator on the list. How often does this test identify indicators that have been put forward despite their having little or no diagnostic value?

* Do a before-and-after comparison. Identify analysts who have developed a set of indicators during the course of their work. Then have them apply the Indicators Validator to their work and see how much difference it makes.

14 Vision of the Future

The Intelligence Community is pursuing several paths in its efforts to improve the quality of intelligence analysis. One of these paths is the increased use of structured analytic techniques, and this book is intended to encourage and support that effort.


14.4 IMAGINING THE FUTURE: 2015

Imagine it is now 2015. Our three assumptions have turned out to be accurate, and collaboration in the use of structured analytic techniques is now widespread. What has happened to make this outcome possible, and how has it transformed the way intelligence analysis is done in 2015? This is our vision of what could be happening by that date.

The use of A-Space has been growing for the past five years. Younger analysts in particular have embraced it in addition to Intellipedia as a channel for secure collaboration with their colleagues working on related topics in other offices and agencies. Analysts in different geographic locations arrange to meet as a group from time to time, but most of the ongoing interaction is accomplished via collaborative tools such as A-Space, communities of interest, and Intellipedia.

By 2015, the use of structured analytic techniques has expanded well beyond the United States. The British, Canadian, Australian, and several other foreign intelligence services increasingly incorporate structured techniques into their training programs and their processes for conducting analysis. After the global financial crisis that began in 2008, a number of international financial and business consulting firms adapted several of the core intelligence analysis techniques to their business needs, concluding that they could no longer afford multi-million dollar mistakes that could have been avoided by engaging in more rigorous analysis as part of their business processes.