Notes on Intelligence Analysis: A Target-Centric Approach

A major contribution of the 9/11 Commission and the Iraqi WMD Commission was their focus on a failed process, specifically on that part of the process where intelligence analysts interact with their policy customers.

“Thus, this book has two objectives:

The first objective is to redefine the intelligence process to help make all parts of what is commonly referred to as the “intelligence cycle” run smoothly and effectively, with special emphasis on both the analyst-collector and the analyst-customer relationships.

The second objective is to describe some methodologies that make for better predictive analysis.”

 

“An intelligence process should accomplish three basic tasks. First, it should make it easy for customers to ask questions. Second, it should use the existing base of intelligence information to provide immediate responses to the customer. Third, it should manage the expeditious creation of new information to answer remaining questions. To do these things, intelligence must be collaborative and predictive: collaborative to engage all participants while making it easy for customers to get answers; predictive because intelligence customers above all else want to know what will happen next.”

“the target-centric process outlines a collaborative approach for intelligence collectors, analysts, and customers to operate cohesively against increasingly complex opponents. We cannot simply provide more intelligence to customers; they already have more information than they can process, and information overload encourages intelligence failures. The community must provide what is called “actionable intelligence”—intelligence that is relevant to customer needs, is accepted, and is used in forming policy and in conducting operations.”

“The second objective is to clarify and refine the analysis process by drawing on existing prediction methodologies. These include the analytic tools used in organizational planning and problem solving, science and engineering, law, and economics. In many cases, these are tools and techniques that have endured despite dramatic changes in information technology over the past fifty years. All can be useful in making intelligence predictions, even in seemingly unrelated fields.”

“This book, rather, is a general guide, with references to lead the reader to more in-depth studies and reports on specific topics or techniques. The book offers insights that intelligence customers and analysts alike need in order to become more proactive in the changing world of intelligence and to extract more useful intelligence.”

“The common theme of these and many other intelligence failures discussed in this book is not the failure to collect intelligence. In each of these cases, the intelligence had been collected. Three themes are common in intelligence failures: failure to share information, failure to analyze collected material objectively, and failure of the customer to act on intelligence.”

 

“Though progress has been made in the past decade, the root causes for the failure to share remain, in the U.S. intelligence community as well as in almost all intelligence services worldwide:

Sharing requires openness. But any organization that requires secrecy to perform its duties will struggle with and often reject openness. Most governmental intelligence organizations, including the U.S. intelligence community, place more emphasis on secrecy than on effectiveness. The penalty for producing poor intelligence usually is modest. The penalty for improperly handling classified information can be career-ending. There are legitimate reasons not to share; the U.S. intelligence community has lost many collection assets because details about them were too widely shared. So it comes down to a balancing act between protecting assets and acting effectively in the world.”

 

“Experts on any subject have an information advantage, and they tend to use that advantage to serve their own agendas. Collectors and analysts are no different. At lower levels in the organization, hoarding information may have job security benefits. At senior levels, unique knowledge may help protect the organizational budget.”

 

“Finally, both collectors of intelligence and analysts find it easy to be insular. They are disinclined to draw on resources outside their own organizations.12 Communication takes time and effort. It has long-term payoffs in access to intelligence from other sources, but few short-term benefits.”

 

Failure to Analyze Collected Material Objectively

In each of the cases cited at the beginning of this introduction, intelligence analysts or national leaders were locked into a mindset—a consistent thread in analytic failures. Falling into the trap that Louis Pasteur warned about in the observation that begins this chapter, they believed because, consciously or unconsciously, they wished it to be so.

Common biases that lock analysts into such mindsets include the following:
  • Ethnocentric bias involves projecting one’s own cultural beliefs and expectations on others. It leads to the creation of a “mirror-image” model, which looks at others as one looks at oneself, and to the assumption that others will act “rationally” as rationality is defined in one’s own culture.
  • Wishful thinking involves excessive optimism or avoiding unpleasant choices in analysis.
  • Parochial interests cause organizational loyalties or personal agendas to affect the analysis process.
  • Status quo biases cause analysts to assume that events will proceed along a straight line. The safest weather prediction, after all, is that tomorrow’s weather will be like today’s.
  • Premature closure results when analysts make early judgments about the answer to a question and then, often because of ego, defend the initial judgments tenaciously. This can lead the analyst to select (usually without conscious awareness) subsequent evidence that supports the favored answer and to reject (or dismiss as unimportant) evidence that conflicts with it.

 

Summary

 

Intelligence, when supporting policy or operations, is always concerned with a target. Traditionally, intelligence has been described as a cycle: a process running from requirements to planning and direction, then to collection, processing, analysis and production, and dissemination, and then back to requirements. That traditional view has several shortcomings. It separates the customer from the process and intelligence professionals from one another. A gap exists in practice between dissemination and requirements. The traditional cycle is useful for describing structure and function and serves as a convenient framework for organizing and managing a large intelligence community. But it does not describe how the process works or should work.

 

 

 

Intelligence is in practice a nonlinear and target-centric process, operated by a collaborative team of analysts, collectors, and customers collectively focused on the intelligence target. The rapid advances in information technology have enabled this transition.

All significant intelligence targets of this target-centric process are complex systems in that they are nonlinear, dynamic, and evolving. As such, they can almost always be represented structurally as dynamic networks—opposing networks that constantly change with time. In dealing with opposing networks, the intelligence network must be highly collaborative.

 

“Historically, however, large intelligence organizations, such as those in the United States, provide disincentives to collaboration. If those disincentives can be removed, U.S. intelligence will increasingly resemble the most advanced business intelligence organizations in being both target-centric and network-centric.”

 

 

“Having defined the target, the first question to address is, What do we need to learn about the target that our customers do not already know? This is the intelligence problem, and for complex targets, the associated intelligence issues are also complex.”

Chapter 4

Defining the Intelligence Issue

A problem well stated is a problem half solved.

Inventor Charles Franklin Kettering

“all intelligence analysis efforts start with some form of problem definition.”

“The initial guidance that customers give analysts about an issue, however, almost always is incomplete, and it may even be unintentionally misleading.”

“Therefore, the first and most important step an analyst can take is to understand the issue in detail. He or she must determine why the intelligence analysis is being requested and what decisions the results will support. The success of analysis depends on an accurate issue definition. As one senior policy customer noted in commenting on intelligence failures, “Sometimes, what they [the intelligence officers] think is important is not, and what they think is not important, is.”

 

“The poorly defined issue is so common that it has a name: the framing effect. It has been described as “the tendency to accept problems as they are presented, even when a logically equivalent reformulation would lead to diverse lines of inquiry not prompted by the original formulation.”

 

 

“veteran analysts go about the analysis process quite differently than do novices. At the beginning of a task, novices tend to attempt to solve the perceived customer problem immediately. Veteran analysts spend more time thinking about it to avoid the framing effect. They use their knowledge of previous cases as context for creating mental models to give them a head start in addressing the problem. Veterans also are better able to recognize when they lack the necessary information to solve a problem,6 in part because they spend enough time at the beginning, in the problem definition phase. In the case of the complex problems discussed in this chapter, issue definition should be a large part of an analyst’s work.

Issue definition is the first step in a process known as structured argumentation.”

 

 

“structured argumentation always starts by breaking down a problem into parts so that each part can be examined systematically.”

 

Statement of the Issue

 

In the world of scientific research, the guidelines for problem definition are that the problem should have “a reasonable expectation of results, believing that someone will care about your results and that others will be able to build upon them, and ensuring that the problem is indeed open and underexplored.”8 Intelligence analysts should have similar goals in their profession. But this list represents just a starting point. Defining an intelligence analysis issue begins with answering five questions:

 

When is the result needed? Determine when the product must be delivered. (Usually, the customer wants the report yesterday.) In the traditional intelligence process, many reports are delivered too late—long after the decisions have been made that generated the need—in part because the customer is isolated from the intelligence process… The target-centric approach can dramatically cut the time required to get actionable intelligence to the customer because the customer is part of the process.

 

Who is the customer? Identify the intelligence customers and try to understand their needs. The traditional process of communicating needs typically involves several intermediaries, and the needs inevitably become distorted as they move through the communications channels.

 

What is the purpose? Intelligence efforts usually have one main purpose. This purpose should be clear to all participants when the effort begins and also should be clear to the customer in the result… Customer involvement helps to make the purpose clear to the analyst.

 

 

What form of output, or product, does the customer want? Written reports (now in electronic form) are standard in the intelligence business because they endure and can be distributed widely. When the result goes to a single customer or is extremely sensitive, a verbal briefing may be the form of output.

 

“Studies have shown that customers never read most written intelligence. Subordinates may read and interpret the report, but the message tends to be distorted as a result. So briefings or (ideally) constant customer interaction with the intelligence team during the target-centric process helps to get the message through.”

 

What are the real questions? Obtain as much background knowledge as possible about the problem behind the questions the customer asks, and understand how the answers will affect organizational decisions. The purpose of this step is to narrow the problem definition. A vaguely worded request for information is usually misleading, and the result will almost never be what the requester wanted.

 

Be particularly wary of a request that has come through several “nodes” in the organization. The layers of an organization, especially those of an intelligence bureaucracy, will sometimes “load” a request as it passes through with additional guidance that may have no relevance to the original customer’s interests. A question that travels through several such layers often becomes cumbersome by the time it reaches the analyst.

 

“The request should be specific and stripped of unwanted excess.”

 

“The time spent focusing the request saves time later during collection and analysis. It also makes clear what questions the customer does not want answered—and that should set off alarm bells, as the next example illustrates.”

 

“After answering these five questions, the analyst will have some form of problem statement. On large (multiweek) intelligence projects, this statement will itself be a formal product. The issue definition product helps explain the real questions and related issues. Once it is done, the analyst will be able to focus more easily on answering the questions that the customer wants answered.”

 

The Issue Definition Product

 

“When the final intelligence product is to be a written report, the issue definition product is usually in précis (summary, abstract, or terms of reference) form. The précis should include the problem definition or question, notional results or conclusions, and assumptions. For large projects, many intelligence organizations require the creation of a concept paper or outline that provides the stakeholders with agreed terms of reference in précis form.”

 

“Whether the précis approach or the notional briefing is used, the issue definition should conclude with an issue decomposition view.”

 

Issue Decomposition

 

“taking a seemingly intractable problem and breaking it into a series of manageable subproblems.”

 

 

“Glenn Kent of RAND Corporation uses the name strategies-to-task for a similar breakout of U.S. Defense Department problems.12 Within the U.S. intelligence community, it is sometimes referred to as problem decomposition or “decomposition and visualization.”

“Whatever the name, the process is simple: Deconstruct the highest level abstraction of the issue into its lower-level constituent functions until you arrive at the lowest level of tasks that are to be performed or subissues to be dealt with. In intelligence, the deconstruction typically details issues to be addressed or questions to be answered. Start from the problem definition statement and provide more specific details about the problem.”

 

“The process defines intelligence needs from the top level to the specific task level via taxonomy—a classification system in which objects are arranged into natural or related groups based on some factor common to each object in the group.”

 

“At the top level, the taxonomy reflects the policymaker’s or decision maker’s view and reflects the priorities of that customer. At the task level, the taxonomy reflects the view of the collection and analysis team. These subtasks are sometimes called key intelligence questions (KIQs) or essential elements of information (EEIs).”
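
In code, such a taxonomy is naturally a tree whose root is the customer’s issue and whose leaves are the task-level KIQs or EEIs. A minimal Python sketch follows; all of the questions are invented for illustration (anticipating the Region X election example below):

# Issue decomposition as a tree: the root states the customer's issue,
# the leaves are task-level questions (KIQs/EEIs) that collectors and
# analysts can act on. All content here is illustrative.

from dataclasses import dataclass, field

@dataclass
class IssueNode:
    question: str
    children: list["IssueNode"] = field(default_factory=list)

    def leaves(self):
        """Yield the task-level questions at the bottom of the tree."""
        if not self.children:
            yield self.question
        else:
            for child in self.children:
                yield from child.leaves()

decomposition = IssueNode(
    "What is the political situation in Region X?",
    [
        IssueNode("How legitimate are the upcoming elections?", [
            IssueNode("Are the voter rolls accurate?"),
            IssueNode("Is the election commission independent?"),
        ]),
        IssueNode("Which factions might resort to violence?"),
    ],
)

for kiq in decomposition.leaves():
    print(kiq)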

 

“Issue decomposition follows the classic method for problem solving. It results in a requirements, or needs, hierarchy that is widely used in intelligence organizations.”

 

it is difficult to evaluate how well an intelligence organization is answering the question, “What is the political situation in Region X?” It is much easier to evaluate the intelligence unit’s performance in researching the transparency, honesty, and legitimacy of elections, because these are very specific issues.

 

“Obviously there can be several different issues associated with a given intelligence target or several different targets associated with a given issue.”

 

Complex Issue Decomposition

 

We have learned that the most important step in the intelligence process is to understand the issue accurately and in detail. Equally true, however, is that intelligence problems today are increasingly complex—often described as nonlinear, or “wicked.” They are dynamic and evolving, and thus their solutions are, too. This makes them difficult to deal with—and almost impossible to address within the traditional intelligence cycle framework. A typical example of a wicked issue is that of a drug cartel—the cartel itself is dynamic and evolving and so are the questions being posed by intelligence consumers who have an interest in it.

“A typical real-world customer’s issue today presents an intelligence officer with the following challenges:

 

It represents an evolving set of interlocking issues and constraints.

“There are many stakeholders—people who care about or have something at stake in how the issue is resolved. (Again, this makes the problem-solving process a fundamentally social one, in contrast to the antisocial traditional intelligence cycle.)”

 

The constraints on the solution, such as limited resources and political ramifications, change over time. The target is constantly changing, as the Escobar example illustrates, and the customers (stakeholders) change their minds, fail to communicate, or otherwise change the rules of the game.

 

Because there is no final issue definition, there is no definitive solution. The intelligence process often ends when time runs out, and the customer must act on the most currently available information.

 

“Harvard professor David S. Landes summarized these challenges nicely when he wrote, “The determinants of complex processes are invariably plural and interrelated.”15 Because of this—because complex or wicked problems are an evolving set of interlocking issues and constraints, and because the introduction of new constraints cannot be prevented—the decomposition of a complex problem must be dynamic; it will change with time and circumstances.”

 

 

“As intelligence customers learn more about the targets, their needs and interests will shift.

Ideally, a complex issue decomposition should be created as a network because of the interrelationship among the elements.

 

 

Although the hierarchical decomposition approach may be less than ideal for complex problems, it works well enough if it is constantly reviewed and revised during the analysis process. It allows analysts to define the issue in sufficient detail and with sufficient accuracy so that the rest of the process remains relevant. There may be redundancy in a linear hierarchy, but the human mind can usually recognize and deal with the redundancy. To keep the decomposition manageable, analysts should continue to use the hierarchy, recognizing the need for frequent revisions, until information technology comes up with a better way.

Structured Analytic Methodologies for Issue Definition

 

Throughout the book we discuss a class of analytic methodologies that are collectively referred to as structured analytic techniques, or SATs.

 

 

“a relevancy check needs to be done. To be “key,” an assumption must be essential to the analytic reasoning that follows it. That is, if the assumption turns out to be invalid, then the conclusions also probably are invalid. CIA’s Tradecraft Primer identifies several questions that need to be asked about key assumptions:

 

How much confidence exists that this assumption is correct?

What explains the degree of confidence in the assumption?

What circumstances or information might undermine this assumption?

Is a key assumption more likely a key uncertainty or key factor?

Could the assumption have been true in the past but less so now?

If the assumption proves to be wrong, would it significantly alter the analytic line? How?

Has this process identified new factors that need further analysis?”

 

Example: Defining the Counterintelligence Issue

 

Counterintelligence (CI) in government usually is thought of as having two subordinate problems: security (protecting sources and methods) and catching spies (counterespionage).

 

 

If the issue is defined this way—security and counterespionage—the response in both policy and operations is defensive. Personnel background security investigations are conducted. Annual financial statements are required of all employees. Profiling is used to detect unusual patterns of computer use that might indicate computer espionage. Cipher-protected doors, badges, personal identification numbers, and passwords are used to ensure that only authorized persons have access to sensitive intelligence. The focus of communications security is on denial, typically by encryption. Leaks of intelligence are investigated to identify their source.

 

But whereas the focus on security and counterespionage is basically defensive, the first rule of strategic conflict is that the offense always wins. So, for intelligence purposes, you’re starting out on the wrong path if the issue decomposition starts with managing security and catching spies.

 

A better issue definition approach starts by considering the real target of counterintelligence: the opponent’s intelligence organization. Good counterintelligence requires good analysis of the hostile intelligence services. As we will see in several examples later in this book, if you can model an opponent’s intelligence system, you can defeat it. So we start with the target as the core of the problem and begin an issue decomposition.

 

If the counterintelligence issue is defined in this fashion, then the counterintelligence response will be forward-leaning and will focus on managing foreign intelligence perceptions through a combination of covert action, denial, and deception. The best way to win the CI conflict is to go on the offensive (model the target, anticipate the opponent’s actions, and defeat him or her). Instead of denying information to the opposing side’s intelligence machine, for example, you feed it false information that eventually degrades the leadership’s confidence in its intelligence services.

 

To do this, one needs a model of the opponent’s intelligence system that can be subjected to target-centric analysis, including its communications channels and nodes, its requirements and targets, and its preferred sources of intelligence.

 

Summary

Before beginning intelligence analysis, the analyst must understand the customer’s issue. This usually involves close interaction with the customer until the important issues are identified. The problem then has to be deconstructed in an issue decomposition process so that collection, synthesis, and analysis can be effective.

 

All significant intelligence issues, however, are complex and nonlinear. The complex problem is a dynamic set of interlocking issues and constraints with many stakeholders and no definitive solution. Although the linear issue decomposition process is not an optimal way to approach such problems, it can work if it is reviewed and updated frequently during the analysis process.

 

 

“Issue definition is the first step in a process known as structured argumentation. As an analyst works through this process, he or she collects and evaluates relevant information, fitting it into a target model (which may or may not look like the issue decomposition); this part of the process is discussed in chapters 5–7. The analyst identifies information gaps in the target model and plans strategies to fill them. The analysis of the target model then provides answers to the questions posed in the issue definition process. The next chapter discusses the concept of a model and how it is analyzed.”

Chapter 5

Conceptual Frameworks for Intelligence Analysis

 

“If we are to think seriously about the world, and act effectively in it, some sort of simplified map of reality . . . is necessary.”

Samuel P. Huntington, The Clash of Civilizations and the Remaking of World Order

 

 

“Balance of power,” for example, was an important conceptual framework used by policymakers during the Cold War. A different conceptual framework has been proposed for assessing the influence that one country can exercise over another.

 

Analytic Perspectives—PMESII

 

In chapter 2, we discussed the instruments of national power—an actions view that defines the diplomatic, information, military, and economic (DIME) actions that executives, policymakers, and military or law enforcement officers can take to deal with a situation.

 

The customer of intelligence may have those four “levers” that can be pulled, but intelligence must be concerned with the effects of pulling those levers. Viewed from an effects perspective, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII.

 

Political. Describes the distribution of responsibility and power at all levels of governance—formally constituted authorities, as well as informal or covert political powers. (Who are the tribal leaders in the village? Which political leaders have popular support? Who exercises decision-making or veto power in a government, insurgent group, commercial entity, or criminal enterprise?)

 

Military. Explores the military and/or paramilitary capabilities or other ability to exercise force of all relevant actors (enemy, friendly, and neutral) in a given region or for a given issue. (What is the force structure of the opponent? What weaponry does the insurgent group possess? What is the accuracy of the rockets that Hamas intends to use against Israel? What enforcement mechanisms are drug cartels using to protect their territories?)

 

Economic. Encompasses individual and group behaviors related to producing, distributing, and consuming resources. (What is the unemployment rate? Which banks are supporting money laundering? What are Egypt’s financial reserves? What are the profit margins in the heroin trade?)

 

Social. Describes the cultural, religious, and ethnic makeup within an area and the beliefs, values, customs, and behaviors of society members. (What is the ethnic composition of Nigeria? What religious factions exist there? What key issues unite or divide the population?)

Infrastructure. Details the composition of the basic facilities, services, and installations needed for the functioning of a community, business enterprise, or society in an area. (What are the key modes of transportation? Where are the electric power substations? Which roads are critical for food supplies?)

 

Information. Explains the nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information. (How much access does the local population have to news media or the Internet? What are the cyber attack and defense capabilities of the Saudi government? How effective would attack ads be in Japanese elections?)

 

The typical intelligence problem seldom must deal with only one of these factors or systems. Complex issues are likely to involve them all. The events of the Arab Spring in 2011, the Syrian uprising that began that year, and the Ukrainian crisis of 2014 involved all of the PMESII factors. But PMESII is also relevant in issues that are not necessarily international. Law enforcement must deal with them all (in this case, “military” refers to the use of violence or armed force by criminal elements).

 

Modeling the Intelligence Target

 

Models are used so extensively in intelligence that analysts seldom give them much thought, even as they use them.

 

The model paradigm is a powerful tool in many disciplines.

 

“Former national intelligence officer Paul Pillar described them as “guiding images” that policymakers rely on in making decisions. We’ve discussed one guiding image—that of the PMESII concept. The second guiding image—that of a map, theory, concept, or paradigm—is merged in this book into a single entity called a model. Or, as the CIA’s Tradecraft Primer puts it succinctly:

 

“all individuals assimilate and evaluate information through the medium of “mental models…”

 

Modeling is usually thought of as being quantitative and using computers. However, all models start in the human mind. Modeling does not always require a computer, and many useful models exist only on paper. Models are used widely in fields such as operations research and systems analysis. With modeling, one can analyze, design, and operate complex systems. One can use simulation models to evaluate real-world processes that are too complex to analyze with spreadsheets or flowcharts (which are themselves models, of course), and to test hypotheses at a fraction of the cost of undertaking the actual activities. Models are an efficient communication tool for showing how the target functions and for stimulating creative thinking about how to deal with an opponent.

 

Models are essential when dealing with complex targets (Analysis Principle 5-1). Without a device to capture the full range of thinking and creativity that occurs in the target-centric approach to intelligence, an analyst would have to keep in mind far too many details. Furthermore, in the target-centric approach, the customer of intelligence is part of the collaborative process. Presented with a model as an organizing construct for thinking about the target, customers can contribute pieces to the model from their own knowledge—pieces that the analyst might be unaware of. The primary suppliers of information (the collectors) can do likewise.

 

The Concept of a Model

 

A model, as used in intelligence, is an organizing construct. It is a combination of facts, hypotheses, and assumptions about a target, developed in a form that is useful for analyzing the target and for customer decision making (producing actionable intelligence). The type of model used in intelligence typically comprises facts, hypotheses, and assumptions, so it’s important to distinguish them here:

 

Fact. Something that is indisputably the case.

Hypothesis. A proposition that is set forth to explain developments or observed phenomena. It can be posed as conjecture to guide research (a working hypothesis) or accepted as a highly probable conclusion from established facts.

Assumption. A thing that is accepted as true or as certain to happen, without proof.

 

These are the things that go into a model. But it is important to distinguish them when you present the model. Customers should never wonder whether they are hearing facts, hypotheses, or assumptions.
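
One way to honor that rule is to tag every element of the target model with its epistemic status. A minimal Python sketch; the three statuses mirror the definitions above, while the ModelElement structure and the example statements are invented for illustration:

# Tagging model content so a customer never has to guess whether a
# statement is fact, hypothesis, or assumption. Illustrative sketch only.

from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    FACT = "fact"              # something indisputably the case
    HYPOTHESIS = "hypothesis"  # a proposed explanation, possibly a working one
    ASSUMPTION = "assumption"  # accepted as true without proof

@dataclass
class ModelElement:
    statement: str
    status: Status

model = [
    ModelElement("The facility doubled its power consumption last year.", Status.FACT),
    ModelElement("The added power supports a new production line.", Status.HYPOTHESIS),
    ModelElement("The program's funding will continue next year.", Status.ASSUMPTION),
]

for element in model:
    print(f"[{element.status.value.upper()}] {element.statement}")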

 

A model is a replica or representation of an idea, an object, or an actual system. It often describes how a system behaves. Instead of interacting with the real system, an analyst can create a model that corresponds to the actual one in certain ways.

 

 

Physical models are a tangible representation of something. A map, a globe, a calendar, and a clock are all physical models. The first two represent the Earth or parts of it, and the latter two represent time. Physical models are always descriptive.

 

Conceptual models—inventions of the mind—are essential to the analytic process. They allow the analyst to describe things or situations in abstract terms both for estimating current situations and for predicting future ones.

 

 

A conceptual model may be either descriptive, describing what it represents, or normative. A normative model may contain some descriptive segments, but its purpose is to describe a best, or preferable, course of action. A decision-support model—that is, a model used to choose among competing alternatives—is normative.

In intelligence analysis, the models of most interest are conceptual and descriptive rather than normative. Some common traits of these conceptual models follow.

 

Descriptive models can be deterministic or stochastic.

In a deterministic model the relationships are known and specified explicitly. A model that has any uncertainty incorporated into it is a stochastic model (meaning that probabilities are involved), even though it may have deterministic properties.

 

Descriptive models can be linear or nonlinear.

Linear models use only linear equations (for example, x = Ay + B) to describe relationships.

 

Nonlinear models use any type of mathematical function. Because nonlinear models are more difficult to work with and are not always capable of being analyzed, the usual practice is to make some compromises so that a linear model can be used.

 

Descriptive models can be static or dynamic.

A static model assumes that a specific time period is being analyzed and the state of nature is fixed for that time period. Static models ignore time-based variances. For example, one cannot use them to determine the impact of an event’s timing in relation to other events. Returning to the example of a combat model, a snapshot of the combat that shows where opposing forces are located and their directions of movement at that instant is static. Static models do not take into account the synergy of the components of a system, where the actions of separate elements can have a different effect on the system than the sum of their individual effects would indicate. Spreadsheets and most relationship models are static.

 

Dynamic modeling (also known as simulation) is a software representation of the time-based behavior of a system. Where a static model involves a single computation of an equation, a dynamic model is iterative; it constantly recomputes its equations as time changes.

 

Descriptive models can be solvable or simulated.

A solvable model is one in which there is an analytic way of finding the answer. The performance model of a radar, a missile, or a warhead is a solvable problem. But other problems require such a complicated set of equations to describe them that there is no way to solve them. Worse still, complex problems typically cannot be described in a manageable set of equations. In complex cases—such as the performance of an economy or a person—one can turn to simulation.
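
A toy Python sketch of these distinctions, using an invented growth process (no real intelligence model is implied): the deterministic version is solvable in closed form, while the stochastic version must be simulated, step by step and run by run:

# Contrasting a solvable (deterministic) model with a simulated
# (stochastic, dynamic) one. The growth process is a toy example.

import random

v0, r, n = 100.0, 0.05, 10  # initial value, growth rate, time steps

# Solvable: with a known rate, the value after n steps has a closed-form
# answer; no simulation is needed.
analytic = v0 * (1 + r) ** n

# Simulated: if each step's rate is uncertain, the model is recomputed
# iteratively as time advances, and many runs are averaged.
def one_run():
    value = v0
    for _ in range(n):
        value *= 1 + random.gauss(r, 0.02)  # uncertain growth each step
    return value

runs = [one_run() for _ in range(10_000)]
print(f"analytic: {analytic:.1f}  simulated mean: {sum(runs) / len(runs):.1f}")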

 

Using Target Models for Analysis

 

Operations

Intelligence services prefer specific sources of intelligence, shaped in part by what has worked for them in the past; by their strategic targets; and by the size of their pocketbooks. The poorer intelligence services rely heavily on open source (including the web) and human intelligence (HUMINT), because both are relatively inexpensive. Communications intelligence (COMINT) also can be cheap, unless it is collected by satellites. The wealthier services also make use of satellite-collected imagery intelligence (IMINT) and COMINT, and other types of technical collection.

 

“China relies heavily on HUMINT, working through commercial organizations (particularly trading firms), students, and university professors far more than most other major intelligence powers do.

 

In addition to being acquainted with opponents’ collection habits, CI also needs to understand a foreign intelligence service’s analytic capabilities. Many services have analytic biases, are ethnocentric, or handle anomalies poorly. It is important to understand their intelligence communications channels and how well they share intelligence within the government. In many countries, the senior policymaker or military commander is the analyst. That provides a prime opportunity for “perception management,” especially if a narcissistic leader like Hitler, Stalin, or Saddam Hussein is in charge and doing his own analysis. Leaders and policymakers find it difficult to be objective; they are people of action, and they always have an agenda. They have lots of biases and are prone to wishful thinking.

 

Linkages

Almost all intelligence services have liaison relationships with foreign intelligence or security services. It is important to model these relationships because they can dramatically extend the capabilities of an intelligence service.

 

Summary

Two conceptual frameworks are invaluable for doing intelligence analysis. One deals with the instruments of national or organizational power and the effects of their use. The second involves the use of target models to produce analysis.

 

The intelligence customer has four instruments of national or organizational power, as discussed in chapter 2. Intelligence is concerned with how opponents will use those instruments and the effects that result when customers use them. Viewed from both the opponent’s actions and the effects perspectives, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII:

 

 

Political. The distribution of power and control at all levels of governance.

 

Military. The ability of all relevant actors (enemy, friendly, and neutral) to exercise force.

 

Economic. Behavior relating to producing, distributing, and consuming resources.

 

Social. The cultural, religious, and ethnic composition of a region and the beliefs, values, customs, and behaviors of people.

 

Infrastructure. The basic facilities, services, and installations needed for the functioning of a community or society.

 

Information. The nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information.

 

 

Models in intelligence are typically conceptual and descriptive. The easiest ones to work with are deterministic, linear, static, solvable, or some combination. Unfortunately, in the intelligence business the target models tend to be stochastic, nonlinear, dynamic, and simulated.

 

From an existing knowledge base, a model of the target is developed. Next, the model is analyzed to extract information for customers or for additional collection. The “model” of complex targets will typically be a collection of associated models that can serve the purposes of intelligence customers and collectors.

 

Chapter 6

Overview of Models in Intelligence

 

One picture is worth more than ten thousand words.

Chinese proverb

 

“The process of populating the appropriate model is known as synthesis, a term borrowed from the engineering disciplines. Synthesis is defined as putting together parts or elements to form a whole—in this case, a model of the target. It is what intelligence analysts do, and their skill at it is a primary measure of their professional competence.”

 

 

Creating a Conceptual Model

 

 

The first step in creating a model is to define the system that encompasses the intelligence issues of interest, so that the resulting model can answer the questions framed during issue definition.

 

few questions in strategic intelligence or in-depth research can be answered by using a narrowly defined target.

 

For the complex targets that are typical of in-depth research, an analyst usually will deal with a complete system, such as an air defense system that will use the new fighter aircraft.

 

In law enforcement, analysis of an organized crime syndicate involves consideration of people, funds, communications, operational practices, movement of goods, political relationships, and victims. Many intelligence problems will require consideration of related systems as well. The energy production system, for example, will give rise to intelligence questions about related companies, governments, suppliers and customers, and nongovernmental organizations (such as environmental advocacy groups). The questions that customers pose should be answerable by reference only to the target model, without the need to reach beyond it.

 

A major challenge in defining the relevant system is to use restraint. The definition must include essential subsystems or collateral systems, but nothing more. Part of an analyst’s skill lies in being able to include in a definition the relevant components, and only the relevant components, that will address the issue.

 

The systems model can therefore be structural, functional, process oriented, or any combination thereof. Structural models include actors, objects, and the organization of their relationships to each other. Process models focus on interactions and their dynamics. Functional models concentrate on the results achieved, for example, a model that simulates the financial consequences of a proposed trade agreement.

 

After an analyst has defined the relevant system, the next step is to select the generic models, or model templates, to be used. These model templates then will be made specific, or “populated,” using evidence (discussed in chapter 7). Several types of generic models are used in intelligence. The three most basic types are textual, mathematical, and visual.

 

Textual Models

 

Almost any model can be described using written text. The CIA’s World Factbook is an example of a set of textual models—actually a series of models (political, military, economic, social, infrastructure, and information)—of a country. Some common examples of textual models that are used in intelligence analysis are lists, comparative models, profiles, and matrix models.

Lists

 

Lists and outlines are the simplest examples of a model.

 

The list continues to be used by analysts today for much the same purpose—to reach a yes-or-no decision.

 

Comparative Models

 

Comparative techniques, like lists, are a simple but useful form of modeling that typically does not require a computer simulation. Comparative techniques are used in government, mostly for weapons systems and technology analyses. Both governments and businesses use comparative models to evaluate a competitor’s operational practices, products, and technologies. This is called benchmarking.

 

A powerful tool for analyzing a competitor’s developments is to compare them with your own organization’s developments. Your own systems or technologies can provide a benchmark for comparison.

 

Comparative models have to be culture-specific to help avoid mirror imaging.

 

A keiretsu is a network of businesses, usually in related industries, that own stakes in one another and have board members in common as a means of mutual security. A network of essentially captive (because they are dependent on the keiretsu) suppliers provides the raw material for the keiretsu manufacturers, and the keiretsu trading companies and banks provide marketing services. Keiretsu have their roots in prewar Japan.

 

Profiles

 

Profiles are models of individuals—in national intelligence, of leaders of foreign governments; in business intelligence, of top executives in a competing organization; in law enforcement, of mob leaders and serial criminals.

 

 

Profiles depend heavily on understanding the pattern of mental and behavioral traits that are shared by adult members of a society—referred to as the society’s modal personality. Several modal personality types may exist in a society, and their common elements are often referred to as national character.

 

Defining the modal personality type is beyond the capabilities of the journeyman intelligence analyst, and one must turn to experts.

 

 

The modal personality model usually includes at least the following elements:

 

Concept of self—the conscious ideas of what a person thinks he or she is, along with the frequently unconscious motives and defenses against ego-threatening experiences such as withdrawal of love, public shaming, guilt, or isolation.

 

Relation to authority—how an individual adapts to authority figures.

Modes of impulse control and expressing emotion

Processes of forming and manipulating ideas.

 

 

Three model types are often used for studying modal personalities and creating behavioral profiles:

 

Cultural pattern models are relatively straightforward to analyze and are useful in assessing group behavior.

 

 

Child-rearing systems can be studied to allow the projection of adult personality patterns and behavior. They may allow more accurate assessments of an individual than a simple study of cultural patterns, but they cannot account for the wide range of possible pattern variations occurring after childhood.

 

Individual assessments are probably the most accurate starting points for creating a behavioral model, but they depend on detailed data about the specific individual. Such data are usually gathered from testing techniques; the Rorschach (or “Inkblot”) test—a projective personality assessment based on the subject’s reactions to a series of ten inkblot pictures—is an example.

 

Interaction Matrices

A textual variant of the spreadsheet (discussed later) is the interaction matrix, a valuable analytic tool for certain types of synthesis. It appears in various disciplines and under different names and is also called a parametric matrix or a traceability matrix.

 

Mathematical Models

The most common modeling problem involves solving an equation. Most problems in engineering or technical intelligence are single equations of the form y = f(x1, x2, . . . , xn).

 

Most analysis involves fixing all of the variables and constants in such an equation or system of equations, except for two variables. The equation is then solved repetitively to obtain a graphical picture of one variable as a function of another. A number of software packages perform this type of solution very efficiently. For example, as a part of radar performance analysis, the radar range equation is solved for signal-to-noise ratio as a function of range, and a two-dimensional curve is plotted. Then, perhaps, signal-to-noise ratio is fixed and a new curve plotted for radar cross-section as a function of range.

 

Often the requirement is to solve an equation, get a set of ordered pairs, and plug those into another equation to get a graphical picture rather than solving simultaneous equations.
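
As a sketch of this repetitive-solution pattern, the Python below solves the textbook radar range equation for signal-to-noise ratio as a function of range, with every other variable held fixed. All parameter values are arbitrary placeholders, not data about any real system:

# Radar range equation: SNR = Pt G^2 lambda^2 sigma /
# ((4 pi)^3 R^4 k Ts B L). Fix everything but range, then solve
# repetitively to trace SNR versus range. Placeholder parameters.

import math

K_BOLTZMANN = 1.380649e-23  # J/K

def snr_db(range_m, pt=1e6, gain=3162.0, wavelength=0.03,
           rcs=1.0, t_sys=500.0, bandwidth=1e6, losses=10.0):
    snr = (pt * gain**2 * wavelength**2 * rcs) / (
        (4 * math.pi)**3 * range_m**4
        * K_BOLTZMANN * t_sys * bandwidth * losses
    )
    return 10 * math.log10(snr)

for km in (10, 50, 100, 200):
    print(f"{km:4d} km -> SNR = {snr_db(km * 1000.0):6.1f} dB")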

 

Spreadsheets

 

The computer is a powerful tool for handling the equation-solution type of problem. Spreadsheet software has made it easy to create equation-based models. The rich set of mathematical functions that can be incorporated in it, and its flexibility, make the spreadsheet a widely used model in intelligence.

 

Simulation Models

 

A simulation model is a mathematical model of a real object, a system, or an actual situation. It is useful for estimating the performance of its real-world analogue under different conditions. We often wish to determine how something will behave without actually testing it in real life. So simulation models are useful for helping decision makers choose among alternative actions by determining the likely outcomes of those actions.

 

In intelligence, simulation models also are used to assess the performance of opposing weapons systems, the consequences of trade embargoes, and the success of insurgencies.

 

Simulation models can be challenging to build. The main challenge usually is validation: determining that the model accurately represents what it is supposed to represent, under different input conditions.

 

Visual Models

 

Models can be described in written text, as noted earlier. But the models that have the most impact for both analysts and customers in facilitating understanding take a visual form.

 

Visualization involves transforming raw intelligence into graphical, pictorial, or multimedia forms so that our brains can process and understand large amounts of data more readily than is possible from simply reading text. Visualization lets us deal with massive quantities of data and identify meaningful patterns and structures that otherwise would be incomprehensible.

 

 

Charts and Graphs

 

Graphical displays, often in the form of curves, are a simple type of model that can be synthesized both for analysis and for presenting the results of analysis.

 

 

Pattern Models

 

Many types of models fall under the broad category of pattern models. Pattern recognition is a critical element of all intelligence.

 

Most governmental and industrial organizations (and intelligence services) also prefer to stick with techniques that have been successful in the past. An important aspect of intelligence synthesis, therefore, is recognizing patterns of activities and then determining in the analysis phase whether (a) the patterns represent a departure from what is known or expected and (b) the changes in patterns are significant enough to merit attention. The computer is a valuable ally here; it can display trends and allow the analyst to identify them. This capability is particularly useful when trends would be difficult or impossible to find by sorting through and mentally processing a large volume of data. Pattern analysis is one way to effectively handle complex issues.

 

One type of pattern model used by intelligence analysts relies on statistics. In fact, a great deal of pattern modeling is statistical. Intelligence deals with a wide variety of statistical modeling techniques. Some of the most useful techniques are easy to learn and require no previous statistical training.

 

Histograms, which are bar charts that show a frequency distribution, are one example of a simple statistical pattern.
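
A minimal Python sketch of a histogram as a pattern model, binning invented hour-of-day observations into a frequency distribution; a plotting package would render the same counts as a bar chart:

# A histogram as a simple statistical pattern model: frequency of events
# by hour of day. The observations are invented for illustration.

from collections import Counter

observations = [1, 1, 2, 2, 2, 3, 3, 3, 3, 22, 22, 23, 23, 23]  # event hours

frequency = Counter(observations)
for hour in sorted(frequency):
    print(f"{hour:02d}:00  {'#' * frequency[hour]}")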

 

Advanced Target Models

 

The example models introduced so far are frequently used in intelligence. They’re fairly straightforward and relatively easy to create. Intelligence also makes use of four model types that are more difficult to create and to analyze, but that support more in-depth analysis. We’ll briefly introduce them here.

 

Systems Models

 

Systems models are well known in intelligence for their use in assessing the performance of weapons systems.

 

 

Systems models have been created for all of the following examples:

 

A republic, a dictatorship, or an oligarchy can be modeled as a political system.

 

Air defense systems, carrier strike groups, special operations teams, and ballistic missile systems all are modeled as military systems.

 

Economic systems models describe the functioning of capitalist or socialist economies, international trade, and informal economies.

 

Social systems include welfare or antipoverty programs, health care systems, religious networks, urban gangs, and tribal groups.

 

Infrastructure systems could include electrical power, automobile manufacturing, railroads, and seaports.

 

A news gathering, production, and distribution system is an example of an information system.

Creating a systems model requires an understanding of the system, developed by examining the linkages and interactions between the elements that compose the system as a whole.

 

 

A system has structure. It is composed of parts that are related (directly or indirectly). It has a defined boundary physically, temporally, and spatially, though it can overlap with or be a part of a larger system.

 

A system has a function. It receives inputs from, and sends outputs into, an outside environment. It is autonomous in fulfilling its function. A main battle tank standing alone is not a system. A tank with a crew, fuel, ammunition, and a communications subsystem is a system.

 

A system has a process that performs its function by transforming inputs into outputs.

 

 

Relationship Models

 

Relationships among entities—people, places, things, and events—are perhaps the most common subject of intelligence modeling. There are four levels of such relationship models, each using increasingly sophisticated analytic approaches: hierarchy, matrix, link, and network models. The four are closely related, representing the same fundamental idea at increasing levels of complexity.

 

Relationship models require a considerable amount of time to create, and maintaining the model (known to those who do it as “feeding the beast”) demands much effort. But such models are highly effective in analyzing complex problems, and the associated graphical displays are powerful in persuading customers to accept the results.

 

Hierarchy Models

 

The hierarchy model is a simple tree structure. Organizational modeling naturally lends itself to the creation of a hierarchy, as anyone who ever drew an organizational chart is aware. A natural extension of such a hierarchy is to use a weighting scheme to indicate the importance of individuals or suborganizations in it.

 

Matrix Models

 

The interaction matrix was introduced earlier. The relationship matrix model is different. It portrays the existence of an association, known or suspected, between individuals. It usually portrays direct connections such as face-to-face meetings and telephone conversations. Analysts can use association matrices to identify those personalities and associations needing a more in-depth analysis to determine the degree of relationships, contacts, or knowledge between individuals.
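
A small Python sketch of an association matrix, with all names and contacts invented: a symmetric table whose entries record known or suspected direct association, from which heavily connected individuals can be flagged for deeper analysis:

# An association (relationship) matrix: rows and columns are individuals,
# a 1 marks a known or suspected direct association. Invented data.

people = ["Adams", "Baker", "Cruz", "Diaz"]
index = {name: i for i, name in enumerate(people)}
matrix = [[0] * len(people) for _ in people]

def associate(a, b):
    """Record a symmetric association between two individuals."""
    i, j = index[a], index[b]
    matrix[i][j] = matrix[j][i] = 1

associate("Adams", "Baker")
associate("Baker", "Cruz")
associate("Cruz", "Diaz")

for name in people:
    print(f"{name}: {sum(matrix[index[name]])} known association(s)")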

 

Link Models

 

A link model allows the view of relationships in more complex tree structures. Though it physically resembles a hierarchy model (both are trees), a link model differs in that it shows different kinds of relationships but does not indicate subordination.

 

Network Models

 

A network model can be thought of as a flexible interrelationship of multiple tree structures at multiple levels. The key limitation of the matrix model discussed earlier is that, because it is a two-dimensional representation, it can deal with the interaction of only two hierarchies at a given level; it cannot deal with interactions at multiple levels or among more than two hierarchies. Network synthesis is an extension of the link or matrix synthesis concept that can handle such complex problems. There are several types of network models. Two are widely used in intelligence:

 

Social network models show patterns of human relationships. The nodes are people, and the links show that some type of relationship exists.

 

Target network models are most useful in intelligence. The nodes can be any type of entity—people, places, things, concepts—and the links show that some type of relationship exists between entities.
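
A sketch of a target network model in Python, assuming the third-party networkx package is installed (pip install networkx); every entity and link below is invented. Degree centrality is shown as one crude indicator of which node is most connected:

# A target network: nodes may be people, organizations, places, or funds;
# links record that some relationship exists. All entities are invented.

import networkx as nx

g = nx.Graph()
g.add_node("Broker A", kind="person")
g.add_node("Front Co.", kind="organization")
g.add_node("Warehouse 7", kind="place")
g.add_node("Account 112", kind="funds")

g.add_edge("Broker A", "Front Co.", relation="manages")
g.add_edge("Front Co.", "Account 112", relation="pays into")
g.add_edge("Front Co.", "Warehouse 7", relation="leases")
g.add_edge("Broker A", "Account 112", relation="withdraws from")

for node, score in sorted(nx.degree_centrality(g).items(),
                          key=lambda kv: -kv[1]):
    print(f"{node:12s} centrality = {score:.2f}")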

 

Spatial and Temporal Models

 

Another way to examine data and to search for patterns is to use spatial modeling—depicting locations of objects in space. Spatial modeling can be used effectively on a small scale. For example, within a building, computer-aided design/computer-aided modeling, known as CAD/CAM, can be a powerful tool for intelligence synthesis. Layouts of buildings and floor plans are valuable in physical security analysis and in assessing production capacity.


 

Spatial modeling on larger scales is usually called geospatial modeling.

 

Patterns of activity over time are important for showing trends. Pattern changes are often used to compare how things are going now with how they went last year (or last decade). Estimative analysis often relies on chronological models.

 

Scenarios

Arguably the most important model for estimative intelligence purposes is the scenario, a very sophisticated model.

 

Alternative scenarios are used to model future situations. These scenarios increasingly are produced as virtual reality models because they are powerful ways to convey intelligence and are very persuasive.

Target Model Combinations

Almost all target models are actually combinations of many models. In fact, most of the models described in the previous sections can be merged into combination models. One simple example is a relationship-time display.

This is a dynamic model where link or network nodes and links (relationships) change, appear, and disappear over time.

We also typically want to have several distinct but interrelated models of the target in order to be able to answer different customer questions.

Submodels

One type of component model is a submodel, a more detailed breakout of the top-level model. It is typical, for complex targets, to have many such submodels of a target that provide different levels of detail.

Participants in the target-centric process then can reach into the model set to pull out the information they need. The collectors of information can drill down into more detail to refine collection targeting and to fill specific gaps.

The intelligence customer can drill down to answer questions, gain confidence in the analyst’s picture of the target, and understand the limits of the analyst’s work. The target model is a powerful collaborative tool.

Collateral Models

In contrast to the submodel, a collateral model may show particular aspects of the overall target model, but it is not simply a detailed breakout of a top-level model. A collateral model typically presents a different way of thinking about the target for a specific intelligence purpose.

The collateral models in Figures 6-7 to 6-9 are examples of the three general types—structural, functional, and process—used in systems analysis. Figures 6-7 and 6-8 are structural models. Figure 6-9 is both a process model and a functional model. In analyzing complex intelligence targets, all three types are likely to be used.

These models, taken together, allow an analyst to answer a wide range of customer questions.

More complex intelligence targets can require a combination of several model types. They may have system characteristics, take a network form, and have spatial and temporal characteristics.

Alternative and Competitive Target Models

Alternative and competitive models are somewhat different things, though they are frequently confused with each other.

Alternative Models

Alternative models are an essential part of the synthesis process. It is important to keep more than one possible target model in mind, especially as conflicting or contradictory intelligence information is collected.

 

“The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.” As law professor David Schum has noted, “the generation of new ideas in fact investigation usually rests upon arranging or juxtaposing our thoughts and evidence in different ways.” To do that we need multiple alternative models.

And, the more inclusive you can be when defining alternative models, the better…

In studies listing the analytic pitfalls that hampered past assessments, one of the most prevalent is failure to consider alternative scenarios, hypotheses, or models.

Analysts have to guard against allowing three things to interfere with their need to develop alternative models:

  • Ego. Former director of national intelligence Mike McConnell once observed that analysts inherently dislike alternative, dissenting, or competitive views. In the target-centric approach, however, the focus shifts away from the analysts themselves and toward contributing to a shared target model.
  • Time. Analysts usually face tight deadlines. They must resist the temptation to go with the model that best fits the evidence without considering alternatives. Otherwise, the result is premature closure, which can cost dearly in the end.
  • The customer. Customers can view a change in judgment as evidence that the original judgment was wrong, not that new evidence forced the change. Furthermore, when presented with two or more target models, customers will tend to pick the one that they like best, which may or may not be the most likely model. Analysts know this.

 

It is the analyst’s responsibility to set a tone of putting egos aside and to convey to all participants in the process, including the customer, that time spent up front developing alternative models is time saved at the end if it keeps them from committing to the wrong model in haste.

Competitive Models

It is well established in intelligence that, if you can afford the resources, you should have independent groups providing competing analyses. This is because we’re dealing with uncertainty. Different analysts, given the same set of facts, are likely to come to different conclusions.

It is important to be inclusive when defining alternative or competitive models.

Summary

Creating a target model starts with defining the relevant system. The system model can be a structural, functional, or process model, or any combination. The next step is to select the generic models or model templates.

Lists and curves are the simplest form of model. In intelligence, comparative models or benchmarks are often used; almost any type of model can be made comparative, typically by creating models of one’s own system side by side with the target system model.

Pattern models are widely used in the intelligence business. Chronological models allow intelligence customers to examine the timing of related events and plan a way to change the course of these events. Geospatial models are popular in military intelligence for weapons targeting and to assess the location and movement of opposing forces.

Relationship models are used to analyze the relationships among elements of the target—organizations, people, places, and physical objects—over time. Four general types of relationship models are commonly used: hierarchy, matrix, link, and network models. The most powerful of these, network models, are increasingly used to describe complex intelligence targets.

 

Competitive and alternative target models are an essential part of the process. Properly used, they help the analyst deal with denial and deception and avoid being trapped by analytic biases. But they take time to create, analysts find it difficult to change or challenge existing judgments, and alternative models give policymakers the option to select the conclusion they prefer—which may or may not be the best choice.

Chapter 7

 

Creating the Model

Believe nothing you hear, and only one half that you see. – Edgar Allan Poe

This chapter describes the steps that analysts go through in populating the target model. Here, we focus on the synthesis part of the target-centric approach, often called collation in the intelligence business.

We discuss the importance of existing pieces of intelligence, both finished and raw, and how best to think about sources of new raw data.

We talk about how the credentials of evidence must be established, introduce widely used informal methods of combining evidence, and touch on structured argumentation as a formal methodology for combining evidence.

Analysts generally go through the actions described here in the course of collation. They may not think of them as separate steps and, in any event, are unlikely to do them in the order presented. They nevertheless almost always do the following:

 

  • Review existing finished intelligence about the target and examine existing raw intelligence
  • Acquire new raw intelligence
  • Evaluate the new raw intelligence
  • Combine the intelligence from all sources into the target model

 

Existing Intelligence

Existing finished intelligence reports typically define the current target model. So information gathering to create or revise a model begins with the existing knowledge base. Before starting an intelligence collection effort, analysts should ensure that they are aware of what has already been found on a subject.

Finished studies or reports on file at an analyst’s organization are the best place to start any research effort. There are few truly new issues.

The databases of intelligence organizations include finished intelligence reports as well as many specialized data files on specific topics. Large commercial firms typically have comparable facilities in-house, or they depend on commercially available databases.

a literature search should be the first step an analyst takes on a new project. The purpose is both to define the current state of knowledge—that is, to understand the existing model(s) of the intelligence target—and to identify the major controversies and disagreements surrounding the target model.

The existing intelligence should not be accepted automatically as fact. Few experienced analysts would blithely accept the results of earlier studies on a topic, though they would know exactly what the studies found. The danger is that, in conducting the search, an analyst naturally tends to adopt a preexisting target model.

In this case, premature closure, or a bias toward the status quo, leads the analyst to keep the existing model even when evidence indicates that a different model is more appropriate.

To counter this tendency, it’s important to do a key assumptions check on the existing model(s).

Do the existing analytic conclusions appear to be valid?

What are the premises on which these conclusions rest, and do they appear to be valid as well?

Has the underlying situation changed so that the premises may no longer apply?

Once the finished reports are in hand, the analyst should review all of the relevant raw intelligence data that already exist. Few things can ruin an analyst’s career faster than sending collectors after information that is already in the organization’s files.

Sources of New Raw Intelligence

Raw intelligence comes from a number of sources, but these typically are categorized into the five major “INTs” defined below.
The definitions of each INT follow:

  • Open source (OSINT). Information of potential intelligence value that is available to the general public
  • Human intelligence (HUMINT). Intelligence derived from information collected and provided by human sources
  • Measurements and signatures intelligence (MASINT). Scientific and technical intelligence obtained by quantitative and qualitative analysis of data (metric, angle, spatial, wavelength, time dependence, modulation, plasma, and hydromagnetic) derived from specific technical sensors
  • Signals intelligence (SIGINT). Intelligence comprising either individually or in combination all communications intelligence, electronics intelligence, and foreign instrumentation signals intelligence
  • Imagery intelligence (IMINT). Intelligence derived from the exploitation of collection by visual photography, infrared sensors, lasers, electro-optics, and radar sensors such as synthetic aperture radar, wherein images of objects are reproduced optically or electronically on film, electronic display devices, or other media

 

The taxonomy approach in this book is quite different. It strives for a breakout that focuses on the nature of the material collected and processed, rather than on the collection means.

Traditional COMINT, HUMINT, and open-source collection are concerned mainly with literal information, that is, information in a form that humans use for communication. The basic product and the general methods for collecting and analyzing literal information are usually well understood by intelligence analysts and the customers of intelligence. It requires no special exploitation after the processing step (which includes translation) to be understood. It literally speaks for itself.

Nonliteral information, in contrast, usually requires special processing and exploitation in order for analysts to make use of it.

 

The logic of this division has been noted by other writers in the intelligence business. British author Michael Herman observed that there are two basic types of collection: one produces evidence in the form of observations and measurements of things (nonliteral), and one produces access to human thought processes.

 

The automation of data handling has been a major boon to intelligence analysts. Information collected from around the globe arrives at the analyst’s desk through the Internet or in electronic message form, ready for review and often presorted on the basis of keyword searches. A downside of this automation, however, is the tendency to treat all information in the same way. In some cases the analyst does not even know which collection source provided the information; after all, everything looks alike on the display screen. But information must be weighed differently depending on its source. And, no matter the source, all information must be evaluated before it is synthesized into the model—the subject to which we now turn.

Evaluating Evidence

The fundamental problem in weighing evidence is determining its credibility—its completeness and soundness.

checking the quality of information used in intelligence analysis is an ongoing, continuous process. Having multiple sources on an issue is not a substitute for having good information that has been thoroughly examined. Analysts should perform periodic checks of the information base for their analytic judgments.

Evaluating the Source

  • Is the source competent (knowledgeable about the information being given)?
  • Did the source have the access needed to get the information?
  • Does the source have a vested interest or bias?

Competence

The Anglo-American judicial system deals effectively with competence: It allows people to describe what they observed with their senses because, absent disability, we are presumed competent to sense things. The judicial system does not allow the average person to interpret what he or she sensed unless the person is qualified as an expert in such interpretation.

Access

The issue of source access typically does not arise because it is assumed that the source had access. When there is reason to be suspicious about the source, however, check whether the source might not have had the claimed access.

In the legal world, checks on source access come up regularly in witness cross-examinations. One of the most famous examples was the “Almanac Trial” of 1858, in which Abraham Lincoln conducted the cross-examination. It was the dying wish of an old friend that Lincoln represent the friend’s son, Duff Armstrong, who was on trial for murder. Lincoln gave his client a tough, artful, and ultimately successful defense; in the trial’s highlight, Lincoln consulted an almanac to discredit a prosecution witness who claimed that he saw the murder clearly because the moon was high in the sky. The almanac showed that the moon was lower on the horizon, and the witness’s access—that is, his ability to see the murder—was called into question.

Vested Interest or Bias

In HUMINT, analysts occasionally encounter the “professional source” who sells information to as many bidders as possible and has an incentive to make the information as interesting as possible. Even the densest sources quickly realize that more interesting information gets them more money.

An intelligence organization faces a problem in using its own parent organization’s (or country’s) test and evaluation results: Many have been contaminated. Some of the test results are fabricated; some contain distortions or omit key points. An honestly conducted, objective test may be a rarity. Several reasons for this problem exist. Tests are sometimes conducted to prove or disprove a preconceived notion and thus unconsciously are slanted. Some results are fabricated because they would show the vulnerability or the ineffectiveness of a system and because procurement decisions often depend on the test outcomes.

Although the majority of contaminated cases probably are never discovered, history provides many examples of this issue.

In examining any test or evaluation results, begin by asking two questions:

  • Did the testing organization have a major stake in the outcome (such as the threat that a program would be canceled due to negative test results or the possibility that it would profit from positive results)?
  • Did the reported outcome support the organization’s position or interests?

If the answer to both questions is yes, be wary of accepting the validity of the test. In the pharmaceutical testing industry, for example, tests have been fraudulently conducted or the results skewed to support the regulatory approval of the pharmaceutical.

A very different type of bias can occur when collection is focused on a particular issue. This bias comes from the fact that, when you look for something in the intelligence business, you may find what you are looking for, whether or not it’s there. In looking at suspected Iraqi chemical facilities prior to 2003, analysts concluded from imagery reporting that the level of activity had increased at the facilities. But the appearance of an increase in activity may simply have been a result of an increase in imagery collection.

David Schum and Jon Morris have published a detailed treatise on human sources in intelligence analysis. They pose a set of twenty-five questions divided into four categories: source competence, veracity, objectivity, and observational sensitivity. Their questions cover in more explicit detail the three questions posed in this section about competence, access, and vested interest.

Evaluating the Communications Channel

A second basic rule of weighing evidence is to look at the communications channel through which the evidence arrives.

The accuracy of a message through any communications system decreases with the length of the link or the number of intermediate nodes.
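
As a rough illustration of that principle (the per-hop figure is purely hypothetical), suppose each intermediary relays a message faithfully 90 percent of the time and errors are independent:

    # Message fidelity across a relay chain, assuming an illustrative,
    # independent per-hop accuracy of 0.9.
    per_hop = 0.9
    for hops in (1, 3, 5):
        print(hops, round(per_hop ** hops, 2))
    # 1 hop: 0.9; 3 hops: 0.73; 5 hops: 0.59 - accuracy falls at each node.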

Large and complex systems tend to have more entropy. The result is often cited as a “poor communication” problem in large organizations.

In the business intelligence world, analysts recognize the importance of the communications channel by using the differentiating terms primary sources for firsthand information, acquired through discussions or other interaction directly with a human source, and secondary sources for information learned through an intermediary, a publication, or online. This division does not consider the many gradations of reliability, and national intelligence organizations commonly do not use the primary/secondary source division. Some national intelligence collection organizations use the term collateral to refer to intelligence gained from other collectors, but it does not have the same meaning as the terms primary and secondary as used in business intelligence.

It’s not unheard of (though fortunately not common) for raw intelligence to be misinterpreted or misanalyzed as it passes through the chain. Organizational or personal biases can shape the interpretation and analysis, especially of literal intelligence. It’s also possible for such biases to shape the analysis of nonliteral intelligence, but that is a more difficult product for all-source analysts to challenge, as noted earlier.

Entropy has another effect in intelligence. An intelligence assertion that “X is a possibility” very often, over time and through diverse communications channels, can become “X may be true,” then “X probably is the case,” and eventually “X is a fact,” without a shred of new evidence to support the assertion. In intelligence, we refer to this as the “creeping validity” problem.

Evaluating the Credentials of Evidence

The major credentials of evidence, as noted earlier, are credibility, reliability, and inferential force. Credibility refers to the extent to which we can believe something. Reliability means consistency or replicability. Inferential force means that the evidence carries weight, or has value, in supporting a conclusion.

U.S. government intelligence organizations have established a set of definitions to distinguish levels of credibility of intelligence:

  • Fact. Verified information, something known to exist or to have happened.
  • Direct information. The content of reports, research, and reflection on an intelligence issue that helps to evaluate the likelihood that something is factual and thereby reduces uncertainty. This is information that can be considered factual because of the nature of the source (imagery, signal intercepts, and similar observations).
  • Indirect information. Information that may or may not be factual because of some doubt about the source’s reliability, the source’s lack of direct access, or the complex (non-concrete) character of the contents (hearsay from clandestine sources, foreign government reports, or local media accounts).

In weighing evidence, the usual approach is to ask three questions that are embedded in the oath that witnesses take before giving testimony in U.S. courts:

  • Is it true?
  • Is it the whole truth?
  • Is it nothing but the truth? (Is it relevant or significant?)

 

Is It True?

Is the evidence factual or opinion (someone else’s analysis)? If it is opinion, question its validity unless the source quotes evidence to support it.

How does it fit with other evidence? The relating of evidence—how it fits in—is best done in the synthesis phase. The data from different collection sources are most valuable when used together.

The synergistic effect of combining data from many sources both strengthens the conclusions and increases the analyst’s confidence in them.

  • HUMINT and OSINT are often melded together to give a more comprehensive picture of people, programs, products, facilities, and research specialties. This is excellent background information to interpret data derived from COMINT and IMINT.
  • Data on environmental conditions during weapons tests, acquired through specialized technical collection, can be used with ELINT and COMINT data obtained during the same test event to evaluate the capabilities of the opponent’s sensor systems.
  • Identification of research institutes and their key scientists and researchers can initially be made through HUMINT, COMINT, or OSINT. Once the organization or individual has been identified by one intelligence collector, the others can often provide extensive additional information.
  • Successful analysis of COMINT data may require correlating raw COMINT data with external information such as ELINT and IMINT, or with knowledge of operational or technical practices.

Is It the Whole Truth?

When asking this question, it is time to do source analysis.

An incomplete picture can mislead as much as an outright lie.

Is It Nothing but the Truth?

It is worthwhile at this point to distinguish between data and evidence. Data become evidence only when the data are relevant to the problem or issue at hand. The simple test of relevance is whether it affects the likelihood of a hypothesis about the target.

Does it help answer a question that has been asked?

Or does it help answer a question that should be asked?

The preliminary or initial guidance from customers seldom tells what they really need to know—an important reason to keep them in the loop through the target-centric process.

Doctors encounter difficulties when they must deal with a patient who has two pathologies simultaneously. Some of the symptoms are relevant to one pathology, some to the other. If the doctor tries to fit all of the symptoms into one diagnosis, he or she is apt to make the wrong call. This is a severe enough problem for doctors, who must deal with relatively few symptoms. It is a much worse problem for intelligence analysts, who typically deal with a large volume of data, most of which is irrelevant.

Pitfalls in Evaluating Evidence

Vividness Weighting

In general, the channel for communication of intelligence should be as short as possible; but when could a short channel become a problem? If the channel is too short, the result is vividness weighting—the phenomenon whereby evidence that is experienced directly is strongest (“seeing is believing”). Customers place the most weight on evidence that they collect themselves—a dangerous pitfall that senior executives fall into repeatedly and that makes them vulnerable to deception.

Michael Herman tells how Churchill, reading Field Marshal Erwin Rommel’s decrypted cables during World War II, concluded that the Germans were desperately short of supplies in North Africa. Basing his interpretation on this raw COMINT traffic, Churchill pressed his generals to take the offensive against Rommel. Churchill did not realize what his own intelligence analysts could have readily told him: Rommel consistently exaggerated his shortages in order to bolster his demands for supplies and reinforcements.

Statistics are the least persuasive form of evidence; abstract (general) text is next; concrete (specific, focused, exemplary) text is a more persuasive form still; and visual evidence, such as imagery or video, is the most persuasive.

Weighing Based on the Source

One of the most difficult traps for an analyst to avoid is that of weighing evidence based on its source.

Favoring the Most Recent Evidence

Analysts often give the most recently acquired evidence the most weight.

The freshest intelligence—crisp, clear, and the focus of the analyst’s attention—often gets more weight than the fuzzy and half-remembered (but possibly more important) information that has had to travel down the long lines of time. The analyst has to remember this tendency and compensate for it. It sometimes helps to go back to the original (older) intelligence and reread it to bring it more freshly to mind.

Favoring or Disfavoring the Unknown

It is hard to decide how much weight to give to answers when little or no information is available for or against each one.

Trusting Hearsay

The chief problem with much of HUMINT (not including documents) is that it is hearsay evidence; and as noted earlier, the judiciary long ago learned to distrust hearsay for good reasons, including the biases of the source and the collector. Sources may deliberately distort or misinform because they want to influence policy or increase their value to the collector.

Finally, and most important, people can be misinformed or lie. COMINT can only report what people say, not the truth about what they say. So intelligence analysts have to use hearsay, but they must also weigh it accordingly.

Unquestioning Reliance on Expert Opinions

Expert opinion is often used as a tool for analyzing data and making estimates. Any intelligence community must often rely on its nation’s leading scientists, economists, and political and social scientists for insights into foreign developments.

outside experts often have issues with objectivity. With experts, an analyst gets not only their expertise but also their biases; some experts have axes to grind, or egos that convince them there is only one right way to do things (their way).

British counterintelligence officer Peter Wright once noted that “on the big issues, the experts are very rarely right.”

Analysts should treat expert opinion as HUMINT and be wary when the expert makes extremely positive comments (“that foreign development is a stroke of genius!”) or extremely negative ones (“it can’t be done”).

Analysis Principle 7-3

Many experts, particularly scientists, are not mentally prepared to look for deception, as intelligence officers should be. It is simply not part of the expert’s training. A second problem, as noted earlier, is that experts often are quite able to deceive themselves without any help from opponents.

Varying the way expert opinion is used is one way to attempt to head off the problems cited here. Using a panel of experts to make analytic judgments is a common method of trying to reach conclusions or to sort through a complex array of interdisciplinary data.

Such panels have had mixed results. One former CIA office director observed that “advisory panels of eminent scientists are usually useless. The members are seldom willing to commit the time to studying the data to be of much help.”

The quality of the conclusions reached by such panels depends on several variables, including the panel’s

  • Expertise
  • Motivation to produce a quality product
  • Understanding of the problem area to be addressed
  • Effectiveness in working as a group

A major advantage of the target-centric approach is that it formalizes the process of obtaining independent opinions.

Both single-source and all-source analysts have to guard against falling into the trap of reaching conclusions too early.

Premature closure also has been described as “affirming conclusions,” based on the observation that people are inclined to verify or affirm their existing beliefs rather than modify or discredit those beliefs.

The primary danger of premature closure is not that one might make a bad assessment because the evidence is incomplete. Rather, the danger is that when a situation is changing quickly or when a major, unprecedented event occurs, the analyst will become trapped by the judgments already made. Chances increase that he or she will miss indications of change, and it becomes harder to revise an initial estimate.

The counterintelligence technique of deception thrives on this tendency to ignore evidence that would disprove an existing assumption.

Denial and deception succeed if one opponent can get the other to make a wrong initial estimate.

Combining Evidence

In almost all cases, intelligence analysis involves combining disparate types of evidence.

Analysts have to have methods for weighing the combined data to help them make qualitative judgments as to which conclusions the various data best support.

Convergent and Divergent Evidence

Two items of evidence are said to be conflicting or divergent if one item favors one conclusion and the other item favors a different conclusion.

two items of evidence are contradictory if they say logically opposing things.

Redundant Evidence

Convergent evidence can also be redundant. To understand the concept of redundancy, it helps to understand its importance in communications theory.

Redundant, or duplicative, evidence can have corroborative redundancy or cumulative redundancy. In both types, the weight of the evidence piles up to reinforce a given conclusion. A simple example illustrates the difference: two sources independently reporting the same arms shipment corroborate each other (corroborative redundancy), whereas reports of several different shipments each add weight to the broader conclusion that arms trafficking is under way (cumulative redundancy).

Formal Methods for Combining Evidence

The preceding sections describe some informal methods for evidence combination. It often is important to combine evidence and demonstrate the logical process of reaching a conclusion based on that evidence by careful argument. The formal process of making that argument is called structured argumentation. Such formal structured argumentation approaches have been around at least since the seventeenth century.

Structured Argumentation

Structured argumentation is an analytic process that relies on a framework to make assumptions, reasoning, rationales, and evidence explicit and transparent. The process begins with breaking down and organizing a problem into parts so that each one can be examined systematically, as discussed in earlier chapters.

As analysts work through each part, they identify the data requirements, state their assumptions, define any terms or concepts, and collect and evaluate relevant information. Potential explanations or hypotheses are formulated and evaluated with empirical evidence, and information gaps are identified.

Formal graphical or numerical processes for combining evidence are time consuming to apply and are not widely used in intelligence analysis. They are usually reserved for cases in which the customer requires them because the issue is critically important, because the customer wants to examine the reasoning process, or because the exact probabilities associated with each alternative are important to the customer.

Wigmore’s Charting Method

John Henry Wigmore was the dean of the Northwestern University School of Law in the early 1900s and author of a ten-volume treatise commonly known as Wigmore on Evidence. In this treatise he defined some principles for rational inquiry into disputed facts and methods for rigorously analyzing and ordering possible inferences from those facts.

Wigmore argued that structured argumentation brings into the open and makes explicit the important steps in an argument, and thereby makes it easier to judge both their soundness and their probative value. One of the best ways to recognize any inherent tendencies one may have in making biased or illogical arguments is to go through the body of evidence using Wigmore’s method.

  • Different symbols are used to show varying kinds of evidence: explanatory, testimonial, circumstantial, corroborative, undisputed fact, and combinations.
  • Relationships between symbols (that is, between individual pieces of evidence) are indicated by their relative positions (for example, evidence tending to prove a fact is placed below the fact symbol).
  • The connections between symbols indicate the probative effect of their relationship and the degree of uncertainty about the relationship.
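
A minimal data-structure sketch of the idea behind such a chart follows; this is not Wigmore’s actual symbol set, and the facts, evidence, and weights are hypothetical:

    # Wigmore-style evidence chart as a tiny graph: evidence items
    # support a fact to be proved, and each link carries a rough
    # probative force. All entries are hypothetical.
    facts = {"F1": "Subject visited the facility"}
    evidence = {
        "E1": ("testimonial",    "Guard reports seeing the subject"),
        "E2": ("circumstantial", "Subject's car logged at the gate"),
    }
    links = [  # (evidence_id, fact_id, probative force)
        ("E1", "F1", "moderate"),
        ("E2", "F1", "strong"),
    ]

    for eid, fid, force in links:
        kind, desc = evidence[eid]
        print(f"{eid} ({kind}) -> {fid} [{force}]: {desc}")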

Even proponents admit that Wigmore’s method is too time-consuming for most practical uses, especially in intelligence analysis, where the analyst typically has limited time.

Nevertheless, making Wigmore’s approach, or something like it, widely usable in intelligence analysis would be a major contribution.

Bayesian Techniques for Combining Evidence

By the early part of the eighteenth century, mathematicians had solved what is called the “forward probability” problem: When all of the facts about a situation are known, what is the probability of a given event happening?

The inverse problem—making judgments about an underlying situation from the events that the situation causes—is of far more interest to intelligence analysts, because they routinely must reason from observed events back to their causes. Thomas Bayes developed a formula for the answer that bears his name: Bayes’ rule.

One advantage claimed for Bayesian analysis is its ability to blend the subjective probability judgments of experts with historical frequencies and the latest sample evidence.
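
A minimal numeric sketch of Bayes’ rule at work; the hypothesis and all probabilities below are invented for illustration:

    # Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
    # P(E) = P(E|H) * P(H) + P(E|not H) * (1 - P(H)).
    prior = 0.30            # expert judgment for H before the new report
    p_e_given_h = 0.80      # chance of seeing this evidence if H is true
    p_e_given_not_h = 0.20  # chance of seeing it anyway if H is false

    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    posterior = p_e_given_h * prior / p_e
    print(round(posterior, 2))   # 0.63: the evidence raises belief in H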

Bayesian analysis seems difficult to teach. It is generally considered to be “advanced” statistics and, given the problems that many people (including intelligence analysts) have with traditional elementary probabilistic and statistical techniques, it seems to require expertise that is not currently resident in the intelligence community or that is available only through expensive software solutions.

A Note about the Role of Information Technology

It may be impossible for new analysts today to appreciate the markedly different work environment that their counterparts faced 40 years ago. Incoming intelligence arrived at the analyst’s desk in hard copy, to be scanned, marked up, and placed in file drawers. Details about intelligence targets—installations, persons, and organizations—were often kept on 5” × 7” cards in card catalog boxes. Less tidy analysts “filed” their most interesting raw intelligence on their desktops and cabinet tops, sometimes in stacks over 2 feet high.

IT systems allow analysts to acquire raw intelligence material of interest (incoming classified cable traffic and open source) and to search, organize, and store it electronically. Such IT capabilities have been eagerly accepted and used by analysts because of their advantages in dealing with the information explosion.

A major consequence of this information explosion is that we must deal with what is called “big data” in collating and analyzing intelligence. Big data has been defined as “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

Analysts, inundated by the flood, have turned to IT tools for extracting meaning from the data. A wide range of such tools exists, including ones for visualizing the data and identifying patterns of intelligence interest, ones for conducting statistical analysis, and ones for running simulation models. Analysts with responsibility for counterterrorism, organized crime, counternarcotics, counterproliferation, or financial fraud can choose from commercially available tools such as Palantir, CrimeLink, Analyst’s Notebook, NetMap, Orion, or VisuaLinks to produce matrix and link diagrams, timeline charts, telephone toll charts, and similar pattern displays.
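
A toy sketch of the kind of link summary these tools automate; the call records are invented, and the commercial tools named above add visualization, timelines, and large-scale data handling:

    # Count calls between each pair of numbers in hypothetical toll
    # records to surface the strongest links for a telephone toll chart.
    from collections import Counter

    calls = [("555-0101", "555-0202"), ("555-0101", "555-0202"),
             ("555-0202", "555-0303"), ("555-0101", "555-0303")]

    link_weights = Counter(tuple(sorted(pair)) for pair in calls)
    for (a, b), n in link_weights.most_common():
        print(f"{a} <-> {b}: {n} call(s)")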

Tactical intelligence units, in both the military and law enforcement, find geospatial analysis tools to be essential.

Some intelligence agencies also have in-house tools that replicate these capabilities. Depending on the analyst’s specialty, some tools may be more relevant than others. All, though, have definite learning curves and their database structures are generally not compatible with each other. The result is that these tools are used less effectively than they might be, and the absence of a single standard tool hinders collaborative work across intelligence organizations.

Summary

In gathering information for synthesizing the target model, analysts should start by reviewing existing finished and raw intelligence. This provides a picture of the current target model. It is important to do a key assumptions check at this point: Do the premises that underlie existing conclusions about the target seem to be valid?

Next, the analyst must acquire and evaluate raw intelligence about the target, and fit it into the target model—a step often called collation. Raw intelligence is viewed and evaluated differently depending on whether it is literal or nonliteral. Literal sources include open source, COMINT, HUMINT, and cyber collection. Nonliteral sources involve several types of newer and highly focused collection techniques that depend heavily on processing, exploitation, and interpretation to turn the material into usable intelligence.

Once a model template has been selected for the target, it becomes necessary to fit the relevant information into the template. Fitting the information into the model template requires a three-step process:

  • Evaluating the source, by determining whether the source (a) is competent, that is, knowledgeable about the information being given; (b) had the access needed to get the information; and (c) had a vested interest or bias regarding the information provided.
  • Evaluating the communications channel through which the information arrived. Information that passes through many intermediate points becomes distorted. Processors and exploiters of collected information can also have a vested interest or bias.
  • Evaluating the credentials of the evidence itself. This involves evaluating (a) the credibility of the evidence, based in part on the previously completed source and communications channel evaluations; (b) its reliability; and (c) its relevance. Relevance is a particularly important evaluation step; it is too easy to fit evidence into the wrong target model.

As evidence is evaluated, it must be combined and incorporated into the target model. Multiple pieces of evidence can be convergent (favoring the same conclusion) or divergent (favoring different conclusions and leading to alternative target models). Convergent evidence can also be redundant, reinforcing a conclusion.

Tools to extract meaning from data, for example, by relationship, pattern, and geospatial analysis, are used by analysts where they add value that offsets the cost of “care and feeding” of the tool. Tools to support structured argumentation are available and can significantly improve the quality of the analytic product, but whether they will find serious use in intelligence analysis is still an open question.

Denial, Deception, and Signaling

There is nothing more deceptive than an obvious fact. – Sherlock Holmes, in “The Boscombe Valley Mystery”

In evaluating evidence and developing a target model, an analyst must constantly take into account the fact that evidence may have been deliberately shaped by an opponent.

Denial and deception are major weapons in the counterintelligence arsenal of a country or organization.

 

They may be the only weapons available for many countries to use against highly sophisticated technical intelligence.

At the opposite extreme, the opponent may intentionally shape what the analyst sees, not to mislead but rather to send a message or signal. It is important to be able to recognize signals and to understand their meaning.

Denial

Denial and deception come in many forms. Denial is somewhat more straightforward.

Deception

Deception techniques are limited only by our imagination. Passive deception might include using decoys or having the intelligence target emulate an activity that is not of intelligence interest—making a chemical or biological warfare plant look like a medical drug production facility, for example. Decoys that have been widely used in warfare include dummy ships, missiles, and tanks.

Active deception includes misinformation (false communications traffic, signals, stories, and documents), misleading activities, and double agents (agents who have been discovered and “turned” to work against their former employers), among others.

Illicit groups (for example, terrorists) conduct most of the deception that intelligence must deal with. Illicit arms traffickers (known as gray arms traffickers) and narcotics traffickers have developed an extensive set of deceptive techniques to evade international restrictions. They use intermediaries to hide financial transactions. They change ship names or aircraft call signs en route to mislead law enforcement officials. One airline changed its corporate structure and name overnight when its name became linked to illicit activities. Gray arms traffickers use front companies and false end-user certificates.

Defense against Denial and Deception: Protecting Intelligence Sources and Methods

In the intelligence business, it is axiomatic that if you need information, someone will try to keep it from you. And we have noted repeatedly that if an opponent can model a system, he can defeat it. So your best defense is to deny your opponent an understanding of your intelligence capabilities. Without such understanding, the opponent cannot effectively conduct D&D.

For small governments, and in the business intelligence world, protection of sources and methods is relatively straightforward. Selective dissemination of and tight controls on intelligence information are possible. But a major government has too many intelligence customers to justify such tight restrictions. Thus these bureaucracies have established an elaborate system to simultaneously protect and disseminate intelligence information. This protection system is loosely called compartmentation, because it puts information in “compartments” and restricts access to the compartments.

In the U.S. intelligence community, the intelligence product, sources, and methods are protected by the sensitive compartmented information (SCI) system. The SCI system uses an extensive set of compartments to protect sources and methods. Only the collectors and processors have access to many of the compartmented materials. Much of the product, however, is protected only by standard markings such as “Secret,” and access is granted to a wide range of people.

Open-source intelligence has little or no protection because the source material is unclassified. However, the techniques for exploiting open-source material, and the specific material of interest for exploitation, can tell an opponent much about an intelligence service’s targets. For this reason, intelligence agencies that translate open source often restrict its dissemination, using markings such as “Official Use Only.”

Higher Level Denial and Deception

A few straightforward examples of denial and deception were cited earlier. But sophisticated deception must follow a careful path; it has to be very subtle (too-obvious clues are likely to tip off the deception) yet not so subtle that your opponent misses it. It is commonly used in HUMINT, but today it frequently requires multi-INT participation or a “swarm” attack to be effective. Increasingly, carefully planned and elaborate multi-INT D&D is being used by various countries. Such efforts even have been given a different name—perception management—that focuses on the end result that the effort is intended to achieve.

Perception management can be effective against an intelligence organization that, through hubris or bureaucratic politics, is reluctant to change its initial conclusions about a topic. If the opposing intelligence organization makes a wrong initial estimate, then long-term deception is much easier to pull off. If D&D are successful, the opposing organization faces an unlearning process: its predispositions and settled conclusions have to be discarded and replaced.

The best perception management results from highly selective targeting, intended to get a specific message to a specific person or organization. This requires knowledge of that person’s or organization’s preferences in intelligence—a difficult feat to accomplish, but the payoff of a successful perception management effort is very high. It can cause an opposing intelligence service to make a miscall or to develop a false sense of security. If you are armed with a well-developed model of the three elements of a foreign intelligence strategy—targets, operations, and linkages—an effective counterintelligence counterattack in the form of perception management or covert action is possible, as the following examples show.

The Farewell Dossier

Detailed knowledge of an opponent is the key to successful counterintelligence, as the “Farewell” operation shows. In 1980 the French internal security service Direction de la Surveillance du Territoire (DST) recruited a KGB lieutenant colonel, Vladimir I. Vetrov, codenamed “Farewell.” Vetrov gave the French some four thousand documents, detailing an extensive KGB effort to clandestinely acquire technical know-how from the West, primarily from the United States.

In 1981 French president François Mitterrand shared the source and the documents (which DST named “the Farewell Dossier”) with U.S. president Ronald Reagan.

In early 1982 the U.S. Department of Defense, the Federal Bureau of Investigation, and the CIA began developing a counterattack. Instead of simply improving U.S. defenses against the KGB efforts, the U.S. team used the KGB shopping list to feed back, through CIA-controlled channels, the items on the list—augmented with “improvements” that were designed to pass acceptance testing but would fail randomly in service. Flawed computer chips, turbines, and factory plans found their way into Soviet military and civilian factories and equipment. Misleading information on U.S. stealth technology and space defense flowed into Soviet intelligence reporting. The resulting failures were a severe setback for major segments of Soviet industry. The most dramatic single event came when the United States provided gas pipeline management software that was installed in the trans-Siberian gas pipeline. The software had a feature that would, at some point, cause the pipeline pressure to build up to a level far above its fracture pressure. The result was the Soviet gas pipeline explosion of 1982, described as the “most monumental non-nuclear explosion and fire ever seen from space.”

Countering Denial and Deception

In recognizing possible deception, an analyst must first understand how deception works. Four fundamental factors have been identified as essential to deception: truth, denial, deceit, and misdirection.

Truth—All deception works within the context of what is true. Truth establishes a foundation of perceptions and beliefs that are accepted by an opponent and can then be exploited in deception. Supplying the opponent with real data establishes the credibility of future communications that the opponent then relies on.

Denial—It’s essential to deny the opponent access to some parts of the truth. Denial conceals aspects of what is true, such as your real intentions and capabilities. Denial often is used when no deception is intended; that is, the end objective is simply to deny knowledge. One can deny without intent to deceive, but not the converse.

Deceit—Successful deception requires the practice of deceit.

Misdirection—Deception depends on manipulating the opponent’s perceptions. You want to redirect the opponent away from the truth and toward a false perception. In operations, a feint is used to redirect the adversary’s attention away from where the real operation will occur.

 

The first three factors allow the deceiver to present the target with desirable, genuine data while reducing or eliminating signals that the target needs to form accurate perceptions. The fourth provides an attractive alternative that commands the target’s attention.

The effectiveness of hostile D&D is a direct reflection of the predictability of collection.

Collection Rules

The best way to defeat D&D is for all of the stakeholders in the target-centric approach to work closely together. The two basic rules for collection, described here, form a complementary set. One rule is intended to provide incentive for collectors to defeat D&D. The other rule suggests ways to defeat it.

The first rule is to establish an effective feedback mechanism.

Relevance of the product to intelligence questions is the correct measure of collection effectiveness, and analysts and customers—not collectors—determine relevance. The system must enforce a content-oriented evaluation of the product, because content is used to determine relevance.

The second rule is to make collection smarter and less predictable. There exist several tried-and-true tactics for doing so:

  • Don’t optimize systems for quality and quantity; optimize for content.
  • Apply sensors in new ways. Analysis groups often can help with new sensor approaches in their areas of responsibility.
  • Consider provocative techniques against D&D targets.

Probing an opponent’s system and watching the response is a useful tactic for learning more about the system. Even so, probing may have its own set of undesirable consequences: The Soviets would occasionally chase and shoot down the reconnaissance aircraft to discourage the probing practice.

  • Hit the collateral or inferential targets. If an opponent engages in D&D about a specific facility, then supporting facilities may allow inferences to be made or expose the deception. Security measures around a facility and the nature and status of nearby communications, power, or transportation facilities may provide a more complete picture.
  • Finally, use deception to protect a collection capability.

The best weapon against D&D is to mislead or confuse opponents about intelligence capabilities, disrupt their warning programs, and discredit their intelligence services.

 

An analyst can often beat D&D simply by using several types of intelligence—HUMINT, COMINT, and so on—in combination, simultaneously, or successively. It is relatively easy to defeat one sensor or collection channel. It is more difficult to defeat all types of intelligence at the same time.

Increasingly, opponents can be expected to use “swarm” D&D, targeting several INTs in a coordinated effort like that used by the Soviets in the Cuban missile crisis and the Indian government in the Pokhran deception.

The Information Instrument

Analysts, whether single- or all-source, are the focal points for identifying D&D. In the types of conflicts that analysts now deal with, opponents have made effective use of a weapon that relies on deception: using both traditional media and social media to paint a misleading picture of their adversaries. Nongovernmental opponents (insurgents and terrorists) have made effective use of this information instrument.

 

the prevalence of media reporters in all conflicts, and the easy access to social media, have given the information instrument more utility. Media deception has been used repeatedly by opponents to portray U.S. and allied “atrocities” during military campaigns in Kosovo, Iraq, Afghanistan, and Syria.

Signaling

Signaling is the opposite of denial and deception. It is the process of deliberately sending a message, usually to an opposing intelligence service.

its use depends on a good knowledge of how the opposing intelligence service obtains and analyzes knowledge. Recognizing and interpreting an opponent’s signals is one of the more difficult challenges an analyst must face. Depending on the situation, signals can be made verbally, by actions, by displays, or by very subtle nuances that depend on the context of the signal.

In negotiations, signals can be both verbal and nonverbal.

True signals often are used in place of open declarations, to provide information while preserving the right of deniability.

Analyzing signals requires examining the content of the signal and its context, timing, and source. Statements made to the press are quite different from statements made through diplomatic channels—the latter usually carry more weight.

Signaling between members of the same culture can be subtle, with high success rates of the signal being understood. Two U.S. corporate executives can signal to each other with confidence; they both understand the rules. A U.S. executive and an Indonesian executive would face far greater risks of misunderstanding each other’s signals. The cultural differences in signaling can be substantial. Cultures differ in their reliance on verbal and nonverbal signals to communicate their messages. The more people rely on nonverbal or indirect verbal signals and on context, the higher the complexity of the signaling.

  • In July 1990 the U.S. State Department unintentionally sent several signals that Saddam Hussein apparently interpreted as a green light to attack Kuwait. State Department spokesperson Margaret Tutwiler said, “[W]e do not have any defense treaties with Kuwait. . . .” The next day, Ambassador April Glaspie told Saddam Hussein, “[W]e have no opinion on Arab-Arab conflicts like your border disagreement with Kuwait.” And two days before the invasion, Assistant Secretary of State John Kelly testified before the House Foreign Affairs Committee that there was no obligation on our part to come to the defense of Kuwait if it were attacked.

 

Analytic Tradecraft in a World of Denial and Deception

Writers often use the analogy that intelligence analysis is like the medical profession.

Analysts and doctors weigh evidence and reach conclusions in much the same fashion. In fact, intelligence analysis, like medicine, is a combination of art, tradecraft, and science. Different doctors can draw different conclusions from the same evidence, just as different analysts do.

But intelligence analysts have a different type of problem than doctors do. Scientific researchers and medical professionals do not routinely have to deal with denial and deception. Though patients may forget to tell them about certain symptoms, physicians typically don’t have an opponent who is trying to deny them knowledge. In medicine, once doctors have a process for treating a pathology, it will in most cases work as expected. The human body won’t develop countermeasures to the treatment. But in intelligence, your opponent may be able to identify the analysis process and counter it. If analysis becomes standardized, an opponent can predict how you will analyze the available intelligence, and then D&D become much easier to pull off.

One cannot establish a process and retain it indefinitely.

Intelligence analysis within the context of D&D is in fact analogous to being a professional poker player, especially in the games of Seven Card Stud or Texas Hold ’em. You have an opponent. Some of the opponent’s resources are in plain sight, some are hidden. You have to observe the opponent’s actions (bets, timing, facial expressions, all of which incorporate art and tradecraft) and do pattern analysis (using statistics and other tools of science).

Summary

In evaluating raw intelligence, analysts must constantly be aware of the possibility that they may be seeing material that was deliberately provided by the opposing side. Most targets of intelligence efforts practice some form of denial. Deception—providing false information—is less common than denial because it takes more effort to execute, and it can backfire.

Defense against D&D starts with your own denial of your intelligence capabilities to opposing intelligence services.

Where one intelligence service has extensive knowledge of another service’s sources and methods, more ambitious and elaborate D&D efforts are possible. Often called perception management, these involve developing a coordinated multi-INT campaign to get the opposing service to make a wrong initial estimate. Once this happens, the opposing service faces an unlearning process, which is difficult. A high level of detailed knowledge also allows for covert actions to disrupt and discredit the opposing service.

A collaborative target-centric process helps to stymie D&D by bringing together different perspectives from the customer, the collector, and the analyst. Collectors can be more effective in a D&D environment with the help of analysts. Working as a team, they can make more use of deceptive, unpredictable, and provocative collection methods that have proven effective in defeating D&D.

Intelligence analysis is a combination of art, tradecraft, and science. In large part, this is because analysts must constantly deal with denial and deception, and dealing with D&D is primarily a matter of artfully applying tradecraft.

Systems Modeling and Analysis

Believe what you yourself have tested and found to be reasonable.

Buddha

In chapter 3, we described the target as three things: as a complex system, as a network, and as having temporal and spatial attributes.

Any entity having the attributes of structure, function, and process can be described and analyzed as a system, as noted in previous chapters.

The basic principles apply in modeling political and economic systems as well. Systems analysis can be applied both to existing systems and to those under development.

A government can be considered a system and analyzed in much the same way—by creating structural, functional, and process models.

Analyzing an Existing System: The Mujahedeen Insurgency

A single weapon can be defeated, as in this case, by tactics. But the proper mix of antiair weaponry could not. The mix here included surface-to-air missiles (SA-7s, British Blowpipes, and Stinger missiles) and machine guns (Oerlikons and captured Soviet Dashika machine guns). The Soviet helicopter operators could defend against some of these, but not all simultaneously. SA-7s were vulnerable to flares; Blowpipes were not. The HINDs could stay out of range of the Dashikas, but then they would be at an effective range for the Oerlikons.3 Unable to know what they might be hit with, Soviet pilots were likely to avoid attacking or rely on defensive maneuvers that would make them almost ineffective—which is exactly what happened.

Analyzing a Developmental System: Methodology

In intelligence, we also are concerned about modeling a system that is under development. The first step in modeling a developmental system, and particularly a future weapons system, is to identify the system(s) under development. Two approaches traditionally have been applied in weapons systems analysis, both based on reasoning paradigms drawn from the writings of philosophers: deductive and inductive.

  • The deductive approach to prediction is to postulate desirable objectives, in the eyes of the opponent; identify the system requirements; and then search the incoming intelligence for evidence of work on the weapons systems, subsystems, components, devices, and basic research and development (R&D) required to reach those objectives.
  • The opposite, an inductive or synthesis approach, is to begin by looking at the evidence of development work and then synthesize the advances in systems, subsystems, and devices that are likely to follow.

A number of writers in the intelligence field have argued that intelligence uses a different method of reasoning—abduction, which seeks to develop the best hypothesis or inference from a given body of evidence. Abduction is much like induction, but its stress is on integrating the analyst’s own thoughts and intuitions into the reasoning process.

The deductive approach can be described as starting from a hypothesis and using evidence to test the hypothesis. The inductive approach is described as evidence-based reasoning to develop a conclusion.7 Evidence-based reasoning is applied in a number of professions. In medicine, it is known as evidence-based practice—applying a combination of theory and empirical evidence to make medical decisions.

All three approaches have advantages and drawbacks. In practice, though, deduction has some advantages over induction and abduction in identifying future systems development.

The problem arises when two or more systems are under development at the same time. Each system will have its R&D process, and it can be very difficult to separate the processes out of the mass of incoming raw intelligence. This is the “multiple pathologies” problem: When two or more pathologies are present in a patient, the symptoms are mixed together, and diagnosing the separate illnesses becomes very difficult. Generally, the deductive technique works better for dealing with the multiple pathologies issue in future systems assessments.

Once a system has been identified as being in development, analysis proceeds to the second step: answering customers’ questions about it. These questions usually are about the system’s functional, process, and structural characteristics—that is, about performance, schedule, risk, and cost.

As the system comes closer to completion, a wider group of customers will want to know what specific targets the system has been designed against, in what circumstances it will be used, and what its effectiveness will be. These matters typically require analysis of the system’s performance, including its suitability for operating in its environment or in accomplishing the mission for which it has been designed. The schedule for completing development and fielding the system, as well as associated risks, also become important. In some cases, the cost of development and deployment will be of interest.

Performance

Performance analyses are done on a wide range of systems, varying from simple to highly complex multidisciplinary systems. Determining the performance of a narrowly defined system is straightforward. More challenging is assessing the performance of a complex system such as an air defense network or a narcotics distribution network. Most complex system performance analysis is now done by using simulation, a topic to which we will return.

Comparative Modeling

Comparative modeling is similar to benchmarking, but the focus is on comparing one group’s system or product performance with an opponent’s.

Comparing your country’s or organization’s developments with those of an opponent can involve four distinct fact patterns. Each pattern poses challenges that the analyst must deal with.

In short, the possibilities can be described as follows:

  • We did it—they did it.
  • We did it—they didn’t do it.
  • We didn’t do it—they did it.
  • We didn’t do it—they didn’t do it.

There are many examples of the “we did it—they did it” sort of intelligence problem, especially in industries in which competitors typically develop similar products.

In the second case, “we did it—they didn’t do it,” the intelligence officer runs into a real problem: It is almost impossible to prove a negative in intelligence. The fact that no intelligence information exists about an opponent’s development cannot be used to show that no such development exists.

The third pattern, “we didn’t do it—they did it,” is the most dangerous type that we encounter. Here the intelligence officer has to overcome opposition from skeptics in his country, because he has no model to use for comparison.

The “we didn’t do it—they did it” case presents analysts with an opportunity to go off in the wrong direction analytically.

This sort of transposition of cause and effect is not uncommon in human source reporting. Part of the skill required of an intelligence analyst is to avoid the trap of taking sources too literally. Occasionally, intelligence analysts must spend more time than they should on problems that are even more fantastic or improbable than that of the German engine killer.

Simulation

Performance simulation typically is a parametric, sensitivity, or “what if” type of analysis; that is, the analyst posits a relationship between two variables (parameters), runs a computer analysis, examines the results, changes the input constants, and runs the simulation again.
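To make the mechanics concrete, here is a minimal sketch of such a one-parameter sweep. The radar range equation used is standard, but every numeric value (power, gain, wavelength, sensitivity, target cross-sections) is an invented illustration, not an estimate of any real system.

```python
# Minimal sketch of a parametric ("what if") sensitivity sweep.
# The radar range equation is standard; all parameter values below
# are invented for illustration.
import numpy as np

def detection_range_km(peak_power_w, antenna_gain, rcs_m2, min_signal_w,
                       wavelength_m=0.03):
    """R = (P * G^2 * lambda^2 * sigma / ((4*pi)^3 * S_min))^(1/4)."""
    r4 = (peak_power_w * antenna_gain**2 * wavelength_m**2 * rcs_m2) \
         / ((4 * np.pi)**3 * min_signal_w)
    return r4**0.25 / 1000.0

# Hold everything constant, vary one parameter, rerun, and compare:
for rcs in [0.1, 1.0, 5.0, 10.0]:          # target cross-section, m^2
    r = detection_range_km(peak_power_w=1e6, antenna_gain=1000.0,
                           rcs_m2=rcs, min_signal_w=1e-13)
    print(f"RCS {rcs:5.1f} m^2 -> detection range ~{r:6.1f} km")
```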

The case also illustrates the common systems analysis problem of presenting the worst-case estimate: National security plans often are made on the basis of a systems estimate; out of fear that policymakers may become complacent, an analyst will tend to make the worst case that is reasonably possible.

The Mirror-Imaging Challenge

Both comparative modeling and simulation have to deal with the risks of mirror imaging. The opponent’s system or product (such as an airplane, a missile, a tank, or a supercomputer) may be designed to do different things or to serve a different market than expected.

The risk in all systems analysis is one of mirror imaging, which is much the same as the mirror-imaging problem in decision-making.

Unexpected Simplicity

In effect, the Soviets applied a version of Occam’s razor (choose the simplest explanation that fits the facts at hand) in their industrial practice. Because they were cautious in adopting new technology, they tended to keep everything as simple as possible. They liked straightforward, proven designs. When they copied a design, they simplified it in obvious ways and got rid of the extra features that the United States tends to put on its weapons systems. The Soviets made maintenance as simple as possible, because the hardware was going to be maintained by people who did not have extensive training.

In a comparison of Soviet and U.S. small jet engine technology, the U.S. model engine was found to have 2.5 times as much materials cost per pound of weight. It was smaller and lighter than the Soviet engine, of course, but it had 12 times as many maintenance hours per flight-hour as the Soviet model, and overall the Soviet engine had a life cycle cost half that of the U.S. engine.10 The ability to keep things simple was the Soviets’ primary advantage over the United States in technology, especially military technology.

Quantity May Replace Quality

U.S. analysts often underestimated the number of units that the Soviets would produce. The United States needed fewer units of a given system to perform a mission, since each unit had more flexibility, quality, and performance ability than its Soviet counterpart. The United States forgot a lesson that it had learned in World War II—U.S. Sherman tanks were inferior to the German Tiger tanks in combat, but the United States deployed a lot of Shermans and overwhelmed the Tigers with numbers.

Schedule

The intelligence customer’s primary concern about systems under development usually centers on performance, as discussed previously.

Also important is the systems development process, which is one of the many types of processes we deal with in intelligence.

Process Models

The functions of any system are carried out by processes. The processes will be different for different systems. That’s true whether you are describing an organization, a weapons system, or an industrial system. Different types of organizations, for example—civil government, law enforcement, military, and commercial organizations—will have markedly different processes. Even similar types of organizations will have different processes, especially in different cultures.

Political, military, economic, and weapons systems analysts all use specialized process-analysis techniques.

Most processes and most process models have feedback loops. Feedback allows the system to be adaptive, that is, to adjust its inputs based on the output. Even simple systems such as a home heating/air conditioning system provide feedback via a thermostat. For complex systems, feedback is essential to prevent the process from producing undesirable output. Feedback is an important part of both synthesis and analysis.

Development Process Models

In determining the schedule for a systems development, we concentrate on examining the development process and identifying the critical points in that process.

An example development process model is shown in Figure 9-2. In this display, the process nodes are separated by function into “swim lanes” to facilitate analysis.

 

The Program Cycle Model

Beginning with the system requirement and progressing to production, deployment, and operations, each phase bears unique indicators and opportunities for collection and synthesis/analysis. Customers of intelligence often want to know where a major system is in this life cycle.

Different types of systems may evolve through different versions of the cycle, and product development differs somewhat from systems development. It is therefore important for the analyst to first determine the specific names and functions of the cycle phases for the target country, industry, or company and then determine exactly where the target program is in that cycle. With that information, analytic techniques can be used to predict when the program might become operational or begin producing output.

It is important to know where a program is in the cycle in order to make accurate predictions.

A general rule of thumb is that the more phases in the program cycle, the longer the process will take, all other things being equal. Countries and organizations with large, stable bureaucracies typically have many phases, and the process, whatever it may be, takes that much longer.

Program Staffing

The duration of any stage of the cycle shown in the Generic Program Cycle is determined by the type of work involved and the number and expertise of workers assigned.

 

Fred Brooks, one of the premier figures in computer systems development, defined four types of projects in his book The Mythical Man-Month. Each type of project has a unique relationship between the number of workers needed (the project loading) and the time it takes to complete the effort.

The first type of project is a perfectly partitionable task—that is, one that can be completed in half the time by doubling the number of workers.

A second type of project involves the unpartitionable task…The profile is referred to here as the “baby production curve,” because no matter how many women are assigned to the task, it takes nine months to produce a baby.

Most small projects fit the curve shown in the lower left of the figure, which is a combination of the first two curves. In this case a project can be partitioned into subtasks, but the time it takes for people working on different subtasks to communicate with one another will eventually balance out the time saved by adding workers, and the curve levels off.

Large projects tend to be dominated by communication. At some point, shown as the bottom point of the lower right curve, adding additional workers begins to slow the project because all workers have to spend more time in communication.
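A minimal sketch of these curves follows. The pairwise-communication term n(n-1)/2 is a common textbook formalization of Brooks’s argument, not his exact formula, and the constants are assumptions.

```python
# Illustrative formalization of Brooks's curves: completion time as
# a function of project loading. The n*(n-1)/2 communication-channel
# term is a common textbook model; total_work and comm_cost are
# invented constants.
def completion_time(n_workers, total_work=120.0, comm_cost=0.05):
    """Months to finish: divided work plus communication overhead."""
    channels = n_workers * (n_workers - 1) / 2
    return total_work / n_workers + comm_cost * channels

for n in [1, 2, 5, 10, 20, 40, 60]:
    print(f"{n:3d} workers -> {completion_time(n):6.1f} months")
# With comm_cost = 0 this is the perfectly partitionable task; as
# comm_cost grows, time levels off and finally rises again, the
# curve where adding workers to a late project makes it later.
```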

The Technology Factor

Technology is another important factor in any development schedule; and technology is neither available nor applied in the same way everywhere. An analyst in a technologically advanced country, such as the United States, tends to take for granted that certain equipment—test equipment, for example—will be readily available and will be of a certain quality.

There is also a definite schedule advantage to not being the first to develop a system. A country or organization that is not a leader in technology development has the advantage of learning from the leader’s mistakes, an advantage that entails being able to keep research and development costs low and avoid wrong paths.

A basic rule of engineering is that you are halfway to a solution when you know that there is a solution, and you are three-quarters there when you know how a competitor solved the problem. It took much less time for the Soviets to develop atomic and hydrogen bombs than U.S. intelligence had predicted. The Soviets had no principles of impotence or doubts to slow them down.

Risk

Analysts often assume that the programs and projects they are evaluating will be completed on time and that the target system will work perfectly. They would seldom be so foolish in evaluating their own projects or the performance of their own organizations. Risk analysis needs to be done in any assessment of a target program.

It is typically difficult to do and, once done, difficult to get the customer to accept. But it is important to do because intelligence customers, like many analysts, also tend to assume that an opponent’s program will be executed perfectly.

One fairly simple but often overlooked approach to evaluating the probability of success is to examine the success rate of similar ventures.

Known risk areas can be readily identified from past experience and from discussions with technical experts who have been through similar projects. The risks fall into four major categories—programmatic, technical, production, and engineering. Analyzing potential problems requires identifying specific potential risks from each category. Some of these risks include the following:

  • Programmatic: funding, schedule, contract relationships, political issues
  • Technical: feasibility, survivability, system performance
  • Production: manufacturability, lead times, packaging, equipment
  • Engineering: reliability, maintainability, training, operations

Risk assessment quantifies risks and ranks them to establish those of most concern. A typical ranking is based on the risk factor, which is a mathematical combination of the probability of failure and the consequence of failure. This assessment requires a combination of expertise and software tools in a structured and consistent approach to ensure that all risk categories are considered and ranked.
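The sketch below illustrates one way such a ranking can be computed. The combining rule RF = Pf + Cf - Pf*Cf is one standard form from systems engineering practice (it behaves like a probabilistic OR), not necessarily the one any given organization uses, and every risk item and score is invented.

```python
# Minimal sketch of quantitative risk ranking. Pf and Cf are scored
# on a 0-to-1 scale; RF = Pf + Cf - Pf*Cf is one common combining
# rule. All items and scores below are invented for illustration.
risks = [
    # (category, risk, Pf, Cf)
    ("Programmatic", "funding cut",        0.30, 0.90),
    ("Technical",    "guidance accuracy",  0.50, 0.70),
    ("Production",   "long-lead parts",    0.60, 0.40),
    ("Engineering",  "reliability growth", 0.20, 0.50),
]

def risk_factor(pf, cf):
    return pf + cf - pf * cf

for cat, name, pf, cf in sorted(risks, key=lambda r: risk_factor(r[2], r[3]),
                                reverse=True):
    print(f"RF {risk_factor(pf, cf):.2f}  {cat:12s} {name}")
```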

Risk management defines alternative paths to minimize risk and sets criteria for initiating or terminating these activities. It includes identifying alternatives, options, and approaches to mitigation.

Examples are initiation of parallel developments (for example, funding two manufacturers to build a satellite, where only one satellite is needed), extensive development testing, addition of simulations to check performance predictions, design reviews by consultants, or focused management attention on specific elements of the program. A number of decision analysis tools are useful for risk management. The most widely used tool is the Program Evaluation and Review Technique (PERT) chart, which shows the interrelationships and dependencies among tasks in a program on a timeline.
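In the spirit of a PERT chart, the sketch below models tasks and dependencies as a directed acyclic graph and computes earliest finish times with a forward pass; the chain that sets the end date is the critical path. Task names and durations are hypothetical.

```python
# PERT-style sketch: tasks and dependencies form a directed acyclic
# graph; a forward pass over a topological ordering yields earliest
# finish times. Task names and durations (months) are hypothetical.
import networkx as nx

durations = {"design": 6, "prototype": 9, "test_rig": 4,
             "flight_test": 5, "production": 12}
g = nx.DiGraph([("design", "prototype"), ("design", "test_rig"),
                ("prototype", "flight_test"), ("test_rig", "flight_test"),
                ("flight_test", "production")])

earliest_finish = {}
for task in nx.topological_sort(g):
    start = max((earliest_finish[p] for p in g.predecessors(task)), default=0)
    earliest_finish[task] = start + durations[task]

print("Earliest completion: month", max(earliest_finish.values()))
```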

Cost

Systems analysis usually doesn’t focus heavily on cost estimates. The usual assumption is that costs will not keep the system from being completed. Sometimes, though, the costs are important because of their effect on the overall economy of a country.

Estimating the cost of a system usually starts with comparative modeling. That is, you begin with an estimate of what it would cost your organization or an industry in your country to build something. You multiply that number by a factor that accounts for the difference in costs of the target organization (and they will always be different).

When several system models are being considered, cost-utility analysis may be necessary. Cost-utility analysis is an important part of decision prediction. Many decision-making processes, especially those that require resource allocation, make use of cost-utility analysis. For an analyst assessing a foreign military’s decision whether to produce a new weapons system, it is a useful place to start. But the analyst must be sure to take “rationality” into account. As noted earlier, what is “rational” is different across cultures and from one individual to the next. It is important for the analyst to understand the logic of the decision maker—that is, how the opposing decision maker thinks about topics such as cost and utility.

In performing cost-utility analysis, the analyst must match cost figures to the same time horizon over which utility is being assessed. This is difficult when the horizon extends more than a few years out. Life-cycle costs should be considered for new systems, and many new systems have life cycles in the tens of years.
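A minimal sketch of matching cost to the utility horizon: discounting a hypothetical system’s operating costs over an assumed twenty-year life. All figures and the discount rate are invented.

```python
# Sketch of discounted life-cycle cost over an assumed 20-year
# horizon. Every figure here is an illustrative assumption.
acquisition_m = 500.0        # acquisition cost, $M, paid in year 0
annual_ops_m = 40.0          # operations and support, $M per year
rate, life_years = 0.05, 20

lcc = acquisition_m + sum(annual_ops_m / (1 + rate) ** t
                          for t in range(1, life_years + 1))
print(f"Discounted life-cycle cost: ~${lcc:,.0f}M over {life_years} years")
```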

Operations Research

A number of specialized methodologies are used to do systems analysis. Operations research is one of the more widely used ones.

Operations research has a rigorous process for defining problems that can be usefully applied in intelligence. As one specialist in the discipline has noted, “It often occurs that the major contribution of the operations research worker is to decide what is the real problem.” Understanding the problem often requires understanding the environment and/or system in which an issue is embedded, and operations researchers do that well.

After defining the problem, the operations research process requires representing the system in mathematical form. That is, the operations researcher builds a computational model of the system and then manipulates or solves the model, using computers, to come up with an answer that approximates how the real-world system should function. Systems of interest in intelligence are characterized by uncertainty, so probability analysis is a commonly used approach.

Two widely used operations research techniques are linear programming and network analysis. They are used in many fields, such as network planning, reliability analysis, capacity planning, expansion capability determination, and quality control.

Linear Programming

Linear programming involves planning the efficient allocation of scarce resources, such as material, skilled workers, machines, money, and time.

Linear programs are simply systems of linear equations or inequalities that are solved in a manner that yields as its solution an optimum value—the best way to allocate limited resources, for example. The optimum value is based on some single-goal statement (provided to the program in the form of what is called a linear objective function). Linear programming is often used in intelligence for estimating production rates, though it has applicability in a wide range of disciplines.
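A minimal production-rate sketch using scipy’s general-purpose linprog solver follows. All resource coefficients and limits are invented; the point is the form of the problem, a linear objective maximized subject to linear resource constraints.

```python
# Minimal production-rate sketch with scipy's linprog. All resource
# coefficients and limits are invented. linprog minimizes, so the
# objective is negated to maximize total monthly output.
from scipy.optimize import linprog

# Variables: x0 = tanks/month, x1 = APCs/month
c = [-1.0, -1.0]                      # maximize x0 + x1
A_ub = [[10.0, 4.0],                  # tons of steel per unit
        [50.0, 20.0]]                 # skilled labor-hours per unit
b_ub = [400.0, 1800.0]                # steel and labor available monthly
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(f"Max monthly output: {-res.fun:.0f} units (tanks, APCs = {res.x})")
```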

Network Analysis

In chapter 10 we’ll investigate the concept of network analysis as applied to relationships among entities. Network analysis in an operations research sense is not the same. Here, networks are interconnected paths over which things move. The things can be automobiles (in which case we are dealing with a network of roads), oil (with a pipeline system), electricity (with wiring diagrams or circuits), information signals (with communication systems), or people (with elevators or hallways).

In intelligence against networks, we frequently are concerned with things like maximum throughput of the system, the shortest (or cheapest) route between two or more locations, or bottlenecks in the system.
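Both questions reduce to standard graph computations. The sketch below finds the maximum throughput and the bottleneck (minimum cut) of a hypothetical pipeline network; node names and capacities are invented.

```python
# Sketch of throughput and bottleneck analysis on a hypothetical
# pipeline network; names and capacities are invented.
import networkx as nx

g = nx.DiGraph()
g.add_edge("field", "pump_a", capacity=40)
g.add_edge("field", "pump_b", capacity=30)
g.add_edge("pump_a", "refinery", capacity=25)
g.add_edge("pump_b", "refinery", capacity=35)

flow_value, _ = nx.maximum_flow(g, "field", "refinery")
_, (source_side, sink_side) = nx.minimum_cut(g, "field", "refinery")
bottlenecks = [(u, v) for u in source_side for v in g[u] if v in sink_side]
print("Max throughput:", flow_value)       # 55
print("Bottleneck edges:", bottlenecks)    # the saturated cut
```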

Summary

Any entity having the attributes of structure, function, and process can be described and analyzed as a system. Systems analysis is used in intelligence extensively for assessing foreign weapons systems performance. But it also is used to model political, economic, infrastructure, and social systems.

Modeling the structure of a system can rely on an inductive, a deductive, or an abductive approach.

Functional assessments typically require analysis of a system’s performance. Comparative performance analysis is widely used in such assessments. Simulations are used to prepare more sophisticated predictions of a system’s performance.

Process analysis is important for assessing organizations and systems in general. Organizational processes vary by organization type and across cultures. Process analysis also is used to determine systems development schedules and in looking at the life cycle of a program. Program staffing and the technologies involved are other factors that shape development schedules.

10 Network Modeling and Analysis

Future conflicts will be fought more by networks than by hierarchies, and whoever masters the network form will gain major advantages.

John Arquilla and David Ronfeldt, RAND Corporation

In intelligence, we’re concerned with many types of networks: communications, social, organizational, and financial networks, to name just a few. The basic principles of modeling and analysis apply across most types of networks.

Intelligence has the job of providing an advantage in conflicts by reducing uncertainty.

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It has been used for years in the U.S. intelligence community against targets such as terrorist groups and narcotics traffickers. The netwar model of multidimensional conflict between opposing networks is more and more applicable to all intelligence, and network analysis is our tool for examining the opposing network.

A few definitions:

 

  • Network—a group of elements forming a unified whole, also known as a system
  • Node—an element of a system that represents a person, place, or physical thing
  • Cell—a subordinate organization formed around a specific process, capability, or activity within a designated larger organization
  • Link—a behavioral, physical, or functional relationship between nodes
 

Link Models

Link modeling has a long history; the Los Angeles Police Department reportedly first used it in the 1940s as a tool for assessing organized crime networks. Its primary purpose was to display relationships among people or between people and events. Link models demonstrated their value in discerning the complex and typically circuitous ties between entities.

Some types of link diagrams are referred to as horizontal relevance trees. Their essence is the graphical representation of (a) nodes and their connection patterns or (b) entities and relationships.

Most humans simply cannot assimilate all the information collected on a topic over the course of several years. Yet a typical goal of intelligence synthesis and analysis is to develop precise, reliable, and valid inferences (hypotheses, estimations, and conclusions) from the available data for use in strategic decision-making or operational planning. Link models directly support such inferences.

The primary purpose of link modeling is to facilitate the organization and presentation of data to assist the analytic process. A major part of many assessments is the analysis of relationships among people, organizations, locations, and things. Once the relationships have been created in a database system, they can be displayed and analyzed quickly in a link analysis program.

To be useful in intelligence analysis, the links should not only identify relationships among data items but also show the nature of their ties. A subject-verb-object display has been used in the intelligence community for several decades to show the nature of such ties, and it is sometimes used in link displays.

Quantitative and temporal (date stamping) relationships have also been used when the display software has a filtering capability. Filters allow the user to focus on connections of interest and can simplify by several orders of magnitude the data shown in a link display.
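A minimal sketch of both ideas: links stored as date-stamped subject-verb-object triples, with a filter that keeps only the time window of interest. The data are fabricated.

```python
# Date-stamped subject-verb-object links with a temporal filter.
# All data are fabricated for illustration.
from datetime import date

links = [
    ("Karim", "paid",   "Farid",  date(2015, 3, 2)),
    ("Farid", "called", "Karim",  date(2015, 6, 9)),
    ("Karim", "met",    "Nasser", date(2016, 1, 4)),
]
in_window = [l for l in links
             if date(2015, 1, 1) <= l[3] <= date(2015, 12, 31)]
for subj, verb, obj, when in in_window:
    print(when, subj, verb, obj)
```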

Link modeling has been almost completely replaced by network modeling, discussed next, because network modeling offers a number of advantages in dealing with complex networks.

Network Models

Most modeling and analysis in intelligence today focuses on networks.

Some Network Types

A target network can include friendly or allied entities. It can include neutrals that your customer wishes to influence—either to become an ally or to remain neutral.

Social Networks

When intelligence analysts talk about network analysis, they often mean social network analysis (SNA). SNA involves identifying and assessing the relationships among people and groups—the nodes of the network. The links show relationships or transactions between nodes. So a social network model provides a visual display of relationships among people, and SNA provides a visual or mathematical analysis of the relationships. SNA is used to identify key people in an organization or social network and to model the flow of information within the network.

Organizational Networks

Management consultants often use SNA methodology with their business clients, referring to it as organizational network analysis. It is a method for looking at communication and social networks within a formal organization. Organizational network modeling is used to create statistical and graphical models of the people, tasks, groups, knowledge, and resources of organizations.

Commercial Networks

In competitive intelligence, network analysis tends to focus on networks where the nodes are organizations.

As Babson College professor and business analyst Liam Fahey noted, competition in many industries is now as much competition between networked enterprises

Fahey has described several such networks and defined five principal types:

  • Vertical networks. Networks organized across the value chain; for example, 3M Corporation goes from mining raw materials to delivering finished products.
  • Technology networks. Alliances with technology sources that allow a firm to maintain technological superiority, such as the CISCO Systems network.
  • Development networks. Alliances focused on developing new products or processes, such as the multimedia entertainment venture DreamWorks SKG.
  • Ownership networks. Networks in which a dominant firm owns part or all of its suppliers, as do the Japanese keiretsu.
  • Political networks. Those focused on political or regulatory gains for their members, for example, the National Association of Manufacturers.

Hybrids of the five are possible, and in some cultures such as in the Middle East and Far East, families can be the basis for a type of hybrid business network.

 

Financial Networks

Financial networks tend to feature links among organizations, though individuals can be important nodes, as in the Abacha family funds-laundering case. These networks focus on topics such as credit relationships, financial exposures between banks, liquidity flows in the interbank payment system, and funds-laundering transactions. The relationships among financial institutions, and the relationships of financial institutions with other organizations and individuals, are best captured and analyzed with network modeling.

Global financial markets are interconnected and therefore amenable to large-scale modeling. Analysis of financial system networks helps economists to understand systemic risk and is key to preventing future financial crises.

Threat Networks

Military and law enforcement organizations define a specific type of network, called a threat network. These are networks that are opposed to friendly networks. Such networks have been defined as being “comprised of people, processes, places, and material—components that are identifiable, targetable, and exploitable.”

A premise of threat network modeling is that all such networks have vulnerabilities that can be exploited. Intelligence must provide an understanding of how the network operates so that customers can identify actions to exploit the vulnerabilities.

Threat networks, no matter their type, can access political, military, economic, social, infrastructure, and information resources. They may connect to social structures in multiple ways (kinship, religion, former association, and history)—providing them with resources and support. They may make use of the global information networks, especially social media, to obtain recruits and funding and to conduct information operations to gain recognition and international support.

Other Network Views

Target networks can be a composite of the types described so far. That is, they can have social, organizational, commercial, and financial elements, and they can be threat net- works. But target networks can be labeled another way. They generally take one of the following relationship forms:

  • Functional networks. These are formed for a specific purpose. Individuals and organizations in this network come together to undertake activities based primarily on the skills, expertise, or particular capabilities they offer. Commercial networks, crime syndicates, and insurgent groups all fall under this label.
  • Family and cultural networks. Some members or associates have familial bonds that may span generations. Or the network shares bonds due to a shared culture, language, religion, ideology, country of origin, and/or sense of identity. Friendship networks fall into this category, as do proximity networks—where the network has bonds due to geographic or proximity ties (such as time spent together in correctional institutions).
  • Virtual networks. These are a relatively new phenomenon. In these networks, participants seldom (possibly never) physically meet, but work together through the Internet or some other means of communication. Networks involved in online fraud, theft, or funds laundering are usually virtual networks. Social media often are used to operate virtual networks.

Modeling the Network

Target networks can be modeled manually, or by using computer algorithms to automate the process. Using open-source and classified HUMINT or COMINT, an analyst typically goes through the following steps in manually creating a network model:

  • Understand the environment.

You should start by understanding the setting in which the network operates. That may require looking at all six of the PMESII factors that constitute the environment, and almost certainly at more than one of these factors. This approach applies to most networks of intelligence interest, again recognizing that “military” refers to that part of the network that applies force (usually physical force) to serve network interests. Street gangs and narcotics traffickers, for example, typically have enforcement arms.

  • Select or create a network template.

Pattern analysis, link analysis, and social network analysis are the foundational analytic methods that enable intelligence analysts to begin templating the target network. To begin with, are the networks centralized or decentralized? Are they regional or transnational? Are they virtual, familial, or functional? Are they a combination? This information provides a rough idea of their structure, their adaptability, and their resistance to disruption.

  • Populate the network.

If you don’t have a good idea what the network template looks like, you can apply a technique that is sometimes called “snowballing.” You begin with a few key members of the target network. Then add nodes and linkages based on the information these key members provide about others. Over time, COMINT and other collection sources (open source, HUMINT) allow the network to be fleshed out. You identify the nodes, name them, and determine the linkages among them. You also typically need to determine the nature of the link. For example, is it a familial link, a transactional link, or a hostile link?
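Snowballing is essentially a breadth-first expansion from the seed members, as the sketch below illustrates. The reported_contacts table stands in for incoming HUMINT/COMINT reporting and is entirely fictitious.

```python
# Sketch of snowballing: breadth-first expansion from seed members.
# The reported_contacts dict stands in for incoming reporting and
# is entirely fictitious.
import networkx as nx
from collections import deque

reported_contacts = {
    "Amir":   [("Bashir", "familial"), ("Hamid", "transactional")],
    "Bashir": [("Hamid", "transactional"), ("Omar", "hostile")],
    "Hamid":  [],
    "Omar":   [],
}

net = nx.Graph()
queue, seen = deque(["Amir"]), {"Amir"}        # seed member(s)
while queue:
    member = queue.popleft()
    for contact, link_type in reported_contacts.get(member, []):
        net.add_edge(member, contact, link=link_type)
        if contact not in seen:
            seen.add(contact)
            queue.append(contact)

print(list(net.edges(data=True)))
```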

Computer-Assisted and Automated Modeling

Although manual modeling is still used, commercial network tools such as Analyst’s Notebook and Palantir can now help. One option for using these tools is to enter the data manually but to rely on the tool to create and manipulate the network model electronically.

Analyzing the Network

Analyzing a network involves answering the classic questions—who-what-where-when-how-why—and placing the answers in a format that the customer can understand and act upon, which is known as “actionable intelligence.” Analysis of the network pattern can help identify the what, when, and where. Social network analysis typically identifies who. And nodal analysis can tell how and why.

Nodal Analysis

As noted throughout this book, nodes in a target network can include persons, places, objects, and organizations (which also could be treated as separate networks). Where the node is an organization, it may be appropriate to assess the role of the organization in the larger network—that is, to simply treat it as a node.

The usual purpose of nodal analysis is to identify the most critical nodes in a target network. This requires analyzing the properties of individual nodes, and how they affect or are affected by other nodes in the network. So the analyst must understand the behavior of many nodes and, where the nodes are organizations, the activities taking place within the nodes.

Social Network Analysis

Social network analysis, in which all of the network nodes are persons or groups, is widely used in the social sciences, especially in studies of organizational behavior. In intelligence, as noted earlier, we more frequently use target network analysis, in which almost anything can be a node.

 

To understand a social network, we need a full description of the social relationships in the network. Ideally, we would know about every relationship between each pair of actors in the network.

In summary, SNA is a tool for understanding the internal dynamics of a target network and how best to attack, exploit, or influence it. Instead of assuming that taking out the leader will disrupt the network, SNA helps to identify the distribution of power in the network and the influential nodes—those that can be removed or influenced to achieve a desired result. SNA also is used to describe how a network behaves and how its connectivity shapes its behavior.

Several analytic concepts that come along with SNA also apply to target network analysis. The most useful concepts are centrality and equivalence. These are used today in the analysis of intelligence problems related to terrorism, arms networks, and illegal narcotics organizations.

The extent to which an actor can reach others in the network is a major factor in determining the power that the actor wields. Three basic sources of this advantage are high degree, high closeness, and high betweenness.

Actors who have many network ties have greater opportunities because they have choices. Their rich set of choices makes them less dependent than those with fewer ties and hence more powerful.

The network centrality of the individuals removed will determine the extent to which the removal impedes continued operation of the activity. Thus centrality is an important ingredient (but by no means the only one) in considering the identification of network vulnerabilities.
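All three measures are easy to compute with standard network software. In the toy network below (invented names), a broker who bridges two clusters is flagged by betweenness even though degree alone would miss him.

```python
# Degree, closeness, and betweenness centrality on a toy network.
# "Fixer" bridges two otherwise separate clusters, so betweenness
# flags him despite an unremarkable degree. Names are invented.
import networkx as nx

g = nx.Graph([("A", "B"), ("A", "C"), ("B", "C"),        # cluster 1
              ("X", "Y"), ("X", "Z"), ("Y", "Z"),        # cluster 2
              ("C", "Fixer"), ("Fixer", "X")])           # the bridge

for name, scores in [("degree", nx.degree_centrality(g)),
                     ("closeness", nx.closeness_centrality(g)),
                     ("betweenness", nx.betweenness_centrality(g))]:
    top = max(scores, key=scores.get)
    print(f"highest {name:11s}: {top} ({scores[top]:.2f})")
```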

A second analytic concept that accompanies SNA is equivalence. The disruptive effectiveness of removing one individual or a set of individuals from a network (such as by making an arrest or hiring a key executive away from a business competitor) depends not only on the individual’s centrality but also on some notion of his uniqueness, that is, on whether or not he has equivalents.

The notion of equivalence is useful for strategic targeting and is tied closely to the concept of centrality. If nodes in the social network have a unique role (no equivalents), they will be harder to replace.

Network analysis literature offers a variety of concepts of equivalence. Three in particular are quite distinct and, among them, seem to capture most of the important ideas on the subject. The three concepts are substitutability, stochastic equivalence, and role equivalence. Each can be important in specific analysis and targeting applications.

Substitutability is easiest to understand; it can best be described as interchangeability. Two objects or persons in a category are substitutable if they have identical relationships with every other object in the category.

Individuals who have no network substitutes usually make the most worthwhile targets for removal.

Substitutability also has relevance to detecting the use of aliases. The use of an alias by a criminal will often show up in a network analysis as the presence of two or more substitutable individuals (who are in reality the same person with an alias). The interchangeability of the nodes actually indicates the interchangeability of the names.
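A minimal sketch of substitutability as structural interchangeability: two nodes are flagged when their neighbor sets (excluding each other) are identical. On this invented data, the test surfaces exactly such a possible alias pair.

```python
# Substitutability as identical neighbor sets; flagged pairs may be
# one person operating under an alias. All names are invented.
import networkx as nx
from itertools import combinations

g = nx.Graph([("Samir", "bank_9"), ("Samir", "courier"),
              ("Abu_T", "bank_9"), ("Abu_T", "courier"),
              ("Rafiq", "courier")])

for a, b in combinations(g.nodes, 2):
    if set(g[a]) - {b} == set(g[b]) - {a} and g.degree(a) > 0:
        print(f"{a} and {b} are substitutable: possible alias pair")
```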

Stochastic equivalence is a slightly more sophisticated idea. Two network nodes are stochastically equivalent if the probabilities of their being linked to any other particular node are the same. Narcotics dealers working for one distribution organization could be seen as stochastically equivalent if they, as a group, all knew roughly 70 percent of the group, did not mix with dealers from any other organizations, and all received their narcotics from one source.

Role equivalence means that two individuals play the same role in different organizations, even if they have no common acquaintances at all. Substitutability implies role equivalence, but not the converse.

Stochastic equivalence and role equivalence are useful in creating generic models of target organizations and in targeting by analogy—for example, the explosives expert is analogous to the biological expert in planning collection, analyzing terrorist groups, or attacking them.

Organizational Network Analysis

Organizational network analysis is a well-developed discipline for analyzing organizational structure. The traditional hierarchical description of an organizational structure does not sufficiently portray entities and their relationships.

The typical organization also is a system that can be viewed (and analyzed) from the same three perspectives previously discussed: structure, function, and process.

Structure here refers to the components of the organization, especially people and their relationships; this chapter deals with that.

Function refers to the outcome or results produced and tends to focus on decision making.

Process describes the sequences of activities and the expertise needed to produce the results or outcome. Fahey, in his assessment of organizational infrastructure, described four perspectives: structure, systems, people, and decision-making processes. Whatever their names, all three (or four, following Fahey’s example) perspectives must be considered.

Depending on the goal, an analyst may need to assess the network’s mission, its power distribution, its human resources, and its decision-making processes. The analyst might ask questions such as, Where is control exercised? Which elements provide support services? Are their roles changing? Network analysis tools are valuable for this sort of analysis.

Threat Network Analysis

We want to develop a detailed understanding of how a threat network functions by identifying its constituent elements, learning how its internal processes work to carry out operations, and seeing how all of the network components interact.

Assessing threat networks requires, among other things, looking at the following:

  • Command-and-control structure. Threat networks can be decentralized, or flat. They can be centralized, or hierarchical. The structures will vary, but they are all designed to facilitate the attainment of the network’s goals and continued survival.
  • Closeness. This is a measure of the members’ shared objectives, kinship, ideology, religion, and personal relations that bond the network and facilitate recruiting new members.
  • Expertise. This includes the knowledge, skills, and abilities of group leaders and members.
  • Resources. These include weapons, money, social connections, and public support.
  • Adaptability. This is a measure of the network’s ability to learn and adjust behaviors and modify operations in response to opposing actions.
  • Sanctuary. These are locations where the network can safely conduct planning, training, and resupply.

Primary among these networks’ characteristics is the ability to adapt over time, specifically to blend into the local population and to quickly replace losses of key personnel and recruit new members. The networks also tend to be difficult to penetrate because of their insular nature and the bonds that hold them together. They typically are organized into cells in a loose network where the loss of one cell does not seriously degrade the entire network.

To carry out its functions, the network must engage in activities that expose parts of itself to countermeasures.

They must communicate between cells and with their leadership, exposing the network to discovery and mapping of links.

Target Network Analysis

As we have said, in intelligence work we usually apply an extension of social network analysis that retains its basic concepts. So the techniques described earlier for SNA work for almost all target networks. But whereas all of the entities in SNA are people, again, in target network analysis they can be anything.

Automating the Analysis

Target network analysis has become one of the principal tools for dealing with complex systems, thanks to new, computer-based analytic methods. One tool that has been useful in assessing threat networks is the Organization Risk Analyzer (called *ORA) developed by the Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University. *ORA is able to group nodes and identify patterns of analytic significance. It has been used to identify key players, groups, and vulnerabilities, and to model network changes over space and time.

Intelligence analysis relies heavily on graphical techniques to represent the descriptions of target networks compactly. The underlying mathematical techniques allow us to use computers to store and manipulate the information quickly and more accurately than we could by hand.

Summary

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It is widely used in analysis disciplines. It is derived from link modeling, which organizes and presents raw intelligence in a visual form such that relationships among nodes (which can be people, places, things, organizations, or events) can be analyzed to extract finished intelligence.

We prefer to have network models created and updated automatically from raw intelligence data by software algorithms. Although some software tools exist for doing that, the analyst still must evaluate the sources and validate the results.

11 Geospatial and Temporal Modeling and Analysis

GEOINT is the professional practice of integrating and interpreting all forms of geospatial data to create historical and anticipatory intelligence products used for planning or that answer questions posed by decision-makers.

This definition incorporates the key ideas of an intelligence mission: all-source analysis and modeling in both space and time (from “historical and anticipatory”). These models are frequently used in analysis; insights about networks are often obtained by examining them in spatial and temporal ways.

  • During World War II, although the Germans maintained censorship as effectively as anyone else, they did publish their freight tariffs on all goods, including petroleum products. Working from those tariffs, a young U.S. Office of Strategic Services analyst, Walter Levy, conducted geospatial modeling based on the German railroad network to pinpoint the exact locations of the refineries, which were subsequently targeted by Allied bombers.

Static Geospatial Models

In the most general case, geospatial modeling is done in both space and time. But sometimes only a snapshot in time is needed.

Human Terrain Modeling

U.S. ground forces in Iraq and Afghanistan in the past few years have rediscovered and refined a type of static geospatial model that was used in the Vietnam War, though its use dates far back in history. Military forces now generally consider what they call “human terrain mapping” as an essential part of planning and conducting operations in populated areas.

In combating an insurgency, military forces have to develop a detailed model of the local situations that includes political, economic, and sociological information as well as military force information.

It involves acquiring the following details about each village and town:

  • The boundaries of each tribal area (with specific attention to where they adjoin or overlap)
  • Location and contact information for each sheik or village mukhtar and for government officials
  • Locations of mosques, schools, and markets
  • Patterns of activity such as movement into and out of the area; waking, sleeping, and shopping habits
  • Nearest locations and checkpoints of security forces
  • Economic driving forces including occupation and livelihood of inhabitants; employment and unemployment levels
  • Anti-coalition presence and activities
  • Access to essential services such as fuel, water, emergency care, and fire response
  • Particular local population concerns and issues

Human terrain mapping, or more correctly human terrain modeling, is an old intelligence technique.

Though Moses’s HUMINT mission failed because of poor analysis by the spies, it remains an excellent example of specific collection tasking as well as of the history of human terrain mapping.

1919 Paris Peace Conference

In 1917 President Woodrow Wilson established a study group to prepare materials for peace negotiations that would conclude World War I. He eventually tapped geographer Isaiah Bowman to head a group of 150 academics to prepare the study. It covered the languages, ethnicities, resources, and historical boundaries of Europe. With support from the American Geographical Society, Bowman directed the production of over three hundred maps per week during January 1919.

The Tools of Human Terrain Modeling

Today, human terrain modeling is used extensively to support military operations in Syria, Iraq, and Afghanistan. Many tools have been developed to create and analyze such models. The ability to do human terrain mapping and other types of geospatial modeling has been greatly expanded and popularized by Google Earth and by Microsoft’s Virtual Earth. These geospatial modeling tools provide multiple layers of information.

This unclassified online material has a number of intelligence applications. For intelligence analysts, it permits planning HUMINT and COMINT operations. For military forces, it supports precise targeting. For terrorists, it facilitates planning of attacks.

Temporal Models

Pure temporal models are used less frequently than the dynamic geospatial models discussed next, because we typically want to observe activity in both space and time—sometimes over very short times. Timing shapes the consequences of planned events.

There are a number of different temporal model types; this chapter touches on two of them—timelines and pattern-of-life modeling and analysis.

Timelines

An opponent’s strategy often becomes apparent only when seemingly disparate events are placed on a timeline.

Event-time patterns tell analysts a great deal; they allow analysts to infer relationships among events and to examine trends. Activity patterns of a target network, for example, are useful in determining the best time to collect intelligence. An example is a plot of total telephone use over twenty-four hours—the plot peaks about 11 a.m., which is the most likely time for a person to be on the telephone.
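A minimal sketch of this kind of event-time analysis: bin fabricated event timestamps by hour of day and report the peak activity window, in the spirit of the 11 a.m. telephone-use peak just described.

```python
# Bin event timestamps by hour of day to find peak activity.
# The timestamps are fabricated for illustration.
from collections import Counter
from datetime import datetime

events = [datetime(2020, 5, day, hour) for day, hour in
          [(1, 9), (1, 11), (2, 11), (2, 14), (3, 10), (3, 11), (4, 11)]]

by_hour = Counter(e.hour for e in events)
peak_hour, count = by_hour.most_common(1)[0]
print(f"Peak activity: {peak_hour:02d}:00 ({count} of {len(events)} events)")
```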

Pattern-of-Life Modeling and Analysis

Pattern-of-life (POL) analysis is a method of modeling and understanding the behavior of a single person or group by establishing a recurrent pattern of actions over time in a given situation. It has similarities to the concept of activity-based intelligence.

 

Dynamic Geospatial Models

A dynamic variant of the geospatial model is the space-time model. Many activities, such as the movement of a satellite, a vehicle, a ship, or an aircraft, can best be shown spatially—as can population movements. A combination of geographic and time synthesis and analysis can show movement patterns, such as those of people or of ships at sea.

Dynamic geospatial modeling and analysis has been described using a number of terms. Three that are commonly used in intelligence are described in this section: movement intelligence, activity-based intelligence, and geographic profiling. Though they are similar, each has a somewhat different meaning. Dynamic modeling is also applied in understanding intelligence enigmas.

Movement Intelligence

Intelligence practitioners sometimes describe space-time models as movement intelligence, or “MOVINT,” as if it were a collection “INT” instead of a target model. The name “movement intelligence” for a specialized intelligence product dates roughly to the wide use of two sensors for area surveillance.

One was the moving target indicator (MTI) capability for synthetic aperture radars. The other was the deployment of video cameras on intelligence collection platforms. MOVINT has been defined as “an intelligence gathering method by which images (IMINT), non-imaging products (MASINT), and signals (SIGINT) produce a movement history of objects of interest.”

Activity-Based Intelligence

Activity-based intelligence, or ABI, has been defined as “a discipline of intelligence where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, population, or area of interest.”

ABI is a form of situational awareness that focuses on interactions over time. It has three characteristics:

  • Raw intelligence information is constantly collected on activities in a given region and stored in a database for later metadata searches.
  • It employs the concept of “sequence neutrality,” meaning that material is collected without advance knowledge of whether it will be useful for any intelligence purpose.
  • It also relies on “data neutrality,” meaning that any source of intelligence may contribute; in fact, open source may be the most valuable.

ABI therefore is a variant of the target-centric approach, focused on the activity of a target (person, object, or group) within a specified target area. So it includes both spatial and temporal dimensions. At a higher level of complexity, it can include network relationships as well.

Though the term ABI is of recent origin and is tied to the development of surveillance methods for collecting intelligence, the concept of solving intelligence problems by monitoring activity over time has been applied for decades. It has been the primary tool for dealing with geographic profiling and intelligence enigmas.

Geographic Profiling

Geographic profiling is a term used in law enforcement for geospatial modeling, specifically a space-time model, that supports serial violent crime or sexual crime investigations. Such crimes, when committed by strangers, are difficult to solve. Their investigation can produce hundreds of tips and suspects, resulting in the problem of information overload.

Intelligence Enigmas

Geospatial modeling and analysis frequently must deal with unidentified facilities, objects, and activities. These are often referred to by the term intelligence enigmas. For such targets, a single image—a snapshot in time—is insufficient.

Summary

One of the most powerful combination models is the geospatial model, which combines all sources of intelligence into a visual picture (often on a map) of a situation. One of the oldest of analytic products, geospatial modeling today is the product of all-source analysis that can incorporate OSINT, IMINT, HUMINT, COMINT, and advanced technical collection methods.

Many GEOINT models are dynamic; they show temporal changes. This combination of geospatial and temporal models is perhaps the single most important trend in GEOINT. Dynamic GEOINT models are used to observe how a situation develops over time and to extrapolate future developments.

 

Part II

The Estimative Process

12 Predictive Analysis

“Your problem is that you are not able to see things before they happen.”

Wotan to Fricka, in Wagner’s opera Die Walküre

Describing a past event is not intelligence analysis; it is reciting history. The highest form of intelligence analysis requires structured thinking that results in an estimate of what is likely to happen.

True intelligence analysis is always predictive.

 

The value of a model of possible futures is in the insights that it produces. Those insights prepare customers to deal with the future as it unfolds. The analyst’s contribution lies in the assessment of the forces that will shape future events and the state of the target model. If an analyst accurately assesses the forces, she has served the intelligence customer well, even if the prediction derived from that assessment turns out to be wrong.

Policymaking customers tend to be skeptical of predictive analysis unless they do it themselves. They believe that their own opinions about the future are at least as good as those of intelligence analysts. So when an analyst offers an estimate without a compelling supporting argument, he or she should not be surprised if the policymaker ignores it.

By contrast, policymakers and executives will accept and make use of predictive analysis if it is well reasoned, and if they can follow the analyst’s logical development. This implies that we apply a formal methodology, one that the customer can understand, so that he or she can see the basis for the conclusions drawn.

Former national security adviser Brent Scowcroft observed, “What intelligence estimates do for the policymaker is to remind him what forces are at work, what the trends are, and what are some of the possibilities that he has to consider.” Any intelligence assessment that does these things will be readily accepted.

Introduction to Predictive Analysis

Intelligence can usually deal with near-term developments. Extrapolation—the act of making predictions based solely on past observations—serves us reasonably well in the short term for situations that involve established trends and normal individual or organizational behaviors.

Adding to the difficulty, intelligence estimates can also affect the future that they predict. Often, the estimates are acted on by policymakers—sometimes on both sides.

The first step in making any estimate is to consider the phenomena that are involved, in order to determine whether prediction is even possible.

Convergent and Divergent Phenomena

In examining trends and possible future events, one distinction is fundamental: convergent phenomena make prediction possible; divergent phenomena frustrate it.

A basic question to ask at the outset of any predictive attempt is: Does the principle of causation apply? That is, are the phenomena we are to examine and prepare estimates about governed by the laws of cause and effect?

A good example of a divergent phenomenon in intelligence is the coup d’état. Policymakers often complain that their intelligence organizations have failed to warn of coups. But a coup event is conspiratorial in nature, limited to a handful of people, and dependent on the preservation of secrecy for its success.

If a foreign intelligence service knows of the event, then secrecy has been compromised and the coup is almost certain to fail—the country’s internal security services will probably forestall it. The conditions that encourage a coup attempt can be assessed and the coup likelihood estimated by using probability theory, but the timing and likelihood of success are not “predictable.”

The Estimative Approach

The target-centric approach to prediction follows an analytic pattern long established in the sciences, in organizational planning, and in systems synthesis and analysis.

 

The synthesis and analysis process discussed in this chapter and the next is derived from an estimative approach that has been formalized in several professional disciplines. In management theory, the approach has several names, one of which is the Kepner-Tregoe Rational Management Process. In engineering, the formalization is called the Kalman Filter. In the social sciences, it is called the Box-Jenkins method. Although there are differences among them, all are techniques for combining complex data to create estimates. They all require combining data to estimate an entity’s present state and evaluating the forces acting on the entity to predict its future state.

This concept—to identify the forces acting on an entity, to identify likely future forces, and to predict the likely changes in old and new forces over time, along with some indicator of confidence in these judgments—is the key to successful estimation. It takes into account redundant and conflicting data as well as the analyst’s confidence in these data.
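As an illustration of this shared recipe, the sketch below implements a minimal one-dimensional Kalman-style estimator in Python: it combines a prior state estimate with noisy observations, weighting each by confidence (variance), and propagates the state forward under an assumed force (trend). All numbers are invented for illustration:

```python
# Minimal 1-D Kalman-style estimate: combine a prior state estimate with a
# noisy new observation, weighting each by confidence, then project forward.

state, state_var = 100.0, 25.0   # present estimate of the target attribute, and its variance
trend = 2.0                      # assumed force acting on the entity (units per period)
process_var = 4.0                # uncertainty added by each projection step

observations = [(104.0, 9.0), (109.5, 16.0), (111.0, 9.0)]  # (value, measurement variance)

for obs, obs_var in observations:
    # Predict: apply the assumed force and grow the uncertainty.
    state += trend
    state_var += process_var
    # Update: weight prediction vs. observation by their variances.
    gain = state_var / (state_var + obs_var)
    state += gain * (obs - state)
    state_var *= (1.0 - gain)
    print(f"estimate={state:6.2f}  variance={state_var:5.2f}")
```

Confident observations (low variance) pull the estimate strongly; weak or conflicting ones move it only slightly, which is exactly the weighting behavior the text describes.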

The key is to start from the present target model (and preferably, also with a past target model) and move to one of the future models, using an analysis of the forces involved as a basis. Other texts on estimative analysis describe these forces as issues, trends, factors, or drivers. All those terms have the same meaning: They are the entities that shape the future.

The methodology relies on three predictive mechanisms: extrapolation, projection, and forecasting. Those components and the general approach are defined here; later in the chapter, we delve deeper into “how-to” details of each mechanism.

An extrapolation assumes that these forces do not change between the present and future states, a projection assumes they do change, and a forecast assumes they change and that new forces are added.

The analysis follows these steps:

  1. Determine at least one past state and the present state of the entity. In intelligence, this entity is the target model, and it can be a model of almost anything—a terrorist organization, a government, a clandestine trade network, an industry, a technology, or a ballistic missile.
  2. Determine the forces that acted on the entity to bring it to its present state.

These same forces, acting unchanged, would result in the future state shown as an extrapolation (Scenario 1).

  3. To make a projection, estimate the changes in existing forces that are likely to occur. In the figure, a decrease in one of the existing forces (Force 1) is shown as causing a projected future state that is different from the extrapolation (Scenario 2).
  4. To make a forecast, start from either the extrapolation or the projection and then identify the new forces that may act on the entity, and incorporate their effect. In the figure, one new force is shown as coming to bear, resulting in a forecast future state that differs from both the extrapolated and the projected future states (Scenario 3).
  5. Determine the likely future state of the entity based on an assessment of the forces. Strong and certain forces are weighed most heavily in this prediction. Weak forces, and those in which the analyst lacks confidence (high uncertainty about the nature or effect of the force), are weighed least.

The process is iterative.

In this figure, we are concerned with a target (technology, system, person, organization, country, situation, industry, or some combination) that changes over time. We want to describe or characterize the entity at some future point.

The basic analytic paradigm is to create a model of the past and present state of the target, followed by alternative models of its possible future states, usually created in scenario form.
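The three mechanisms can be contrasted in a few lines of code. In this hypothetical sketch, the target's state evolves under a set of forces; extrapolation holds them constant, projection weakens an existing force, and a forecast adds a new one. The force values and horizon are invented, with scenario labels matching the steps above:

```python
def future_state(present, forces, periods=5):
    """Advance a scalar target state under a list of per-period force contributions."""
    state = present
    for _ in range(periods):
        state += sum(forces)
    return state

present = 50.0
existing = [3.0, -1.0]   # forces assumed to have produced the present state

scenario1 = future_state(present, existing)          # extrapolation: forces unchanged
scenario2 = future_state(present, [1.5, -1.0])       # projection: Force 1 weakens
scenario3 = future_state(present, [1.5, -1.0, 2.5])  # forecast: a new force appears

print(scenario1, scenario2, scenario3)  # three alternative future states
```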

A CIA assessment of Mikhail Gorbachev’s economic reforms in 1985–1987 correctly estimated that his proposed reforms risked “confusion, economic disruption, and worker discontent” that could embolden potential rivals to his power. This projection was based on assessing the changing forces in Soviet society along with the inertial forces that would resist change.

The process we’ve illustrated in these examples has many names—force field analysis and system dynamics are two.

For forecasting, the analyst must identify new forces that are likely to come into play. Most of the chapters that follow focus on identifying and measuring these forces.

An analyst can (wrongly) shape the outcome by concentrating on some forces and ignoring or downplaying the significance of others.

Force Analysis According to Sun Tzu

Factor or force analysis is an ancient predictive technique. Successful generals have practiced it in warfare for thousands of years, and one of its earliest known proponents was Sun Tzu. He described the art of war as being controlled by five factors, or forces, all of which must be taken into account in predicting the outcome of an engagement. He called the five factors Moral Law, Heaven, Earth, the Commander, and Method and Discipline. In modern terms, the five would be called social, environmental, geospatial, leadership, and organizational factors.

The simplest approach to both projection and forecasting is to do it qualitatively. That is, an analyst who is an expert in the subject area begins the process by answering the following questions:

  1. What forces have affected this entity (organization, situation, industry, technical area) over the past several years?
  2. Which five or six forces had more impact than others?
  3. What forces are expected to affect this entity over the next several years?
  4. Which five or six forces are likely to have more impact than others?
  5. What are the fundamental differences between the answers to questions two and four?
  6. What are the implications of these differences for the entity being analyzed?

The answers to those questions shape the changes in direction of the extrapolation… At more sophisticated levels of qualitative synthesis and analysis, the analyst might examine adaptive forces (feedback forces) and their changes over time.

High-Impact/Low-Probability Analysis

Projections and forecasts focus on the most likely outcomes. But customers also need to be aware of the unlikely outcomes that could have severe adverse effects on their interests.

 

The CIA’s tradecraft manual describes the analytic process as follows:

  • Define the high-impact outcome clearly. This definition will justify examining what most analysts believe to be a very unlikely development.
  • Devise one or more plausible explanations for or “pathways” to the low-probability outcome. This should be as precise as possible, as it can help identify possible indicators for later monitoring.
  • Insert possible triggers or changes in momentum if appropriate. These can be natural disasters, sudden health problems of key leaders, or new economic or political shocks that might have occurred historically or in other parts of the world.
  • Brainstorm with analysts having a broad set of experiences to aid the development of plausible but unpredictable triggers of sudden change.
  • Identify for each pathway a set of indicators or “observables” that would help you anticipate that events were beginning to play out this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

The product of high-impact/low-probability analysis is a type of scenario called a demonstration scenario…

Two important types of bias can exist in predictive analysis: pattern, or confirmation, bias—looking for evidence that confirms rather than rejects a hypothesis; and heuristic bias—using inappropriate guidelines or rules to make predictions.

Two points are worth noting at the beginning of the discussion:

  • One must make careful use of the tools in synthesizing the model, as some will fail when applied to prediction. Expert opinion, for example, is often used in creating a target model; but experts’ biases, egos, and narrow focuses can interfere with their predictions. (A useful exercise for the skeptic is to look at trade press or technical journal predictions that were made more than ten years ago that turned out to be way off base. Stock market predictions and popular science magazine predictions of automobile designs are particularly entertaining.)
  • Time constraints work against the analyst’s ability to consistently employ the most elaborate predictive techniques. Veteran analysts tend to use analytic techniques that are relatively fast and intuitive. They can view scenario development, red teams (teams formed to take the opponent’s perspective in planning or assessments), competing hypotheses, and alternative analysis as being too time-consuming to use in ordinary circumstances. An analyst has to guard against using just extrapolation because it is the fastest and easiest to do. But it is possible to use shortcut versions of many predictive techniques and sometimes the situation calls for that. This chapter and the following one contain some examples of shortcuts.

Extrapolation

An extrapolation is a statement, based only on past observations, of what is expected to happen. Extrapolation is the most conservative method of prediction. In its simplest form, an extrapolation, using historical performance as the basis, extends a linear curve on a graph to show future direction.

Extrapolation also makes use of correlation and regression techniques. Correlation is a measure of the degree of association between two or more sets of data, or a measure of the degree to which two variables are related. Regression is a technique for predicting the value of some unknown variable based only on information about the current values of other variables. Regression makes use of both the degree of association among variables and the mathematical function that is determined to best describe the relationships among variables.

For example, the more bureaucracy and red tape involved in doing business, the more corruption is likely in the country.
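A minimal numerical sketch of both techniques, using fabricated data: fit a least-squares line to past observations (regression), report the strength of association (correlation), and extend the line forward (extrapolation):

```python
import numpy as np

years = np.array([2019, 2020, 2021, 2022, 2023], dtype=float)
output = np.array([110.0, 118.0, 123.0, 131.0, 140.0])  # fabricated production figures

r = np.corrcoef(years, output)[0, 1]             # correlation: degree of association
slope, intercept = np.polyfit(years, output, 1)  # regression: best-fit linear function

for year in (2024, 2025):                        # extrapolate the fitted line forward
    print(year, round(slope * year + intercept, 1))
print(f"correlation r = {r:.3f}")
```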

Projection

Before moving on to projection and forecasting, let’s reinforce how they differ from extrapolation. An extrapolation is a simple assertion about what a future scenario will look like. In contrast, a projection or a forecast is a probabilistic statement about some future scenario.

Projection is more reliable than extrapolation. It predicts a range of likely futures based on the assumption that forces that have operated in the past will change, whereas extrapolation assumes the forces do not change.

Projection makes use of two major analytic techniques. One technique, force analysis, was discussed earlier in this chapter. After a qualitative force analysis has been completed, the next technique is to apply probabilistic reasoning to it. Probabilistic reasoning is a systematic attempt to make subjective estimates of probabilities more explicit and consistent. It can be used at any of several levels of complexity (each successive level of sophistication adds new capability and completeness). But even the simplest level of generating alternatives, discussed next, helps to prevent premature closure and adds structure to complicated problems.

Generating Alternatives

The first step to probabilistic reasoning is no more complicated than stating formally that more than one outcome is possible. One can generate alternatives simply by listing all possible outcomes to the issue under consideration. Remember that the possible outcomes can be defined as alternative scenarios.

The mere act of generating a complete, detailed list often provides a useful perspective on a problem.

Influence Trees or Diagrams

A list of alternative outcomes is the first step. A simple projection might not go beyond this level. But for more rigorous analysis, the next step typically is to identify the things that influence the possible outcomes and indicate the interrelationship of these influences. This process is frequently done by using an influence tree.

Let’s assume that an analyst wants to assess the outcome of an ongoing African insurgency movement. There are three obvious possible outcomes: The insurgency will be crushed, the insurgency will succeed, or there will be a continuing stalemate. Other outcomes may be possible, but we can assume that they are so unlikely as not to be worth including. The three outcomes for the influence diagram are as follows:

  • Regime wins
  • Insurgency wins
  • Stalemate

The analyst now describes those forces that will influence the assessment of the relative likelihoods of each outcome. For instance, the insurgency’s success may depend on whether economic conditions improve, remain the same, or become worse during the next year. It also may depend on the success of a new government poverty relief program. The assumptions about these “driver” events are often described as linchpin premises in U.S. intelligence practice, and these assumptions need to be made explicit.

Having established the uncertain events that influence the outcome, the analyst proceeds to the first stage of an influence tree.
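The roll-up logic of an influence tree can be sketched in a few lines. Using the insurgency example, the branch probabilities and conditional outcome weights below are invented placeholders, not assessments; the point is the mechanics of weighting each path:

```python
from itertools import product

# Driver events with the analyst's subjective branch probabilities (invented).
economy = {"improves": 0.2, "unchanged": 0.5, "worsens": 0.3}
relief_program = {"succeeds": 0.4, "fails": 0.6}

def outcome_odds(econ, relief):
    """Conditional outcome probabilities for each combination of driver events."""
    if econ == "worsens" and relief == "fails":
        return {"regime wins": 0.2, "insurgency wins": 0.5, "stalemate": 0.3}
    if econ == "improves":
        return {"regime wins": 0.6, "insurgency wins": 0.1, "stalemate": 0.3}
    return {"regime wins": 0.4, "insurgency wins": 0.2, "stalemate": 0.4}

totals = {"regime wins": 0.0, "insurgency wins": 0.0, "stalemate": 0.0}
for (econ, p1), (relief, p2) in product(economy.items(), relief_program.items()):
    for outcome, p3 in outcome_odds(econ, relief).items():
        totals[outcome] += p1 * p2 * p3   # weight each path by its branch probabilities

print(totals)  # rolled-up likelihood of each outcome (sums to 1.0)
```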

The thought process that is invoked when generating the list of influencing events and their outcomes can be useful in several ways. It helps identify and document factors that are relevant to judging whether an alternative outcome is likely to occur.

The audit trail is particularly useful in showing colleagues what the analyst’s thinking has been, especially if he desires help in upgrading the diagram with things that may have been overlooked. Software packages for creating influence trees allow the inclusion of notes that create an audit trail.

In the process of generating the alternative lists, the analyst must address the issue of whether the event (or outcome) being listed actually will make a difference in his assessment of the relative likelihood of the outcomes of any of the events being listed.

For instance, in the economics example, if the analyst knew that it would make no difference to the success of the insurgency whether economic conditions improved or remained the same, then there would be no need to differentiate these as two separate outcomes. The analyst should instead simplify the diagram.

The second question, having to do with additional influences not yet shown on the diagram, allows the analyst to extend this pictorial representation of influences to whatever level of detail is considered necessary. Note, however, that the analyst should avoid adding unneeded layers of detail.

Probabilistic reasoning is used to evaluate outcome scenarios.

This influence tree approach to evaluating possible outcomes is more convincing to customers than would be an unsupported analytic judgment about the prospects for the insurgency. Human beings tend to do poorly at such complex assessments when they are approached in a totally unaided, subjective manner; that is, by the analyst mentally combining the force assessments in an unstructured way.

Influence Nets

Influence net modeling is an alternative to the influence tree.

To create an influence net, the analyst defines influence nodes, which depict events that are part of cause-effect relationships within the target model. The analyst also creates “influence links” between cause and effect that graphically illustrate the causal relation between the connected pair of events.

The influence can be either positive (supporting a given decision) or negative (decreasing the likelihood of the decision), as identified by the link “terminator.” The terminator is either an arrowhead (positive influence) or a filled circle (negative influence). The resulting graphical illustration is called the “influence net topology.”
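As a data structure, an influence net topology reduces to a directed graph with signed edges. In this hypothetical sketch, +1 stands for an arrowhead (positive influence) and -1 for a filled circle (negative influence); the node names are placeholders:

```python
# Influence net topology as a signed directed graph (nodes and signs are placeholders).
# +1 = positive influence (arrowhead); -1 = negative influence (filled circle).
influences = [
    ("economy worsens",         "unrest grows",        +1),
    ("relief program succeeds", "unrest grows",        -1),
    ("unrest grows",            "regime cracks down",  +1),
    ("regime cracks down",      "insurgency recruits", +1),
]

def influences_on(effect):
    """List (cause, sign) pairs feeding a given effect node."""
    return [(cause, sign) for cause, eff, sign in influences if eff == effect]

for cause, sign in influences_on("unrest grows"):
    kind = "supports" if sign > 0 else "dampens"
    print(f"{cause!r} {kind} 'unrest grows'")
```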

 

Making Probability Estimates

Probabilistic projection is used to predict the probability of future events for some time-dependent random process… A number of these probabilistic techniques are used in industry for projection.

Two techniques that we use in intelligence analysis are as follows:

  • Point and interval estimation. This method attempts to describe the probability of outcomes for a single event. An example would be a country’s economic growth rate, and the event of concern might be an economic depression (the point where the growth rate drops below a certain level).
  • Monte Carlo simulation. This method simulates all or part of a process by running a sequence of events repeatedly, with random combinations of values, until sufficient statistical material is accumulated to determine the probability distribution of the outcome (see the sketch following this list).
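A minimal Monte Carlo sketch of the growth-rate example; the distributions, shock probability, and depression threshold are invented placeholders:

```python
import random

# Monte Carlo sketch: estimate the probability that a country's growth rate
# falls below a depression threshold.
TRIALS = 100_000
THRESHOLD = -2.0   # growth rate (percent) below which we call it a depression

def simulated_growth():
    base = random.gauss(2.0, 1.5)        # underlying trend growth (assumed)
    if random.random() < 0.15:           # occasional external shock (assumed)
        base += random.gauss(-4.0, 2.0)
    return base

hits = sum(simulated_growth() < THRESHOLD for _ in range(TRIALS))
print(f"P(depression) ~ {hits / TRIALS:.3f}")
```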

Most of the predictive problems we deal with in intelligence use subjective probability estimates. We routinely use subjective estimates of probabilities in dealing with broad issues for which no objective estimate is feasible.

Sensitivity Analysis

When a probability estimate is made, it is usually worthwhile to conduct a sensitivity analysis on the result. For example, the occurrence of false alarms in a security system can be evaluated as a probabilistic process.
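Continuing the false-alarm example, a sensitivity analysis can be as simple as sweeping one input assumption across a plausible range and watching how the output moves; the sensor count and rates below are invented:

```python
# Sensitivity sketch: expected false alarms per month in a security system as
# the assumed per-check false-alarm rate varies. All values are illustrative.
SENSORS = 40
CHECKS_PER_DAY = 24 * 60  # one check per sensor per minute

for per_check_rate in (1e-6, 5e-6, 1e-5, 5e-5):
    monthly = per_check_rate * SENSORS * CHECKS_PER_DAY * 30
    print(f"rate={per_check_rate:.0e}  expected false alarms/month ~ {monthly:.1f}")
```

If a small change in the assumed rate swings the result from negligible to unmanageable, the estimate is sensitive to that assumption and the assumption deserves scrutiny.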

Forecasting

Projections often work out better than extrapolations over the medium term. But even the best-prepared projections often seem very conservative when compared to reality years later. New political, economic, social, technological, or military developments will create results that were not foreseen even by experts in a field.

Forecasting uses many of the same tools that projection relies on—force analysis and probabilistic reasoning, for example. But it presents a stressing intellectual challenge, because of the difficulty in identifying and assessing the effect of new forces.

The development of alternative futures is essential for effective strategic decision-making. Since there is no single predictable future, customers need to formulate strategy within the context of alternative future states of the target. To this end, it is necessary to develop a model that will make it possible to show systematically the interrelationships of the individually forecast trends and events.

A forecast is not a blueprint of the future, and it typically starts from extrapolations or projections. Forecasters then must expand their scope to admit and juggle many additional forces or factors. They must examine key technologies and developments that are far afield but that nevertheless affect the subject of the forecast.

The Nonlinear Approach to Forecasting

Obviously, a forecasting methodology requires analytic tools or principles. But for any forecasting methodology to be successful, analysts who have significant understanding of many PMESII (political, military, economic, social, infrastructure, and information) factors and the ability to think about issues in a nonlinear fashion are also required.

Futuristic thinking examines deeper forces and flows across many disciplines that have their own order and pattern. In predictive analysis, we may seem to wander about, making only halting progress toward the solution. This nonlinear process is not a flaw; rather it is the mark of a natural learning process when dealing with complex and nonlinear matters.

The sort of person who can do such multidisciplinary analysis of what is likely to happen in the future has a broad understanding of the principles that cause a physical phenomenon, a chemical reaction, or a social reaction to occur. People who are multidisciplinary in their knowledge and thinking can pull together concepts from several fields and assess political, economic, and social, as well as technical, factors. Such breadth of understanding recognizes the similarity of principles and the underlying forces that make them work. It might also be called “applied common sense,” but unfortunately it is not very common. Analysts instead tend to specialize, because in-depth expertise is highly valued by both intelligence management and the intelligence customer.

The failure to do multidisciplinary analysis is often tied closely to mindset.

Techniques and Analytic Tools of Forecasting

Forecasting is based on a number of assumptions, among them the following:

  • The future cannot be predicted, but by taking explicit account of uncertainty, one can make probabilistic forecasts.
  • Forecasts must take into account possible future developments in such areas as organizational changes, demography, lifestyles, technology, economics, and regulation.

For policymakers and executives, the aim of defining alternative futures is to try to determine how to create a better future than the one that would materialize if we merely keep doing what we’re currently doing. Intelligence analysis contributes to this definition of alternative futures, with emphasis on the likely actions of others—allies, neutrals, and opponents.

Forecasting starts through examination of the changing political, military, economic, and social environments.

We first select issues or concerns that require attention. These issues and concerns have component forces that can be identified using a variant of the strategies-to-task methodology.

If the forecast is done well, these scenarios stimulate the customer of intelligence—the executive—to make decisions that are appropriate for each scenario. The purpose is to help the customer make a set of decisions that will work in as many scenarios as possible.

Evaluating Forecasts

Forecasts are judged on the following criteria:

  • Clarity. Can the customer understand the forecast and the forces involved? Is it clear enough to be useful?
  • Credibility. Do the results make sense to the customer? Do they appear valid on the basis of common sense?
  • Plausibility. Are the results consistent with what the customer knows about the world outside the scenario and how this world really works or is likely to work in the future?
  • Relevance. To what extent will the forecasts affect the successful achievement of the customer’s mission?
  • Urgency. To what extent do the forecasts indicate that, if action is required, time is of the essence in developing and implementing the necessary changes?
  • Comparative advantage. To what extent do the results provide a basis for customer decision-making, compared with other sources available to the customer?
  • Technical quality. Was the process that produced the forecasts technically sound? Are the alternative forecasts internally consistent?

 

A “good” forecast is one that meets all or most of these criteria. A “bad” forecast is one that does not. The analyst has to make clear to customers that forecasts are transitory and need constant adjustment to be helpful in guiding thought and action.

Customers typically have a number of complaints about forecasts. Common complaints are that the forecast is obvious; it states nothing new; it is too optimistic, pessimistic, or naïve; or it is not credible because it overlooks obvious trends, events, causes, or consequences. Such objections are actually desirable; they help to improve the product. There are a number of appropriate responses to these objections: If something important is missing, add it. If something unimportant is included, get rid of it. If the forecast seems either obvious or counterintuitive, probe the underlying logic and revise the forecast as necessary.

Summary

Intelligence analysis, to be useful, must be predictive. Some events or future states of a target are predictable because they are driven by convergent phenomena. Some are not predictable because they are driven by divergent phenomena.

The product of high-impact/low-probability analysis—a demonstration scenario—describes how such an improbable development might plausibly start and identifies its consequences. This provides indicators that can be monitored to warn that the improbable event is actually happening.

For analysts predicting systems developments as many as five years into the future, extrapolations work reasonably well; for those looking five to fifteen years into the future, projections usually fare better.

13 Estimative Forces

“Estimating is what you do when you don’t know.”

Sherman Kent

The factors or forces that have to be considered in estimation—primarily PMESII factors—vary from one intelligence problem to another. I do not attempt to catalog them in this book; there are too many. But an important aspect of critical thinking, discussed earlier, is thinking about the underlying forces that shape the future. This chapter deals with some of those forces.

The CIA’s tradecraft manual describes an analytic methodology that is appropriate for identifying and assessing forces. Called “outside in” thinking, it has the objective of identifying the critical external factors that could influence how a given situation will develop. According to the tradecraft manual, analysts should develop a generic description of the problem or the phenomenon under study. Then, analysts should:

  • List all the key forces (social, technological, economic, environmental, and political) that could have an impact on the topic, but over which one can exert little influence (e.g., globalization, social stress, the Internet, or the global economy).
  • Focus next on key factors over which an actor or policymaker can exert some influence. In the business world this might be the market size, customers, the competition, suppliers or partners; in the government domain it might include the policy actions or the behavior of allies or adversaries.
  • Assess how each of these forces could affect the analytic problem.
  • Determine whether these forces actually do have an impact on the particular issue based on the available evidence.

 

Political and military factors are often the focus of attention in assessing the likely outcome of conflicts. But the other factors can turn out to be dominant. In the developing conflict between the United States and Japan in 1941, Japan had a military edge in the Pacific. But the United States had a substantial edge in these factors:

  • Political. The United States could call on a substantial set of allies. Japan had Germany and Italy.
  • Economy. Japan lacked the natural resources that the United States and its allies controlled.
  • Social. The United States had almost twice the population of Japan. Japan initially had an edge in the solidarity of its population in support of the government, but that edge was matched within the United States after Pearl Harbor.
  • Infrastructure. The U.S. manufacturing capability far exceeded that of Japan and would be decisive in a prolonged conflict (as many Japanese military leaders foresaw).
  • Information. The prewar information edge favored Japan, which had more control of its news media, while a segment of the U.S. media strongly opposed involvement in war. That edge also evaporated after December 7, 1941.

Inertia

One force that has broad implications is inertia, the tendency to stay on course and resist change.

It has been observed that: “Historical inertia is easily underrated . . . the historical forces molding the outlook of Americans, Russians, and Chinese for centuries before the words capitalism and communism were invented are easy still to overlook.”

Opposition to change is a common reason for organizations’ coming to rest. Opposition to technology in general, for example, is an inertial matter; it results from a desire of both workers and managers to preserve society as it is, including its institutions and traditions.

A common manifestation of the law of inertia is the “not-invented-here,” or NIH, factor, in which the organization opposes pressures for change from the outside.

But all societies resist change to a certain extent. The societies that succeed seem able to adapt while preserving that part of their heritage that is useful or relevant.

From an analyst’s point of view, inertia is an important force in prediction. Established factories will continue to produce what they know how to produce. In the automobile industry, it is no great challenge to predict that next year’s autos will look much like this year’s. A naval power will continue to build ships for some time even if a large navy ceases to be useful.

Countervailing Forces

All forces are likely to have countervailing or resistive forces that must be considered.

The principle is summarized well by another of Newton’s laws of physics: For every action there is an equal and opposite reaction.

Applications of this principle are found in all organizations and groups, commercial, national, and civilizational. As Samuel P. Huntington noted, “[W]e know who we are . . . often only when we know who we are against.”

A predictive analysis will always be incomplete unless it identifies and assesses opposing forces. All forces eventually meet counterforces. An effort to expand free trade inevitably arouses protectionist reactions. One country’s expansion of its military strength always causes its neighbors to react in some fashion.

 

Counterforces need not be of the same nature as the force they are countering. A prudent organization is not likely to play to its opponent’s strengths. Today’s threats to U.S. national security are asymmetric; that is, there is little threat of a conventional force-on-force engagement by an opposing military, but there is a threat of an unconventional yet lethal attack by a loosely organized terrorist group, as the events of September 11, 2001, and more recently the Boston Marathon bombing, demonstrated. Asymmetric counterforces are common in industry as well. Industrial organizations try to achieve cost asymmetry by using defensive tactics that have a large favorable cost differential between their organization and that of an opponent.

Contamination

Contamination is the degradation of any of the six factors—political, military, economic, social, infrastructure, or information (PMESII factors)—through an infection-like process. Corruption is a form of political and social contamination. Money laundering and counterfeiting are forms of economic contamination. The result of propaganda is information contamination.

Contamination phenomena can be found throughout organizations as well as in the scientific and technical disciplines. Once such an infection starts, it is almost impossible to eradicate.

Contamination phenomena have analogies in the social sciences, organization theory, and folklore.

At some point in organizations, contamination can become so thorough that only drastic measures will help—such as shutting down the glycerin plant or rebuilding the microwave tube plant. Predictive intelligence has to consider the extent of such social contamination in organizations, because contamination is a strong restraining force on an organization’s ability to deal with change.

The effects of social contamination are hard to measure, but they are often highly visible.

The contamination phenomenon has an interesting analogy in the use of euphemism in language. It is well known that if a word has or develops negative associations, it will be replaced by a succession of euphemisms. Such words have a half-life, or decay rate, that is shorter as the word association becomes more negative. In older English, the word stink meant “to smell.” The problem is that most of the strong impressions we get from scents are unpleasant ones; so each word for olfactory senses becomes contaminated over time and must be replaced. Smell has a generally unpleasant connotation now.

The renaming of a program or project is a good signal that the program or project is in trouble—especially in Washington, D.C., but the same rule holds in any culture.

Synergy

Predictive intelligence analysis almost always requires multidisciplinary understanding. Therefore, it is essential that the analysis organization’s professional development program cultivate a professional staff that can understand a broad range of concepts and function in a multidisciplinary environment. One of the most basic concepts is that of synergy: The whole can be more than the sum of its parts due to interactions among the parts. Synergy is therefore, in some respects, the opposite of the countervailing forces discussed earlier.

Synergy is not really a force or factor as much as a way of thinking about how forces or factors interact. Synergy can result from cooperative efforts and alliances among organizations (synergy on a large scale).

Netwar is an application of synergy.

In electronic warfare, it is now well known that a weapons system may be unaffected by a single countermeasure; however, it may be degraded by a combination of countermeasures, each of which fails individually to defeat it. The same principle applies in a wide range of systems and technology developments: The combination may be much greater than the sum of the components taken individually.

Synergy is the foundation of the “swarm” approach that military forces have applied for centuries—the coordinated application of overwhelming force.

In planning a business strategy against a competitive threat, a company will often put in place several actions that, each taken alone, would not succeed. But the combination can be very effective. As a simple example, a company might use several tactics to cut sales of a competitor’s new product: start rumors of its own improved product release, circulate reports on the defects or expected obsolescence of the competitor’s product, raise buyers’ costs of switching from its own to the competitor’s product, and tie up suppliers by using exclusive contracts. Each action, taken separately, might have little impact, but the synergy—the “swarm” effect of the actions taken in combination—might shatter the competitor’s market.

Feedback

In examining any complex system, it is important for the analyst to evaluate the system’s feedback mechanism. Feedback is the mechanism whereby the system adapts—that is, learns and changes itself. The following discussion provides more detail about how feedback works to change a system.

Many of the techniques for prediction depend on the assumption that the process being analyzed can be described, using systems theory, as a closed-loop system. Under the mathematical theory of such systems, feedback is a controlling force in which the output is compared with the objective or standard, and the input process is corrected as necessary to bring the output toward a desired state.

The feedback function therefore determines the behavior of the total system over time. Only one feedback loop is shown in the figure, but many feedback loops can exist, and usually do in a complex system.
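A closed-loop system of this kind can be sketched in a few lines: the output is compared with the objective, and the input is corrected by a fraction of the error each cycle. The gain and objective below are illustrative:

```python
# Minimal closed-loop feedback: compare output with the objective and correct
# by a fraction of the error each cycle. All values are illustrative.
objective = 100.0
output = 20.0
GAIN = 0.4   # how aggressively the error is corrected each cycle

for step in range(10):
    error = objective - output   # feedback: measure deviation from the standard
    output += GAIN * error       # corrective action moves output toward objective
    print(f"step {step}: output = {output:.1f}")
```

The output converges on the objective; changing the gain (or adding delay) changes how the total system behaves over time, which is why the feedback function determines the system's behavior.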