Notes from Knowledge Management in the Intelligence Enterprise

This book is about the application of knowledge management (KM) principles to the practice of intelligence to fulfill its consumers’ expectations.

Unfortunately, too many have reduced intelligence to a simple metaphor of “connecting the dots.” This process appears all too simple after the fact—once you have seen the picture and can ignore the irrelevant, contradictory, and missing dots. Real-world intelligence is not a puzzle of connecting dots; it is the hard daily work of planning operations, focusing the collection of data, and then processing the collected data for deep analysis to produce a flow of knowledge for dissemination to a wide range of consumers.

this book… is an outgrowth of a 2-day military KM seminar that I teach in the United States to describe the methods to integrate people, processes, and technologies into knowledge-creating enterprises.

The book progresses from an introduction to KM applied to intelligence (Chapters 1 and 2) to the principles and processes of KM (Chapter 3). The characteristics of collaborative knowledge-based intelligence organizations are described (Chapter 4) before detailing their principal craft of analysis and synthesis (Chapter 5 introduces the principles and Chapter 6 illustrates the practice). The wide range of technology tools to support analytic thinking and allow analysts to interact with information is explained (Chapter 7) before describing the automated tools that perform all-source fusion and mining (Chapter 8). The organizational, systems, and technology concepts throughout the book are brought together in a representative intelligence enterprise (Chapter 9) to illustrate the process of architecture design for a small intelligence cell. An overview of core, enabling, and emerging KM technologies in this area is provided in conclusion (Chapter 10).

Knowledge Management and Intelligence

This is a book about the management of knowledge to produce and deliver a special kind of knowledge: intelligence—that knowledge that is deemed most critical for decision making both in the nation-state and in business.

  • Knowledge management refers to the organizational disciplines, processes, and information technologies used to acquire, create, reveal, and deliver knowledge that allows an enterprise to accomplish its mission (achieve its strategic or business objectives). The components of knowledge management are the people, their operations (practices and processes), and the information technology (IT) that moves and transforms data, information, and knowledge. All three of these components make up the entity we call the enterprise.
  • Intelligence refers to a special kind of knowledge necessary to accomplish a mission—the kind of strategic knowledge that reveals critical threats and opportunities that may jeopardize or assure mission accomplishment. Intelligence often reveals hidden secrets or conveys a deep understanding that is covered by complexity, deliberate denial, or outright deception. The intelligence process has been described as the process of the discovery of secrets by secret means. In business and in national security, secrecy is a process of protection for one party; discovery of the secret is the object of competition or security for the competitor or adversary… While a range of definitions of intelligence exist, perhaps the most succinct is that offered by the U.S. Central Intelligence Agency (CIA): “Reduced to its simplest terms, intelligence is knowledge and foreknowledge of the world around us—the prelude to decision and action by U.S. policymakers.”
  • The intelligence enterprise encompasses the integrated entity of people, processes, and technologies that collects and analyzes intelligence data to synthesize intelligence products for decision-making consumers.

intelligence (whether national or business) has always involved the management (acquisition, analysis, synthesis, and delivery) of knowledge.

At least three factors continue to drive the increasing need for automation. These factors include:

  • Breadth of data to be considered.
  • Depth of knowledge to be understood.
  • Speed required for decision making.

Throughout this book, we distinguish between three levels of abstraction of knowledge, each of which may be referred to as intelligence in forms that range from unprocessed reporting to finished intelligence products (a small illustrative sketch follows the list):

  1. Data. Individual observations, measurements, and primitive messages form the lowest level. Human communication, text messages, electronic queries, or scientific instruments that sense phenomena are the major sources of data. The terms raw intelligence and evidence (data that is determined to be relevant) are frequently used to refer to elements of data.
  2. Information. Organized sets of data are referred to as information. The organization process may include sorting, classifying, or indexing and linking data to place data elements in relational context for subsequent searching and analysis.
  3. Knowledge. Information once analyzed, understood, and explained is knowledge or foreknowledge (predictions or forecasts). In the context of this book, this level of understanding is referred to as the intelligence product. Understanding of information provides a degree of comprehension of both the static and dynamic relationships of the objects of data and the ability to model structure and past (and future) behavior of those objects. Knowledge includes both static content and dynamic processes.
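To make the three levels concrete, here is a minimal, hypothetical sketch in Python; the class and field names are illustrative assumptions, not terminology from the book.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class AbstractionLevel(Enum):
    """The three levels of abstraction described above."""
    DATA = 1          # individual observations, measurements, primitive messages
    INFORMATION = 2   # data indexed, linked, and placed in relational context
    KNOWLEDGE = 3     # information analyzed, understood, and explained


@dataclass
class IntelligenceItem:
    """An item of reporting tagged with its current level of abstraction."""
    content: str
    level: AbstractionLevel
    links: List[str] = field(default_factory=list)  # relational context (information level)
    explanation: str = ""                            # analytic explanation (knowledge level)


# Example: a raw observation promoted as it is organized and then explained.
report = IntelligenceItem("convoy observed at border crossing", AbstractionLevel.DATA)
report.links.append("prior sighting two days earlier")    # organizing -> information
report.level = AbstractionLevel.INFORMATION
report.explanation = "consistent with routine resupply"   # explaining -> knowledge
report.level = AbstractionLevel.KNOWLEDGE
```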

These abstractions are often organized in a cognitive hierarchy, which includes a level above knowledge: human wisdom.

In this text, we consider wisdom to be a uniquely human cognitive capability—the ability to correctly apply knowledge to achieve an objective. This book describes the use of IT to support the creation of knowledge but considers wisdom to be a human capacity out of the realm of automation and computation.

1.1 Knowledge in a Changing World

This strategic knowledge we call intelligence has long been recognized as a precious and critical commodity for national leaders.

the Hebrew leader Moses commissioned and documented an intelligence operation to explore the foreign land of Canaan. That classic account clearly describes the phases of the intelligence cycle, which proceeds from definition of the requirement for knowledge through planning, tasking, collection, and analysis to the dissemination of that knowledge. He first detailed the intelligence requirements by describing the eight essential elements of information to be collected, and he described the plan to covertly enter and reconnoiter the denied area.

The phases of this cycle are requirements articulation, planning, collection, analysis-synthesis, and dissemination.

The U.S. defense community has developed a network-centric approach to intelligence and warfare that utilizes the power of networked information to enhance the speed of command and the efficiency of operations. Sensors are linked to shooters, commanders efficiently coordinate agile forces, and engagements are based on prediction and preemption. The keys to achieving information superiority in this network-centric model are network breadth (or connectivity) and bandwidth; the key technology is information networking.

The ability to win will depend upon the ability to select and convert raw data into accurate decision-making knowledge. Intelligence superiority will be defined by the ability to make decisions most quickly and effectively—with the same information available to virtually all parties. The key enabling technology in the next century will be the processing and cognitive power to rapidly and accurately convert data into comprehensive explanations of reality—sufficient to make rapid and complex decisions.

Consider several of the key premises about the significance of knowledge in this information age that are bringing the importance of intelligence to the forefront. First, knowledge has become the central resource for competitive advantage, displacing raw materials, natural resources, capital, and labor. This resource is central to both wealth creation and warfare waging. Second, the management of this abstract resource is quite complex; it is more difficult (than material resources) to value and audit, more difficult to create and exchange, and much more difficult to protect. Third, the processes for producing knowledge from raw data are as diverse as the manufacturing processes for physical materials, yet are implemented in the same virtual manufacturing plant—the computer. Because of these factors, the management of knowledge to produce strategic intelligence has become a necessary and critical function within nation-states and business enterprises—requiring changes in culture, processes, and infrastructure to compete.

with rapidly emerging information technologies, the complexities of globalization, and diverse national interests (and threats), businesses and militaries must both adopt radically new and innovative agendas to enable continuous change in their entire operating concept. Innovation and agility are the watchwords for organizations that will remain competitive in Hamel’s age of nonlinear revolution.

Business concept innovation will be the defining competitive advantage in the age of revolution. Business concept innovation is the capacity to reconceive existing business models in ways that create new value for customers, rude surprises for competitors, and new wealth for investors. Business concept innovation is the only way for newcomers to succeed in the face of enormous resource disadvantages, and the only way for incumbents to renew their lease on success.

 

A functional taxonomy based on the type of analysis and the temporal distinction of knowledge and foreknowledge (warning, prediction, and forecast) distinguishes two primary categories of analysis and five subcategories of intelligence products.

Descriptive analyses provide little or no evaluation or interpretation of collected data; rather, they enumerate collected data in a fashion that organizes and structures the data so the consumer can perform subsequent interpretation.

Inferential analyses require the analysis of collected relevant data sets (evidence) to infer and synthesize explanations that describe the meaning of the underlying data. We can distinguish four different focuses of inferential analysis (see the sketch after this list):

  1. Analyses that explain past events (How did this happen? Who did it?);
  2. Analyses that explain current structure (What is the organization? What is the order of battle?);
  3. Analyses that explain current behaviors and states (What is the competitor’s research and development process? What is the status of development?);
  4. Foreknowledge analyses that forecast future attributes and states (What is the expected population and gross national product growth over the next 5 years? When will force strength exceed that of a country’s neighbors? When will a competitor release a new product?).
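The taxonomy above can be summarized as a pair of enumerations. This is only an illustrative sketch of the two categories and the four inferential focuses named in the text; the identifier names are assumptions.

```python
from enum import Enum, auto


class AnalysisCategory(Enum):
    """Two primary categories of analysis."""
    DESCRIPTIVE = auto()   # enumerates and organizes collected data; interpretation left to the consumer
    INFERENTIAL = auto()   # infers and synthesizes explanations from relevant evidence


class InferentialFocus(Enum):
    """The four focuses of inferential analysis listed above."""
    PAST_EVENTS = auto()        # How did this happen? Who did it?
    CURRENT_STRUCTURE = auto()  # What is the organization? The order of battle?
    CURRENT_BEHAVIOR = auto()   # What is the competitor's process and development status?
    FOREKNOWLEDGE = auto()      # Warning, prediction, and forecast of future attributes and states
```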

1.3 The Intelligence Disciplines and Applications

While the taxonomy of intelligence products by analytic methods is fundamental, the more common distinctions of intelligence are by discipline or consumer.

The KM processes and information technologies used in all cases are identical (some say, “bits are bits,” implying that all digital data at the bit level is identical), but the content and mission objectives of these four intelligence disciplines are unique and distinct.

Nation-state security interests deal with sovereignty; ideological, political, and economic stability; and threats to those areas of national interest. Intelligence serves national leadership and military needs by providing strategic policymaking knowledge, warnings of foreign threats to national security interests (economic, military, or political) and tactical knowledge to support day-to-day operations and crisis responses. Nation-state intelligence also serves a public function by collecting and consolidating open sources of foreign information for analysis and publication by the government on topics of foreign relations, trade, treaties, economies, humanitarian efforts, environmental concerns, and other foreign and global interests to the public and businesses at large.

Similar to the threat-warning intelligence function to the nation-state, business intelligence is chartered with the critical task of foreseeing and alerting management of marketplace discontinuities. The consumers of business intelligence range from corporate leadership to employees who access supply-chain data, and even to customers who access information to support purchase decisions.

A European Parliament study has enumerated concern over the potential for national intelligence sources to be used for nation-state economic advantages by providing competitive intelligence directly to national business interests. The United States has acknowledged a policy of applying national intelligence to protect U.S. business interests from fraud and illegal activities, but not for the purposes of providing competitive advantage.

1.3.1 National and Military Intelligence

National intelligence refers to the strategic knowledge obtained for the leadership of nation-states to maintain national security. National intelligence is focused on national security—providing strategic warning of imminent threats, knowledge on the broad spectrum of threats to national interests, and foreknowledge regarding future threats that may emerge as technologies, economies, and the global environment change.

The term intelligence refers to both a process and its product.

The U.S. Department of Defense (DoD) provides the following product definitions that are rich in description of the processes involved in producing the product:

  1. The product resulting from the collection, processing, integration, analysis, evaluation, and interpretation of available information concerning foreign countries or areas;
  2. Information and knowledge about an adversary obtained through observation, investigation, analysis, or understanding.

Michael Herman accurately emphasizes the essential components of the intelligence process: “The Western intelligence system is two things. It is partly the collection of information by special means; and partly the subsequent study of particular subjects, using all available information from all sources. The two activities form a sequential process.”

Martin Libicki has provided a practical definition of information dominance, and the role of intelligence coupled with command and control and information warfare:

Information dominance may be defined as superiority in the generation, manipulation, and use of information sufficient to afford its possessors military dominance. It has three sources:

  • Command and control that permits everyone to know where they (and their cohorts) are in the battlespace, and enables them to execute operations when and as quickly as necessary.
  • Intelligence that ranges from knowing the enemy’s dispositions to knowing the location of enemy assets in real-time with sufficient precision for a one-shot kill.
  • Information warfare that confounds enemy information systems at various points (sensors, communications, processing, and command), while protecting one’s own.

 

The superiority is achieved by gaining superior intelligence and protecting information assets while fiercely degrading the enemy’s information assets. The goal of such superiority is not the attrition of physical military assets or troops—it is the attrition of the quality, speed, and utility of the adversary’s decision-making ability.

“A knowledge environment is an organization’s (business) environment that enhances its capability to deliver on its mission (competitive advantage) by enabling it to build and leverage its intellectual capital.”

1.3.2 Business and Competitive Intelligence

The focus of business intelligence is on understanding all aspects of a business enterprise: internal operations and the external environment, which includes customers and competitors (the marketplace), partners, and suppliers. The external environment also includes independent variables that can impact the business, depending on the business (e.g., technology, the weather, government policy actions, financial markets). All of these are the objects of business intelligence in the broadest definition. But the term business intelligence is also used in a narrower sense to focus on only the internals of the business, while the term competitor intelligence refers to those aspects of intelligence that focus on the externals that influence competitiveness: competitors.

Each of the components of business intelligence has distinct areas of focus and uses in maintaining the efficiency, agility, and security of the business; all are required to provide active strategic direction to the business. In large companies with active business intelligence operations, all three components are essential parts of the strategic planning process, and all contribute to strategic decision making.

1.4 The Intelligence Enterprise

The intelligence enterprise includes the collection of people, knowledge (both internal tacit and explicitly codified), infrastructure, and information processes that deliver critical knowledge (intelligence) to the consumers. This enables them to make accurate, timely, and wise decisions to accomplish the mission of the enterprise.

This definition describes the enterprise as a process—devoted to achieving an objective for its stakeholders and users. The enterprise process includes the production, buying, selling, exchange, and promotion of an item, substance, service, or system.

The DoD three-view architecture description defines three interrelated perspectives, or architectural descriptions, covering the operational, system, and technical aspects of an enterprise [29]. The operational architecture is a people- or organization-oriented description of the operational elements, intelligence business processes, assigned tasks, and information and work flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and the tasks that are supported by these information exchanges. The systems architecture is a description of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users, and specifies system and component performance parameters. The technical architecture is the minimal set of rules (i.e., standards, protocols, interfaces, and services) governing the arrangement, interaction, and interdependence of the elements of the system.

 

These three views of the enterprise (Figure 1.4) describe three layers of people-oriented operations, system structure, and procedures (protocols) that must be defined in order to implement an intelligence enterprise.

The operational layer is the highest (most abstract) description of the concept of operations (CONOPS), human collaboration, and disciplines of the knowledge organization. The technical architecture layer describes the most detailed perspective, noting specific technical components and their operations, protocols, and technologies.
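One way to make the three architecture views tangible is as a small data model. This is a sketch under the assumption that each view is captured as a simple record; the field names are invented for illustration rather than taken from the DoD framework documents.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class OperationalView:
    """People/organization-oriented view: elements, processes, tasks, and flows."""
    operational_elements: List[str] = field(default_factory=list)
    business_processes: List[str] = field(default_factory=list)
    information_exchanges: List[str] = field(default_factory=list)  # type and frequency of exchange


@dataclass
class SystemsView:
    """Systems and interconnections supporting the intelligence functions."""
    nodes: List[str] = field(default_factory=list)                  # key nodes, circuits, networks, users
    performance_parameters: Dict[str, str] = field(default_factory=dict)


@dataclass
class TechnicalView:
    """Minimal set of rules governing arrangement, interaction, and interdependence."""
    standards: List[str] = field(default_factory=list)
    protocols: List[str] = field(default_factory=list)
    interfaces: List[str] = field(default_factory=list)
    services: List[str] = field(default_factory=list)


@dataclass
class IntelligenceEnterpriseArchitecture:
    """The three interrelated views that together describe the enterprise."""
    operational: OperationalView
    systems: SystemsView
    technical: TechnicalView
```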

The intelligence supply chain, which describes the flow of data into knowledge to create consumer value, is measured by the value it provides to intelligence consumers. Measures of human intellectual capital and organizational knowledge describe the intrinsic value of the organization.

1.5 The State of the Art and the State of the Intelligence Tradecraft

The subject of intelligence analysis remained largely classified through the 1980s, but the 1990s brought the end of the Cold War and, thus, open publication of the fundamental operations of intelligence and the analytic methods employed by businesses and nation-states. In that same period, the rise of commercial information sources and systems produced the new disciplines of open source intelligence (OSINT) and business/competitor intelligence. In each of these areas, a wealth of resources is available for tracking the rapidly changing technology state of the art as well as the state of the intelligence tradecraft.

1.5.1 National and Military Intelligence

Numerous sources of information provide management, legal, and technical insight for national and military intelligence professionals with interests in analysis and KM.

These sources include:

  • Studies in Intelligence—Published by the U.S. CIA Center for the Study of Intelligence and the Sherman Kent School of Intelligence, unclassified versions are published on the school’s Web site (http://odci.gov/csi), along with periodically issued monographs on technical topics related to intelligence analysis and tradecraft.
  • International Journal of Intelligence and Counterintelligence—This quarterly journal covers the breadth of intelligence interests within law enforcement, business, nation-state policymaking, and foreign affairs.
  • Intelligence and National Security—A quarterly international journal published by Frank Cass & Co. Ltd., London, this journal covers broad intelligence topics ranging from policy, operations, users, analysis, and products to historical accounts and analyses.
  • Defense Intelligence Journal—This is a quarterly journal published by the U.S. Defense Intelligence Agency’s Joint Military Intelligence College.
  • American Intelligence Journal—Published by the National Military Intelligence Association (NMIA), this journal covers operational, organizational, and technical topics of interest to national and military intelligence officers.
  • Military Intelligence Professional Bulletin—This is a quarterly bulletin of the U.S. Army Intelligence Center (Ft. Huachuca) that is available on-line and provides information to military intelligence officers on studies of past events, operations, processes, military systems, and emerging research and development.
  • Jane’s Intelligence Review—This monthly magazine provides open source analyses of international military organizations, NGOs that threaten or wage war, conflicts, and security issues.

1.5.2 Business and Competitive Intelligence

Several sources focus on the specific areas of business and competitive intelligence with attention to the management, ethical, and technical aspects of collection, analysis, and valuation of products.

  • Competitive Intelligence Magazine—This is a source for general, applications-related articles on CI, published bimonthly by John Wiley & Sons with the Society of Competitive Intelligence Professionals (SCIP).
  • Competitive Intelligence Review—This quarterly journal, also published by John Wiley with the SCIP, contains best-practice case studies as well as technical and research articles.
  • Management International Review—This is a quarterly refereed journal that covers the advancement and dissemination of international applied research in the fields of management and business. It is published by Gabler Verlag, Germany, and is available on-line.
  • Journal of Strategy and Business—This quarterly journal, published by Booz Allen and Hamilton, focuses on strategic business issues, including regular emphasis on both CI and KM topics in business articles.

1.5.3 KM

The developments in the field of KM are covered by a wide range of business, information science, organizational theory, and dedicated KM sources that provide information on this diverse and fast-growing area.

  • CIO Magazine—This monthly trade magazine for chief information officers and staff includes articles on KM, best practices, and related leadership topics.
  • Harvard Business Review, Sloan Management Review—These management journals cover organizational leadership, strategy, learning and change, and the application of supporting ITs.
  • Journal of Knowledge Management—This is a quarterly academic journal of strategies, tools, techniques, and technologies published by Emerald (UK). Emerald also publishes the quarterly The Learning Organization—An International Journal.
  • IEEE Transactions on Knowledge and Data Engineering—This is an archival journal published bimonthly to inform researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.
  • Knowledge and Process Management—A John Wiley (UK) journal for executives responsible for leading performance improvement and contributing thought leadership in business. Emphasis areas include KM, organizational learning, core competencies, and process management.
  • American Productivity and Quality Center (APQC)—The APQC is a nonprofit organization that provides the tools, information, expertise, and support needed to discover and implement best practices in KM. Its mission is to discover, research, and understand emerging and effective methods of both individual and organizational improvement, to broadly disseminate these findings, and to connect individuals with one another and with the knowledge, resources, and tools they need to successfully manage improvement and change. It maintains an on-line site at www.apqc.org.
  • Data Mining and Knowledge Discovery—This Kluwer (Netherlands) journal provides technical articles on the theory, techniques, and practice of knowledge extraction from large databases.

1.6 The Organization of This Book

This book is structured to introduce the unique role, requirements, and stakeholders of intelligence (the applications) before introducing the KM processes, technologies, and implementations.

2
The Intelligence Enterprise

Intelligence, the strategic information and knowledge about an adversary and an operational environment obtained through observation, investigation, analysis, or understanding, is the product of an enterprise operation that integrates people and processes in an organizational and networked computing environment.

The intelligence enterprise exists to produce intelligence goods and services—knowledge and foreknowledge for decision- and policy-making customers. This enterprise is a production organization whose prominent infrastructure is an information supply chain. As in any business, it has a “front office” to manage its relations with customers, with the information supply chain in the “back office.”

The intellectual capital of this enterprise includes sources, methods, workforce competencies, and the intelligence goods and services produced. As in virtually no other business, the protection of this capital is paramount, and therefore security is integrated into every aspect of the enterprise.

2.1 The Stakeholders of Nation-State Intelligence

The intelligence enterprise, like any other enterprise providing goods and services, includes a diverse set of stakeholders in the enterprise operation. The business model for any intelligence enterprise, as for any business, must clearly identify the stakeholders who own the business and those who produce and consume its goods and services.

  • The owners of the process include the U.S. public and its elected officials, who measure intelligence value in terms of the degree to which national security is maintained. These owners seek awareness and warning of threats to prescribed national interests.
  • Intelligence consumers (customers or users) include national, military, and civilian user agencies that measure value in terms of intelligence contribution to the mission of each organization, measured in terms of its impact on mission effectiveness.
  • Intelligence producers, the most direct users of raw intelligence, include the collectors (HUMINT and technical), processor agencies, and analysts. The principal value metrics of these users are performance based: information accuracy, coverage breadth and depth, confidence, and timeliness.

The purpose and value chains for intelligence (Figure 2.2) are defined by the stakeholders to provide a foundation for the development of specific value measures that assess the contribution of business components to the overall enterprise. The corresponding chains in the U.S. intelligence community (IC) include:

  • Source—the source or basis for defining the purpose of intelligence is found in the U.S. Constitution, derivative laws (i.e., the National Security Act of 1947, Central Intelligence Agency Act of 1949, National Security Agency Act of 1959, Foreign Intelligence Surveillance Act of 1978, and Intelligence Organization Act of 1992), and orders of the executive branch [2]. Derived from this are organizational mission documents, such as the Director of Central Intelligence (DCI) Strategic Intent [3], which documents communitywide purpose and vision, as well as derivative guidance documents prepared by intelligence providers.
  • Purpose chain—the causal chain of purposes (objectives) for which the intelligence enterprise exists. The ultimate purpose is national security, enabled by information (intelligence) superiority, which is in turn enabled by the specific purposes of the individual intelligence providers.
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—specific metrics by which values are quantified and articulated by stakeholders and by which the value of the intelligence enterprise is evaluated.

In a similar fashion, business and competitive intelligence have stakeholders that include customers, shareholders, corporate officers, and employees… there must exist a purpose and value chain that guides the KM operations. These typically include:

  • Source—the business charter and mission statement of a business elaborate the market served and the vision for the business’s role in that market.
  • Purpose chain—the objectives of the business require knowledge about internal operations and the market (BI objectives) as well as competitors (CI).
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—specific metrics by which values are quantified. A balanced set of measures includes vision and strategy, customer, internal, financial, and learning-growth metrics.

2.2 Intelligence Processes and Products

The process that delivers strategic and operational intelligence products is generally depicted in cyclic form (Figure 2.3), with five distinct phases (a minimal code sketch of the full cycle follows the list below).

In every case, the need is the basis for a logical process to deliver the knowledge to the requestor.

  1. Planning and direction. The process begins as policy and decision makers define, at a high level of abstraction, the knowledge that is required to make policy, strategic, or operational decisions. The requests are parsed into the information required and then into the data that must be collected to estimate or infer the required answers. Data requirements are used to establish a plan of collection, which details the elements of data needed and the targets (people, places, and things) from which the data may be obtained.
  2. Collection. Following the plan, human and technical sources of data are tasked to collect the required raw data. The next section introduces the major collection sources, which include both openly available and closed sources that are accessed by both human and technical methods.

These sources and methods are among the most fragile [5]—and most highly protected—elements of the process. Sensitive and specially compartmented collection capabilities that are particularly fragile exist across all of the collection disciplines.

  3. Processing. The collected data is processed (e.g., machine translation, foreign language translation, or decryption), indexed, and organized in an information base. Progress on meeting the requirements of the collection plan is monitored and the tasking may be refined on the basis of received data.
  4. All-source analysis-synthesis and production. The organized information base is processed using estimation and inferential (reasoning) techniques that combine all-source data in an attempt to answer the requestor’s questions. The data is analyzed (broken into components and studied) and solutions are synthesized (constructed from the accumulating evidence). The topics or subjects (intelligence targets) of study are modeled, and requests for additional collection and processing may be made to acquire sufficient data and achieve a sufficient level of understanding (or confidence to make a judgment) to answer the consumer’s questions.
  5. Dissemination. Finished intelligence is disseminated to consumers in a variety of formats, ranging from dynamic operating pictures of warfighters’ weapon systems to formal reports to policymakers. Three categories of formal strategic and tactical intelligence reports are distinguished by their past, present, and future focus: current intelligence reports are news-like reports that describe recent events or indications and warnings, basic intelligence reports provide complete descriptions of a specific situation (e.g., order of battle or political situation), and intelligence estimates attempt to predict feasible future outcomes as a result of current situation, constraints, and possible influences [6].
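As a minimal sketch, the five phases can be read as a processing loop. The function names, arguments, and return values below are hypothetical placeholders, not an actual system interface.

```python
def plan_and_direct(consumer_questions):
    """Parse knowledge needs into data requirements and a collection plan."""
    return {"requirements": list(consumer_questions), "targets": []}


def collect(collection_plan):
    """Task human and technical sources against the plan; return raw data."""
    return []  # placeholder for raw reporting


def process(raw_data):
    """Translate, decrypt, index, and organize raw data into an information base."""
    return {"information_base": raw_data}


def analyze_and_produce(information_base, questions):
    """Combine all-source data; analyze and synthesize intelligence products."""
    return {"products": [], "collection_gaps": []}


def disseminate(products):
    """Deliver finished intelligence (current, basic, and estimative reports)."""
    for product in products:
        print(product)


def intelligence_cycle(consumer_questions):
    plan = plan_and_direct(consumer_questions)
    raw = collect(plan)
    organized = process(raw)
    result = analyze_and_produce(organized["information_base"], consumer_questions)
    disseminate(result["products"])
    # In practice the cycle is a continuum: unanswered gaps feed back into planning.
    return result["collection_gaps"]
```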

Though introduced here in the classic form of a cycle, in reality the process operates as a continuum of actions with many more feedback (and feedforward) paths that require collaboration between consumers, collectors, and analysts.

2.3 Intelligence Collection Sources and Methods

A taxonomy of intelligence data sources includes sources that are openly accessible or closed (e.g., denied areas, secured communications, or clandestine activities). Due to the increasing access to electronic media (i.e., telecommunications, video, and computer networks) and the global expansion of democratic societies, OSINT is becoming an increasingly important source of global data. While OSINT must be screened and cross-validated to filter errors, duplications, and deliberate misinformation (as must all sources), it provides an economical source of public information and is a contributor to other sources for cueing, indications, and confirmation.

Measurements and signatures intelligence (MASINT) is technically derived knowledge from a wide variety of sensors, individual or fused, either to perform special measurements of objects or events of interest or to obtain signatures for use by the other intelligence sources. MASINT is used to characterize the observable phenomena (observables) of the environment and objects of surveillance.

U.S. intelligence studies have pointed out specific changes in the use of these sources as the world increases globalization of commerce and access to social, political, economic, and technical information [10–12]:

  • The increase in unstructured and transnational threats requires the robust use of clandestine HUMINT sources to complement extensive technical verification means.
  • Technical means of collection are required for both broad area coverage and detailed assessment of the remaining denied areas of the world.

2.3.1 HUMINT Collection

HUMINT refers to all information obtained directly from human sources.

HUMINT sources may be overt or covert (clandestine); the most common categories include:

  • Clandestine intelligence case officers. These officers are own-country individuals who operate under a clandestine “cover” to collect intelligence and “control” foreign agents to coordinate collections.
  • Agents. These are foreign individuals with access to targets of intelligence who conduct clandestine collection operations as representatives of their controlling intelligence officers. These agents may be recruited or “walk-in” volunteers who act for a variety of ideological, financial, or personal motives.
  • Émigrés, refugees, escapees, and defectors. The open, overt (yet discreet) programs to interview these recently arrived foreign individuals provide background information on foreign activities as well as occasional information on high-value targets.
  • Third party observers. Cooperating third parties (e.g., third-party countries and travelers) can also provide a source of access to information.

The HUMINT discipline follows a rigorous seven-step process for acquiring, employing, and terminating the use of human assets. The sequence followed by case officers includes:

  1. Spotting—locating, identifying, and securing low-level contact with agent candidates;
  2. Evaluation—assessment of the potential (i.e., value or risk) of the spotted individual, based on a background investigation;
  3. Recruitment—securing the commitment from the individual;
  4. Testing—evaluation of the loyalty of the agent;
  5. Training—supporting the agent with technical experience and tools;
  6. Handling—supporting and reinforcing the agent’s commitment;
  7. Termination—completion of the agent assignment by ending the relationship.

 

HUMINT is dependent upon the reliability of the individual source, and lacks the collection control of technical sensors. Furthermore, the level of security to protect human sources often limits the fusion of HUMINT reports with other sources and the dissemination to wider customer bases. Directed high-risk HUMINT collections are generally viewed as a precious resource to be used for high-value targets to obtain information unobtainable by technical means or to validate hypotheses created by technical collection analysis.

2.3.2 Technical Intelligence Collection

Technical collection is performed by a variety of electronic (e.g., electromechanical, electro-optical, or bioelectronic) sensors placed on platforms in space, in the atmosphere, on the ground, and at sea to measure physical phenomena (observables) related to the subjects of interest (intelligence targets).

The operational utility of these collectors for each intelligence application depends upon several critical factors (captured in the small sketch after this list):

  • Timeliness—the time from collection of event data to delivery of a tactical targeting cue, operational warnings and alerts, or formal strategic report;
  • Revisit—the frequency with which a target of interest can be revisited to understand or model (track) dynamic behavior;
  • Accuracy—the spatial, identity, or kinematic accuracy of estimates and predictions;
  • Stealth—the degree of secrecy with which the information is gathered and the measure of intrusion required.
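The four utility factors can be recorded per collector as a simple structure. This is a hedged sketch with assumed field names and units, not a standard data model.

```python
from dataclasses import dataclass


@dataclass
class CollectorUtility:
    """Illustrative record of the critical utility factors for one technical collector."""
    timeliness_hours: float   # time from collection of event data to delivery of cue, alert, or report
    revisit_per_day: float    # how frequently a target of interest can be revisited
    accuracy: str             # spatial, identity, or kinematic accuracy of estimates and predictions
    stealth: str              # degree of secrecy and level of intrusion required
```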

2.4 Collection and Process Planning

The technical collection process requires the development of a detailed collection plan, which begins with the decomposition of the subject target into activities, observables, and then collection requirements.

From this plan, technical collectors are tasked and data is collected and fused (a composition, or reconstruction that is the dual of the decomposition process) to derive the desired intelligence about the target.
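The decomposition of a target into activities, observables, and collection requirements, and the dual composition (fusion) step, might be sketched as follows; the types and the compose function are assumptions for illustration.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class CollectionRequirement:
    """One element of data to be collected against an observable of the target."""
    observable: str   # physical phenomenon or signature that can be sensed
    collector: str    # assumed label for the tasked sensor or source


@dataclass
class CollectionPlan:
    """Decomposition of a subject target into activities, observables, and requirements."""
    target: str
    activities: List[str] = field(default_factory=list)
    requirements: List[CollectionRequirement] = field(default_factory=list)


def compose(plan: CollectionPlan, collected: Dict[str, object]) -> Dict[str, object]:
    """The dual of decomposition: fuse collected data back into target-level intelligence."""
    evidence = {r.observable: collected.get(r.observable) for r in plan.requirements}
    return {"target": plan.target, "evidence": evidence}
```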

2.5 KM in the Intelligence Process

The intelligence process must deal with large volumes of source data, converting a wide range of text, imagery, video, and other media types into organized information, then performing the analysis-synthesis process to deliver knowledge in the form of intelligence products.

IT is providing increased automation of the information indexing, discovery, and retrieval (IIDR) functions for intelligence, especially for the exponentially increasing volumes of global open-source data.

 

The functional information flow in an automated or semiautomated facility (depicted in Figure 2.5) requires digital archiving and analysis to ingest continuous streams of data and manage large volumes of analyzed data. The flow can be broken into three phases:

  1. Capture and compile;
  2. Preanalysis;
  3. Exploitation (analysis-synthesis).

The preanalysis phase indexes each data item (e.g., article, message, news segment, image, book or chapter) by assigning a reference for storage; generating an abstract that summarizes the content of the item and metadata with a description of the source, time, reliability-confidence, and relationship to other items (abstracting); and extracting critical descriptors of content that characterize the contents (e.g., keywords) or meaning (deep indexing) of the item for subsequent analysis. Spatial data (e.g., maps, static imagery, or video imagery) must be indexed by spatial context (spatial location) and content (imagery content).

The indexing process applies standard subjects and relationships, maintained in a lexicon and thesaurus that is extracted from the analysis information base. Following indexing, data items are clustered and linked before entry into the analysis base. As new items are entered, statistical analyses are performed to monitor trends or events against predefined templates that may alert analysts or cue their focus of attention in the next phase of processing.
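A compressed sketch of the preanalysis step described above: the data structure, the placeholder abstracting, and the lexicon-based descriptor extraction are assumptions standing in for real abstracting and deep-indexing functions.

```python
from dataclasses import dataclass, field
from typing import List, Optional
import uuid


@dataclass
class IndexedItem:
    """Result of preanalysis for one data item (article, message, image segment, and so on)."""
    reference: str                                           # storage reference assigned to the item
    abstract: str                                            # summary of the item's content
    source: str                                              # metadata: description of the source
    timestamp: str                                           # metadata: time of the item
    reliability: float                                       # metadata: reliability-confidence
    related_items: List[str] = field(default_factory=list)   # links to other items
    descriptors: List[str] = field(default_factory=list)     # keywords or deep-index terms
    spatial_context: Optional[str] = None                    # location, for spatial data


def preanalyze(content: str, source: str, timestamp: str, lexicon: List[str]) -> IndexedItem:
    """Assign a reference, generate a placeholder abstract, and extract descriptors."""
    reference = str(uuid.uuid4())              # reference for storage
    abstract = content[:200]                   # trivial stand-in for automated abstracting
    descriptors = [term for term in lexicon    # lexicon/thesaurus-driven indexing, simplified
                   if term.lower() in content.lower()]
    return IndexedItem(reference=reference, abstract=abstract, source=source,
                       timestamp=timestamp, reliability=0.5, descriptors=descriptors)
```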

The categories of automated tools that are applied to the analysis information base include the following:

  • Interactive search and retrieval tools permit analysts to search by content, topic, or related topics using the lexicon and thesaurus subjects.
  • Structured judgment analysis tools provide visual methods to link data, synthesize deductive logic structures, and visualize complex relationships between data sets. These tools enable the analyst to hypothesize, explore, and discover subtle patterns and relationships in large data volumes—knowledge that can be discerned only when all sources are viewed in a common context.
  • Modeling and simulation tools model hypothetical activities, allowing modeled (expected) behavior to be compared to evidence for validation or projection of operations under scrutiny.
  • Collaborative analysis tools permit multiple analysts in related subject areas, for example, to collaborate on the analysis of a common subject.
  • Data visualization tools present synthetic views of data and information to the analyst to permit patterns to be examined and discovered.

2.6 Intelligence Process Assessments and Reengineering

The U.S. IC has been assessed throughout and since the close of the Cold War to study the changes necessary to adapt to advanced collection capabilities, changing security threats, and the impact of global information connectivity and information availability. Published results of these studies provide insight into the areas of intelligence effectiveness that may be enhanced by organizing the community into a KM enterprise. We focus here on the technical aspects of the changes rather than the organizational aspects recommended in numerous studies.

2.6.1 Balancing Collection and Analysis

Intelligence assessments have evaluated the utility of intelligence products and the balance of investment between collection and analysis.

2.6.2 Focusing Analysis-Synthesis

An independent study [21] of U.S. intelligence identified a need for intelligence to sharpen the focus of analysis-synthesis resources to deal with the increased demands by policymakers for knowledge on a wider range of topics, the growing breadth of secret and open sources, and the availability of commercial open-source analysis.

2.6.3 Balancing Analysis-Synthesis Processes

One assessment conducted by the U.S. Congress reviewed the role of analysis-synthesis and the changes necessary for the community to reengineer its processes from a Cold War to a global awareness focus. Emphasizing the crucial role of analysis, the commission noted:

The raison d’etre of the Intelligence Community is to provide accurate and meaningful information and insights to consumers in a form they can use at the time they need them. If intelligence fails to do that, it fails altogether. The expense and effort invested in collecting and processing the information have gone for naught.

The commission identified the KM challenges faced by large-scale intelligence analysis that encompasses global issues and serves a broad customer base.

The commission’s major observations provide insight into the emphasis on people-related (rather than technology-related) issues that must be addressed for intelligence to be valued by the policy and decision makers that consume intelligence:

  1. Build relationships. A concerted effort is required to build relationships between intelligence producers and the policymakers they serve. Producer-consumer relationships range from assignment of intelligence liaison officers with consumers (the closest relationship and greatest consumer satisfaction) to holding regular briefings, or simple producer-subscriber relationships for general broadcast intelligence. Across this range of relationships, four functions must be accomplished for intelligence to be useful:
  • Analysts must understand the consumer’s level of knowledge and the issues they face.
  • Intelligence producers must focus on issues of significance and make information available when needed, in a format appropriate to the unique consumer.
  • Consumers must develop an understanding of what intelligence can and—equally important—cannot do.
  • Both consumer and producer must be actively engaged in a dialogue with analysts to refine intelligence support to decision making.
  2. Increase and expand the scope of analytic expertise. The expertise of the individual analysts and the community of analysts must be maintained at the highest level possible. This expertise is in two areas: domain, or region of focus (e.g., nation, group, weapon systems, or economics), and analytic-synthetic tradecraft. Expertise development should include the use of outside experts, travel to countries of study, sponsorship of topical conferences, and other means (e.g., simulations and peer reviews).
  3. Enhance use of open sources. Open-source data (i.e., publicly available data in electronic and broadcast media, journals, periodicals, and commercial databases) should be used to complement (cue, provide context, and in some cases, validate) special, or closed, sources. The analyst must have command of all available information and the means to access and analyze both categories of data in complementary fashion.
  4. Make analysis available to users. Intelligence producers must increasingly apply dynamic, electronic distribution means to reach consumers for collaboration and distribution. The DoD Joint Deployable Intelligence Support System (JDISS) and IC Intelink were cited as early examples of networked intelligence collaboration and distribution systems.
  5. Enhance strategic estimates. The United States produces national intelligence estimates (NIEs) that provide authoritative statements and forecast judgments about the likely course of events in foreign countries and their implications for the United States. These estimates must be enhanced to provide timely, objective, and relevant data on a wider range of issues that threaten security.
  6. Broaden the analytic focus. As the national security threat envelope has broadened (beyond the narrower focus of the Cold War), a more open, collaborative environment is required to enable intelligence analysts to interact with policy departments, think tanks, and academia to analyze, debate, and assess these new world issues.

In the half decade since the commission recommendations were published, the United States has implemented many of the recommendations. Several examples of intelligence reengineering include:

  • Producer-consumer relationships. The introduction of collaborative networks, tools, and soft-copy products has permitted less formal interaction and more frequent exchange between consumers and producers. This allows intelligence producers to better understand consumer needs and decision criteria. This has enabled the production of more focused, timely intelligence.
  • Analytic expertise. Enhancements in analytic training and the increased use of computer-based analytic tools and even simulation are providing greater experience—and therefore expertise—to human analysts.
  • Open source. Increased use of open-source information via commercial providers (e.g., Lexis-Nexis™ subscription clipping services to tailored topics) and the Internet has provided an effective source for obtaining background information. This enables special sources and methods to focus on validation of critical implications.
  • Analysis availability. The use of networks continues to expand for both collaboration (between analysts and consumers as well as between analysts) and distribution. This collaboration was enabled by the introduction and expansion of the classified Internet (Intelink) that interconnects the IC [24].
  • Broadened focus. The community has coordinated open panels to discuss, debate, and collaboratively analyze and openly publish strategic perspectives of future security issues. One example is the “Global Trends 2015” report that resulted from a long-term collaboration with academia, the private sector, and topic area experts [25].

2.7 The Future of Intelligence

The two primary dimensions of future threats to national (and global) security are the source (from nation-state actors to non-state actors) and the threat-generating mechanism (from the continuous results of rational nation-state behaviors to discontinuities in complex world affairs). These threat changes and the contrast in intelligence approaches are summarized in Table 2.4. Notice that these changes coincide with the transition from sensor-centric to network- and knowledge-centric approaches to intelligence introduced in Chapter 1.

intelligence must focus on knowledge creation in an enterprise environment that is prepared to rapidly reinvent itself to adapt to emergent threats.

3
Knowledge Management Processes

KM is the term adopted by the business community in the mid-1990s to describe a wide range of strategies, processes, and disciplines that formalize and integrate an enterprise’s approach to organizing and applying its knowledge assets. Some have wondered what is truly new about the concept of managing knowledge. Indeed, many pure knowledge-based organizations (insurance companies, consultancies, financial management firms, futures brokers, and of course, intelligence organizations) have long “managed” knowledge—and such management processes have been the core competency of the business.

The scope of knowledge required by intelligence organizations has increased in depth and breadth as commerce has networked global markets and world threats have diversified from a monolithic Cold War posture. The global reach of networked information, both open and closed sources, has produced a deluge of data—requiring computing support to help human analysts sort, locate, and combine specific data elements to provide rapid, accurate responses to complex problems. Finally, the formality of the KM field has grown significantly in the past decade—developing theories for valuing, auditing, and managing knowledge as an intellectual asset; strategies for creating, reusing, and leveraging the knowledge asset; processes for conducting collaborative transactions of knowledge among humans and machines; and network information technologies for enabling and accelerating these processes.

3.1 Knowledge and Its Management

In the first chapter, we introduced the growing importance of knowledge as the central resource for competition in both the nation-state and in business. Because of this, the importance of intelligence organizations providing strategic knowledge to public- and private-sector decision makers is paramount. We can summarize this importance of intelligence to the public or private enterprise in three assertions about knowledge.

First, knowledge has become the central asset or resource for competitive advantage. In the Tofflers’ third wave, knowledge displaces capital, labor, and natural resources as the principal reserve of the enterprise. This is true in wealth creation by businesses and in national security and the conduct of warfare for nation-states.

Second, it is asserted that the management of the knowledge resource is more complex than other resources. The valuation and auditing of knowledge is unlike physical labor or natural resources; knowledge is not measured by “head counts” or capital valuation of physical inventories, facilities, or raw materials (like stockpiles of iron ore, fields of cotton, or petroleum reserves). New methods of quantifying the abstract entity of knowledge—both in people and in explicit representations—are required. In order to accomplish this complex challenge, knowledge managers must develop means to capture, store, create, and exchange knowledge, while dealing with the sensitive security issues of knowing when to protect and when to share (the trade-off between the restrictive “need to know” and the collaborative “need to share”).

The third assertion about knowledge is that its management therefore requires a delicate coordination of people, processes, and supporting technologies to achieve the enterprise objectives of security, stability, and growth in a dynamic world:

  • People. KM must deal with cultures and organizational structures that enable and reward the growth of knowledge through collaborative learning, reasoning, and problem solving.
  • Processes. KM must also provide an environment for exchange, discovery, retention, use, and reuse of knowledge across the organization.
  • Technologies. Finally, IT must be applied to enable the people and processes to leverage the intellectual asset of actionable knowledge.

 

Definitions of KM as a formal activity are as diverse as its practitioners (Table 3.1), but all have in common the following general characteristics:

KM is based on a strategy that accepts knowledge as the central resource to achieve business goals and that knowledge—in the minds of its people, embedded in processes, and in explicit representations in knowledge bases—must be regarded as an intellectual form of capital to be leveraged. Organizational values must be coupled with the growth of this capital.

KM involves a process that, like a supply chain, moves from raw materials (data) toward knowledge products. The process encompasses acquiring (data); sorting, filtering, indexing, and organizing (information); reasoning (analyzing and synthesizing) to create knowledge; and finally disseminating that knowledge to users. But this supply chain is not a “stovepiped” process (a narrow, vertically integrated and compartmented chain); it horizontally integrates the organization, allowing collaboration across all areas of the enterprise where knowledge sharing provides benefits.

KM embraces a discipline and cultural values that accept the necessity for sharing purpose, values, and knowledge across the enterprise to leverage group diversity and perspectives to promote learning and intellectual problem solving. Collaboration, fully engaged communication and cognition, is required to network the full intellectual power of the enterprise.

The U.S. National Security Agency (NSA) has adopted the following “people-oriented” definition of KM to guide its own intelligence efforts:

Strategies and processes to create, identify, capture, organize and leverage vital skills, information and knowledge to enable people to best accomplish the organizational mission.

The DoD has further recognized that KM is the critical enabler for information superiority:

The ability to achieve and sustain information superiority depends, in large measure, upon the creation and maintenance of reusable knowledge bases; the ability to attract, train, and retain a highly skilled work force proficient in utilizing these knowledge bases; and the development of core business processes designed to capitalize upon these assets.

The processes by which abstract knowledge results in tangible effects can be examined as a net of influences that affect knowledge creation and decision making.

The flow of influences in the figure illustrates the essential contributions of shared knowledge.

  1. Dynamic knowledge. At the central core is a comprehensive and dynamic understanding of the complex (business or national security) situation that confronts the enterprise. This understanding accumulates over time to provide a breadth and depth of shared experience, or organizational memory.
  2. Critical and systems thinking. Situational understanding and accumulated experience enable dynamic modeling to provide forecasts from current situations—supporting the selection of adapting organizational goals. Comprehensive understanding (perception) and thorough evaluation of optional courses of action (judgment) enhance decision making. As experience accumulates and situational knowledge is refined, critical explicit thinking and tacit sensemaking about current situations and the consequences of future actions are enhanced.
  3. Shared operating picture. Shared pictures of the current situation (common operating picture), past situations and outcomes (experience), and forecasts of future outcomes enable the analytic workforce to collaborate and self-synchronize in problem solving.
  4. Focused knowledge creation. Underlying these functions is a focused data and experience acquisition process that tracks and adapts as the business or security situation changes.

While Figure 3.1 maps the general influences of knowledge on goal setting, judgment, and decision making in an enterprise, an understanding of how knowledge influences a particular enterprise in a particular environment is necessary to develop a KM strategy. Such a strategy seeks to enhance organizational knowledge of these four basic areas as well as information security to protect the intellectual assets.

3.2 Tacit and Explicit Knowledge

In the first chapter, we offered a brief introduction to the hierarchical taxonomy of data, information, and knowledge, but here we must refine our understanding of knowledge and its constructs before we delve into the details of management processes.

In this chapter, we distinguish between the knowledge-creation processes within the knowledge-creating hierarchy (Figure 3.2). The hierarchy illustrates the distinctions we make, in common terminology, between explicit (represented and defined) processes and those that are implicit (or tacit; knowledge processes that are unconscious and not readily articulated).

3.2.1 Knowledge As Object

The most common understanding of knowledge is as an object—the accumulation of things perceived, discovered, or learned. From this perspective, data (raw measurements or observations), information (data organized, related, and placed in context), and knowledge (information explained and the underlying processes understood) are also objects. The KM field has adopted two basic distinctions in the categories of knowledge as object:

  1. Explicit knowledge. This is the better known form of knowledge that has been captured and codified in abstract human symbols (e.g., mathematics, logical propositions, and structured and natural language). It is tangible, external (to the human), and logical. This documented knowledge can be stored, repeated, and taught by books because it is impersonal and universal. It is the basis for logical reasoning and, most important of all, it enables knowledge to be communicated electronically and reasoning processes to be automated.
  2. Tacit knowledge. This is the intangible, internal, experiential, and intuitive knowledge that is undocumented and maintained in the human mind. It is a personal knowledge contained in human experience. Philosopher Michael Polanyi pioneered the description of such knowledge in the 1950s, considering the results of Gestalt psychology and the philosophic conflict between moral conscience and scientific skepticism. In The Tacit Dimension, he describes a kind of knowledge that we cannot tell. This tacit knowledge is characterized by intangible factors such as perception, belief, values, skill, “gut” feel, intuition, “know-how,” or instinct; this knowledge is unconsciously internalized and cannot be explicitly described (or captured) without effort.

An understanding of the relationship between knowledge and mind is of particular interest to the intelligence discipline, because such an understanding serves two purposes:

  1. Mind as knowledge manager. Understanding of the processes of exchanging tacit and explicit knowledge will, of course, aid the KM process itself. This understanding will enhance the efficient exchange of knowledge between mind and computer—between internal and external representations.
  2. Mind as intelligence target. Understanding of the complete human processes of reasoning (explicit logical thought) and sensemaking (tacit, emotional insight) will enable more representative modeling of adversarial thought processes. This is required to understand the human mind as an intelligence target—representing perceptions, beliefs, motives, and intentions.

Previously, we have used the terms resource and asset to describe knowledge, but it is not only an object or a commodity to be managed. Knowledge can also be viewed as a dynamic, embedded in processes that lead to action. In the next section, we explore this complementary perspective of knowledge.

3.2.2 Knowledge As Process

Knowledge can also be viewed as the action, or dynamic process of creation, that proceeds from unstructured content to structured understanding. This perspective considers knowledge as action—as knowing. Because knowledge explains the basis for information, it relates static information to a dynamic reality. Knowing is uniquely tied to the creation of meaning.

Karl Weick introduced the term sensemaking to describe the tacit knowing process of retrospective rationality—the method by which individuals and organizations seek to rationally account for things by going back in time to structure events and explanations holistically. We do this to “make sense” of reality as we perceive it and to create a base of experience, shared meaning, and understanding.

To model and manage the knowing process of an organization requires attention to both of these aspects of knowledge—one perspective emphasizing cognition, the other emphasizing culture and context. The general knowing process includes four basic phases that can be described in process terms that apply to tacit and explicit knowledge, in human and computer terms, respectively.

  1. Acquisition. This process acquires knowledge by accumulating data through human observation and experience or technical sensing and measurement. The capture of e-mail discussion threads, point-of-sale transactions, or other business data, as well as digital imaging or signals analysis, are but examples of the wide diversity of acquisition methods.
  2. Maintenance. Acquired explicit data is represented in a standard form, organized, and stored for subsequent analysis and application in digital databases. Tacit knowledge is stored by humans as experience, skill, or expertise, though it can be elicited and converted to explicit form in terms of accounts, stories (rich explanations), procedures, or explanations.
  3. Transformation. The conversion of data to knowledge and knowledge from one form to another is the creative stage of KM. This knowledge-creation stage involves more complex processes like internalization, intuition, and conceptualization (for internal tacit knowledge) and correlation and analytic-synthetic reasoning (for explicit knowledge). In the next subsection, this process is described in greater detail.
  4. Transfer. The distribution of acquired and created knowledge across the enterprise is the fourth phase. Tacit distribution includes the sharing of experiences, collaboration, stories, demonstrations, and hands-on training. Explicit knowledge is distributed by mathematical, graphical, and textual representations, from magazines and textbooks to electronic media.
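To make the flow of these four phases concrete, a minimal illustrative sketch follows; it is not from the text, and the function names and toy report data are assumptions chosen only to show how acquisition, maintenance, transformation, and transfer of explicit knowledge chain together.

    # Illustrative sketch only: a toy pipeline for the four phases of the
    # explicit knowing process. All names and data are hypothetical.
    from collections import defaultdict

    def acquire():
        # Acquisition: accumulate raw observations or captured business data.
        return ["PLF activity near embassy", "PLF finance transfer observed"]

    def maintain(raw_items):
        # Maintenance: represent and organize the data in a standard, indexed form.
        store = defaultdict(list)
        for item in raw_items:
            store[item.split()[0]].append(item)   # crude indexing key
        return store

    def transform(store):
        # Transformation: correlate organized information into higher-level claims.
        return {topic: f"{len(items)} related reports on {topic}"
                for topic, items in store.items()}

    def transfer(knowledge):
        # Transfer: disseminate the created knowledge to users.
        for topic, summary in knowledge.items():
            print(f"DISSEMINATE [{topic}]: {summary}")

    transfer(transform(maintain(acquire())))

The tacit counterparts of each phase (experience, skill, storytelling) are, of course, not reducible to such a pipeline.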

This process view can be compared with the three phases of organizational knowing (focusing on culture) described by Davenport and Prusak in their text Working Knowledge [17]:

  1. Generation. Organizational networks generate knowledge by social processes of sharing, exploring, and creating tacit knowledge (stories, experiences, and concepts) and explicit knowledge (raw data, organized databases, and reports). But these networks must be properly organized for diversity of both experience and perspective and placed under appropriate stress (challenge) to perform. Dedicated cross-functional teams, appropriately supplemented by outside experts and provided a suitable challenge, are the incubators for organizational knowledge generation.
  2. Codification and coordination. Codification explicitly represents generated knowledge and the structure of that knowledge by a mapping process. The map (or ontology) of the organization’s knowledge allows individuals within the organization to locate experts (tacit knowledge holders), databases (of explicit knowledge), and tacit-explicit networks. The coordination process models the dynamic flow of knowledge within the organization and allows the creation of narratives (stories) to exchange tacit knowledge across the organization.
  3. Transfer. Knowledge is transferred within the organization as people interact; this occurs as they are mentored, temporarily exchanged, transferred, or placed in cross-functional teams to experience new perspectives, challenges, or problem-solving approaches.

3.2.3 Knowledge Creation Model

Nonaka and Takeuchi describe four modes of conversion, derived from the possible exchanges between two knowledge types (Figure 3.5):

  1. Tacit to tacit—socialization. Through social interactions, individuals within the organization exchange experiences and mental models, transferring the know-how of skills and expertise. The primary form of transfer is narrative—storytelling—in which rich context is conveyed and subjective understanding is compared, “reexperienced,” and internalized. Classroom training, simulation, observation, mentoring, and on-the-job training (practice) build experience; moreover, these activities also build teams that develop shared experience, vision, and values. The socialization process also allows consumers and producers to share tacit knowledge about needs and capabilities, respectively.
  2. Tacit to explicit—externalization. The articulation and explicit codification of tacit knowledge moves it from the internal to external. This can be done by capturing narration in writing, and then moving to the construction of metaphors, analogies, and ultimately models. Externalization is the creative mode where experience and concept are expressed in explicit concepts—and the effort to express is in itself a creative act. (This mode is found in the creative phase of writing, invention, scientific discovery, and, for the intelligence analyst, hypothesis creation.)
  3. Explicit to explicit—combination. Once explicitly represented, different objects of knowledge can be characterized, indexed, correlated, and combined. This process can be performed by humans or computers and can take on many forms. Intelligence analysts compare multiple accounts, cable reports, and intelligence reports regarding a common subject to derive a combined analysis. Military surveillance systems combine (or fuse) observations from multiple sensors and HUMINT reports to derive aggregate force estimates. Market analysts search (mine) sales databases for patterns of behavior that indicate emerging purchasing trends. Business developers combine market analyses, research and development results, and cost analyses to create strategic plans. These examples illustrate the diversity of the combination processes that combine explicit knowledge.
  4. Explicit to tacit—internalization. Individuals and organizations internalize knowledge by hands-on experience in applying the results of combination. Combined knowledge is tested, evaluated, and results in new tacit experience. New skills and expertise are developed and integrated into the tacit knowledge of individuals and teams.
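The four modes can be summarized compactly: each is determined by the source and target knowledge type. The following minimal sketch (not from the text; the names are assumptions) simply encodes that pairing.

    # Illustrative sketch: the four conversion modes fall out of the
    # (source, target) pairing of tacit and explicit knowledge. Hypothetical names.
    from enum import Enum

    class Knowledge(Enum):
        TACIT = "tacit"
        EXPLICIT = "explicit"

    CONVERSION_MODES = {
        (Knowledge.TACIT, Knowledge.TACIT): "socialization",
        (Knowledge.TACIT, Knowledge.EXPLICIT): "externalization",
        (Knowledge.EXPLICIT, Knowledge.EXPLICIT): "combination",
        (Knowledge.EXPLICIT, Knowledge.TACIT): "internalization",
    }

    # One pass around the spiral:
    spiral = [(Knowledge.TACIT, Knowledge.TACIT),
              (Knowledge.TACIT, Knowledge.EXPLICIT),
              (Knowledge.EXPLICIT, Knowledge.EXPLICIT),
              (Knowledge.EXPLICIT, Knowledge.TACIT)]
    for src, dst in spiral:
        print(f"{src.value:>8} -> {dst.value:<8}: {CONVERSION_MODES[(src, dst)]}")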

Nonaka and Takeuchi further showed how these four modes of conversion operate in an unending spiral sequence to create and transfer knowledge throughout the organization.

Organizations that have redundancy of information (in people, processes, and databases) and diversity in their makeup (also in people, processes, and databases) will enhance the ability to move along the spiral. The modes of activity benefit from a diversity of people: socialization requires some who are stronger in dialogue to elicit tacit knowledge from the team; externalization requires others who are skilled in representing knowledge in explicit forms; and internalization benefits from those who experiment, test ideas, and learn from experience, with the new concepts or hypotheses arising from combination.

Organizations can also benefit from creative chaos—changes that punctuate states of organizational equilibrium. These states include static presumptions, entrenched mindsets, and established processes that may have lost validity in a changing environment. Rather than destabilizing the organization, the injection of appropriate chaos can bring new-perspective reflection, reassessment, and renewal of purpose. Such change can restart tacit-explicit knowledge exchange, where the equilibrium has brought it to a halt.

3.3 An Intelligence Use Case Spiral

We follow a distributed crisis intelligence cell, using networked collaboration tools, through one complete cycle to illustrate the knowledge spiral. This case is deliberately chosen because it stresses the spiral (no face-to-face interaction by the necessarily distributed team, very short time to interact, the temporary nature of the team, and no common “organizational” membership), yet it clearly illustrates the phases of tacit-explicit exchange and the practical insight into actual intelligence-analysis activities provided by the model.

3.3.1 The Situation

The crisis in small but strategic Kryptania emerged rapidly. Vital national interests—security of U.S. citizens, U.S. companies and facilities, and the stability of the fledgling democratic state—were at stake. Subtle but cascading effects in the environment, economy, and political domains triggered the small political liberation front (PLF) to initiate overt acts of terrorism against U.S. citizens, facilities, and embassies in the region while seeking to overthrow the fledgling democratic government.

3.3.2 Socialization

Within 10 hours of the team formation, all members participate in an on-line SBU kickoff meeting (same-time, different-place teleconference collaboration) that introduces all members, describes the group’s intelligence charter and procedures, explains security policy, and details the use of the portal/collaboration workspace created for the team. The team leader briefs the current situation and the issues: areas of uncertainty, gaps in knowledge or collection, needs for information, and possible courses of events that must be better understood. The group is allowed time to exchange views and form their own subgroups on areas of contribution that each individual can bring to the problem. Individuals express concepts for new sources for collection and methods of analysis. In this phase, the dialogue of the team, even though not face to face, is invaluable in rapidly establishing trust and a shared vision for the critical task over the ensuing weeks of the crisis.

3.3.3 Externalization

The initial discussions lead to the creation of initial explicit models of the threat that are developed by various team members and posted on the portal for all to see.

The team collaboratively reviews and refines these models by updating new versions (annotated by contributors) and suggesting new submodels (or linking these models into supermodels). This externalization process codifies the team’s knowledge (beliefs) and speculations (to be evaluated) about the threat. Once externalized, the team can apply the analytic tools on the portal to search for data, link evidence, and construct hypothesis structures. The process also allows the team to draw on support from resources outside the team to conduct supporting collections and searches of databases for evidence to affirm, refine, or refute the models.

3.3.4 Combination

The codified models become archetypes that represent current thinking—current prototype hypotheses formed by the group about the threat (who—their makeup; why—their perceptions, beliefs, intents, and timescales; what—their resources, constraints and limitations, capacity, feasible plans, alternative courses of action, vulnerabilities). This prototype-building process requires the group to structure its arguments about the hypotheses and combine evidence to support its claims. The explicit evidence models are combined into higher level explicit explanations of threat composition, capacity, and behavioral patterns.

Initial (tentative) intelligence products are forming in this phase, and the team begins to articulate these prototype products—resulting in alternative hypotheses and even recommended courses of action.

3.3.5 Internalization

As the evidentiary and explanatory models are developed on the portal, the team members discuss (and argue) over the details, internally struggling with acceptance or rejection of the validity of the various hypotheses. Individual team members search for confirming or refuting evidence in their own areas of expertise and discuss the hypotheses with others on the team or colleagues in their domain of expertise (often expressing them in the form of stories or metaphors) to experience support or refutation. This process allows the members to further refine and develop internal belief and confidence in the predictive aspects of the models. As accumulating evidence over the ensuing days strengthens (or refutes) the hypotheses, the process continues to internalize those explanations that the team has developed that are most accurate; they also internalize confidence in the sources and collaborative processes that were most productive for this ramp-up phase of the crisis situation.

3.3.6 Socialization

As the group periodically reconvenes, the subject focuses away from “what we must do” to the evidentiary and explanatory models that have been produced. The dialogue turns from issues of startup processes to model-refinement processes. The group now socializes around a new level of the problem: Gaps in the models, new problems revealed by the models, and changes in the evolving crisis move the spiral toward new challenges to create knowledge about vulnerabilities in the PLF and supporting networks, specific locations of black propaganda creation and distribution, finances of certain funding organizations, and identification of specific operation cells within the Kryptanian government.

3.3.7 Summary

This example illustrates the emergent processes of knowledge creation over the several-day ramp-up period of a distributed crisis intelligence team.

The full spiral moved from team members socializing to exchange the tacit knowledge of the situation toward the development of explicit representations of their tacit knowledge. These explicit models allowed other supporting resources to be applied (analysts external to the group and online analytic tools) to link further evidence to the models and structure arguments for (or against) the models. As the models developed, team members discussed, challenged, and internalized their understanding of the abstractions, developing confidence and hands-on experience as they tested them against emerging reports and discussed them with team members and colleagues. The confidence and internalized understanding then led to a drive for further dialogue—initializing a second cycle of the spiral.

3.4 Taxonomy of KM

Using the fundamental tacit-explicit distinctions, and the conversion processes of socialization, externalization, internalization, and combination, we can establish a helpful taxonomy of the processes, disciplines, and technologies of the broad KM field applied to the intelligence enterprise. A basic taxonomy that categorizes the breadth of the KM field can be developed by distinguishing three areas of distinct (though very related) activities:

  1. People. The foremost area of KM emphasis is on the development of intellectual capital by people and the application of that knowledge by those people. The principal knowledge-conversion process in this area is socialization, and the focus of improvement is on human operations, training, and human collaborative processes. The basis of collaboration is human networks, known as communities of practice—sharing purpose, values, and knowledge toward a common mission. The barriers that challenge this area of KM are cultural in nature.
  2. Processes. The second KM area focuses on human-computer interaction (HCI) and the processes of externalization and internalization. Tacit-explicit knowledge conversions have required the development of tacit-explicit representation aids in the form of information visualization and analysis tools, thinking aids, and decision support systems. This area of KM focuses on the efficient networking of people and machine processes (such autonomous support processes are referred to as agents) to enable the shared reasoning between groups of people and their agents through computer networks. The barrier to achieving robustness in such KM processes is the difficulty of creating a shared context of knowledge among humans and machines.
  3. Processors. The third KM area is the technological development and implementation of computing networks and processes to enable explicit-explicit combination. Network infrastructures, components, and protocols for representing explicit knowledge are the subject of this fast-moving field. The focus of this technology area is networked computation, and the challenges to collaboration lie in the ability to sustain growth and interoperability of systems and protocols.

 

Because the KM field can also be described by the many domains of expertise (or disciplines of study and practice), we can also distinguish five distinct areas of focus that help describe the field. The first two disciplines view KM as a competence of people and emphasize making people knowledgeable:

  1. Knowledge strategists. Enterprise leaders, such as the chief knowledge officer (CKO), focus on the enterprise mission and values, defining value propositions that assign contributions of knowledge to value (i.e., financial or operational). These leaders develop business models to grow and sustain intellectual capital and to translate that capital into organizational values (e.g., financial growth or organizational performance). KM strategists develop, measure, and reengineer business processes to adapt to the external (business or world) environment.
  2. Knowledge culture developers. Knowledge culture development and sustainment is promoted by those who map organizational knowledge and then create training, learning, and sharing programs to enhance the socialization performance of the organization. This includes the cadre of people who make up the core competencies of the organization (e.g., intelligence analysis, intelligence operations, and collection management). In some organizations a chief learning officer (CLO) is designated this role to oversee enterprise human capital, just as the chief financial officer (CFO) manages (tangible) financial capital.

The next three disciplines view KM as an enterprise capability and emphasize building the infrastructure to make knowledge manageable:

  1. KM applications. Those who apply KM principles and processes to specific business applications create both processes and products (e.g., software application packages) to provide component or end-to-end services in a wide variety of areas listed in Table 3.10. Some commercial KM applications have been sufficiently modularized to allow them to be outsourced to application service providers (ASPs) [20] that “package” and provide KM services on a per-operation (transaction) basis. This allows some enterprises to focus internal KM resources on organizational tacit knowledge while outsourcing architecture, infrastructure, tools, and technology.
  2. Enterprise architecture. Architects of the enterprise integrate people, processes, and IT to implement the KM business model. The architecting process defines business use cases and process models to develop requirements for data warehouses, KM services, network infrastructures, and computation.
  3. KM technology and tools. Technologists and commercial vendors develop the hardware and software components that physically implement the enterprise. Table 3.10 provides only a brief summary of the key categories of technologies that make up this broad area that encompasses virtually all ITs.

3.5 Intelligence As Capital

We have described knowledge as a resource (or commodity) and as a process in previous sections. Another important perspective of both the resource and the process is that of the valuation of knowledge. The value (utility or usefulness) of knowledge is first and foremost quantified by its impact on the user in the real world.

the value of intelligence goes far beyond financial considerations in national and military intelligence (MI) applications. In these cases, the value of knowledge must be measured by its impact on national interests: the warning time to avert a crisis, the accuracy necessary to deliver a weapon, the completeness to back up a policy decision, or the evidential depth to support an organized criminal conviction. Knowledge, as an abstraction, has no intrinsic value—its value is measured by its impact in the real world.

In financial terms, the valuation of the intangible aspects of knowledge is referred to as capital—intellectual capital. These intangible resources include the personal knowledge, skills, processes, intellectual property, and relationships that can be leveraged to produce assets of equal or greater importance than other organizational resources (land, labor, and capital).

What is this capital value in our representative business? It is comprised of four intangible components:

  1. Customer capital. This is the value of established relationships with customers, such as trust and reputation for quality.

Intelligence tradecraft recognizes this form of capital in the form of credibility with consumers—“the ability to speak to an issue with sufficient authority to be believed and relied upon by the intended audience.”

  2. Innovation capital. Innovation in the form of unique strategies, new concepts, processes, and products based on unique experience forms this second category of capital. In intelligence, new and novel sources and methods for unique problems form this component of intellectual capital.
  3. Process capital. Methodologies and systems or infrastructure (also called structural capital) that are applied by the organization make up its process capital. The processes of collection sources and both collection and analytic methods form a large portion of the intelligence organization’s process (and innovation) capital; they are often fragile (once discovered, they may be forever lost) and are therefore carefully protected.
  4. Human capital. The people, individually and in virtual organizations, comprise the human capital of the organization. Their collective tacit knowledge—expressed as dedication, experience, skill, expertise, and insight—forms this critical intangible resource.

O’Dell and Grayson have defined three fundamental categories of value propositions in If Only We Knew What We Know [23]:

  1. Operational excellence. These value propositions seek to boost revenue by reducing the cost of operations through increased operating efficiencies and productivity. These propositions are associated with business process reengineering (BPR), and even business transformation using electronic commerce methods to revolutionize the operational process. These efforts contribute operational value by raising performance in the operational value chain.
  2. Product-to-market excellence. These propositions value the reduction in the time to market from product inception to product launch. Efforts that achieve these values ensure that new ideas move to development and then to product by accelerating the product development process. This value emphasizes the transformation of the business itself (as explained in Section 1.1).
  3. Customer intimacy. These values seek to increase customer loyalty, customer retention, and customer base expansion by increasing intimacy (understanding, access, trust, and service anticipation) with customers. Actions that accumulate and analyze customer data to reduce selling cost while increasing customer satisfaction contribute to this proposition.

For each value proposition, specific impact measures must be defined to quantify the degree to which the value is achieved. These measures quantify the benefits and utility delivered to stakeholders. Using these measures, the value added by KM processes can be observed along the sequential processes in the business operation. This sequence of processes forms a value chain that adds value from raw materials to delivered product.

Different kinds of measures are recommended for organizations in transition from legacy business models. During periods of change, three phases are recognized [24]. In the first phase, users (i.e., consumers, collection managers, and analysts) must be convinced of the benefits of the new approach, and the measures include metrics as simple as the number of consumers taking training and beginning to use services. In the crossover phase, when users begin to transition to the systems, measures change to usage metrics. Once the system approaches steady-state use, financial-benefit measures are applied. Numerous methods have been defined and applied to describe and quantify economic value, including:

  1. Economic value added (EVA) subtracts cost of capital invested from net operating profit;
  2. Portfolio management approaches treat IT projects as individual investments, computing risks, yields, and benefits for each component of the enterprise portfolio;
  3. Knowledge capital is an aggregate measure of management value added (by knowledge) divided by the price of capital [25];
  4. Intangible asset monitor (IAM) [26] computes value in four categories—tangible capital, intangible human competencies, intangible internal structure, and intangible external structure [27].
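As a purely numerical illustration of the first method, a minimal sketch follows; the profit, capital, and rate figures are invented, and the formula assumes the common reading of EVA as net operating profit less a capital charge (invested capital times the cost-of-capital rate).

    # Hypothetical EVA illustration; all figures are invented for the example.
    def economic_value_added(net_operating_profit, capital_invested, cost_of_capital_rate):
        capital_charge = capital_invested * cost_of_capital_rate
        return net_operating_profit - capital_charge

    # e.g., $12M operating profit, $80M invested capital, 10% cost of capital
    print(economic_value_added(12_000_000, 80_000_000, 0.10))   # 4000000.0 -> $4M of value added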

The four views of the balanced scorecard (BSC) not only provide a means of “balancing” the measurement of the major causes and effects of organizational performance but also provide a framework for modeling the organization.

3.6 Intelligence Business Strategy and Models

The commercial community has explored a wide range of business models that apply KM (in the widest sense) to achieve key business objectives. These objectives include enhancing customer service to provide long-term customer satisfaction and retention, expanding access to customers (introducing new products and services, expanding to new markets), increasing efficiency in operations (reduced cost of operations), and introducing new network-based goods and services (eCommerce or eBusiness). All of these objectives can be described by value propositions that couple with business financial performance.

The strategies that leverage KM to achieve these objectives fall into two basic categories. The first emphasizes the use of analysis to understand the value chain from first customer contact to delivery. Understanding the value added to the customer by the transactions (as well as delivered goods and services) allows the producer to increase value to the customer. Values that may be added to intelligence consumers by KM include:

• Service values. Greater value in services is provided to policymakers by anticipating their intelligence needs, earning greater user trust in the accuracy and focus of estimates and warnings, and providing more timely delivery of intelligence. Service value is also increased as producers personalize (tailor) and adapt services to the consumer’s interests (needs) as they change.

• Intelligence product values. The value of intelligence products is increased when greater value is “added” by improving accuracy, providing deeper and more robust rationale, focusing conclusions, and building increased consumer confidence (over time).

The second category of strategies (prompted by the eBusiness revolution) seeks to transform the value chain by the introduction of electronic transactions between the customer and retailer. These strategies use network-based advertising, ordering, and even delivery (for information services like banking, investment, and news) to reduce the “friction” of physical-world retailer-customer transactions.

These strategies introduce several benefits—all applicable to intelligence:

  • Disintermediation. This is the elimination of intermediate processes and entities between the customer and producer to reduce transaction friction. This friction adds cost and increases the difficulty for buyers to locate sellers (cost of advertising), for buyers to evaluate products (cost of travel and shopping), for buyers to purchase products (cost of sales), and for sellers to maintain local inventories (cost of delivery). The elimination of “middlemen” (e.g., wholesalers, distributors, and local retailers) in eRetailers such as Amazon.com has reduced transaction and intermediate costs and allowed direct transaction and delivery from producer to customer with only the eRetailer in between. The effect of disintermediation in intelligence is to give users greater and more immediate access to intelligence products (via networks such as the U.S. Intelink) and to analysis services via intelligence portals that span all sources of intelligence.
  • Infomediation. The effect of disintermediation has introduced the role of the information broker (infomediary) between customer and seller, providing navigation services (e.g., shopping agents or auctioning and negotiating agents) that act on the behalf of customers [31]. Intelligence communities are moving toward greater cross-functional collection management and analysis, reducing the stovepiped organization of intelligence by collection disciplines (i.e., imagery, signals, and human sources). As this happens, the traditional analysis role requires a higher level of infomediation and greater automation because the analyst is expected (by consumers) to become a broker across a wider range of intelligence sources (including closed and open sources).
  • Customer aggregation. The networking of customers to producers allows rapid analysis of customer actions (e.g., queries for information, browsing through catalogs of products, and purchasing decisions based on information). This analysis enables the producers to better understand customers, aggregate their behavior patterns, and react to (and perhaps anticipate) customer needs. Commercial businesses use these capabilities to measure individual customer patterns and mass market trends to more effectively personalize and target sales and new product developments. Intelligence producers likewise are enabled to analyze warfighter and policymaker needs and uses of intelligence to adapt and tailor products and services to changing security threats.

 

These value chain transformation strategies have produced a simple taxonomy that divides eBusiness models into four categories by the level of transaction between businesses and customers:

  1. Business to business (B2B). The large volume of trade between businesses (e.g., suppliers and manufacturers) has been enhanced by network-based transactions (releases of specifications, requests for quotations, and bid responses), reducing the friction between suppliers and producers. High-volume manufacturing industries such as the automakers are implementing B2B models to increase competition among suppliers and reduce bid-quote-purchase transaction friction.
  2. Business to customer (B2C). Direct networked outreach from producer to consumer has enabled the personal computer (e.g., Dell Computer) and book distribution (e.g., Amazon.com) industries to disintermediate local retailers and reach out on a global scale directly to customers. Similarly, intelligence products are now being delivered (pushed) to consumers on secure electronic networks, via subscription and express order services, analogous to the B2C model.
  3. Customer to business (C2B). Networks also allow customers to reach out to a wider range of businesses to gain greater competitive advantage in seeking products and services.

the introduction of secure intelligence networks and on-line intelligence product libraries (e.g., common operating picture and map and imagery libraries) allows consumers to pull intelligence from a broader range of sources. (This model enables even greater competition between source providers and provides a means of measuring some aspects of intelligence utility based on actual use of product types.)

  4. Customer to customer (C2C). The C2C model automates the mediation process between consumers, enabling consumers to locate those with similar purchasing-selling interests.

3.7 Intelligence Enterprise Architecture and Applications

Just like commercial businesses, intelligence enterprises:

  • Measure and report to stakeholders the returns on investment. These returns are measured in terms of intelligence performance (i.e., knowledge provided, accuracy and timeliness of delivery, and completeness and sufficiency for decision making) and outcomes (i.e., effects of warnings provided, results of decisions based on knowledge delivered, and utility to set long-term policies).
  • Service customers, the intelligence consumers. This is done by providing goods (intelligence products such as reports, warnings, analyses, and target folders) and services (directed collections and analyses or tailored portals on intelligence subjects pertinent to the consumers).
  • Require intimate understanding of business operations and must adapt those operations to the changing threat environment, just as businesses must adapt to changing markets.
  • Manage a supply chain that involves the anticipation of future needs of customers, the adjustment of the delivery of raw materials (intelligence collections), the production of custom products to a diverse customer base, and the delivery of products to customers just in time [33].

3.7.1 Customer Relationship Management

CRM processes that build and maintain customer loyalty focus on managing the relationship between provider and consumer. The short-term goal is customer satisfaction; the long-term goal is loyalty. Intelligence CRM seeks to provide intelligence content to consumers that anticipates their needs, focuses on the specific information that supports their decision making, and provides drill down to supporting rationale and data behind all conclusions. In order to accomplish this, the consumer-producer relationship must be fully described in models that include:

  • Consumer needs and uses of intelligence—applications of intelligence for decision making, key areas of customer uncertainty and lack of knowledge, and specific impact of intelligence on the consumer’s decision making;
  • Consumer transactions—the specific actions that occur between the enterprise and intelligence consumers, including urgent requests, subscriptions (standing orders) for information, incremental and final report deliveries, requests for clarifications, and issuances of alerts.
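A minimal sketch of how these two model elements might be captured as data records; the field names and sample entries are assumptions for illustration, not a schema from the text.

    # Hypothetical data-structure sketch for the two CRM model elements above.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import List

    @dataclass
    class ConsumerNeed:
        consumer: str
        decision_supported: str        # the decision this intelligence informs
        knowledge_gaps: List[str]      # key areas of uncertainty
        impact: str                    # how the intelligence affects the decision

    @dataclass
    class ConsumerTransaction:
        consumer: str
        kind: str                      # e.g., "urgent request", "subscription", "alert"
        subject: str
        timestamp: datetime = field(default_factory=datetime.now)

    need = ConsumerNeed(consumer="regional policymaker",
                        decision_supported="evacuation of U.S. personnel",
                        knowledge_gaps=["PLF intentions", "embassy threat level"],
                        impact="timing of the evacuation order")
    tx = ConsumerTransaction(consumer="regional policymaker",
                             kind="urgent request",
                             subject="PLF threat to embassy")
    print(need, tx, sep="\n")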

CRM offers the potential to personalize intelligence delivery to individual decision makers while tracking their changing interests as they browse subject offerings and issue requests through their own custom portals.

3.7.2 Supply Chain Management

The SCM function monitors and controls the flow of the supply chain, providing internal control of planning, scheduling, inventory control, processing, and delivery.

SCM is the core of B2B business models, seeking to integrate front-end suppliers into an extended supply chain that optimizes the entire production process to slash inventory levels, improve on-time delivery, and reduce the order-to-delivery (and payment) cycle time. In addition to throughput efficiency, the B2B models seek to aggregate orders to leverage the supply chain to gain greater purchasing power, translating larger orders to reduced prices. The key impact measures sought by SCM implementations include:

  • Cash-to-cash cycle time (time from order placement to delivery/payment);
  • Delivery performance (percentage of orders fulfilled on or before request date);
  • Initial fill rate (percentage of orders shipped in supplier’s first shipment);
  • Initial order lead time (supplier response time to fulfill order);
  • On-time receipt performance (percentage of supplier orders received on time).
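A minimal sketch (with invented order records) of how two of these measures, delivery performance and order lead time, could be computed; nothing here comes from the text.

    # Hypothetical order records; structure and dates invented for illustration.
    from datetime import date

    orders = [
        {"placed": date(2002, 3, 1), "requested": date(2002, 3, 10), "delivered": date(2002, 3, 9)},
        {"placed": date(2002, 3, 2), "requested": date(2002, 3, 12), "delivered": date(2002, 3, 15)},
        {"placed": date(2002, 3, 5), "requested": date(2002, 3, 20), "delivered": date(2002, 3, 18)},
    ]

    # Delivery performance: percentage of orders fulfilled on or before the request date.
    on_time = sum(o["delivered"] <= o["requested"] for o in orders)
    delivery_performance = 100.0 * on_time / len(orders)

    # Order lead time: average days from order placement to delivery.
    lead_time = sum((o["delivered"] - o["placed"]).days for o in orders) / len(orders)

    print(f"Delivery performance: {delivery_performance:.0f}%")    # 67%
    print(f"Average order lead time: {lead_time:.1f} days")        # 11.3 days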

Like the commercial manufacturer, the intelligence enterprise operates a supply chain that “manufactures” all-source intelligence products from raw sources of intelligence data and relies on single-source suppliers (i.e., imagery, signals, or human reports).

3.7.3 Business Intelligence

The BI function provides all levels of the organization with relevant information on internal operations and the external business environment (via marketing) to be exploited (analyzed and applied) to gain a competitive advantage. The BI function serves to provide strategic insight into overall enterprise operations based on ready access to operating data.

The emphasis of BI is on explicit data capture, storage, and analysis; through the 1990s, BI was the predominant driver for the implementation of corporate data warehouses, and the development of online analytic processing (OLAP) tools. (BI preceded KM concepts, and the subsequent introduction of broader KM concepts added the complementary need for capture and analysis of tacit and explicit knowledge throughout the enterprise.)

The intelligence BI function should collect and analyze real-time workflow data to provide answers to questions such as:

  • What are the relative volumes of requests (for intelligence) by type?
  • What is the “cost” of each category of intelligence product?
  • What are the relative transaction costs of each stage in the supply chain?
  • What are the trends in usage (by consumers) of all forms of intelligence over the past 12 months? Over the past 6 months? Over the past week?
  • Which single sources of incoming intelligence (e.g., SIGINT, IMINT, and MASINT) have greatest utility in all-source products, by product category?

Like its commercial counterparts, the intelligence BI function should not only track operational flows but also the history of operational decisions—and their effects.

Both operational and decision-making data should be able to be conveniently navigated and analyzed to provide timely operational insight to senior leadership who often ask the question, “What is the cost of a pound of intelligence?”

3.8 Summary

KM provides a strategy and organizational discipline for integrating people, processes, and IT into an effective enterprise.

as noted by Tom Davenport, a leading observer of the discipline:

The first generation of knowledge management within enterprises emphasized the “supply side” of knowledge: acquisition, storage, and dissemination of business operations and customer data. In this phase knowledge was treated much like physical resources and implementation approaches focused on building “warehouses” and “channels” for supply processing and distribution. This phase paid great attention to systems, technology and infrastructure; the focus was on acquiring, accumulating and distributing explicit knowledge in the enterprise [35].

Second generation KM emphasis has turned attention to the demand side of the knowledge economy—seeking to identify value in the collected data to allow the enterprise to add value from the knowledge base, enhance the knowledge spiral, and accelerate innovation. This generation has brought more focus to people (the organization) and the value of tacit knowledge; the issues of sustainable knowledge creation and dissipation throughout the organization are emphasized in this phase. The attention in this generation has moved from understanding knowledge systems to understanding knowledge workers. The third generation to come may be that of KM innovation, in which the knowledge process is viewed as a complete life cycle within the organization, and the emphasis will turn to revolutionizing the organization and reducing the knowledge cycle time to adapt to an ever-changing world environment.

 

4

The Knowledge-Based Intelligence Organization

National intelligence organizations following World War II were characterized by compartmentalization (insulated specialization for security purposes) that required individual learning, critical analytic thinking, and problem solving by small, specialized teams working in parallel (stovepipes or silos). These stovepipes were organized under hierarchical organizations that exercised central control. The approach was appropriate for the centralized organizations and bipolar security problems of the relatively static Cold War, but the global breadth and rapid dynamics of twenty-first century intelligence problems require more agile networked organizations that apply organization-wide collaboration to replace the compartmentalization of the past. Founded on the virtues of integrity and trust, the disciplines of organizational collaboration, learning, and problem solving must be developed to support distributed intelligence collection, analysis, and production.

This chapter focuses on the most critical factor in organizational knowledge creation—the people, their values, and organizational disciplines. The chapter is structured to proceed from foundational virtues, structures, and communities of practice (Section 4.1) to the four organizational disciplines that support the knowledge creation process: learning, collaboration, problem solving, and best practices—called intelligence tradecraft.

the people perspective of KM presented in this chapter can be contrasted with the process and technology perspectives (Table 4.1) in five ways:

  1. Enterprise focus. The focus is on the values, virtues, and mission shared by the people in the organization.
  2. Knowledge transaction. Socialization, the sharing of tacit knowledge by methods such as story and dialogue, is the essential mode of transaction between people for collective learning, or collaboration to solve problems.
  3. The basis for human collaboration lies in shared purpose, values, and a common trust.
  4. A culture of trust develops communities that share their best practices and experiences; collaborative problem solving enables the growth of the trusting culture.
  5. The greatest barrier to collaboration is the inability of an organization’s culture to transform and embrace the sharing of values, virtues, and disciplines.

The numerous implementation failures of early-generation KM enterprises have most often occurred because organizations have not embraced the new business models introduced, nor have they used the new systems to collaborate. As a result, these KM implementations have failed to deliver the intellectual capital promised. These cases were generally not failures of process, technology, or infrastructure; rather, they were failures of organizational culture change to embrace the new organizational model. In particular, they failed to address the cultural barriers to organizational knowledge sharing, learning, and problem solving.

Numerous texts have examined these implementation challenges, and all have emphasized that organizational transformation must precede KM system implementations.

4.1 Virtues and Disciplines of the Knowledge-Based Organization

At the core of an agile knowledge-based intelligence organization is the ability to sustain the creation of organizational knowledge through learning and collaboration. Underlying effective collaboration are values and virtues that are shared by all. The U.S. IC, recognizing the need for such agility as its threat environment changes, has adopted knowledge-based organizational goals as the first two of five objectives in its Strategic Intent:

  • Unify the community through collaborative processes. This includes the implementation of training and business processes to develop an inter-agency collaborative culture and the deployment of supporting technologies.
  • Invest in people and knowledge. This area includes the assessment of customer needs and the conduct of events (training, exercises, experiments, and conferences/seminars) to develop communities of practice and build expertise in the staff to meet those needs. Supporting infrastructure developments include the integration of collaborative networks and shared knowledge bases.

Clearly identified organizational propositions of values and virtues (e.g., integrity and trust) shared by all enable knowledge sharing—and form the basis for organizational learning, collaboration, problem solving, and best-practices (intelligence tradecraft) development introduced in this chapter. This is a necessary precedent before KM infrastructure and technology is introduced to the organization. The intensely human values, virtues, and disciplines introduced in the following sections are essential and foundational to building an intelligence organization whose business processes are based on the value of shared knowledge.

4.1.1 Establishing Organizational Values and Virtues

The foundation of all organizational discipline (ordered, self-controlled, and structured behavior) is a common purpose and set of values shared by all. For an organization to pursue a common purpose, the individual members must conform to a common standard and a common set of ideals for group conduct.

The knowledge-based intelligence organization is a society that requires virtuous behavior of its members to enable collaboration. Dorothy Leonard-Barton, in Wellsprings of Knowledge, distinguishes two categories of values: those that relate to basic human nature and those that relate to performance of the task. In the first category are big V values (also called moral virtues) that include basic human traits such as personal integrity (consistency, honesty, and reliability), truthfulness, and trustworthiness. For the knowledge worker’s task, the second category (of little v values) includes those values long sought by philosophers to arrive at knowledge or justify true belief. Some epistemologies define intellectual virtue as the foundation of knowledge: Knowledge is a state of belief arising out of intellectual virtue. Intellectual virtues include organizational conformity to a standard of right conduct in the exchange of ideas, in reasoning and in judgment.

Organizational integrity is dependent upon the individual integrity of all contributors—as participants cooperate and collaborate around a central purpose, the virtue of trust (built upon the shared trustworthiness of individuals) opens the doors of sharing and exchange. Essential to this process is the development of networks of conversations that are built on communication transactions (e.g., assertions, declarations, queries, or offers) that are ultimately based in personal commitments. Ultimately, the virtue of organizational wisdom—seeking the highest goal by the best means—must be embraced by the entire organization recognizing a common purpose.

Trust and cooperative knowledge sharing must also be complemented by an objective openness. Groups that place consensus over objectivity become subject to certain dangerous decision-making errors.

4.1.2 Mapping the Structures of Organizational Knowledge

Every organization has a structure and flow of knowledge—a knowledge environment or ecology (emphasizing the self-organizing and balancing characteristics of organizational knowledge networks). The overall process of studying and characterizing this environment is referred to as mapping—explicitly representing the network of nodes (competencies) and links (relationships, knowledge flow paths) within the organization. The fundamental role of KM organizational analysis is the mapping of knowledge within an existing organization.

the knowledge mapping identifies the intangible tacit assets of the organization. The mapping process is conducted by a variety of means: passive observation (where the analyst works within the community), active interviewing, formal questionnaires, and analysis. In this ethnographic research activity, the mapping analyst seeks to understand the unspoken, informal flows and sources of knowledge in the day-to-day operations of the organization. The five stages of mapping (Figure 4.1) must be conducted in partnership with the owners, users, and KM implementers.

The first phase is the definition of the formal organization chart—the formal flows of authority, command, reports, intranet collaboration, and information systems reporting. In this phase, the boundaries, or focus, of mapping interest are established. The second phase audits (identifies, enumerates, and quantifies as appropriate) the following characteristics of the organization:

  1. Knowledge sources—the people and systems that produce and articulate knowledge in the form of conversation, developed skills, reports, implemented (but perhaps not documented) processes, and databases.
  2. Knowledge flowpaths—the flows of knowledge, tacit and explicit, formal and informal. These paths can be identified by analyzing the transactions between people and systems; the participants in the transactions provide insight into the organizational network structure by which knowledge is created, stored, and applied. The analysis must distinguish between seekers and providers of knowledge and their relationships (e.g., trust, shared understanding, or cultural compatibility) and mutual benefits in the transaction.
  3. Boundaries and constraints—the boundaries and barriers that control, guide, or constrict the creation and flow of knowledge. These may include cultural, political (policy), personal, or electronic system characteristics or incompatibilities.
  4. Knowledge repositories—the means of maintaining organizational knowledge, including tacit repositories (e.g., communities of experts that share experience about a common practice) and explicit storage (e.g., legacy hardcopy reports in library holdings, databases, or data warehouses).

Once audited, the audit data is organized in the third phase by clustering the categories of knowledge, nodes (sources and sinks), and links unique to the organization. The structure of this organization, usually a table or a spreadsheet, provides insight into the categories of knowledge, transactions, and flow paths; it provides a format to review with organization members to convey initial results, make corrections, and refine the audit. This phase also provides the foundation for quantifying the intellectual capital of the organization, and the audit categories should follow the categories of the intellectual capital accounting method adopted.

The fourth phase, mapping, transforms the organized data into a structure (often, but not necessarily, graphical) that explicitly identifies the current knowledge network. Explicit and tacit knowledge flows and repositories are distinguished, as well as the social networks that support them. This process of visualizing the structure may also identify clusters of expertise, gaps in the flows, chokepoints, as well as areas of best (and worst) practices within the network.
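A minimal sketch of the kind of node-and-link structure such a map produces; the node kinds, link attributes, and names are assumptions for illustration, not the book's notation.

    # Hypothetical knowledge-map structure: nodes are sources and repositories,
    # links are knowledge flow paths. All names are invented.
    knowledge_map = {
        "nodes": {
            "imagery_analysts": {"kind": "community", "knowledge": "tacit"},
            "all_source_cell":  {"kind": "team",      "knowledge": "tacit"},
            "report_warehouse": {"kind": "database",  "knowledge": "explicit"},
        },
        "links": [
            {"src": "imagery_analysts", "dst": "all_source_cell",
             "flow": "informal dialogue", "knowledge": "tacit"},
            {"src": "all_source_cell", "dst": "report_warehouse",
             "flow": "finished reports", "knowledge": "explicit"},
        ],
    }

    # Simple audit query: who feeds knowledge to the all-source cell?
    providers = [l["src"] for l in knowledge_map["links"] if l["dst"] == "all_source_cell"]
    print(providers)   # ['imagery_analysts']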

Once the organization’s current structure is understood, the structure can be compared to similar structures in other organizations by benchmarking in the final phase. Benchmarking is the process of identifying, learning, and adapting outstanding practices and processes from any organization, anywhere in the world, to help an organization improve its performance. Benchmarking gathers the tacit knowledge—the know-how, judgments, and enablers—that explicit knowledge often misses. This process allows the exchange of quantitative performance data and qualitative best-practice knowledge to be shared and compared with similar organizations to explore areas for potential improvement and potential risks.

Because the repository provides a pointer to the originating authors, it also provides critical pointers to people, or a directory that identifies people within the agency with experience and expertise by subject.

4.1.3 Identifying Communities of Organizational Practice

A critical result of any mapping analysis is the identification of the clusters of individuals who constitute formal and informal groups that create, share, and maintain tacit knowledge on subjects of common interest.

The functional workgroup benefits from stability, established responsibilities, processes and storage, and high potential for sharing. Functional workgroups provide the high-volume knowledge production of the organization but lack the agility to respond to projects and crises.

Cross-functional project teams are shorter term project groups that can be formed rapidly (and dismissed just as rapidly) to solve special intelligence problems, maintain special surveillance watches, prepare for threats, or respond to crises. These groups include individuals from all appropriate functional disciplines—with the diversity often characteristic of the makeup of the larger organization, but on a small scale—with reach back to expertise in functional departments.

KM researchers have recognized that such organized communities provide a significant contribution to organizational learning by providing a forum for:

  • Sharing current problems and issues;
  • Capturing tacit experience and building repositories of best practices;
  • Linking individuals with similar problems, knowledge, and experience;
  • Mentoring new entrants to the community and other interested parties.

Because participation in communities of practice is based on individual interest, not organizational assignment, these communities may extend beyond the duration of temporary assignments and cut across organizational boundaries.

The activities of working, learning, and innovating have traditionally been treated as independent (and conflicting) activities performed in the office, in the classroom, and in the lab. However, studies by John Seely Brown, chief scientist of Xerox PARC, have indicated that once these activities are unified in communities of practice, they have the potential to significantly enhance knowledge transfer and creation.

4.1.4 Initiating KM Projects

The knowledge mapping and benchmarking process must precede implementation of KM initiatives, forming the understanding of current competencies and processes and the baseline for measuring any benefits of change. KM implementation plans within intelligence organizations generally consider four components, framed by the kind of knowledge being addressed and the areas of investment in KM initiatives:

  1. Organizational competencies. The first area includes assessment of workforce competencies and forms the basis of an intellectual capital audit of human capital. This area also includes the capture of best practices (the intelligence business processes, or tradecraft) and the development of core competencies through training and education.
  2. Social collaboration. Initiatives in this area reinforce established face-to-face communities of practice and develop new communities. These activities enhance the socialization process through meetings and media (e.g., newsletters, reports, and directories).
  3. KM networks. Infrastructure initiatives implement networks (e.g., corporate intranets) and processes (e.g., databases, groupware, applications, and analytic tools) to provide for the capture and exchange of explicit knowledge.
  4. Virtual collaboration. The emphasis in this area is applying technology to create connectivity among and between communities of practice. Intranets and collaboration groupware (discussed in Section 4.3.2) enable collaboration at different times and places for virtual teams—and provide the ability to identify and introduce communities with similar interests that may be unaware of each other.

4.1.5 Communicating Tacit Knowledge by Storytelling

The KM community has recognized the strength of narrative communication—dialogue and storytelling—to communicate the values, emotion (feelings, passion), and sense of immersed experience that make up personalized, tacit knowledge.

 

The introduction of KM initiatives can bring significant organizational change because it may require cultural transitions in several areas:

  • Changes in purpose, values, and collaborative virtues;
  • Construction of new social networks of trust and communication;
  • Organizational structure changes (networks replace hierarchies);
  • Business process agility, resulting in a new culture of continual change (training to adopt new procedures and to create new products).

All of these changes require participation by the workforce and the communication of tacit knowledge across the organization.

Storytelling provides a complement to abstract, analytical thinking and communication, allowing humans to share experience, insight, and issues (e.g., unarticulated concerns about evidence expressed as “negative feelings,” or general “impressions” about repeated events not yet explicitly defined as threat patterns).

The organic school of KM that applies storytelling to cultural transformation emphasizes a human behavioral approach to organizational socialization, accepting the organization as a complex ecology that may be changed in a large way by small effects.

These effects include the use of a powerful, effective story that communicates in a way that spreads credible tacit knowledge across the entire organization.

This school classifies tacit knowledge into artifacts, skills, heuristics, experience, and natural talents (the so-called ASHEN classification of tacit knowledge) and categorizes an organization’s tacit knowledge in these classes to understand the flow within informal communities.

Nurturing informal sharing within secure communities of practice and distinguishing such sharing from formal sharing (e.g., shared data, best practices, or eLearning) enables the rich exchange of tacit knowledge when creative ideas are fragile and emergent.

4.2 Organizational Learning

Senge asserted that the fundamental distinction between traditional controlling organizations and adaptive self-learning organizations is a set of five key disciplines, including both virtues (commitment to personal and team learning, vision sharing, and organizational trust) and skills (developing holistic thinking, team learning, and tacit mental model sharing). Senge’s core disciplines, moving from individual to organizational disciplines, included:

• Personal mastery. Individuals must be committed to lifelong learning toward the end of personal and organization growth. The desire to learn must be to seek a clarification of one’s personal vision and role within the organization.

• Systems thinking. Senge emphasized holistic thinking, the approach for high-level study of life situations as complex systems. An element of learning is the ability to study interrelationships within complex dynamic systems and explore and learn to recognize high-level patterns of emergent behavior.

• Mental models. Senge recognized the importance of tacit knowledge (mental, rather than explicit, models) and its communication through the process of socialization. The learning organization builds shared mental models by sharing tacit knowledge in the storytelling process and the planning process. Senge emphasized planning as a tacit-knowledge sharing process that causes individuals to envision, articulate, and share solutions—creating a common understanding of goals, issues, alternatives, and solutions.

• Shared vision. The organization that shares a collective aspiration must learn to link together personal visions without conflicts or competition, creating a shared commitment to a common organizational goal set.

• Team learning. Finally, a learning organization acknowledges and understands the diversity of its makeup—and adapts its behaviors, patterns of interaction, and dialogue to enable growth in personal and organizational knowledge.

It is important, here, to distinguish the kind of transformational learning that Senge was referring to (which brings cultural change across an entire organization), from the smaller scale group learning that takes place when an intelligence team or cell conducts a long-term study or must rapidly “get up to speed” on a new subject or crisis.

4.2.1 Defining and Measuring Learning

The process of group learning and personal mastery requires the development of both reasoning and emotional skills. The level of learning achievement can be assessed by the degree to which those skills have been acquired.

The taxonomy of cognitive and affective skills can be related to explicit and tacit knowledge categories, respectively, to provide a helpful scale for measuring the level of knowledge achieved by an individual or group on a particular subject.

4.2.2 Organizational Knowledge Maturity Measurement

The goal of organizational learning is the development of maturity at the organizational level—a measure of the state of an organization’s knowledge about its domain of operations and its ability to continuously apply that knowledge to increase corporate value to achieve business goals.

The Carnegie Mellon University Software Engineering Institute has defined the People Capability Maturity Model® (P-CMM®), which distinguishes five levels of organizational maturity that can be measured to assess and quantify the maturity of the workforce and its organizational KM performance. The P-CMM® framework can be applied, for example, to an intelligence organization’s analytic unit to measure current maturity and to develop a strategy for increasing to higher levels of performance. The levels are successive plateaus of practice, each building on the preceding foundation.
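As a minimal illustration of working with maturity levels, the sketch below encodes a five-level scale and computes how many plateaus separate a unit’s current and target levels. The level names (Initial, Managed, Defined, Predictable, Optimizing) come from the published People CMM rather than the text above, and the gap function is an invented convenience.

# Minimal sketch: recording a unit's assessed P-CMM maturity level.
# Level names follow the published People CMM; they are not enumerated in the
# text above, and the gap calculation is illustrative only.
from enum import IntEnum

class PCMMLevel(IntEnum):
    INITIAL = 1
    MANAGED = 2
    DEFINED = 3
    PREDICTABLE = 4
    OPTIMIZING = 5

def maturity_gap(current: PCMMLevel, target: PCMMLevel) -> int:
    """Number of successive plateaus the unit still has to climb."""
    return max(0, target - current)

print(maturity_gap(PCMMLevel.MANAGED, PCMMLevel.PREDICTABLE))  # -> 2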

4.2.3 Learning Modes

4.2.3.1 Informal Learning

We gain experience by informal modes of learning on the job alone, with mentors, team members, or while mentoring others. The methods of informal learning are as broad as the methods of exchanging knowledge introduced in the last chapter. But the essence of the learning organization is the ability to translate what has been learned into changed organizational behavior. David Garvin has identified five fundamental organizational methodologies that are essential to implementing the feedback from learning to change; all have direct application in an intelligence organization.

  1. Systematic problem solving. Organizations require a clearly defined methodology for describing and solving problems, and then for implementing the solutions across the organization. Methods for acquiring and analyzing data, synthesizing hypotheses, and testing new ideas must be understood by all to permit collaborative problem solving. The process must also allow for the communication of lessons learned and best practices developed (the intelligence tradecraft) across the organization.
  2. Experimentation. As the external environment changes, the organization must be enabled to explore changes in the intelligence process. This is done by conducting experiments that take excursions from the normal processes to attack new problems and evaluate alternative tools and methods, data sources, or technologies. A formal policy to encourage experimentation, with the acknowledgment that some experiments will fail, allows new ideas to be tested, adapted, and adopted in the normal course of business, not as special exceptions. Experimentation can be performed within ongoing programs (e.g., use of new analytic tools by an intelligence cell) or in demonstration programs dedicated to exploring entirely new ways of conducting analysis (e.g., the creation of a dedicated Web-based pilot project independent of normal operations and dedicated to a particular intelligence subject domain).
  3. Internal experience. As collaborating teams solve a diversity of intelligence problems, experimenting with new sources and methods, the lessons that are learned must be exchanged and applied across the organization. This process of explicitly codifying lessons learned and making them widely available for others to adopt seems trivial, but in practice requires significant organizational discipline. One of the great values of communities of common practice is their informal exchange of lessons learned; organizations need such communities and must support formal methods that reach beyond these communities. Learning organizations take the time to elicit the lessons from project teams and explicitly record (index and store) them for access and application across the organization. Such databases allow users to locate teams with similar problems and lessons learned from experimentation, such as approaches that succeeded and failed, expected performance levels, and best data sources and methods.
  4. External sources of comparison. While the lessons learned just described apply to self-learning, intelligence organizations must look to external sources (in the commercial world, academia, and other cooperating intelligence organizations) to gain different perspectives and experiences not possible within their own organizations. A wide variety of methods can be employed to secure knowledge from external perspectives, such as making acquisitions (in the business world), establishing strategic relationships, using consultants, and forming consortia. The process of sharing, then critically comparing, qualitative and quantitative data about processes and performance across organizations (or units within a large organization) enables leaders and process owners to objectively review the relative effectiveness of alternative approaches. Benchmarking is the process of improving performance by continuously identifying, understanding, and adapting outstanding practices and processes found inside and outside the organization [23]. The benchmarking process is an analytic process that requires compared processes to be modeled, quantitatively measured, deeply understood, and objectively evaluated. The insight gained is an understanding of how best performance is achieved; the knowledge is then leveraged to predict the impact of improvements on overall organizational performance.
  5. Transferring knowledge. Finally, an intelligence organization must develop the means to transfer people (tacit transfer of skills, experience, and passion by rotation, mentoring, and integrating process teams) and processes (explicit transfer of data, information, business processes on networks) within the organization. In Working Knowledge [24], Davenport and Prusak point out that spontaneous, unstructured knowledge exchange (e.g., discussions at the water cooler, exchanges among informal communities of interest, and discussions at periodic knowledge fairs) is vital to an organization’s success, and the organization must adopt strategies to encourage such sharing.

4.2.3.2 Formal Learning

In addition to informal learning, formal modes provide the classical introduction to subject-matter knowledge.

Information technologies have enabled four distinct learning modes, defined by distinguishing both the time and space of interaction between the learner and the instructor (a minimal sketch of this distinction follows the list below).

  1. Residential learning (RL). Traditional residential learning places the students and instructor in the physical classroom at the same time and place. This proximity allows direct interaction between the student and instructor and allows the instructor to tailor the material to the students.
  2. Distance learning remote (DL-remote). Remote distance learning provides live transmission of the instruction to multiple, distributed locations. The mode effectively extends the classroom across space to reach a wider student audience. Two-way audio and video can permit limited interaction between extended classrooms and the instructor.
  3. Distance learning canned (DL-canned). This mode simply packages (or cans) the instruction in some media for later presentation at the student’s convenience (e.g., traditional hardcopy texts, recorded audio or video, or softcopy materials on compact discs). DL-canned materials include computer-based training courseware that has built-in features to interact with the student to test comprehension, adaptively present material to meet a student’s learning style, and link to supplementary materials on the Internet.
  4. Distance learning collaborative (DL-collaborative). The collaborative mode of learning (often described as e-learning) integrates canned material while allowing on-line asynchronous interaction between the student and the instructor (e.g., via e-mail, chat, or videoconference). Collaboration may also occur between the student and software agents (personal coaches) that monitor progress, offer feedback, and recommend effective paths to on-line knowledge.
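A minimal sketch of the time-and-place distinction behind these four modes follows. The mapping, and the use of an interaction flag to separate canned from collaborative delivery, is an interpretation of the descriptions above rather than a table from the book.

# Minimal sketch: the four learning modes keyed by the time and place of
# instructor-student interaction, plus whether asynchronous interaction exists.
# The mapping is an interpretation of the descriptions above, not the book's own.
def learning_mode(same_time: bool, same_place: bool, interactive: bool) -> str:
    if same_time and same_place:
        return "RL (residential)"
    if same_time and not same_place:
        return "DL-remote"
    if interactive:
        return "DL-collaborative (e-learning)"
    return "DL-canned"

print(learning_mode(False, False, True))   # DL-collaborative (e-learning)
print(learning_mode(False, False, False))  # DL-canned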

4.3 Organizational Collaboration

The knowledge-creation process of socialization occurs as communities (or teams) of people collaborate (commit to communicate, share, and diffuse knowledge) to achieve a common purpose.

Collaboration is a stronger term than cooperation because participants are formed around and committed to a common purpose, and all participate in shared activity to achieve the end. If a problem is parsed into independent pieces (e.g., financial analysis, technology analysis, and political analysis), cooperation may be necessary—but not collaboration. At the heart of collaboration is intimate participation by all in the creation of the whole—not in cooperating to merely contribute individual parts to the whole.

 

Collaboration is widely believed to enable a team to perform a wide range of functions together:

  • Coordinate tasking and workflow to meet shared goals;
  • Share information, beliefs, and concepts;
  • Perform cooperative problem-solving analysis and synthesis;
  • Perform cooperative decision making;
  • Author team reports of decisions and rationale.

This process of collaboration requires a team (two or more individuals) that shares a common purpose, enjoys mutual respect and trust, and has an established process by which the collaboration can take place. Four levels (or degrees) of intelligence collaboration can be distinguished, moving toward increasing degrees of interaction and dependence among team members.

Sociologists have studied how collaborative groups move from inception to decision commitment. Decision emergence theory (DET) defines four stages of collaborative decision making within an individual group: orientation of all members to a common perspective; conflict, during which alternatives are compared and competed; emergence of collaborative alternatives; and finally reinforcement, when members develop consensus and commitment to the group decisions.

4.3.1 Collaborative Culture

First among the means to achieve collaboration is the creation of a collaborating culture—a culture that shares the belief that collaboration (as opposed to competition or other models) is the best approach to achieve a shared goal and that shares a commitment to collaborate to achieve organizational goals.

The collaborative culture must also recognize that teams are heterogeneous in nature. Team members have different tacit (experience, personality style) and cognitive (reasoning style) preferences that influence their unique approach to participating in the collaborative process.

The mix of personalities within a team must be acknowledged and rules of collaborative engagement (and even groupware) must be adapted to allow each member to contribute within the constraints and strengths of their individual styles.

Collaboration facilitators may use Myers-Briggs or other categorization schemes to analyze a particular team’s structure and to assess the team’s strengths, weaknesses, and overall balance.

4.3.2 Collaborative Environments

Collaborative environments describe the physical, temporal, and functional setting within which organizations interact.

4.3.3 Collaborative Intelligence Workflow

The representative team includes:

• Intelligence consumer. The State Department personnel requesting the analysis define high-level requirements and are the ultimate customers for the intelligence product. They specify what information is needed: the scope or breadth of coverage, the level of depth, the accuracy required, and the timeframe necessary for policy making.

• All-source analytic cell. The all-source analysis cell, which may be a distributed virtual team across several different organizations, has the responsibility to produce the intelligence product and certify its accuracy.

• Single-source analysts. Open-source and technical-source analysts (e.g., imagery, signals, or MASINT) are specialists that analyze the raw data collected as a result of special tasking; they deliver reports to the all-source team and certify the conclusions of special analysis.

• Collection managers. The collection managers translate all-source requests for essential information (e.g., surveillance of shipping lines, identification of organizations, or financial data) into specific collection tasks (e.g., schedules, collection parameters, and coordination between different sources). They provide the all-source team with a status of their ability to satisfy the team’s requests.

4.3.3.3 The Collaboration Paths

  1. Problem statement. Interacting with the all-source analytic leader (LDR)—and all-source analysts on the analytic team—the problem is articulated in terms of scope (e.g., area of world, focus nations, and expected depth and accuracy of estimates), needs (e.g., specific questions that must be answered and policy issues), urgency (e.g., time to first results and final products), and expected format of results (e.g., product as emergent results portal or softcopy document).

  2. Problem refinement. The analytic leader (LDR) frames the problem with an explicit description of the consumer requirements and intelligence reporting needs. This description, once approved by the consumer, forms the terms of reference for the activity. The problem statement-refinement loop may be iterated as the situation changes or as intelligence reveals new issues to be studied.
  3. Information requests to collection tasking. Based on the requirements, the analytic team decomposes the problem to deduce specific elements of information needed to model and understand the level of trafficking. (The decomposition process was described earlier in Section 2.4.) The LDR provides these intelligence data requirements to the collection manager (CM) to prepare a collection plan. This planning requires the translation of information needs to a coordinated set of data-collection tasks for humans and technical collection systems. The CM prepares a collection plan that traces planned collection data and means to the analytic team’s information requirements.
  4. Collection refinement. The collection plan is fed back to the LDR to allow the analytic team to verify the completeness and sufficiency of the plan—and to allow a review of any constraints (e.g., limits to coverage, depth, or specificity) or the availability of previously collected relevant data. The information request–collection planning and refinement loop iterates as the situation changes and as the intelligence analysis proceeds. The value of different sources, the benefits of coordinated collection, and other factors are learned by the analytic team as the analysis proceeds, causing adjustments to the collection plan to satisfy information needs.
  5. Cross cueing. The single-source analysts acquire data by searching existing archived data and open sources and by receiving data produced by special collections tasked by the CM. Single-source analysts perform source-unique analysis (e.g., imagery analysis; open-source foreign news report, broadcast translation, and analysis; and human report analysis). As the single-source analysts gain an understanding of the timing of event data and the relationships between data observed across the two domains, they share these temporal and functional relationships. The cross-cueing collaboration includes one analyst cueing the other to search for corroborating evidence in another domain; one analyst cueing the other to a possible correlated event; or both analysts recommending tasking for the CM to coordinate a special collection to obtain time or functionally correlated data on a specific target. It is important to note that this cross-cueing collaboration, shown here at the single-source analysis level, is also performed within the all-source analysis unit (path 8), where more subtle cross-source relations may be identified.
  6. Single-source analysis reporting. Single-source analysts report the interim results of analysis to the all-source team, describing the emerging picture of the trafficking networks as well as gaps in information. This path provides the all-source team with an awareness of the progress and contribution of collections, and the added value of the analysis that is delivering an emerging trafficking picture.
  7. Single-source analysis refinement. The all-source team can provide direction for the single-source analysts to focus (“Look into that organization in greater depth”), broaden (“Check out the neighboring countries for similar patterns”), or change (“Drop the study of those shipping lines and focus on rail transport”) the emphasis of analysis and collection as the team gains a greater understanding of the subject. This reporting-refinement collaboration (paths 6 and 7, respectively) precedes publication of analyzed data (e.g., annotated images, annotated foreign reports on trafficking, maps of known and suspect trafficking routes, and lists of known and suspect trafficking organizations) into the analysis base.
  8. All-source analysis collaboration. The all-source team may allocate components of the trafficking-analysis task to individuals with areas of subject matter specialties (e.g., topical components might include organized crime, trafficking routes, finances, and methods), but all contribute to the construction of a single picture of illegal trafficking. The team shares raw and analyzed data in the analysis base, as well as the intelligence products in progress in a collaborative workspace. The LDR approves all product components for release onto the digital production system, which places them onto the intelligence portal for the consumer. (These roles and collaboration paths are sketched as a simple directed graph below.)
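The sketch below records the roles and numbered collaboration paths above as labeled edges of a small directed graph. The role labels and edge descriptions are illustrative shorthand for the text, not a system design.

# Minimal sketch: the collaboration paths above as labeled edges between roles.
# Role names and path labels are shorthand for the narrative, not a design.
paths = {
    1: ("Consumer", "LDR", "problem statement"),
    2: ("LDR", "Consumer", "problem refinement / terms of reference"),
    3: ("LDR", "CM", "information requests -> collection tasking"),
    4: ("CM", "LDR", "collection plan refinement"),
    5: ("Single-source analysts", "Single-source analysts", "cross cueing"),
    6: ("Single-source analysts", "All-source team", "interim reporting"),
    7: ("All-source team", "Single-source analysts", "analysis refinement"),
    8: ("All-source team", "All-source team", "all-source collaboration"),
}

for number, (src, dst, label) in paths.items():
    print(f"path {number}: {src} -> {dst}: {label}")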

In the initial days, the portal is populated with an initial library of related subject matter data (e.g., open source and intelligence reports and data on illegal trafficking in general). As the analysis proceeds, analytic results are posted to the portal.

4.4 Organizational Problem Solving

Intelligence organizations face a wide range of problems that require planning, searching, and explanation to provide solutions. These problems require reactive solution strategies to respond to emergent situations as well as opportunistic (proactive) strategies to identify potential future problems to be solved (e.g., threat assessments, indications, and warnings).

The process of solving these problems collaboratively requires a defined strategy for groups to articulate a problem and then proceed to collectively develop a solution. In the context of intelligence analysis, organizational problem solving focuses on the following kinds of specific problems:

  • Planning. Decomposing intelligence needs for data requirements, developing analysis-synthesis procedures to apply to the collected data to draw conclusions, and scheduling the coordinated collection of data to meet those requirements;
  • Discovery. Searching and identifying previously unknown patterns (of objects, events, behaviors, or relationships) that reveal new understanding about intelligence targets. (The discovery reasoning approach is inductive in nature, creating new, previously unrevealed hypotheses.)
  • Detection. Searching and matching evidence against previously known target hypotheses (templates). (The detection reasoning approach is deductive in nature, testing evidence against known hypotheses.)
  • Explanation. Estimating (providing mathematical proof in uncertainty) and arguing (providing logical proof in uncertainty) are required to provide an explanation of evidence. Inferential strategies require the description of multiple hypotheses (explanations), the confidence in each one, and the rationale for justifying a decision. Problem-solving descriptions may include the explanation of explicit knowledge via technical portrayals (e.g., graphical representations) and tacit knowledge via narrative (e.g., dialogue and story).

To perform organizational (or collaborative) problem solving in each of these areas, the individuals in the organization must share an awareness of the reasoning and solution strategies embraced by the organization. In each of these areas, organizational training, formal methodologies, and procedural templates provide a framework to guide the thinking process across a group. These methodologies also form the basis for structuring collaboration tools to guide the way teams organize shared knowledge, structure problems, and proceed from problem to solution.

Collaborative intelligence analysis is a difficult form of collaborative problem solving, where the solution often requires the analyst to overcome the efforts of a subject of study (the intelligence target) to both deny the analyst information and provide deliberately deceptive information.

4.4.1 Critical, Structured Thinking

Critical, or structured, thinking is rooted in the development of methods of careful, structured thinking, following the legacy of the philosophers and theologians who diligently articulated their basis for reasoning from premises to conclusions.

Critical thinking is based on the application of a systematic method to guide the collection of evidence, reason from evidence to argument, and apply objective decision-making judgment (Table 4.10). The systematic methodology assures completeness (breadth of consideration), objectivity (freedom from bias in sources, evidence, reasoning, or judgment), consistency (repeatability over a wide range of problems), and rationality (consistency with logic). In addition, critical thinking methodology requires the explicit articulation of the reasoning process to allow review and critique by others. These common methodologies form the basis for academic research, peer review, and reporting—as well as for intelligence analysis and synthesis.

Structured methods that move from problem to solution provide a helpful common framework for groups to communicate knowledge and coordinate a process from problem to solution. The TQM initiatives of the 1980s expanded the practice of teaching entire organizations common strategies for articulating problems and moving toward solutions. A number of general problem-solving strategies have been developed and applied to intelligence applications, for example (moving from general to specific):

  • Kepner-Tregoe™. This general problem-solving methodology, introduced in the classic text The Rational Manager [38] and taught to generations of managers in seminars, has been applied to management, engineering, and intelligence-problem domains. This method carefully distinguishes problem analysis (specifying deviations from expectations, hypothesizing causes, and testing for probable causes) and decision analysis (establishing and classifying decision objectives, generating alternative decisions, and comparing consequences).
  • Multiattribute utility analysis (MAUA). This structured approach to decision analysis quantifies a utility function, or value of all decision factors, as a weighted sum of contributing factors for each alternative decision. Relative weights of each factor sum to unity so the overall utility scale (for each decision option) ranges from 0 to 1. (A minimal computation sketch follows this list.)
  • Alternative competing hypotheses (ACH). This methodology develops and organizes alternative hypotheses to explain evidence, evaluates the evidence across multiple criteria, and provides rationale for reasoning to the best explanation.
  • Lockwood analytic method for prediction (LAMP). This methodology exhaustively structures and scores alternative futures hypotheses for complicated intelligence problems with many factors. The process enumerates, then compares, the relative likelihood of courses of action (COAs) for all actors (e.g., military or national leaders) and their possible outcomes. The method provides a structure to consider all COAs while attempting to minimize the exponential growth of hypotheses.
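As a minimal illustration of the MAUA weighted-sum idea, the sketch below scores two hypothetical decision options against factors whose weights sum to unity. The factor names, weights, and scores are invented for the example.

# Minimal sketch of a MAUA score: the utility of one decision option is the
# weighted sum of its factor scores. Factor names and values are illustrative.
def maua_utility(scores: dict, weights: dict) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to unity"
    return sum(weights[f] * scores[f] for f in weights)

weights = {"timeliness": 0.3, "accuracy": 0.5, "cost": 0.2}
option_a = {"timeliness": 0.9, "accuracy": 0.6, "cost": 0.4}   # each factor scored on [0, 1]
option_b = {"timeliness": 0.5, "accuracy": 0.9, "cost": 0.7}

print(round(maua_utility(option_a, weights), 2))  # 0.65
print(round(maua_utility(option_b, weights), 2))  # 0.74

Because the weights are normalized, each option's utility stays on the 0-to-1 scale described above, which makes options directly comparable.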

A basic problem-solving process flow (Figure 4.7), which encompasses the essence of each of these approaches, includes five fundamental component stages:

  1. Problem assessment. The problem must be clearly defined, and criteria for decision making must be established at the beginning. The problem, as well as boundary conditions, constraints, and the format of the desired solution, is articulated.
  2. Problem decomposition. The problem is broken into components by modeling the “situation” or context of the problem. If the problem is a corporate need to understand and respond to the research and development initiatives of a particular foreign company, for example, a model of that organization’s financial operations, facilities, organizational structure (and research and development staffing), and products is constructed. The decomposition (or analysis) of the problem into the need for different kinds of information necessarily requires the composition (or synthesis) of the model. This models the situation of the problem and provides the basis for gathering more data to refine the problem (refine the need for data) and better understand the context.
  3. Alternative analysis. In concert with problem decomposition, alternative solutions (hypotheses) are conceived and synthesized. Conjecture and creativity are necessary in this stage; the set of solutions is categorized to describe the range of the solution space. In the example of the problem of understanding a foreign company’s research and development, these solutions must include alternative explanations of what the competitor might be doing and what business responses should be taken to respond if there is a competitive threat. The competitor analyst must explore the wide range of feasible solutions and associated constraints and variables; alternatives may range from no research and development investment to significant but hidden investment in a new, breakthrough product development. Each solution (or explanation, in this case) must be compared to the model, and this process may cause the model to be expanded in scope, refined, and further decomposed into smaller components.
  4. Decision analysis. In this stage the alternative solutions are applied to the model of the situation to determine the consequences of each solution. In the foreign firm example, consequences are related to both the likelihood of the hypothesis being true and the consequences of actions taken. The decision factors, defined in the first stage, are applied to evaluate the performance, effectiveness, cost, and risk associated with each solution. This stage also reveals the sensitivity of the decision factors to the situation model (and its uncertainties) and may send the analyst back to gather more information about the situation to refine the model [42].
  5. Solution evaluation. The final stage, judgment, compares the outcome of decision analysis with the decision criteria established at the onset. Here, the uncertainties (about the problem, the model of the situation, and the effects of the alternative solutions) are considered and other subjective (tacit) factors are weighed to arrive at a solution decision.

This approach forms the basis for traditional analytic intelligence methods because it provides structure, rationale, and formality. But most recognize that the solid tacit knowledge of an experienced analyst provides a complementary basis—or an unspoken confidence that underlies final decisions—that is recognized but not articulated as explicitly as the quantified decision data.

4.4.2 Systems Thinking

In contrast with the reductionism of a purely analytic approach, a more holistic approach to understanding complex processes acknowledges the inability to fully decompose many complex problems into a finite and complete set of linear processes and relationships. This approach, referred to as holism, seeks to understand high-level patterns of behavior in dynamic or complex adaptive systems that transcend complete decomposition (e.g., weather, social organizations, or large-scale economies and ecologies). Rather than being analytic, systems approaches tend to be synthetic—that is, these approaches construct explanations at the aggregate or large scale and compare them to real-world systems under study.

Complexity refers to the property of real-world systems that prohibits any formalism from representing or completely describing their behavior. In contrast with simple systems that may be fully described by some formalism (i.e., mathematical equations that fully describe a real-world process to some level of satisfaction for the problem at hand), complex systems lack a fully descriptive formalism that captures all of their properties, especially global behavior.

Systems of subatomic scale, human organizational systems, and large-scale economies, in which very large numbers of independent causes interact in very large numbers of ways, are characterized by an inability to model global behavior—and a frustrating inability to predict future behavior.

The expert’s judgment is based not on an external and explicit decomposition of the problem, but on an internal matching of high-level patterns of prior experience with the current situation. The experienced detective as well as the experienced analyst applies such high-level comparisons of current behaviors with previous tacit (unarticulated, even unconscious) patterns gained through experience.

It is important to recognize that analytic and systems-thinking approaches, though contrasting, are usually applied in a complementary fashion by individuals and teams alike. The analytic approach provides the structure, record keeping, and method for articulating decision rationale, while the systems approach guides the framing of the problem, provides the synoptic perspective for exploring alternatives, and provides confidence in judgments.

4.4.3 Naturalistic Decision Making

In times of crisis, when time does not permit careful methodologies, humans apply more naturalistic methods that, like the systems-thinking mode, rely entirely on the only basis available—prior experience.

“Uncontrolled, [information] will control you and your staffs … and lengthen your decision-cycle times.” (Insightfully, the Admiral also noted, “You can only manage from your Desktop Computer … you cannot lead from it.”)

While long-term intelligence analysis applies the systematic, critical analytic approaches described earlier, crisis intelligence analysis may be forced to the more naturalistic methods, where tacit experience (via informal on-the-job learning, simulation, or formal learning) and confidence are critical.

4.5 Tradecraft: The Best Practices of Intelligence

The capture and sharing of best practices was developed and matured throughout the 1980s, when the total quality movement institutionalized the processes of benchmarking and recording lessons learned. Two forms of capturing and recording best practices and lessons learned are often cited:

  1. Explicit process descriptions. The most direct approach is to model and describe the best collection, analytic, and distribution processes, their performance properties, and applications. These may be indexed, linked, and organized for subsequent reuse by a team posed with similar problems and by instructors preparing formal curricula.
  2. Tacit learning histories. The methods of storytelling, described earlier in this chapter, are also applied to develop a “jointly told” story by the team developing the best practice. Once formulated, such learning histories provide powerful tools for oral, interactive exchanges within the organization; the written form of the exchanges may be linked to the best-practice description to provide context.

While explicit best-practices databases explain the how, learning histories provide the context to explain the why of particular processes.

The CIA maintains a product evaluation staff to evaluate intelligence products, learn from the large range of products produced (estimates, forecasts, technical assessments, threat assessments, and warnings), and maintain a database of best practices for training and distribution to the analytic staff.

4.6 Summary

In this chapter, we have introduced the fundamental cultural qualities, in terms of virtues and disciplines, that characterize the knowledge-based intelligence organization. The emphasis has necessarily been on organizational disciplines—learning, collaborating, and problem solving—that provide the agility to deliver accurate and timely intelligence products in a changing environment. These virtues and disciplines require support—technology to support collaboration over time and space, to support the capture and retrieval of explicit knowledge, to enable the exchange of tacit knowledge, and to support the cognitive processes in analytic and holistic problem solving.

5

Principles of Intelligence Analysis and Synthesis

At the core of all knowledge creation are the seemingly mysterious reasoning processes that proceed from the known to the assertion of entirely new knowledge about the previously unknown. For the intelligence analyst, this is the process by which evidence [1], that data determined to be relevant to a problem, is used to infer knowledge about a subject of investigation—the intelligence target. The process must deal with evidence that is often inadequate, undersampled in time, ambiguous, and of questionable pedigree.

We refer to this knowledge-creating discipline as intelligence analysis and the practitioner as analyst. But analysis properly includes both the processes of analysis (breaking things down) and synthesis (building things up).

5.1 The Basis of Analysis and Synthesis

The process known as intelligence analysis employs both the functions of analysis and synthesis to produce intelligence products.

In a criminal investigation, this leads from a body of evidence, through feasible explanations, to an assembled case. In intelligence, the process leads from intelligence data, through alternative hypotheses, to an intelligence product. Along this trajectory, the problem solver moves forward and backward iteratively seeking a path that connects the known to the solution (that which was previously unknown).

Intelligence analysis-synthesis is deeply concerned with financial, political, economic, military, and many other evidential relationships that may not be causal but that provide an understanding of the structure and behavior of human, organizational, physical, and financial entities.

Descriptions of the analysis-synthesis processes can be traced from their roots in philosophy and problem solving to applications in intelligence assessments.

Philosophers distinguish between propositions as analytic or synthetic based on the direction in which they are developed. Propositions in which the predicate (conclusion) is contained within the subject are called analytic because the predicate can be derived directly by logical reasoning forward from the subject; the subject is said to contain the solution. Synthetic propositions on the other hand have predicates and subjects that are independent. The synthetic proposition affirms a connection between otherwise independent concepts.

The empirical scientific method applies analysis and synthesis to develop and then to test hypotheses:

  • Observation. A phenomenon is observed and recorded as data.
  • Hypothesis creation. Based upon a thorough study of the data, a working hypothesis is created (by the inductive analysis process or by pure inspiration) to explain the observed phenomena.
  • Experiment development. Based on the assumed hypothesis, the expected results (the consequences) of a test of the hypothesis are synthesized (by deduction).
  • Hypothesis testing. The experiment is performed to test the hypothesis against the data.
  • Hypothesis verification. When the consequences of the test are confirmed, the hypothesis is verified (as a theory or law, depending upon the degree of certainty).

The analyst iteratively applies analysis and synthesis to move forward from evidence and backward from hypothesis to explain the available data (evidence). In the process, the analyst identifies more data to be collected, critical missing data, and new hypotheses to be explored. This iterative analysis-synthesis process provides the necessary traceability from evidence to conclusion that will allow the results (and the rationale) to be explained with clarity and depth when completed.

 

5.2 The Reasoning Processes

Reasoning processes that analyze evidence and synthesize explanations perform inference (i.e., they create, manipulate, evaluate, modify, and assert belief). We can characterize the most fundamental inference processes by their process and products:

  • Process. The direction of the inference process refers to the way in which beliefs are asserted. The process may move from specific (or particular) beliefs toward more general beliefs, or from general beliefs to assert more specific beliefs.
  • Products. The certainty associated with an inference distinguishes two categories of results of inference. The asserted beliefs that result from inference may be infallible (e.g., an analytic conclusion derived from infallible beliefs by infallible logic is certain) or fallible judgments (e.g., a synthesized judgment is asserted with a measure of uncertainty; “probably true,” “true with 0.95 probability,” or “more likely true than false”).

 

5.2.1 Deductive Reasoning

Deduction is the method of inference by which a conclusion is inferred by applying the rules of a logical system to manipulate statements of belief to form new logically consistent statements of belief. This form of inference is infallible, in that the conclusion (belief) must be as certain as the premise (belief). It is belief preserving in that conclusions reveal no more than that expressed in the original premises. Deduction can be expressed in a variety of syllogisms, including the more common forms of propositional logic.

5.2.2 Inductive Reasoning

Induction is the method of inference by which a more general or more abstract belief is developed by observing a limited set of observations or instances.

Induction moves from specific beliefs about instances to general beliefs about larger and future populations of instances. It is a fallible means of inference.

The form of induction most commonly applied to extend belief from a sample of instances to a larger population is inductive generalization:

By this method, analysts extend the observations about a limited number of targets (e.g., observations of the money laundering tactics of several narcotics rings within a drug cartel) to a larger target population (e.g., the entire drug cartel).

Inductive prediction extends belief from a population to a specific future sample.

By this method, an analyst may use several observations of behavior (e.g., the repeated surveillance behavior of a foreign intelligence unit) to create a general detection template to be used to detect future surveillance activities by that or other such units. The induction presumes future behavior will follow past patterns.
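A minimal sketch of this kind of inductive prediction follows: repeated observations are generalized into a crude detection template, which is then matched against a future event. The attributes, events, and the 0.6 generalization threshold are assumptions made for illustration, not a method from the text.

# Minimal sketch: generalizing repeated observations into a detection template
# (inductive prediction), then matching a future observation against it.
# Attribute names, events, and the threshold are illustrative.
from collections import Counter

observed_surveillance = [
    {"vehicle": "van", "dwell_min": 40, "time_of_day": "morning"},
    {"vehicle": "van", "dwell_min": 55, "time_of_day": "morning"},
    {"vehicle": "sedan", "dwell_min": 45, "time_of_day": "morning"},
]

# Generalize: keep only the attribute values that dominate the sample.
template = {}
for key in observed_surveillance[0]:
    values = Counter(obs[key] for obs in observed_surveillance)
    value, count = values.most_common(1)[0]
    if count / len(observed_surveillance) >= 0.6:   # crude generalization threshold
        template[key] = value

new_event = {"vehicle": "van", "dwell_min": 50, "time_of_day": "morning"}
matches = all(new_event.get(k) == v for k, v in template.items())
print(template, matches)

As the text notes, such a template presumes that future behavior will follow past patterns; the generalization is fallible by construction.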

In addition to these forms, induction can provide a means of analogical reasoning (induction on the basis of analogy or similarity) and inference to relate cause and effect. The basic scientific method applies the principles of induction to develop hypotheses and theories that can subsequently be tested by experimentation over a larger population or over future periods of time. The subject of induction is central to the challenge of developing automated systems that generalize and learn by inducing patterns and processes (rules).

Koestler uses the term bisociation to describe the process of viewing multiple explanations (or multiple associations) of the same data simultaneously. In the example in the figure, the data can be projected onto a common plane of discernment in which the data represents a simple curved line; projected onto an orthogonal plane, the data can explain a sinusoid. Though undersampled, as much intelligence data is, the sinusoid represents a new and novel explanation that may remain hidden if the analyst does not explore more than the common, immediate, or simple interpretation.

In a similar sense, the inductive discovery by an intelligence analyst (aha!) may take on many different forms, following the simple geometric metaphor. For example:

  • A subtle and unique correlation between the timing of communications (by traffic analysis) and money transfers of a trading firm may lead to the discovery of an organized crime operation.
  • A single anomalous measurement may reveal a pattern of denial and deception to cover the true activities at a manufacturing facility in which many points of evidence are, in fact, deceptive data “fed” by the deceiver. Only a single piece of anomalous evidence (D5 in the figure) is the clue that reveals the existence of the true operations (a new plane in the figure). The discovery of this new plane will cause the analyst to search for additional supporting evidence to support the deception hypothesis.

Each frame of discernment (or plane in Koestler’s metaphor) is a framework for creating a single or a family of multiple hypotheses to explain the evidence. The creative analyst is able to entertain multiple frames of discernment, alternatively analyzing possible “fits” and constructing new explanations, exploring the many alternative explanations. This is Koestler’s constructive-destructive process of discovery.

Collaborative intelligence analysis (like collaborative scientific discovery) may produce a healthy environment for creative induction or an unhealthy competitive environment that stifles induction and objectivity. The goal of collaborative analysis is to allow alternative hypotheses to be conceived and objectively evaluated against the available evidence and to guide the tasking for evidence to confirm or disconfirm the alternatives.

5.2.3 Abductive Reasoning

Abduction is the informal or pragmatic mode of reasoning that describes how we “reason to the best explanation” in everyday life. Abduction is the practical description of the interactive use of analysis and synthesis to arrive at a solution or explanation by creating and evaluating multiple hypotheses.

Unlike infallible deduction, abduction is fallible because it is subject to errors (there may be other hypotheses not considered or another hypothesis, however unlikely, may be correct). But unlike deduction, it has the ability to extend belief beyond the original premises. Peirce contended that this is the logic of discovery and is a formal model of the process that scientists apply all the time.

Consider a simple intelligence example that implements the basic abductive syllogism. Data has been collected on a foreign trading company, TraderCo, which indicates its reported financial performance is not consistent with (less than) its level of operations. In addition, a number of its executives have subtle ties with organized crime figures.

The operations of the company can be explained by at least three hypotheses:

Hypothesis (H1)—TraderCo is a legitimate but poorly run business; its board is unaware of a few executives with unhealthy business contacts.

Hypothesis (H2)—TraderCo is a legitimate business with a naïve board that is unaware that several executives who gamble are using the business to pay off gambling debts to organized crime.

Hypothesis (H3)—TraderCo is an organized crime front operation that is trading in stolen goods and laundering money through the business, which reports a loss.

Hypothesis H3 best explains the evidence.

∴ Therefore, Accept Hypothesis H3 as the best explanation.

Of course, the critical stage of abduction unexplained in this set of hypotheses is the judgment that H3 is the best explanation. The process requires criteria for ranking hypotheses, a method for judging which is best, and a method to assure that the set of candidate hypotheses covers all possible (or feasible) explanations.

 

5.2.3.1 Creating and Testing Hypotheses

Abduction introduces the competition among multiple hypotheses, each being an attempt to explain the evidence available. These alternative hypotheses can be compared, or competed, on the basis of how well they explain (or fit) the evidence. Furthermore, the created alternative hypotheses provide a means of identifying three categories of evidence important to explanation (a minimal scoring sketch follows this list):

  • Positive evidence. This is evidence revealing the presence of an object or occurrence of an event in a hypothesis.
  • Missing evidence. Some hypotheses may fit the available evidence, but the hypothesis “predicts” that additional evidence that should exist if the hypothesis were true is “missing.” Subsequent searches and testing for this evidence may confirm or disconfirm the hypothesis.
  • Negative evidence. Hypotheses that contain evidence of a nonoccurrence of an event (or nonexistence of an object) may confirm a hypothesis.
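A minimal sketch of such a ranking follows, scoring the TraderCo-style hypotheses by the evidence each explains and penalizing expected-but-missing evidence. The evidence items, expectations, and the 0.5 penalty weight are assumptions made for illustration, not criteria given in the text.

# Minimal sketch: ranking alternative hypotheses by the evidence they explain,
# penalizing expected-but-missing evidence. Hypotheses, evidence items, and
# the penalty weight are illustrative.
evidence = {"low_reported_income", "ties_to_organized_crime"}

hypotheses = {
    "H1 poorly run business": {"explains": {"low_reported_income"},
                               "expects": {"poor_management_record"}},
    "H2 executives paying gambling debts": {"explains": {"low_reported_income",
                                                         "ties_to_organized_crime"},
                                            "expects": {"gambling_activity"}},
    "H3 organized crime front": {"explains": {"low_reported_income",
                                              "ties_to_organized_crime"},
                                 "expects": set()},
}

def score(h):
    explained = len(h["explains"] & evidence)
    missing = len(h["expects"] - evidence)       # expected but not yet observed
    return explained - 0.5 * missing

for name, h in sorted(hypotheses.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: score {score(h):.1f}")

Under these assumed inputs the ranking reproduces the earlier conclusion that H3 best explains the evidence, while the missing-evidence terms flag where further collection could confirm or disconfirm H1 and H2.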

5.2.3.2 Hypothesis Selection

Abduction also poses the issue of defining which hypothesis provides the best explanation of the evidence. The criteria for comparing hypotheses, at the most fundamental level, can be based on two principal approaches established by philosophers for evaluating truth propositions about objective reality [18]. The correspondence theory of truth holds that to say a proposition p is true is to maintain that “p corresponds to the facts.”

For the intelligence analyst this would equate to “hypothesis h corresponds to the evidence”—it explains all of the pieces of evidence, with no expected evidence missing, all without having to leave out any contradictory evidence. The coherence theory of truth says that a proposition’s truth consists of its fitting into a coherent system of propositions that create the hypothesis. Both concepts contribute to practical criteria for evaluating competing hypotheses.

5.3 The Integrated Reasoning Process

The analysis-synthesis process combines each of the fundamental modes of reasoning to accumulate, explore, decompose to fundamental elements, and then fit together evidence. The process also creates hypothesized explanations of the evidence and uses these hypotheses to search for more confirming or refuting elements of evidence to affirm or prune the hypotheses, respectively.

This process of proceeding from an evidentiary pool to detections, explanations, or discovery has been called evidence marshaling because the process seeks to marshal (assemble and organize) the evidence into a representation (a model) that:

  • Detects the presence of evidence that matches previously known premises (or patterns of data);
  • Explains underlying processes that gave rise to the evidence;
  • Discovers new patterns in the evidence—patterns of circumstances or behaviors not known before (learning).

The figure illustrates four basic paths that can proceed from the pool of evidence, namely the three fundamental inference modes and a fourth feedback path:

  1. Deduction. The path of deduction tests the evidence in the pool against previously known patterns (or templates) that represent hypotheses of activities that we seek to detect. When the evidence fits the hypothesis template, we declare a match. When the evidence fits multiple hypotheses simultaneously, the likelihood of each hypothesis (determined by the strength of evidence for each) is assessed and reported. (This likelihood may be computed probabilistically using Bayesian methods, where evidence uncertainty is quantified as a probability and prior probabilities of the hypotheses are known; a minimal posterior computation is sketched after this list.)
  2. Retroduction. This feedback path, recognized and named by C.S. Peirce as yet another process of reasoning, occurs when the analyst conjectures (synthesizes) a new conceptual hypothesis (beyond the current frame of discernment) that causes a return to the evidence to seek evidence to match (or test) this new hypothesis. The insight Peirce provided is that in the testing of hypotheses, we are often inspired to realize new, different hypotheses that might also be tested. In the early implementation of reasoning systems, the forward path of deduction was often referred to as forward chaining by attempting to automatically fit data to previously stored hypothesis templates; the path of retroduction was referred to as backward chaining, where the system searched for data to match hypotheses queried by an inspired human operator.
  3. Abduction. The abduction process, like induction, creates explanatory hypotheses inspired by the pool of evidence and then, like deduction, attempts to fit items of evidence to each hypothesis to seek the best explanation. In this process, the candidate hypotheses are refined and new hypotheses are conjectured. The process leads to comparison and ranking of the hypotheses, and ultimately the best is chosen as the explanation. As a part of the abductive process, the analyst returns to the pool of evidence to seek support for these candidate explanations; this return path is called retroduction.
  4. Induction. The path of induction considers the entire pool of evidence to seek general statements (hypotheses) about the evidence. Not seeking point matches to small sets of evidence, the inductive path conjectures new and generalized explanations of clusters of similar evidence; these generalizations may be tested across the evidence to determine the breadth of applicability before being declared as a new discovery.
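The Bayesian likelihood assessment mentioned under the deduction path (item 1) can be illustrated with a minimal sketch; the priors and likelihoods below are invented values, not figures from the book.

```python
# Minimal sketch (illustrative, not the author's algorithm): Bayesian update of
# competing hypotheses given one item of evidence. Priors and likelihoods are
# assumed values for illustration only.

priors = {"H1": 0.6, "H2": 0.3, "H3": 0.1}          # P(H) before the evidence
likelihoods = {"H1": 0.05, "H2": 0.40, "H3": 0.90}  # P(E | H) for observed evidence E

# Bayes' rule: P(H | E) = P(E | H) P(H) / sum over H' of P(E | H') P(H')
joint = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(joint.values())
posteriors = {h: joint[h] / total for h in joint}

for h, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    print(f"{h}: posterior = {p:.2f}")
```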

5.4 Analysis and Synthesis As a Modeling Process

The fundamental reasoning processes are applied to a variety of practical analytic activities performed by the analyst:

  • Explanation and description. Find and link related data to explain entities and events in the real world.
  • Detection. Detect and identify the presence of entities and events based on known signatures. Detect potentially important deviations, including anomaly detection of changes relative to “normal” or “expected” state or change detection of changes or trends over time.
  • Discovery. Detect the presence of previously unknown patterns in data (signatures) that relate to entities and events.
  • Estimation. Estimate the current qualitative or quantitative state of an entity or event.
  • Prediction. Anticipate future events based on detection of known indicators; extrapolate current state forward, project the effects of linear factors forward, or simulate the effects of complex factors to synthesize possible future scenarios to reveal anticipated and unanticipated (emergent) futures.

In each of these cases, we can view the analysis-synthesis process as an evidence-decomposing and model-building process.

The objective of this process is to sort through and organize data (analyze) and then to assemble (synthesize), or marshal, related evidence to create a hypothesis—an instantiated model that constitutes one feasible representation of the intelligence subject (target). The model is used to marshal evidence, evaluate logical argumentation, and provide a tool for explanation of how the available evidence best fits the analyst’s conclusion. The model also serves to help the analyst understand what evidence is missing, what strong evidence supports the model, and where negative evidence might be expected. The terminology we use here can be clarified by the following distinctions:

  • A real intelligence target is abstracted and represented by models.
  • A model has descriptive and stated attributes or properties.
  • A particular instance of a model, populated with evidence-derived and conjectured properties, is a hypothesis.

A target may be described by multiple models, each with multiple instances (hypotheses). For example, if our target is the financial condition of a designated company, we might represent the financial condition with a single financial model in the form of a spreadsheet that enumerates many financial attributes. As data is collected, the model is populated with data elements, some reported publicly and others estimated. We might maintain three instances of the model (legitimate company, faltering legitimate company, and illicit front organization), each being a competing explanation (or hypothesis) of the incomplete evidence. These hypotheses help guide the analyst to identify the data required to refine, affirm, or discard existing hypotheses or to create new hypotheses.

Explicit model representations provide a tool for collaborative construction, marshaling of evidence, decomposition, and critical examination. Mental and explicit modeling are complementary tools of the analyst; judgment must be applied to balance the use of both.

Former U.S. National Intelligence Officer for Warning (1994–1996) Mary McCarthy has emphasized the importance of explicit modeling to analysis:

Rigorous analysis helps overcome mindset, keeps analysts who are immersed in a mountain of new information from raising the bar on what they would consider an alarming threat situation, and allows their minds to expand other possibilities. Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do [19].

 

The model is an abstract representation that serves two functions:

  1. Model as hypothesis. Based on partial data or conjecture alone, a model may be instantiated as a feasible proposition to be assessed, a hypothesis. In a homicide investigation, each conjecture for “who did it” is a hypothesis, and the associated model instance is a feasible explanation for “how they did it.” The model provides a framework around which data is assembled, a mechanism for examining feasibility, and a basis for exploring data to confirm or refute the hypothesis.
  2. Model as explanation. As evidence (relevant data that fits into the model) is assembled on the general model framework to form a hypothesis, different views of the model provide more robust explanations of that hypothesis. Narrative (story), timeline, organization relationships, resources, and other views may be derived from a common model.

 

 

The process of implementing data decomposition (analysis) and model construction-examination (synthesis) can be depicted in three process phases or spaces of operation (Figure 5.6):

  1. Data space. In this space, data (relevant and irrelevant, certain and ambiguous) are indexed and accumulated. Indexing by time (of collection and arrival), source, content topic, and other factors is performed to allow subsequent search and access across many dimensions (a minimal indexing sketch follows this list).
  2. Argumentation space. The data is reviewed; selected elements of potentially relevant data (evidence) are correlated, grouped, and assembled into feasible categories of explanations, forming a set (structure) of high-level hypotheses to explain the observed data. This process applies exhaustive searches of the data space, accepting some data as relevant and discarding the rest. In this phase, patterns in the data are discovered, even when not all of the data making up a pattern is present; these partial patterns still lead to the creation of hypotheses. Examination of the data may also lead to the creation of hypotheses by pure conjecture, even though no data yet supports them. The hypotheses are examined to determine what data would be required to reinforce or reject each; hypotheses are ranked in terms of likelihood and needed data (to reinforce or refute). The models are tested and various excursions are examined. This space is the court in which the case is made for each hypothesis, and the hypotheses are judged for completeness, sufficiency, and feasibility. This examination can lead to requests for additional data, refinements of the current hypotheses, and creation of new hypotheses.
  3. Explanation space. Different “views” of the hypothesis model provide explanations that articulate the hypothesis and relate the supporting evidence. The intelligence report can include a single model and explanation that best fits the data (when data is adequate to assert the single answer) or alternative competing models, as well as the supporting evidence for each and an assessment of the implications of each. Figure 5.6 illustrates several of the views often used: timelines of events, organization-relationship diagrams, annotated maps and imagery, and narrative story lines.
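The multidimensional indexing described for the data space might be sketched as follows; the DataSpace class, its dimensions, and the sample items are assumptions for illustration, not the book's design.

```python
# Minimal sketch (assumed structure): a data space in which each item is indexed
# along several dimensions -- time, source, and topic -- so it can later be
# retrieved by any combination of them.

from collections import defaultdict

class DataSpace:
    def __init__(self):
        self.items = []                   # the accumulated data items
        self.index = defaultdict(set)     # (dimension, value) -> item positions

    def add(self, item, **dimensions):
        """Store an item and index it by every supplied dimension."""
        pos = len(self.items)
        self.items.append(item)
        for dim, value in dimensions.items():
            self.index[(dim, value)].add(pos)

    def search(self, **dimensions):
        """Return items matching every supplied dimension/value pair."""
        hits = None
        for dim, value in dimensions.items():
            matches = self.index.get((dim, value), set())
            hits = matches if hits is None else hits & matches
        return [self.items[i] for i in sorted(hits or set())]

space = DataSpace()
space.add("shipping manifest", time="2002-03", source="open", topic="TradeCo")
space.add("bank transfer report", time="2002-03", source="FININT", topic="TradeCo")
print(space.search(topic="TradeCo", source="open"))
```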

For a single target under investigation, we may create and consider (or entertain) several candidate hypotheses, each with a complete set of model views. If, for example, we are trying to determine the true operations of the foreign company introduced earlier, TradeCo, we may hold several hypotheses:

  1. H1—The company is a legal clothing distributor, as advertised.
  2. H2 —The company is a legal clothing distributor, but company executives are diverting business funds for personal interests.
  3. H3—The company is a front operation to cover organized crime, where hypothesis 3 has two sub-hypotheses:
    • H31—The company is a front for drug trafficking.
    • H32—The company is a front for terrorism money laundering.

In this case, H1, H2, H31, and H32 are the four root hypotheses, and the analyst identifies the need to create an organizational model, an operations flow-process model, and a financial model for each of the four hypotheses—creating 4 × 3 = 12 models.
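The bookkeeping implied by this example can be sketched directly; the ModelInstance structure below is an assumed representation, with the hypothesis and model names taken from the TradeCo example.

```python
# Minimal sketch: four root hypotheses, three model types per hypothesis,
# giving 4 x 3 = 12 model instances. The ModelInstance fields are illustrative
# assumptions, not the book's data model.

from dataclasses import dataclass, field
from itertools import product

@dataclass
class ModelInstance:
    hypothesis: str          # e.g., "H31 front for drug trafficking"
    model_type: str          # e.g., "financial"
    properties: dict = field(default_factory=dict)  # evidence-derived or conjectured

hypotheses = [
    "H1 legal distributor",
    "H2 legal distributor, funds diverted",
    "H31 front for drug trafficking",
    "H32 front for terrorism money laundering",
]
model_types = ["organizational", "operations flow-process", "financial"]

instances = [ModelInstance(h, m) for h, m in product(hypotheses, model_types)]
print(len(instances))  # 12
```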

 

5.5 Intelligence Targets in Three Domains

We have noted that intelligence targets may be objects, events, or dynamic processes—or combinations of these. The development of information operations has brought a greater emphasis on intelligence targets that exist not only in the physical domain, but in the realms of information (e.g., networked computers and information processes) and human decision making.

Information operations (IO) are those actions taken to affect an adversary’s information and information systems, while defending one’s own information and information systems. The U.S. Joint Vision 2020 describes the Joint Chiefs of Staff view of the ultimate purpose of IO as “to facilitate and protect U.S. decision-making processes, and in a conflict, degrade those of an adversary”.

The JV2020 builds on the earlier JV2010 [26] and retains the fundamental operational concepts, with two significant refinements that emphasize IO. The first is the expansion of the vision to encompass the full range of operations (nontraditional, asymmetric, unconventional ops), while retaining warfighting as the primary focus. The second refinement moves information superiority concepts beyond technology solutions that deliver information to the concept of superiority in decision making. This means that IO will deliver increased information at all levels and increased choices for commanders. Conversely, it will also reduce information to adversary commanders and diminish their decision options. Core to these concepts and challenges is the notion that IO uniquely requires the coordination of intelligence, targeting, and security in three fundamental realms, or domains of human activities.

 

These are likewise the three fundamental domains of intelligence targets, and each must be modeled:

  1. The physical domain encompasses the material world of mass and energy. Military facilities, vehicles, aircraft, and personnel make up the principal target objects of this domain. The orders of battle that measure military strength, for example, are determined by enumerating objects of the physical world.
  2. The abstract symbolic domain is the realm of information. Words, numbers, and graphics all encode and represent the physical world, storing and transmitting it in electronic formats, such as radio and TV signals, the Internet, and newsprint. This is the domain that is expanding at unprecedented rates, as global ideas, communications, and descriptions of the world are being represented in this domain. The domain includes the cyberspace that has become the principal means by which humans shape their perception of the world. It interfaces the physical to the cognitive domains.
  3. The cognitive domain is the realm of human thought. This is the ultimate locus of all information flows. The individual and collective thoughts of government leaders and populations at large form this realm. Perceptions, conceptions, mental models, and decisions are formed in this cognitive realm. This is the ultimate target of our adversaries: the realm where uncertainties, fears, panic, and terror can coerce and influence our behavior.

Current IO concepts have appropriately emphasized the targeting of the second domain—especially electronic information systems and their information content. The expansion of networked information systems and the reliance on those systems has focused attention on network-centric forms of warfare. Ultimately, though, IO must move toward a focus on the full integration of the cognitive realm with the physical and symbolic realms to target the human mind.

Intelligence must understand and model the complete system or complex of the targets of IO: the interrelated systems of physical behavior, information perceived and exchanged, and the perception and mental states of decision makers.

Of importance to the intelligence analyst is the clear recognition that most intelligence targets exist in all three domains, and models must consider all three aspects.

The intelligence model of such an organization must include linked models of all three domains—to provide an understanding of how the organization perceives, decides, and communicates through a networked organization, as well as where the people and other physical objects are moving in the physical world. The concepts of detection, identification, and dynamic tracking of intelligence targets apply to objects, events, and processes in all three domains.

5.6 Summary

The analysis-synthesis process proceeds from intelligence analysis to operations analysis and then to policy analysis.

The knowledge-based intelligence enterprise requires the capture and explicit representation of such models to permit collaboration among these three disciplines to achieve the greatest effectiveness and sharing of intellectual capital.

6

The Practice of Intelligence Analysis and Synthesis

This chapter moves from high-level functional flow models toward the processes implemented by analysts.

A practical description of the process by one author summarizes the perspective of the intelligence user:

A typical intelligence production consists of all or part of three main elements: descriptions of the situation or event with an eye to identifying its essential characteristics; explanation of the causes of a development as well as its significance and implications; and the prediction of future developments. Each element contains one or both of these components: data, provided by knowledge and incoming information, and assessment, or judgment, which attempts to fill the gaps in the data.

Consumers expect description, explanation, and prediction; as we saw in the last chapter, the process that delivers such intelligence is based on evidence (data), assessment (analysis-synthesis), and judgment (decision).

6.1 Intelligence Consumer Expectations

The U.S. General Accounting Office (GAO) noted the need for greater clarity in the intelligence delivered in U.S. national intelligence estimates (NIEs) in a 1996 report, enumerating five specific standards for analysis from the perspective of policymakers.

Based on a synthesis of the published views of current and former senior intelligence officials, the reports of three independent commissions, and a CIA publication that addressed the issue of national intelligence estimating, an objective NIE should meet the following standards [2]:

  • [G1]: quantify the certainty level of its key judgments by using percentages or bettors’ odds, where feasible, and avoid overstating the certainty of judgments (note: bettors’ odds state the chance as, for example, “one out of three”; a minimal conversion sketch follows this list);
  • [G2]: identify explicitly its assumptions and judgments;
  • [G3]: develop and explore alternative futures: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred;
  • [G4]: allow dissenting views on predictions or interpretations;
  • [G5]: note explicitly what the IC does not know when the information gaps could have significant consequences for the issues under consideration.
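The quantification called for in [G1] can be illustrated with a minimal conversion sketch; the judgment text and the odds below are invented examples.

```python
# Minimal sketch: expressing a key judgment as bettors' odds and the equivalent
# percentage. The judgment and odds are illustrative assumptions.

def odds_to_percentage(favorable, total):
    """Convert bettors' odds stated as 'favorable out of total' to a percentage."""
    return 100.0 * favorable / total

judgment = "Country X deploys the new missile within five years"
favorable, total = 1, 3                      # "one out of three"
print(f"{judgment}: about {odds_to_percentage(favorable, total):.0f}% "
      f"({favorable} out of {total})")
```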

 

The Commission would urge that the [IC] adopt as a standard of its methodology that in addition to considering what they know, analysts consider as well what they know they don’t know about a program and set about filling gaps in their knowledge by:

  • [R1] taking into account not only the output measures of a program, but the input measures of technology, expertise and personnel from both internal sources and as a result of foreign assistance. The type and rate of foreign assistance can be a key indicator of both the pace and objective of a program into which the IC otherwise has little insight.
  • [R2] comparing what takes place in one country with what is taking place in others, particularly among the emerging ballistic missile powers. While each may be pursuing a somewhat different development program, all of them are pursuing programs fundamentally different from those pursued by the US, Russia and even China. A more systematic use of comparative methodologies might help to fill the information gaps.
  • [R3] employing the technique of alternative hypotheses. This technique can help make sense of known events and serve as a way to identify and organize indicators relative to a program’s motivation, purpose, pace and direction. By hypothesizing alternative scenarios a more adequate set of indicators and collection priorities can be established. As the indicators begin to align with the known facts, the importance of the information gaps is reduced and the likely outcomes projected with greater confidence. The result is the possibility for earlier warning than if analysts wait for proof of a capability in the form of hard evidence of a test or a deployment. Hypothesis testing can provide a guide to what characteristics to pursue, and a cue to collection sensors as well.
  • [R4] explicitly tasking collection assets to gather information that would disprove a hypothesis or fill a particular gap in a list of indicators. This can prove a wasteful use of scarce assets if not done in a rigorous fashion. But moving from the highly ambiguous absence of evidence to the collection of specific evidence of absence can be as important as finding the actual evidence [3].

 

 

 

Intelligence consumers want more than estimates or judgments; they expect concise explanations of the evidence and reasoning processes behind judgments, with substantiation that multiple perspectives, hypotheses, and consequences have been objectively considered.

They expect a depth of analysis-synthesis that explicitly distinguishes assumptions, evidence, alternatives, and consequences—with a means of quantifying each contribution to the outcomes (judgments).

6.2 Analysis-Synthesis in the Intelligence Workflow

Analysis-synthesis is one process within the intelligence cycle… It represents a process that is practically implemented as a continuum rather than a cycle, with all phases being implemented concurrently and addressing a multitude of different intelligence problems or targets.

The stimulus-hypothesis-option-response (SHOR) model, described by Joseph Wohl in 1986, emphasizes the consideration of multiple perception hypotheses to explain sensed data and assess options for response.

The observe-orient-decide-act (OODA) loop, developed by Col. John Boyd, is a high-level abstraction of the military command and control loop that considers the human decision-making role and its dependence on observation and orientation—the process of placing the observations in a perceptual framework for decision making.

The tasking, processing, exploitation, dissemination (TPED) model used by U.S. technical collectors and processors [e.g., the U.S. National Reconnaissance Office (NRO), the National Imagery and Mapping Agency (NIMA), and the National Security Agency (NSA)] distinguishes between the processing elements of the national technical-means intelligence channels (SIGINT, IMINT, and MASINT) and the all-source analytic exploitation roles of the CIA and DIA.

The DoD Joint Directors of Laboratories (JDL) data fusion model is a more detailed technical model that considers the use of multiple sources to produce a common operating picture of individual objects, situations (the aggregate of objects and their behaviors), and the consequences or impact of those situations. The model includes a hierarchy of data correlation and combination processes (level 0: signal refinement; level 1: object refinement; level 2: situation refinement; level 3: impact refinement) and a corresponding feedback control process (level 4: process refinement) [10]. The JDL model is a functional representation that accommodates automated processes and human processes and provides detail within both the processing and analysis steps. The model is well suited to organize the structure of automated processing stages for technical sensors (e.g., imagery, signals, and radar); a minimal sketch of the level structure follows the list below.

  • Level 0: signal refinement automated processing correlates and combines raw signals (e.g., imagery pixels or radar signals intercepted from multiple locations) to detect objects and derive their location, dynamics, or identity.
  • Level 1: object refinement processing detects individual objects and correlates and combines these objects across multiple sources to further refine location, dynamics, or identity information.
  • Level 2: situation refinement analysis correlates and combines the detected objects across all sources within the background context to produce estimates of the situation—explaining the aggregate of static objects and their behaviors in context to derive an explanation of activities with estimated status, plans, and intents.
  • Level 3: impact refinement analysis estimates the consequences of alternative courses of action.
  • The level 4 process refinement flows are not shown in the figure, though all forward processing levels can provide inputs to refine the process to: focus collection or processing on high-value targets, refine processing parameters to filter unwanted content, adjust database indexing of intermediate data, or improve overall efficiency of the production process. The level 4 process effectively performs the KM business intelligence functions introduced in Section 3.7.
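A minimal sketch of the JDL level structure as a chain of functions follows; the function signatures, data, and placeholder logic are illustrative assumptions, not the JDL specification.

```python
# Minimal sketch (assumed interfaces): the JDL levels as a chain of functions,
# each consuming the products of the level below. Data and logic are
# placeholders for illustration only.

def level0_signal_refinement(raw_signals):
    """Correlate raw measurements into candidate detections."""
    return [{"detection": s, "snr": 12.0} for s in raw_signals]

def level1_object_refinement(detections):
    """Combine detections across sources into identified, located objects."""
    return [{"object": d["detection"], "identity": "radar", "location": (34.1, 45.2)}
            for d in detections]

def level2_situation_refinement(objects, context="air defense sector"):
    """Aggregate objects and behaviors into an estimate of the situation."""
    return {"context": context, "objects": objects, "assessment": "SAM battery active"}

def level3_impact_refinement(situation, courses_of_action):
    """Estimate the consequences of alternative courses of action."""
    return {coa: f"impact given '{situation['assessment']}'" for coa in courses_of_action}

detections = level0_signal_refinement(["emitter intercept A", "emitter intercept B"])
objects = level1_object_refinement(detections)
situation = level2_situation_refinement(objects)
print(level3_impact_refinement(situation, ["strike", "suppress", "avoid"]))
```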

The analysis stage employs semiautomated detection and discovery tools to access the data in large databases produced by the processing stage. In general, the processing stage can be viewed as a factory of processors, while the analysis stage is a lower volume shop staffed by craftsmen—the analytic team.

6.3 Applying Automation

Automated processing has been widely applied to level 1 object detection (e.g., statistical pattern recognition) and to a lesser degree to level 2 situation recognition problems (e.g., symbolic artificial intelligence systems) for intelligence applications.

Viewing these dimensions as the number of nodes (causes) and number of interactions (influencing the scale of effects) in a dynamic system, the problem space depicts the complexity of the situation being analyzed:

  • Causal diversity. The first dimension relates to the number of causal factors, or actors, that influence the situation behavior.
  • Scale of effects. The second dimension relates to the degree of interaction between actors, or the degree to which causal factors influence the behavior of the situation.

As both dimensions increase, the potential for nonlinear behavior increases, making it more difficult to model the situation being analyzed.

These problems include the detection of straightforward objects in images, content patterns in text, and emitted signal matching. More difficult problems still in this category include dynamic situations with moderately higher numbers of actors and scales of effects that require qualitative (propositional logic) or quantitative (statistical modeling) reasoning processes.

The most difficult category 3 problems, intractable to fully automated analysis, are those complex situations characterized by high numbers of actors with large-scale interactions that give rise to emergent behaviors.

6.4 The Role of the Human Analyst

The analyst applies tacit knowledge to search through explicit information to create tacit knowledge in the form of mental models and explicit intelligence reports for consumers.

The analysis process requires the analyst to integrate the cognitive reasoning and more emotional sensemaking processes with large bodies of explicit information to produce explicit intelligence products for consumers. To effectively train and equip analysts to perform this process, we must recognize and account for these cognitive and emotional components of comprehension. The complete process includes the automated workflow, which processes explicit information, and the analyst’s internal mental workflow, which integrates the cognitive and emotional modes.

 

Complementary logical and emotional frameworks are based on the current mental model of beliefs and feelings, and the new information is compared to these frameworks; differences have the potential for affirming the model (agreement), learning and refining the model (acceptance and model adjustment), or rejecting the new information. Judgment integrates feelings about consequences and values (based on experience) with reasoned alternative consequences and courses of action that construct the meaning of the incoming stimulus. Decision making makes an intellectual-emotional commitment to the impact of the new information on the mental model (acceptance, affirmation, refinement, or rejection).

6.5 Addressing Cognitive Shortcomings

The intelligence analyst is not only confronted with ambiguous information about complex subjects, but is often placed under time pressures and expectations to deliver accurate, complete, and predictive intelligence. Consumer expectations often approach infallibility and omniscience.

In this situation, the analyst must be keenly aware of the vulnerabilities of human cognitive shortcomings and take measures to mitigate the consequences of these deficiencies. The natural limitations in cognition (perception, attention span, short- and long-term memory recall, and reasoning capacity) constrain the objectivity of our reasoning processes, producing errors in our analysis.

In “Combatting Mind-Set,” respected analyst Jack Davis has noted that analysts must recognize the subtle influence of mindset, the cumulative mental model that distills analysts’ beliefs about a complex subject and “find[s] strategies that simultaneously harness its impressive energy and limit[s] the potential damage”.

Davis recommends two complementary strategies:

  1. Enhancing mind-set. Creating an explicit representation of the mind-set—externalizing the mental model—allows broader collaboration, evaluation from multiple perspectives, and discovery of subtle biases.
  2. Insuring against mind-set. Maintaining multiple explicit explanations, projections, and opportunity analyses provides insurance against single-point judgments and prepares the analyst to switch to alternatives when discontinuities occur.

Davis has also cautioned analysts to beware the paradox of expertise, a phenomenon that can distract attention from the purpose of an analysis. This error occurs when discordant evidence is present and subject experts tend to be distracted, focusing on situation analysis (resolving the discordance to understand the subject situation) rather than addressing the impact of the discrepancy on the analysis. In such cases, the analyst must focus on providing value added by addressing what action alternatives exist and their consequences in cost-benefit terms.

Heuer emphasized the importance of supporting tools and techniques to overcome natural analytic limitations [20]: “Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.”

6.6 Marshaling Evidence and Structuring Argumentation

Instinctive analysis focuses on a single or limited range of alternatives, moves on a path to satisfy minimum needs (satisficing, or finding an acceptable explanation), and is performed implicitly using tacit mental models. Structured analysis follows the principles of critical thinking introduced in Chapter 4, organizing the problem to consider all reasonable alternatives, systematically and explicitly representing the alternative solutions to comprehensively analyze all factors.

6.6.1 Structuring Hypotheses

6.6.2 Marshaling Evidence and Structuring Arguments

There exist a number of classical approaches to representing hypotheses, marshaling evidence to them, and arguing for their validity. Argumentation structures propositions to move from premises to conclusions. Three perspectives or disciplines of thought have developed the most fundamental approaches to this process.

Each discipline has contributed methods to represent knowledge and to provide a structure for reasoning to infer from data to relevant evidence, through intermediate hypotheses to conclusion. The term knowledge representation refers to the structure used to represent data and show its relevance as evidence, the representation of rules of inference, and the asserted conclusions.

6.6.3 Structured Inferential Argumentation

Philosophers, rhetoricians, and lawyers have long sought accurate means of structuring and then communicating, in natural language, the lines of reasoning that lead from complicated sets of evidence to conclusions. Lawyers and intelligence analysts alike seek to provide a clear and compelling case for their conclusions, reasoned from a mass of evidence about a complex subject.

We first consider the classical forms of argumentation described as informal logic, whereby the argument connects premises to conclusions. The common forms include:

  1. Multiple premises, when taken together, lead to but one conclusion. For example: The radar at location A emits at a high pulse repetition frequency (PRF); when it emits at high PRF, it emits on frequency (F) → the radar at A is a fire control radar.
  2. Multiple premises independently lead to the same conclusion. For example: The radar at A is a fire control radar. Also, location A stores canisters for missiles. → A surface-to-air missile (SAM) battery must be at location A.
  3. A single premise leads to but one conclusion. For example: A SAM battery is located at A → the battery at A must be linked to a command and control (C2) center.
  4. A single premise can support more than one conclusion. For example: The SAM battery could be controlled by the C2 center at golf, or The SAM battery could be controlled by the C2 center at hotel.

 

These four basic forms may be combined to create complex sets of argumentation, as in the simple sequential combination and simplification of these examples:

  • The radar at A emits at a high PRF; when it emits at high PRF, it emits on frequency F, so it must be a fire control radar. Also, location A stores canisters for missiles, so there must be a SAM battery there. The battery at A must be linked to a C2 center. It could be controlled by the C2 centers at golf or at hotel.

The structure of this argument can be depicted as a chain of reasoning or argumentation (Figure 6.7) using the four premise structures in sequence.

Toulmin distinguished six elements of all arguments [24]:

  1. Data (D), at the beginning point of the argument, are the explicit elements of data (relevant data, or evidence) that are observed in the external world.
  2. Claim (C) is the assertion of the argument.
  3. Qualifier (Q) imposes any qualifications on the claim.
  4. Rebuttals (R) are any conditions that may refute the claim.
  5. Warrants (W) are the implicit propositions (rules, principles) that permit inference from data to claim.
  6. Backing (B) comprises the assurances that provide authority and currency to the warrants.

Applying Toulmin’s argumentation scheme requires the analyst to distinguish each of the six elements of argument and to fit them into a standard structure of reasoning—see Figure 6.8(a)—which leads from datum (D) to claim (C). The scheme separates the domain-independent structure from the warrants and backing, which are dependent upon the field in which we are working (e.g., legal cases, logical arguments, or morals).

The general structure, described in natural language, then proceeds from datum (D) to claim (C) as follows (a minimal data-structure sketch follows the statement below):

  • The datum (D), supported by the warrant (W), which is founded upon the backing (B), leads directly to the claim (C), qualified to the degree (Q), with the caveat that rebuttal (R) is present.
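A minimal data-structure sketch of Toulmin's six elements, rendered into the natural-language form above, might look like the following; the representation and the SAM-radar content are illustrative assumptions.

```python
# Minimal sketch (an assumed representation, not Toulmin's own notation): the
# six elements of an argument as a data structure, rendered into the
# natural-language form given above. The content is illustrative.

from dataclasses import dataclass

@dataclass
class ToulminArgument:
    data: str        # D: the observed evidence
    claim: str       # C: the assertion of the argument
    qualifier: str   # Q: qualification imposed on the claim
    rebuttal: str    # R: conditions that may refute the claim
    warrant: str     # W: the proposition permitting inference from D to C
    backing: str     # B: authority supporting the warrant

    def render(self):
        return (f"The datum '{self.data}', supported by the warrant '{self.warrant}' "
                f"(backed by {self.backing}), leads to the claim '{self.claim}', "
                f"qualified as '{self.qualifier}', unless {self.rebuttal}.")

arg = ToulminArgument(
    data="the radar at A emits at high PRF on frequency F",
    claim="the radar at A is a fire control radar",
    qualifier="almost certainly",
    rebuttal="the emission is a deliberate simulation (D&D)",
    warrant="radars with this PRF/frequency signature are fire control radars",
    backing="signature catalogs of known emitters",
)
print(arg.render())
```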

 

 

Such a structure requires the analyst to identify all of the key components of the argument—and explicitly report if any components are missing (e.g., if rebuttals or contradicting evidence do not exist).

A benefit of this scheme is the potential for the use of automation to aid analysts in the acquisition, examination, and evaluation of natural-language arguments. As an organizing tool, the Toulmin scheme distinguishes data (evidence) from the warrants (the universal premises of logic) and their backing (the basis for those premises).

It must be noted that formal logicians have criticized Toulmin’s scheme for its lack of logical rigor and its inability to address probabilistic arguments. Yet it has contributed greater insight and formality to developing structured natural-language argumentation.

6.6.4 Inferential Networks

Moving beyond Toulmin’s structure, we must consider the approaches to create network structures to represent complex chains of inferential reasoning.

The use of graph theory to describe complex arguments allows the analyst to represent two crucial aspects of an argument:

  • Argument structure. The directed graph represents evidence (E), events, or intermediate hypotheses inferred by the evidence (i), and the ultimate, or final, hypotheses (H) as graph nodes. The graph is directed because the lines connecting nodes include a single arrow indicating the single direction of inference. The lines move from a source element of evidence (E) through a series of inferences (i1, i2, i3, … in) toward a terminal hypothesis (H). The graph is acyclic because the directions of all arrows move from evidence, through intermediate inferences to hypothesis, but not back again: there are no closed-loop cycles.
  • Force of evidence and propagation. In common terms we refer to the force, strength, or weight of evidence to describe the relative degree of contribution of evidence to support an intermediate inference (in) or the ultimate hypothesis (H). The graph structure provides a means of describing supporting and refuting evidence, and, if evidence is quantified (e.g., probabilities, fuzzy variables, or other belief functions), a means of propagating the accumulated weight of evidence in an argument.

Like a vector, evidence includes a direction (toward certain hypotheses) and a magnitude (the inferential force). The basic categories of argument can be structured to describe four basic categories of evidence combination (illustrated in Figure 6.9):

Direct. The most basic serial chain of inference moves from evidence (E) that the event E occurred, to the inference (i1) that E did in fact occur. This inference expresses belief in the evidence (i.e., belief in the veracity and objectivity of human testimony). The chain may go on serially to further inferences because of the belief in E.

Consonance. Multiple items of evidence may be synergistic, resulting in one item enhancing the force of another; their joint contribution provides more inferential force than their individual contributions. Two items of evidence may provide collaborative consonance; the figure illustrates the case where ancillary evidence (E2) is favorable to the credibility of the source of evidence (E1), thereby increasing the force of E1. Evidence may also be convergent when E1 and E2 provide evidence of the occurrence of different events, but those events, together, favor a common subsequent inference. The enhancing contribution of (i1) to (i2) is indicated by the dashed arrow.

Redundant. Multiple items of evidence (E1, E2) that redundantly lead to a common inference (i1) can also diminish the force of each other in two basic ways. Corroborative redundancy occurs when two or more sources supply identical evidence of a common event inference (i1). If one source is perfectly credible, the redundant source does not contribute inferential force; if both have imperfect credibility, one may diminish the force of the other to avoid double counting the force of the redundant evidence. Cumulative redundancy occurs when multiple items of evidence (E1, E2), though inferring different intermediate hypotheses (i1,i2), respectively, lead to a common hypothesis (i3) farther up the reasoning chain. This redundant contribution to (i3), indicated by the dashed arrow, necessarily reduces the contribution of inferential force from E2.

Dissonance. Dissonant evidence may be contradictory when items of evidence E1 and E2 report, mutually exclusively, that the event E did occur and did not occur, respectively. Conflicting evidence, on the other hand, occurs when E1 and E2 report two separate events i1 and i2 (both of which may have occurred, but not jointly), but these events favor mutually exclusive hypotheses at i3.

The graph moves from bottom to top in the following sequence:

  1. Direct evidence at the bottom;
  2. Evidence credibility inferences are the first row above evidence, inferring the veracity, objectivity, and sensitivity of the source of evidence;
  3. Relevance inferences move from credibility-conditioned evidence through a chain of inferences toward final hypothesis;
  4. The final hypothesis is at the top.

Some may wonder why such rigor is employed for such a simple argument. This relatively simple example illustrates the level of inferential detail required to formally model even the simplest of arguments. It also illustrates the real problem faced by the analyst in dealing with the nuances of redundant and conflicting evidence. Most significantly, the example illustrates the degree of care required to accurately represent arguments to permit machine-automated reasoning about all-source analytic problems.

We can see how this simple model demands the explicit representation of often-hidden assumptions, every item of evidence, the entire sequence of inferences, and the structure of relationships that leads to our conclusion that H1 is true.

Inferential networks provide a logical structure upon which quantified calculations may be performed to compute values of inferential force of evidence and the combined contribution of all evidence toward the final hypothesis.
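A minimal sketch of such a calculation follows, using simple multiplicative link strengths rather than a formal probabilistic or belief-function calculus; the network, weights, and propagation rule are illustrative assumptions.

```python
# Minimal sketch (illustrative only): an acyclic inferential network in which
# each directed link carries an assumed strength in [0, 1], and the force of
# evidence is propagated multiplicatively along chains toward the hypothesis.
# A formal treatment would use probabilities or other belief functions.

# Directed links: (source, target, strength). E = evidence, i = inference, H = hypothesis.
links = [
    ("E1", "i1", 0.9),   # belief in the credibility of E1
    ("i1", "i2", 0.8),   # relevance inference
    ("i2", "H1", 0.7),
    ("E2", "i3", 0.6),
    ("i3", "H1", 0.5),
]

def force(node, target, links):
    """Accumulate the multiplicative force contributed by `node` toward `target`."""
    if node == target:
        return 1.0
    total = 0.0
    for src, dst, weight in links:
        if src == node:
            total += weight * force(dst, target, links)
    return total

for e in ("E1", "E2"):
    print(f"force of {e} toward H1: {force(e, 'H1', links):.2f}")
```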

6.7 Evaluating Competing Hypotheses

Heuer’s research indicated that the single most important technique to overcome cognitive shortcomings is to apply a systematic analytic process that allows objective comparison of alternative hypotheses.

“The simultaneous evaluation of multiple, competing hypotheses entails far greater cognitive strain than examining a single, most-likely hypothesis.”

Inferential networks are useful at the detail level, where evidence is rich; the alternative competing hypotheses (ACH) approach is useful at higher levels of abstraction, where evidence is sparse. Networks are valuable for automated computation; ACH is valuable for collaborative analytic reasoning, presentation, and explanation. The ACH approach provides a methodology for the concurrent competition of multiple explanations, rather than a focus on the currently most plausible.

The ACH approach described by Heuer uses a matrix to organize and describe the relationship between evidence and alternative hypotheses. The sequence of the analysis-synthesis process (Figure 6.11) includes the following steps (a minimal sketch of such a matrix follows the list):

  1. Hypothesis synthesis. A multidisciplinary team of analysts creates a set of feasible hypotheses, derived from imaginative consideration of all possibilities before constructing a complete set that merits detailed consideration.
  2. Evidence analysis. Available data is reviewed to locate relevant evidence and inferences that can be assigned to support or refute the hypotheses. Explicitly identify the assumptions regarding evidence and the arguments of inference. Following the processes described in the last chapter, list the evidence-argument pairs (or chains of inference) and identify, for each, the intrinsic value of its contribution and the potential for being subject to denial or deception (D&D).
  3. Matrix synthesis. Construct an ACH matrix that relates evidence- inference to the hypotheses defined in step 1.
  4. Matrix analysis. Assess the diagnosticity (the significance or diagnostic value of the contribution of each component of evidence and related inferences) of each evidence-inference component to each hypothesis. This process proceeds for each item of evidence-inference across the rows, considering how each item may contribute to each hypothesis. An entry may be supporting (consistent with), refuting (inconsistent with), or irrelevant (not applicable) to a hypothesis; a contribution notation (e.g., +, –, or N/A, respectively) is marked within the cell. Where possible, annotate the likelihood (or probability) that this evidence would be observed if the hypothesis is true. Note that the diagnostic significance of an item of evidence is reduced as it is consistent with multiple hypotheses; it has no diagnostic contribution when it supports, to any degree, all hypotheses.
  5. Matrix synthesis (refinement). Evidence assignments are refined, eliminating evidence and inferences that have no diagnostic value.
  6. Hypotheses analysis. The analyst now proceeds to evaluate the likelihood of each hypothesis, by evaluating entries down the columns. The likelihood of each hypothesis is estimated by the characteristics of supporting and refuting evidence (as described in the last chapter). Inconsistencies and gaps in expected evidence provide a basis for retasking; a small but high-confidence item that refutes the preponderance of expected evidence may be a significant indicator of deception. The analyst also assesses the sensitivity of the likely hypothesis to contributing assumptions, evidence, and the inferences; this sensitivity must be reported with conclusions and the consequences if any of these items are in error. This process may lead to retasking of collectors to acquire more data to support or refute hypotheses and to reduce the sensitivity of a conclusion.
  7. Decision synthesis (judgment). Reporting the analytic judgment requires the description of all of the alternatives (not just the most likely), the assumptions, evidence, and inferential chains. The report must also describe the gaps, inconsistencies, and their consequences on judgments. The analyst must also specify what should be done to provide an update on the situation and what indicators might point to significant changes in current judgments.
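A minimal sketch of an ACH matrix and a crude diagnosticity check follows; the encoding, evidence items, and consistency marks are illustrative assumptions, not Heuer's worksheet.

```python
# Minimal sketch (assumed encoding): an ACH matrix relating evidence items
# (rows) to hypotheses (columns) with consistency marks, plus a crude
# diagnosticity check -- evidence consistent with every hypothesis
# discriminates among none of them.

hypotheses = ["H1", "H2", "H31", "H32"]

# evidence id -> {hypothesis: "+" (consistent), "-" (inconsistent), "N/A"}
matrix = {
    "E1 shipments match invoices":       {"H1": "+", "H2": "+", "H31": "+", "H32": "+"},
    "E2 unexplained cash deposits":      {"H1": "-", "H2": "+", "H31": "+", "H32": "+"},
    "E3 transfers to watch-listed bank": {"H1": "-", "H2": "-", "H31": "N/A", "H32": "+"},
}

def diagnostic(row):
    """Evidence has diagnostic value only if it is not consistent with all hypotheses."""
    return any(row[h] != "+" for h in hypotheses)

for evidence, row in matrix.items():
    marks = "  ".join(f"{h}:{row[h]:>3}" for h in hypotheses)
    note = "" if diagnostic(row) else "   <- no diagnostic value"
    print(f"{evidence:<36} {marks}{note}")

# Refutation count per hypothesis (Heuer: attend to the inconsistent evidence)
for h in hypotheses:
    refuting = sum(1 for row in matrix.values() if row[h] == "-")
    print(f"{h}: {refuting} item(s) of inconsistent evidence")
```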

 

Notice that the ACH approach deliberately focuses the analyst’s attention on the contribution, significance, and relationships of evidence to hypotheses, rather than on building a case for any one hypothesis. The analytic emphasis is, first, on evidence and inference across the rows, before evaluating hypotheses, down the columns.

The stages of the structured analysis-synthesis methodology (Figure 6.12) are summarized in the following list:

  • Organize. A data mining tool (described in Chapter 8, Section 8.2.2) automatically clusters related data sets by identifying linkages (relationships) across the different data types. These linked clusters and their linkages are visualized using link-clustering tools, allowing the analyst to consider the meaningfulness of data links and discover potentially relevant relationships in the real world.
  • Conceptualize. The linked data is translated from the abstract relationship space to diagrams in the temporal and spatial domains to assess real-world implications of the relationships. These temporal and spatial models allow the analyst to conceptualize alternative explanations that will become working hypotheses. Analysis in the time domain considers the implications of sequence, frequency, and causality, while the spatial domain considers the relative location of entities and events.
  • Hypothesize. The analyst synthesizes hypotheses, structuring evidence and inferences into alternative arguments that can be evaluated using the method of alternative competing hypotheses. In the course of this process, the analyst may return to explore the database and linkage diagrams further to support or refute the working hypotheses.

 

6.8 Countering Denial and Deception

Because the targets of intelligence are usually high-value subjects (e.g., intentions, plans, personnel, weapons or products, facilities, or processes), they are generally protected by some level of secrecy to prevent observation. The means of providing this secrecy generally includes two components:

  1. Denial. Information about the existence, characteristics, or state of a target is denied to the observer by methods of concealment. Camouflage of military vehicles, emission control (EMCON), operational security (OPSEC), and encryption of e-mail messages are common examples of denial, also referred to as dissimulation (hiding the real).
  2. Deception. Deception is the insertion of false information, or simulation (showing the false), with the intent to distort the perception of the observer. The deception can include misdirection (m-type) deception, which reduces ambiguity and directs the observer to a simulation—away from the truth—or ambiguity (a-type) deception, which simulates effects to increase the observer’s ambiguity, or uncertainty, about the truth.

D&D methods are used independently or in concert to distract or disrupt the intelligence analyst, introducing distortions in the collection channels, ambiguity in the analytic process, errors in the resulting intelligence product, and misjudgment in decisions based on the product. Ultimately, this will lead to distrust of the intelligence product by the decision maker or consumer. Strategic D&D poses an increasing threat to the analyst, as an increasing number of channels for D&D are available to deceivers. Six distinct categories of strategic D&D operations have different target audiences, means of implementation, and objectives.

Propaganda or psychological operations (PSYOP) target a general population using several approaches. White propaganda openly acknowledges the source of the information; gray propaganda uses undeclared sources. Black propaganda purports to originate from a source other than its actual sponsor, protecting the true source (e.g., clandestine radio and Internet broadcasts, independent organizations, or agents of influence). Coordinated white, gray, and black propaganda efforts were strategically conducted by the Soviet Union throughout the Cold War as active measures of disinformation.

Leadership deception targets leadership or intelligence consumers, attempting to bypass the intelligence process by appealing directly to the intelligence consumer via other channels. Commercial news channels, untrustworthy diplomatic channels, suborned media, and personal relationships can be exploited to deliver deception messages to leadership (before intelligence can offer D&D cautions) in an effort to establish mindsets in decision makers.

Intelligence deception specifically targets intelligence collectors (technical sensors, communications interceptors, and humans) and subsequently analysts by combining denial of the target data and by introducing false data to disrupt, distract, or deceive the collection or analysis processes (or both processes). The objective is to direct the attention of the sensor or the analyst away from a correct knowledge of a specific target.

Denial operations by means of OPSEC seek to deny access to true intentions and capabilities by minimizing the signatures of entities and activities.

Two primary categories of countermeasures for intelligence deception must be orchestrated to counter either the simple deception of a parlor magician or the complex intelligence deception program of a rogue nation-state. Both collection and analysis measures are required to provide the careful observation and critical thinking necessary to avoid deception. Improvements in collection can provide broader and more accurate coverage, even limited penetration of some covers.

The problem of mitigating intelligence surprise, therefore, must be addressed by considering both large numbers of models or hypotheses (analysis) and large sets of data (collection, storage, and analysis).

In his classic treatise, Stratagem, Barton Whaley exhaustively studied over 100 historical D&D efforts and concluded, “Indeed, this is the general finding of my study—that is, the deceiver is almost always successful regardless of the sophistication of his victim in the same art. On the face of it, this seems an intolerable conclusion, one offending common sense. Yet it is the irrefutable conclusion of historical evidence.”

 

The components of a rigorous counter D&D methodology, then, include the estimate of the adversary’s D&D plan as an intelligence subject (target) and the analysis of specific D&D hypotheses as alternatives. Incorporating this process within the ACH process described earlier amounts to assuring that reasonable and feasible D&D hypotheses (for which there may be no evidence to induce a hypothesis) are explicitly considered as alternatives.

The methodology includes two active searches for evidence to support, refute, or refine the D&D hypotheses [44]:

  1. Reconstructive inference. This deductive process seeks to detect the presence of spurious signals (Harris calls these sprignals) that are indicators of D&D—the faint evidence predicted by conjectured D&D plans. Such sprignals can be strong evidence confirming hypothesis A (the simulation), weak contradictory evidence of hypothesis C (leakage from the adversary’s dissimulation effort), or missing evidence that should be present if hypothesis A were true.
  2. Incongruity testing. This process searches for inconsistencies in the data and inductively generates alternative explanations that attribute the incongruities to D&D (i.e., D&D explains the incongruity of evidence for more than one reality in simultaneous existence).

These processes should be a part of any rigorous alternative hypothesis process, developing evidence for potential D&D hypotheses while refining the estimate of the adversaries’ D&D intents, plans, and capabilities. The processes also focus attention on special collection tasking to support, refute, or refine current D&D hypotheses being entertained.

Summary

Central to the intelligence cycle, analysis-synthesis requires the integration of human skills and automation to provide description, explanation, and prediction with explicit and quantified judgments that include alternatives, missing evidence, and dissenting views carefully explained. The challenge of discovering the hidden, forecasting the future, and warning of the unexpected cannot be performed with infallibility, yet expectations remain high for the analytic community.

The practical implementation of collaborative analysis-synthesis requires a range of tools to coordinate the process within the larger intelligence cycle, augment the analytic team with reasoning and sensemaking support, overcome human cognitive shortcomings, and counter adversarial D&D.

 

7

Knowledge Internalization and Externalization

The process of conducting knowledge transactions between humans and computing machines occurs at the intersection between tacit and explicit knowledge, between human reasoning and sensemaking, and the explicit computation of automation. The processes of externalization (tacit-to-explicit transactions) and internalization (explicit-to-tacit transactions) of knowledge, however, are not just interfaces between humans and machines; more properly, the intersection is between human thought, symbolic representations of thought, and the observed world.

7.1 Externalization and Internalization in the Intelligence Workflow

The knowledge-creating spiral described in Chapter 3 introduced the four phases of knowledge creation.

Externalization

Following social interactions with collaborating analysts, an analyst begins to explicitly frame the problem. The process includes the decomposition of the intelligence problem into component parts (as described in Section 2.2) and explicit articulation of the essential elements of information required to solve the problem. The tacit-to-explicit transfer includes the explicit listing of these essential elements of information needed, candidate sources of data, the creation of searches for relevant subject matter experts (SMEs), and the initiation of queries for relevant knowledge within current holdings and collected all-source data. The primary tools to interact with all-source holdings are query and retrieval tools that search and retrieve information for assessment of relevance by the analyst.

Combination

This explicit-explicit transfer process correlates and combines the collected data in two ways:

  1. Interactive analytic tools. The analyst uses a wide variety of analytic tools to compare and combine data elements to identify relationships and marshal evidence against hypotheses.
  2. Automated data fusion and mining services. Automated data combination services also process high-volume data to bring detections of known patterns and discoveries of “interesting” patterns to the attention of the analyst.

Internalization

The analyst integrates the results of combination in two domains: externally, hypotheses (explicit models and simulations) and decision models (like the alternative competing hypothesis decision model introduced in the last chapter) are formed to explicitly structure the rationale between hypotheses; internally, the analyst develops tacit experience with the structured evidence, hypotheses, and decision alternatives.

Services in the data tier capture incoming data from processing pipelines (e.g., imagery and signals producers), reporting sources (news services, intelligence reporting sources), and open Internet sources being monitored. Content appropriate for immediate processing and production, such as news alerts, indications and warning events, and critical change data, is routed to the operational storage for immediate processing. All data are indexed, transformed, and loaded into the long-term data warehouse or into specialized data stores (e.g., imagery, video, or technical databases). The intelligence services tier includes six basic service categories:

  1. Operational processing. Information filtered for near-real-time criticality is processed to extract and tag content, correlate and combine it with related content, and provide updates to operational watch officers. This path applies the automated processes of data fusion and data mining to provide near-real-time indicators, tracks, metrics, and situation summaries.
  2. Indexing, query, and retrieval. Analysts use these services to access the cumulating holdings by both automated subscriptions for topics of interest to be pushed to the user upon receipt and interactive query and retrieval of holdings.
  3. Cognitive (analytic) services. The analysis-synthesis and decision- making processes described in Chapters 5 and 6 are supported by cognitive services (thinking-support tools).
  4. Collaboration services. These services, described in Chapter 4, allow synchronous and asynchronous collaboration between analytic team members.
  5. Digital production services. Analyst-generated and automatically created dynamic products are produced and distributed to consumers based on their specified preferences.
  6. Workflow management. The workflow is managed across all tiers to monitor the flow from data to product, to monitor resource utilization, to assess satisfaction of current priority intelligence requirements, and to manage collaborating workgroups.

7.2 Storage, Query, and Retrieval Services

At the center of the enterprise is the knowledge base, which stores explicit knowledge and provides the means to access that knowledge to create new knowledge.

7.2.1 Data Storage

Intelligence organizations receive a continuous stream of data from their own tasked technical sensors and human sources, as well as from tasked collections of data from open sources. One example might be Web spiders that are tasked to monitor Internet sites for new content (e.g., foreign news services), then to collect, analyze, and index the data for storage. The storage issues posed by the continual collection of high-volume data are numerous:

Diversity. All-source intelligence systems require large numbers of independent data stores for imagery, text, video, geospatial, and special technical data types. These data types are served by an equally high number of specialized applications (e.g., image and geospatial analysis and signal analysis).

Legacy. Storage system designers are confronted with the integration of existing (legacy) and new storage systems; this requires the integration of diverse logical and physical data types.

Federated retrieval and analysis. The analyst needs retrieval, application, and analysis capabilities that span across the entire storage system.

7.2.2 Information Retrieval

Information retrieval (IR) is formally defined as “… [the] actions, methods and procedures for recovering stored data to provide information on a given subject” [2]. Two approaches to query and retrieve stored data or text are required in most intelligence applications:

  1. Data query and retrieval is performed on structured data stored in relational database applications. Imagery, signals, and MASINT data are generally structured and stored in structured formats that employ structured query language (SQL) and SQL extensions for a wide variety of databases (e.g., Access, IBM DB2 and Informix, Microsoft SQL Server, Oracle, and Sybase). SQL allows the user to retrieve data by context (e.g., by location in data tables, such as date of occurrence) or by content (e.g., retrieve all records with a defined set of values).
  2. Text query and retrieval is performed on both structured and unstructured text in multiple languages by a variety of natural language search engines to locate text containing specific words, phrases, or general concepts within a specified context.

Data query methods are employed within the technical data processing pipelines (IMINT, SIGINT, and MASINT). The results of these analyses are then described by analysts in structured or unstructured text in an analytic database for subsequent retrieval by text query methods.
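To make the context/content distinction concrete, the sketch below uses Python's built-in sqlite3 module against a hypothetical table of signal reports; the table name, columns, and values are illustrative only, not a fielded intelligence schema.

```python
# Minimal sketch of data query and retrieval on structured holdings, using
# Python's built-in sqlite3. The table and its contents are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sigint_reports ("
    "  report_id TEXT, collected_on TEXT, emitter_type TEXT, frequency_mhz REAL)"
)
conn.executemany(
    "INSERT INTO sigint_reports VALUES (?, ?, ?, ?)",
    [
        ("R-001", "2002-03-01", "early-warning radar", 1215.0),
        ("R-002", "2002-03-02", "fire-control radar", 9400.0),
        ("R-003", "2002-03-02", "early-warning radar", 1260.0),
    ],
)

# Retrieval by context: select records by their place in the data tables,
# here the date of occurrence.
by_context = conn.execute(
    "SELECT report_id, emitter_type FROM sigint_reports WHERE collected_on = ?",
    ("2002-03-02",),
).fetchall()

# Retrieval by content: select all records whose values match a defined set.
by_content = conn.execute(
    "SELECT report_id, collected_on FROM sigint_reports "
    "WHERE emitter_type = ? AND frequency_mhz BETWEEN ? AND ?",
    ("early-warning radar", 1200.0, 1300.0),
).fetchall()

print(by_context)  # [('R-002', 'fire-control radar'), ('R-003', 'early-warning radar')]
print(by_content)  # [('R-001', '2002-03-01'), ('R-003', '2002-03-02')]
conn.close()
```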

Moldovan and Harabagiu have defined a five-level taxonomy of Q&A systems (Table 7.1) that range from the common keyword search engine that searches for relevant content (class 1) to reasoning systems that solve complex natural language problems (class 5) [3]. Each level requires increasing scope of knowledge, depth of linguistic understanding, and sophistication of reasoning to translate relevant knowledge to an answer or solution.

 

The first two levels of current search capabilities locate and return relevant content based on keywords (content) or the relationships between clusters of words in the text (concept).

While class 1 capabilities only match and return content that matches the query, class 2 capabilities integrate the relevant data into a simple response to the question.

Class 3 capabilities require the retrieval of relevant knowledge and reasoning about that knowledge to deduce answers to queries, even when the specific answer is not explicitly stated in the knowledge base. This capability requires the ability to both reason from general knowledge to specific answers and provide rationale for those answers to the user.

Class 4 and 5 capabilities represent advanced capabilities, which require robust knowledge bases that contain sophisticated knowledge representation (assertions and axioms) and reasoning (mathematical calculation, logical inference, and temporal reasoning).

7.3 Cognitive (Analytic Tool) Services

Cognitive services support the analyst in the process of interactively analyzing data, synthesizing hypotheses, and making decisions (choosing among alternatives). These interactive services support the analysis-synthesis activities described in Chapters 5 and 6. Alternatively called thinking tools, analytics, knowledge discovery, or analytic tools, these services enable the human to transform and view data, create and model hypotheses, and compare alternative hypotheses and consequences of decisions.

  • Exploration tools allow the analyst to interact with raw or processed multimedia (text, numerical data, imagery, video, or audio) to locate and organize content relevant to an intelligence problem. These tools provide the ability to search and navigate large volumes of source data; they also provide automated taxonomies of clustered data and summaries of individual documents. The information retrieval functions described in the last subsection are within this category. The product of exploration is generally a relevant set of data/text organized and metadata-tagged for subsequent analysis. The analyst may drill down to detail from the lists and summaries to view the full content of all items identified as relevant.
  • Reasoning tools support the analyst in the process of correlating, comparing, and combining data across all of the relevant sources. These tools support a wide variety of specific intelligence target analyses:
  • Temporal analysis. This is the creation of timelines of events, dynamic relationships, event sequences, and temporal transactions (e.g., electronic, financial, or communication).
  • Link analysis. This involves automated exploration of relationships among large numbers of different types of objects (entities and events).
  • Spatial analysis. This is the registration and layering of 3D data sets and creation of 3D static and dynamic models from all-source evidence. These capabilities are often met by commercial geospatial information system and computer-aided design (CAD) software.
  • Functional analysis. This is the analysis of processes and expected observables (e.g., manufacturing, business, and military operations, social networks and organizational analysis, and traffic analysis).

These tools aid the analyst in five key analytic tasks:

  1. Correlation: detection and structuring of relationships or linkages between different entities or events in time, space, function, or interaction; association of different reports or content related to a common entity or event;
  2. Combination: logical, functional, or mathematical joining of related evidence to synthesize a structured argument, process, or quantitative estimate;
  3. Anomaly detection: detection of differences between expected (or modeled) and observed characteristics of a target;
  4. Change detection: detection of changes in a target over time—the changes may include spectral, spatial, or other phenomenological changes;
  5. Construction: synthesis of a model or simulation of entities or events and their interactions based upon evidence and conjecture.

Sensemaking tools support the exploration, evaluation, and refinement of alternative hypotheses and explanations of the data. Argumentation structuring, modeling, and simulation tools in this category allow analysts to be immersed in their hypotheses and share explicit representations with other collaborators. This immersion process allows the analytic team to create shared meaning as they experience the alternative explanations.

Decision support (judgment) tools assist analytic decision making by explicitly estimating and comparing the consequences and relative merits of alternative decisions.

These tools include models and simulations that permit the analyst to create and evaluate alternative COAs and weigh the decision alternatives against objective decision criteria. Decision support systems (DSSs) apply the principles of probability to express uncertainty and decision theory to create and assess attributes of decision alternatives and quantify the relative utility of alternatives. Normative, or decision-analytic DSSs, aid the analyst in structuring the decision problem and in computing the many factors that lead from alternatives to quantifiable attributes and resulting utilities. These tools often relate attributes to utility by influence diagrams and compute utilities (and associated uncertainties) using Bayes networks.
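As a minimal illustration of this kind of normative decision-analytic calculation (not a specific DSS from the text), the sketch below weights hypothetical courses of action by the probability of each possible state of the world and ranks them by expected utility; all names and numbers are invented.

```python
# Minimal sketch of a decision-support calculation: each course of action
# (COA) is scored by its probability-weighted utility across possible states.
# The states, probabilities, COAs, and utilities are hypothetical.
states = {"cartel_centralized": 0.5, "cartel_federated": 0.3, "cartel_loose_network": 0.2}

# utility[coa][state] on an arbitrary 0-100 scale
utility = {
    "interdict_finances": {"cartel_centralized": 80, "cartel_federated": 60, "cartel_loose_network": 30},
    "target_leadership":  {"cartel_centralized": 90, "cartel_federated": 40, "cartel_loose_network": 20},
    "monitor_only":       {"cartel_centralized": 30, "cartel_federated": 35, "cartel_loose_network": 50},
}

def expected_utility(coa: str) -> float:
    """Sum the utility of a COA over states, weighted by state probability."""
    return sum(p * utility[coa][s] for s, p in states.items())

for coa in sorted(utility, key=expected_utility, reverse=True):
    print(f"{coa}: {expected_utility(coa):.1f}")
```

An influence-diagram or Bayes-network implementation would generalize this by propagating uncertainty through the attribute-to-utility relations rather than fixing the state probabilities by hand.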

The tools progressively move from data as the object of analysis (for exploration) to clusters of related information, to hypotheses, and finally on to decisions, or analytic judgments.

Intelligence workflow management software can provide a means to organize the process by providing the following functions (a minimal sketch of the requirements-tracking function follows this list):

  • Requirements and progress tracking: maintains list of current intelligence requirements, monitors tasking to meet the requirements, links evidence and hypotheses to those requirements, tracks progress toward meeting requirements, and audits results;
  • Relevant data linking: maintains ontology of subjects relevant to the intelligence requirements and their relationships and maintains a database of all relevant data (evidence);
  • Collaboration directory: automatically locates and updates a directory of relevant subject matter experts as the problem topic develops.
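A minimal sketch of the requirements-and-progress-tracking function, assuming a simple in-memory data structure; the field names and status rules are hypothetical, not a published workflow schema.

```python
# Minimal sketch of requirements and progress tracking: requirements are
# linked to tasking, evidence, and hypotheses, and progress is summarized
# per requirement. All identifiers and rules are illustrative.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    statement: str
    tasking: list[str] = field(default_factory=list)     # collection tasks issued
    evidence: list[str] = field(default_factory=list)    # linked evidence items
    hypotheses: list[str] = field(default_factory=list)  # linked hypotheses
    satisfied: bool = False

    def progress(self) -> str:
        if self.satisfied:
            return "satisfied"
        if self.evidence and self.hypotheses:
            return "analysis in progress"
        if self.tasking:
            return "awaiting collection"
        return "not started"

reqs = [
    Requirement("PIR-1", "Describe the cartel organizational structure"),
    Requirement("PIR-2", "Identify the cartel's financial institutions"),
]
reqs[0].tasking.append("Request field office reporting on leadership meetings")
reqs[0].evidence.append("COMINT intercept 2002-044")
reqs[0].hypotheses.append("Centralized-hierarchy model")

for r in reqs:
    print(f"{r.req_id}: {r.progress()}")
# PIR-1: analysis in progress
# PIR-2: not started
```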

In this example, an intelligence consumer has requested specific intelligence on a drug cartel named “Zehga” to support counter-drug activities in a foreign country. The sequence of one analyst’s use of tools in the example includes:

  1. The process begins with synchronous collaboration with other analysts to discuss the intelligence target (Zehga) and the intelligence requirements to understand the cartel organization structure, operations, and finances. The analyst creates a peer-to-peer collaborative workspace that contains requirements, essential elements of information (EEIs) needed, current intelligence, and a directory of team members before inviting additional counter-drug subject matter experts to the shared space.
  2. The analyst opens a workflow management tool to record requirements, key concepts and keywords, and team members; the analyst will link results to the tool to track progress in delivering finished intelligence. The tool is also used to request special tasking from technical collectors (e.g., wiretaps) and field offices.
  3. Once the problem has been externalized in terms of requirements and EEIs needed, the sources and databases to be searched are selected (e.g., country cables, COMINT, and foreign news feeds and archives). Key concepts and keywords are entered into IR tools; these tools search current holdings and external sources, retrieving relevant multimedia content. The analyst also sets up monitor parameters to continually check certain sources (e.g., field office cables and foreign news sites) for changes or detections of relevant topics; when detected, the analyst will be alerted to the availability of new information.
  4. The IR tools also create a taxonomy of the collected data sets, structuring the catch into five major categories: Zehga organization (personnel), events, finances, locations, and activities. The taxonomy breaks each category into subcategories of clusters of related content. Documents located in open-source foreign news reports are translated into English, and all documents are summarized into 55-word abstracts.
  5. The analyst views the taxonomy and drills down to summaries, then views the full content of the items most critical to the investigation. Selected items (or hyperlinks) are saved to the shared knowledge base to form a local repository relevant to the investigation.
  6. The retrieved catch is analyzed with text mining tools that discover and list the multidimensional associations (linkages or relationships) between entities (people, phone numbers, bank account numbers, and addresses) and events (meetings, deliveries, and crimes).
  7. The linked lists are displayed on a link-analysis tool to allow the analyst to manipulate and view the complex web of relationships between people, communications, finances, and the time sequence of activities (a minimal sketch of this kind of link analysis follows this list). From these network visuals, the analyst begins discovering the Zehga organizational structure, relationships to other drug cartels and financial institutions, and the timeline of the explosive growth of the cartel’s influence.
  8. The analyst internalizes these discoveries by synthesizing a Zehga organization structure and associated financial model, filling in the gaps with conjectures that result in three competing hypotheses: a centralized model, a federated model, and a loose network model. These models are created using a standard financial spreadsheet and a network relationship visualization tool. The process of creating these hypotheses causes the analyst to frequently return to the knowledge base to review retrieved data, to issue refined queries to fill in the gaps, and to further review the results of link analyses. The model synthesis process causes the analyst to internalize impressions of confidence, uncertainty, and ambiguity in the evidence, and the implications of potential missing or negative evidence. Here, the analyst ponders the potential for denial and deception tactics and the expected subtle “sprignals” that might appear in the data.
  9. An ACH matrix is created to compare the accrued evidence and argumentation structures supporting each of the competing models. At any time, this matrix and the associated organizational-financial models summarize the status of the intelligence process; these may be posted on the collaboration space and used to identify progress on the workflow management tool.
  10. The analyst further internalizes the situation by applying a decision support tool to consider the consequences or implications of each model on counter-drug policy courses of action relative to the Zehga cartel.
  11. Once the analyst has reached a level of confidence to make objective analytic judgments about hypotheses, results can be digitally published to the requesting consumers and to the collaborative workgroup to begin socialization—and another cycle to further refine the results. (The next section describes the digital publication process.)
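The link-analysis step above can be sketched minimally as a graph search over mined entity relationships; the entities, links, and the breadth-first search shown here are illustrative stand-ins for what a commercial link-analysis tool would do interactively.

```python
# Minimal sketch of link analysis over entities extracted by text mining:
# relationships are held as an adjacency structure and queried for indirect
# connections between entities. All names and links are fictitious.
from collections import defaultdict, deque

links = [
    ("Person:A", "Phone:555-0100"),
    ("Person:B", "Phone:555-0100"),
    ("Person:B", "Account:XK-204"),
    ("Account:XK-204", "Bank:FirstCoastal"),
]

graph = defaultdict(set)
for a, b in links:
    graph[a].add(b)
    graph[b].add(a)

def path(start: str, goal: str):
    """Breadth-first search for the shortest chain of relationships."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        chain = queue.popleft()
        if chain[-1] == goal:
            return chain
        for nxt in graph[chain[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(chain + [nxt])
    return None

print(path("Person:A", "Bank:FirstCoastal"))
# ['Person:A', 'Phone:555-0100', 'Person:B', 'Account:XK-204', 'Bank:FirstCoastal']
```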

 

Commercial tool suites such as Wincite’s eWincite, Wisdom Builder’s Wisdombuilder, and Cipher’s Knowledge.Works similarly integrate text-based tools to support competitive intelligence analysis.

Tacit capture and collaborative filtering monitor the activities of all users on the network and use statistical clustering methods to identify emergent clusters of interest that indicate communities of common practice. Such filtering could, for example, alert analysts working this problem to other analysts who are converging on a common suspect from other directions (e.g., money laundering and drug trafficking).
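A minimal sketch of such collaborative filtering, assuming each analyst's recent query topics can be reduced to a term-frequency vector; the analysts, topics, and similarity threshold are hypothetical.

```python
# Minimal sketch of collaborative filtering over analysts' activity: each
# analyst's query topics become a term vector, and cosine similarity flags
# analysts converging on a common subject. Names and topics are illustrative.
from collections import Counter
from math import sqrt

activity = {
    "analyst_1": ["Zehga", "money laundering", "shell companies", "Zehga"],
    "analyst_2": ["drug trafficking", "Zehga", "shipping routes"],
    "analyst_3": ["election monitoring", "press freedom"],
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vectors = {name: Counter(topics) for name, topics in activity.items()}
names = list(vectors)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        sim = cosine(vectors[x], vectors[y])
        if sim > 0.3:  # illustrative threshold for a shared community of interest
            print(f"{x} and {y} appear to share interests (similarity {sim:.2f})")
```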

7.4 Intelligence Production, Dissemination, and Portals

The externalization-to-internalization workflow results in the production of digital intelligence content suitable for socialization (collaboration) across users and consumers. This production and dissemination of intelligence from KM enterprises has transitioned from static, hardcopy reports to dynamically linked digital softcopy products presented on portals.

Digital production processes employ content technologies that index, structure, and integrate fragmented components of content into deliverable products. In the intelligence context, content includes:

  1. Structured numerical data (imagery, relational database queries) and text [e.g., extensible markup language (XML)-formatted documents] as well as unstructured information (e.g., audio, video, text, and HTML content from external sources);
  2. Internally or externally created information;
  3. Formally created information (e.g., cables, reports, and imagery or signals analyses) as well as informal or ad hoc information (e.g., e-mail, and collaboration exchanges);
  4. Static or active (e.g., dynamic video or even interactive applets) content.

The key to dynamic assembly is the creation and translation of all content to a form that is understood by the KM system. While most intelligence data is transactional and structured (e.g., imagery, signals, and MASINT), intelligence and open-source documents are unstructured. While the volume of open-source content available on the Internet and closed-source intelligence content grows exponentially, the content remains largely unstructured.

Content technology provides the capability to transform all sources to a common structure for dynamic integration and personalized publication. XML offers a method of embedding content descriptions by tagging each component with descriptive information that allows automated assembly and distribution of multimedia content.

Intelligence standards being developed include an intelligence information markup language (ICML) specification for intelligence reporting and metadata standards for security, specifying digital signatures (XML-DSig), security/encryption (XML-Sec), key management (XML-KMS), and information security marking (XML-ISM) [12]. Such tagging makes the content interoperable; it can be reused and automatically integrated in numerous ways:

  • Numerical data may be correlated and combined.
  • Text may be assembled into a complete report (e.g., target abstract, target part 1, target part 2, …, related targets, most recent photo, threat summary, assessment).
  • Various formats may be constructed from a single collection of contents to suit unique consumer needs (e.g., portal target summary format, personal digital assistant format, or pilot’s cockpit target folder format).

A document object model (DOM) tree can be created from the integrated result to transform the result into a variety of formats (e.g., HTML or PDF) for digital publication.
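A minimal sketch of this tagging-and-transformation idea, using Python's standard xml.etree module; the element names are invented for illustration and do not follow ICML or any published intelligence markup standard.

```python
# Minimal sketch of XML content tagging and transformation: a product
# fragment is tagged with descriptive metadata, then walked as a tree and
# rendered to simple HTML for publication. Element names are illustrative.
import xml.etree.ElementTree as ET

product_xml = """
<product id="tgt-0042" classification="UNCLASSIFIED">
  <abstract>Target abstract text.</abstract>
  <section name="threat-summary">Threat summary text.</section>
  <section name="assessment">Assessment text.</section>
</product>
"""

root = ET.fromstring(product_xml)

def to_html(node: ET.Element) -> str:
    """Render the tagged product as a simple HTML fragment."""
    parts = [f"<h1>Product {node.get('id')} ({node.get('classification')})</h1>"]
    parts.append(f"<p><em>{node.findtext('abstract')}</em></p>")
    for section in node.findall("section"):
        parts.append(f"<h2>{section.get('name')}</h2><p>{section.text}</p>")
    return "\n".join(parts)

print(to_html(root))
```

The same tagged content could be walked by a different renderer to produce, say, a condensed summary for a handheld device, which is the point of separating content from presentation.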

The analysis and single-source publishing architecture adopted by the U.S. Navy Command 21 K-Web (Figure 7.7) illustrates a highly automated digital production process for intelligence and command applications [14]. The production workflow in the figure includes the processing, analysis, and dissemination steps of the intelligence cycle:

  1. Content collection and creation (processing and analysis). Both quantitative technical data and unstructured text are received, and content is extracted and tagged for subsequent processing. This process is applied to legacy data (e.g., IMINT and SIGINT reports), structured intelligence message traffic, and unstructured sources (e.g., news reports and intelligence e-mail). Domain experts may support the process by creating metadata in a predefined XML metadata format to append to audio, video, or other nontext sources. Metadata includes source, pedigree, time of collection, and format information. New content created by analysts is entered in standard XML DTD templates.
  2. Content applications. XML-tagged content is entered in the data mart, where data applications recognize, correlate, consolidate, and summarize content across the incoming components. A correlation agent may, for example, correlate all content relative to a new event or entity and pass the content on to a consolidation agent to index the components for subsequent integration into an event or target report. The data (and text) fusion and mining functions described in the next chapter are performed here.
  3. Content management-product creation (production). Product templates dictate the aggregation of content into standard intelligence products: warnings, current intelligence, situation updates, and target status. These composite XML-tagged products are returned to the data mart.
  4. Content publication and distribution. Intelligence products are personalized in terms of both style (presentation formats) and distribution (to users with an interest in the product). Users may explicitly define their areas of interests, or the automated system may monitor user activities (through queries, collaborative discussion topics, or folder names maintained) to implicitly estimate areas of interest to create a user’s personal profile. Presentation agents choose from the style library and user profiles to create distribution lists for content to be delivered via e-mail, pushed to users’ custom portals, or stored in the data mart for subsequent retrieval. The process of content syndication applies an information and content exchange (ICE) standard to allow a single product to be delivered in multiple styles and to provide automatic content update across all users.

The user’s single entry point is a personalized portal (or Web portal) that provides an organized entry into the information available on the intelligence enterprise.
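A minimal sketch of the personalization and distribution step described in item 4 above: product topic tags are matched against hypothetical user interest profiles to build a distribution list and select a presentation style. Profiles, tags, and styles are illustrative only.

```python
# Minimal sketch of profile-based product distribution: users whose declared
# interests overlap the product's topic tags receive the product via their
# preferred channel, rendered in a type-specific style. All values are fictitious.
product = {"id": "warn-117", "topics": {"Zehga", "finances"}, "type": "warning"}

profiles = {
    "watch_officer": {"interests": {"Zehga", "force protection"}, "delivery": "portal"},
    "policy_desk":   {"interests": {"finances", "sanctions"},     "delivery": "email"},
    "archivist":     {"interests": {"press freedom"},             "delivery": "data mart"},
}

styles = {"warning": "alert-banner", "current": "summary-card"}

distribution = [
    (user, profile["delivery"], styles[product["type"]])
    for user, profile in profiles.items()
    if profile["interests"] & product["topics"]  # any overlap of interests and topics
]

for user, channel, style in distribution:
    print(f"deliver {product['id']} to {user} via {channel} using style '{style}'")
```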

7.5 Human-Machine Information Transactions and Interfaces

In all of the services and tools described in the previous sections, the intelligence analyst interacts with explicitly collected data, applying his or her own tacit knowledge about the domain of interest to create estimates, descriptions, explanations, and predictions based on collected data. This interaction between the analyst and KM systems requires efficient interfaces to conduct the transaction between the analyst and machine.

7.5.1 Information Visualization

Edward Tufte introduced his widely read text Envisioning Information with the prescient observation that, “Even though we navigate daily through a perceptual world of three dimensions and reason occasionally about higher dimensional arena with mathematical ease, the world portrayed on our information displays is caught up in the two-dimensionality of the flatlands of paper and video screen”. Indeed, intelligence organizations are continually seeking technologies that will allow analysts to escape from this flatland.

The essence of visualization is to provide multidimensional information to the analyst in a form that allows immediate understanding by this visual form of thinking.

A wide range of visualization methods are employed in analysis (Table 7.6) to allow the user to:

  • Perceive patterns and rapidly grasp the essence of large complex (multi-dimensional) information spaces, then navigate or rapidly browse through the space to explore its structure and contents;
  • Manipulate the information and visual dimensions to identify clusters of associated data, patterns of linkages and relationships, trends (temporal behavior), and outlying data;
  • Combine the information by registering, mathematically or logically joining (fusing), or overlaying.

 

7.5.2 Analyst-Agent Interaction

Intelligent software agents tailored to support knowledge workers are being developed to provide autonomous automated support in the information retrieval and exploration tasks introduced throughout this chapter. These collaborative information agents, operating in multiagent networks, provide the potential to amplify the analyst’s exploration of large bodies of data, as they search, organize, structure, and reason about findings before reporting results. Information agents are being developed to perform a wide variety of functions, as an autonomous collaborating community under the direction of a human analyst, including:

  • Personal information agents (PIMs) coordinate an analyst’s searches and organize bookmarks to relevant information; like a team of librarians, the PIMs collect, filter, and recommend relevant materials for the analyst.
  • Brokering agents mediate the flow of information between users and sources (databases, external sources, collection processors); they can also act as sentinels to monitor sources and alert users to changes or the availability of new information.
  • Planning agents accept requirements and create plans to coordinate agents and task resources to meet user goals.

Agents also offer the promise of a means of interaction with the analyst that emulates face-to-face conversation and will ultimately allow information agents to collaborate as (near) peers with individuals and teams of human analysts. These interactive agents (or avatars) will track the analyst (or analytic team) activities and needs to conduct dialogue with the analysts—in terms of the semantic concepts familiar to the topic of interest—to contribute the following kinds of functions:

  • Agent conversationalists that carry on dialogue to provide high-bandwidth interactions that include multimodal input from the analyst (e.g., spoken natural language, keyboard entries, and gestures and gaze) and multimodal replies (e.g., text, speech, and graphics). Such conversationalists will increase “discussions” about concepts, relevant data, and possible hypotheses [23].
  • Agent observers that monitor analyst activity, attention, intention, and task progress to converse about suggested alternatives, potentials for denial and deception, or warnings that the analyst’s actions imply that cognitive shortcomings (discussed in Chapter 6) may be influencing the analysis process.
  • Agent contributors that will enter into collaborative discussions to interject alternatives, suggestions, or relevant data.

The integration of collaborating information agents and information visualization technologies holds the promise of more efficient means of helping analysts find and focus on relevant information, but these technologies require greater maturity to manage uncertainty, dynamically adapt to the changing analytic context, and understand the analyst’s intentions.

7.6 Summary

The analytic workflow requires a constant interaction between the cognitive and visual-perceptive processes in the analyst’s mind and the explicit representations of knowledge in the intelligence enterprise.

 

8

Explicit Knowledge Capture and Combination

In the last chapter, we introduced analytic tools that allow the intelligence analyst to interactively correlate, compare, and combine numerical data and text to discover clusters and relationships among events and entities within large databases. These interactive combination tools are considered to be goal-driven processes: the analyst is driven by a goal to seek solutions within the database, and the reasoning process is interactive, with the analyst and machine in a common reasoning loop. This chapter focuses on the largely automated combination processes that tend to be data driven: as data continuously arrives from intelligence sources, the incoming data drives a largely automated process that continually detects, identifies, and tracks emerging events of interest to the user. These parallel goal-driven and data-driven processes were depicted as complementary combination processes in the last chapter.

In all cases, the combination processes help sources to cross-cue each other, locate and identify target events and entities, detect anomalies and changes, and track dynamic targets.

8.1 Explicit Capture, Representation, and Automated Reasoning

The term combination introduced by Nonaka and Takeuchi in the knowledge-creation spiral is an abstraction to describe the many functions that are performed to create knowledge, such as correlation, association, reasoning, inference, and decision (judgment). This process requires the explicit representation of knowledge; in the intelligence application this includes knowledge about the world (e.g., incoming source information), knowledge of the intelligence domain (e.g., characteristics of specific weapons of mass destruction and their production and deployment processes), and the more general procedural knowledge about reasoning.

 

The DARPA Rapid Knowledge Formation (RKF) project and its predecessor, the High-Performance Knowledge Base project, represent ambitious research aimed at providing a robust explicit knowledge capture, representation, and combination (reasoning) capability targeted toward the intelligence analysis application [1]. The projects focused on developing the tools to create and manage shared, reusable knowledge bases on specific intelligence domains (e.g., biological weapons subjects); the goal is to enable creation of over one million axioms of knowledge per year by collaborating teams of domain experts. Such a knowledge base requires a computational ontology—an explicit specification that defines a shared conceptualization of reality that can be used across all processes.

The challenge is to encode knowledge through the instantiation and assembly of generic knowledge components that can be readily entered and understood by domain experts (appropriate semantics) and provide sufficient coverage to encompass an expert level of understanding of the domain. The knowledge base must have fundamental knowledge of entities (things that are), events (things that happen), states (descriptions of stable event characteristics), and roles (entities in the context of events). It must also describe knowledge of the relationships between them (e.g., cause, object of, part of, purpose of, or result of) and the properties (e.g., color, shape, capability, and speed) of each.

8.2 Automated Combination

Two primary categories of the combination processes can be distinguished, based on their approach to inference; each is essential to intelligence processing and analysis.

The inductive process of data mining discovers previously unrecognized patterns in data (new knowledge about characteristics of an unknown pattern class) by searching for patterns (relationships in data) that are in some sense “interesting.” The discovered candidates are usually presented to human users for analysis and validation before being adopted as general cases [3].

The deductive process, data fusion, detects the presence of previously known patterns in many sources of data (new knowledge about the existence of a known pattern in the data). This is performed by searching for specific pattern templates in sensor data streams or databases to detect entities, events, and complex situations comprised of interconnected entities and events.

The data sets used by these processes for knowledge creation are incomplete and dynamic, and they contain data contaminated by noise. These factors lead to the following process characteristics:

  • Pattern descriptions. Data mining seeks to induce general pattern descriptions (reference patterns, templates, or matched filters) to characterize the data already understood, while data fusion applies those descriptions to detect the presence of patterns in new data.
  • Uncertainty in inferred knowledge. The data and reference patterns are uncertain, leading to uncertain beliefs or knowledge.
  • Dynamic state of inferred knowledge. The process is sequential and inferred knowledge is dynamic, being refined as new data arrives.
  • Use of domain knowledge. Knowledge about the domain (e.g., constraints, context) may be used in addition to collected raw intelligence data.

8.2.1 Data Fusion

Data fusion is an adaptive knowledge creation process in which diverse elements of similar or dissimilar observations (data) are aligned, correlated, and combined into organized and indexed sets (information), which are further assessed to model, understand, and explain (knowledge) the makeup and behavior of a domain under observation.

The data-fusion process seeks to explain an adversary (or uncooperative) intelligence target by abstracting the target and its observable phenomena into a causal or relationship model, then applying all-source observation to detect entities and events to estimate the properties of the model. Consider the levels of representation in the simple target-observer processes in Figure 8.2 [6]. The adversary leadership holds to goals and values that create motives; these motives, combined with beliefs (created by perception of the current situation), lead to intentions. These intentions lead to plans and responses to the current situation; from alternative plans, decisions are made that lead to commands for action. In a hierarchical military, or a networked terrorist organization, these commands flow to activities (communication, logistics, surveillance, and movements). Using the three domains of reality terminology introduced in Chapter 5, the motive-to-decision events occur in the adversary’s cognitive domain with no observable phenomena.

The data-fusion process uses observable evidence from both the symbolic and physical domains to infer the operations, communications, and even the intentions of the adversary.

The emerging concept of effects-based military operations (EBO) requires intelligence products that provide planners with the ability to model the various effects influencing a target that make up a complex system. Planners and operators require intelligence products that integrate models of the adversary physical infrastructure, information networks, and leadership and decision making.

The U.S. DoD JDL has established a formal process model of data fusion that decomposes the process into five basic levels of information-refining processes (based upon the concept of levels of information abstraction) [8]:

  • Level 0: Data (or subobject) refinement. This is the correlation across signals or data (e.g., pixels and pulses) to recognize components of an object and the correlation of those components to recognize an object.
  • Level 1: Object refinement. This is the correlation of all data to refine individual objects within the domain of observation. (The JDL model uses the term object to refer to real-world entities; however, the subject of interest may be a transient event in time as well.)
  • Level 2: Situation refinement. This is the correlation of all objects (information) within the domain to assess the current situation.
  • Level 3: Impact refinement. This is the correlation of the current situation with environmental and other constraints to project the meaning of the situation (knowledge). The meaning of the situation refers to its implications to the user: threat, opportunity, change, or consequence.
  • Level 4: Process refinement. This is the continual adaptation of the fusion process to optimize the delivery of knowledge against a defined mission objective.

 

8.2.1.1 Level 0: Data Refinement

Raw data from sensors may be calibrated, corrected for bias and gain errors, limited (thresholded), and filtered to remove systematic noise sources. Object detection may occur at this point—in individual sensors or across multiple sensors (so-called predetection fusion). The object-detection process forms observation reports that contain data elements such as observation identifier, time of measurement, measurement or decision data, decision, and uncertainty data.

8.2.1.2 Level 1: Object Refinement

Sensor and source reports are first aligned to a common spatial reference (e.g., a geographic coordinate system) and temporal reference (e.g., samples are propagated forward or backward to a common time). These alignment transformations place the observations in a common time-space coordinate system to allow an association process to determine which observations from different sensors have their source in a common object. The association process uses a quantitative correlation metric to measure the relative similarity between observations. The typical correlation metric, C, takes on the following form:

C = \sum_{i=1}^{n} w_i x_i

where:
w_i = the weighting coefficient for attribute x_i;
x_i = the ith correlation attribute metric.
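A minimal numeric sketch of this correlation metric, assuming the attribute similarity metrics x_i have already been normalized to [0, 1]; the attributes, weights, and association threshold are illustrative only.

```python
# Minimal sketch of the correlation metric C = sum_i w_i * x_i: attribute
# similarities between two observations are weighted and summed, and the
# score is compared to an illustrative threshold for an association decision.
def correlation_metric(attributes: dict, weights: dict) -> float:
    """C = sum of weighted attribute similarity metrics (each x_i in [0, 1])."""
    return sum(weights[name] * x for name, x in attributes.items())

# Similarity of two sensor reports in position, velocity, and emitter type.
x = {"position": 0.92, "velocity": 0.75, "emitter_type": 1.0}
w = {"position": 0.5, "velocity": 0.3, "emitter_type": 0.2}

C = correlation_metric(x, w)
print(f"C = {C:.3f}")                      # C = 0.885
print("associate" if C > 0.8 else "defer")  # illustrative decision threshold
```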

The correlation metric may be used to make a hard decision (an association), choosing the most likely pairings of observations, or a deferred decision, assigning more than one hypothetical pairing and deferring a hard decision until more observations arrive. Once observations have been associated, two functions are performed on each associated set of measurements for a common object:

  1. Tracking. For dynamic targets (vehicles or aircraft), the current state of the object is correlated with previously known targets to determine if the observation can update an existing model (track). If the newly associated observations are determined to be updates to an existing track, the state estimation model for the track (e.g., a Kalman filter) is updated; otherwise, a new track is initiated (a minimal sketch of this update step follows this list).
  2. Identification. All associated observations are used to determine if the object identity can be classified to any one of several levels (e.g., friend/foe, vehicle class, vehicle type or model, or vehicle status or intent).
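A minimal sketch of the tracking update in item 1, using a one-dimensional Kalman measurement update; the prior estimate, variances, and units are illustrative.

```python
# Minimal sketch of the track-update step for an associated observation: a
# one-dimensional Kalman filter refines a track's position estimate when a
# new measurement is associated with it. The noise values are illustrative.
def kalman_update(x_est: float, p_est: float, z: float, r: float):
    """One measurement update: blend the prediction (x_est, variance p_est)
    with measurement z (variance r) via the Kalman gain."""
    k = p_est / (p_est + r)          # Kalman gain
    x_new = x_est + k * (z - x_est)  # corrected state estimate
    p_new = (1.0 - k) * p_est        # reduced estimate variance
    return x_new, p_new

# Existing track: position estimate 102.0 km with variance 4.0 km^2.
# Newly associated observation: 105.0 km with measurement variance 1.0 km^2.
x, p = kalman_update(102.0, 4.0, 105.0, 1.0)
print(f"updated position {x:.2f} km, variance {p:.2f}")  # 104.40 km, variance 0.80
```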

8.2.1.3 Level 2: Situation Refinement

All objects placed in space-time context in an information base are analyzed to detect relationships based on spatial or temporal characteristics. Aggregate sets of objects are detected by their coordinated behavior, dependencies, proximity, common point of origin, or other characteristics using correlation metrics with high-level attributes (e.g., spatial geometries or coordinated behavior). The synoptic understanding of all objects, in their space-time context, provides situation knowledge, or awareness.

8.2.1.4 Level 3: Impact (or Threat) Refinement

Situation knowledge is used to model and analyze feasible future behaviors of objects, groups, and environmental constraints to determine future possible outcomes. These outcomes, when compared with user objectives, provide an assessment of the implications of the current situation. Consider, for example, a simple counter-terrorism intelligence situation that is analyzed in the sequence in Figure 8.4.

8.2.1.5 Level 4: Process Refinement

This process provides feedback control of the collection and processing activities to achieve the intelligence requirements. At the top level, current knowledge (about the situation) is compared to the intelligence requirements needed to achieve operational objectives to determine knowledge shortfalls. These shortfalls are parsed downward into information needs and then data needs, which direct the future acquisition of data (sensor management) and the control of internal processes. Processes may be refined, for example, to focus on certain areas of interest, object types, or groups. This forms the feedback loop of the data-fusion process.

8.2.2 Data Mining

Data mining is the process by which large sets of data (or text in the specific case of text mining) are cleansed and transformed into organized and indexed sets (information), which are then analyzed to discover hidden and implicit, but previously undefined, patterns. These patterns are reviewed by domain experts to determine if they reveal new understandings of the general structure and relationships (knowledge) in the data of a domain under observation.

The object of discovery is a pattern, which is defined as a statement in some language, L, that describes relationships in subset Fs of a set of data, F, such that:

  1. The statement holds with some certainty, c;
  2. The statement is simpler (in some sense) than the enumeration of all facts in Fs [13].

This is the inductive generalization process described in Chapter 5. Mined knowledge, then, is formally defined as a pattern that is interesting, according to some user-defined criterion, and certain to a user-defined measure of degree.
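A minimal sketch of measuring the certainty c of a candidate pattern over a small fact set; the pattern, facts, and certainty measure (a simple fraction of matching cases) are illustrative and much simpler than the interestingness measures used in practice.

```python
# Minimal sketch of a mined pattern and its certainty: the candidate pattern
# "cells that communicate also travel to a common city" is evaluated over a
# small fact set, and its certainty c is the fraction of communicating cell
# pairs for which the statement holds. The data are fictitious.
facts = [
    {"pair": ("cell_A", "cell_B"), "communicate": True,  "common_city": True},
    {"pair": ("cell_A", "cell_C"), "communicate": True,  "common_city": True},
    {"pair": ("cell_B", "cell_C"), "communicate": True,  "common_city": False},
    {"pair": ("cell_A", "cell_D"), "communicate": False, "common_city": True},
]

matching = [f for f in facts if f["communicate"]]   # subset F_s where the premise holds
holds = [f for f in matching if f["common_city"]]   # cases where the full statement holds

certainty = len(holds) / len(matching)
print(f"pattern holds with certainty c = {certainty:.2f} over {len(matching)} cases")
# pattern holds with certainty c = 0.67 over 3 cases
```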

In application, the mining process is extended from explanations of limited data sets to more general applications (induction). In this example, a relationship pattern between three terrorist cells may be discovered that includes intercommunication, periodic travel to common cities, and correlated statements posted on the Internet.

Data mining (also called knowledge discovery) is distinguished from data fusion by two key characteristics:

  1. Inference method. Data fusion employs known patterns and deductive reasoning, while data mining searches for hidden patterns using inductive reasoning.
  2. Temporal perspective. The focus of data fusion is retrospective (determining current state based on past data), while data mining is both retrospective and prospective—focused on locating hidden patterns that may reveal predictive knowledge.

Beginning with sensors and sources, the data warehouse is populated with data, and successive functions move the data toward learned knowledge at the top. The sources, queries, and mining processes may be refined, similar to data fusion. The functional stages in the figure are described next.

  • Data warehouse. Data from many sources are collected and indexed in the warehouse, initially in the native format of the source. One of the chief issues facing many mining operations is the reconciliation of diverse databases that have different formats (e.g., field and record sizes and parameter scales), incompatible data definitions, and other differences. The warehouse collection process (flow in) may mediate between these input sources to transform the data before storing it in a common form [20].
  • Data cleansing. The warehoused data must be inspected and cleansed to identify and correct or remove conflicts, incomplete sets, and incompatibilities common to combined databases. Cleansing may include several categories of checks (a minimal sketch of the first two follows this list):
  1. Uniformity checks verify the ranges of data, determine if sets exceed limits, and verify that format versions are compatible.
  2. Completeness checks evaluate the internal consistency of data sets to ensure, for example, that aggregate values are consistent with individual data components (e.g., “verify that total sales is equal to sum of all sales regions, and that data for all sales regions is present”).
  3. Conformity checks exhaustively verify that each index and reference exists.
  4. Genealogy checks generate and check audit trails to primitive data to permit analysts to drill down from high-level information.
  • Data selection and transformation. The types of data that will be used for mining are selected on the basis of relevance. For large operations, initial mining may be performed on a small set, then extended to larger sets to check for the validity of abducted patterns. The selected data may then be transformed to organize all data into common dimensions and to add derived dimensions as necessary for analysis.
  • Data mining operations. Mining operations may be performed in a supervised manner in which the analyst presents the operator with a selected set of training data, in which the analyst has manually determined the existence of pattern classes. Alternatively, the operation may proceed without supervision, performing an automated search for patterns. A number of techniques are available (Table 8.4), depending upon the type of data and search objectives (interesting pattern types).
  • Discovery modeling. Prediction or classification models are synthesized to fit the data patterns detected. This is the predictive aspect of mining: modeling the historical data in the database (the past) to provide a model to predict the future. The model attempts to abduct a generalized description that explains discovered patterns of interest and, using statistical inference from larger volumes of data, seeks to induct generally applicable models. Simple extrapolation, time-series trends, complex linked relationships, and causal mathematical models are examples of the models created.
  • Visualization. The analyst uses visualization tools that allow discovery of interesting patterns in the data. The automated mining operations cue the operator to discovered patterns of interest (candidates), and the analyst then visualizes the pattern and verifies if, indeed, it contains new and useful knowledge. OLAP refers to the manual visualization process in which a data manipulation engine allows the analyst to create data “views” from the human perspective and to perform the following categories of functions:
  1. Multidimensional analysis of the data across dimensions, through relationships (e.g., command hierarchies and transaction networks), and in perspectives natural to the analyst (rather than inherent in the data);
  2. Transformation of the viewing dimensions or slicing of the multidimensional array to view a subset of interest;
  3. Drill down into the data from high levels of aggregation, downward into successively deeper levels of information;
  4. Reach through from information levels to the underlying raw data, including reaching beyond the information base, back to raw data by the audit trail generated in genealogy checking;
  5. Modeling of hypothetical explanations of the data, in terms of trend analysis and extrapolation.
  • Refinement feedback. The analyst may refine the process by adjusting the parameters that control the lower level processes, as well as by requesting more or different data on which to focus the mining operations.
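A minimal sketch of the uniformity and completeness checks described under data cleansing above; the record layout, limits, and the sales-region example are hypothetical.

```python
# Minimal sketch of data cleansing checks on warehoused records: uniformity
# checks verify value ranges and format versions; completeness checks verify
# that a reported aggregate equals the sum of its components.
records = [
    {"region": "north", "sales": 120.0, "format_version": 2},
    {"region": "south", "sales": 95.0,  "format_version": 2},
    {"region": "west",  "sales": -10.0, "format_version": 1},  # out-of-range value
]
reported_total = 215.0  # aggregate value reported by the source

def uniformity_check(rows, supported_versions=frozenset({2}), sales_range=(0.0, 1e6)):
    """Flag rows with out-of-range values or unsupported format versions."""
    issues = []
    for i, r in enumerate(rows):
        if not (sales_range[0] <= r["sales"] <= sales_range[1]):
            issues.append((i, "sales out of range"))
        if r["format_version"] not in supported_versions:
            issues.append((i, "unsupported format version"))
    return issues

def completeness_check(rows, total, tolerance=1e-6):
    """Verify that the reported total equals the sum of the regional components."""
    return abs(sum(r["sales"] for r in rows) - total) <= tolerance

print(uniformity_check(records))                     # flags the third record twice
print(completeness_check(records, reported_total))   # False (components sum to 205.0)
```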

 

 

8.2.3 Integrated Data Fusion and Mining

In a practical intelligence application, the full reasoning process integrates the discovery processes of data mining with the detection processes of data fusion. This integration helps the analyst to coordinate learning about new signatures and patterns and to apply that new knowledge, in the form of templates, to detect other cases of the situation. A general application of these integrated tools can support the search for nonliteral target signatures and the use of those learned and validated signatures to detect new targets [21]. (Nonliteral target signatures refer to those signatures that extend across many diverse observation domains and are not intuitive or apparent to analysts, but may be discovered only by deeper analysis of multidimensional data.)

The mining component searches the accumulated database of sensor data, with discovery processes focused on relationships that may have relevance to the nonliteral target sets. Discovered models (templates) of target objects or processes are then tested, refined, and verified using the data-fusion process. Finally, the data-fusion process applies the models deductively for knowledge detection in incoming sensor data streams.

8.3 Intelligence Modeling and Simulation

Modeling activities take place in externalization (as explicit models are formed to describe mental models), combination (as evidence is combined and compared with the model), and in internalization (as the analyst ponders the matches, mismatches, and incongruities between evidence and model).

While we have used the general term model to describe any abstract representation, we now distinguish between two implementations made by the modeling and simulation (M&S) community. Models refer to physical, mathematical, or otherwise logical representations of systems, entities, phenomena, or processes, while simulations refer to methods to implement models over time (i.e., a simulation is a time-dynamic model).

Models and simulations are inherently collaborative; their explicit representations (versus mental models) allow analytic teams to collectively assemble and explore the accumulating knowledge that they represent. They support the analysis-synthesis process in multiple ways:

  • Evidence marshaling. As described in Chapter 5, models and simulations provide the framework within which evidence is assembled and inferences are structured; they provide an audit trail of reasoning.
  • Exploration. Models and simulations also provide a means for analysts to be immersed in the modeled situation, its structure, and dynamics. It is a tool for experimentation and exploration that provides deeper understanding to determine necessary confirming or falsifying evidence, to evaluate potential sensing measures, and to examine potential denial and deception effects.
  • Dynamic process tracking. Simulations model the time-dynamic behavior of targets to forecast future behavior, compare with observations, and refine the behavior model over time. Dynamic models provide the potential for estimation, anticipation, forecasting, and even prediction (these words imply increasing accuracy and precision in their estimates of future behavior).
  • Explanation. Finally, the models and simulations provide a tool for presenting alternative hypotheses, final judgments, and rationale.

Chance favors the prepared prototype: models and simulations can and should be media to create and capture surprise and serendipity.

Table 8.5 illustrates independent models and simulations in all three domains; however, these domains can be coupled to create a robust model to explore how an adversary thinks (cognitive domain), transacts (e.g., finances, command, and intelligence flows), and acts (physical domain).

A recent study of the advanced methods required to support counter-terrorism analysis recommended the creation of scenarios using top-down synthesis (manual creation by domain experts and large-scale simulation) to create synthetic evidence for comparison with real evidence discovered by bottom-up data mining.

8.3.1 M&S for I&W

The challenge of I&W demands predictive analysis, where “the analyst is looking at something entirely new, a discontinuous phenomenon, an outcome that he or she has never seen before. Furthermore, the analyst only sees this new pattern emerge in bits and pieces.”

The tools monitor world events to track the state and time-sequence of state transitions for comparison with indicators of stress. These analytic tools apply three methods to provide indicators to analysts:

  1. Structural indicator matching. Previously identified crisis patterns (statistical models) are matched to current conditions to seek indications in background conditions and long-term trends.
  2. Sequential tracking models. Simulations track the dynamics of events to compare temporal behavior with statistical conflict accelerators in current situations that indicate imminent crises.
  3. Complex behavior analysis. Simulations are used to support inductive exploration of the current situation, so the analyst can examine possible future scenarios to locate potential triggering events that may cause instability (though not in prior indicator models).

A general I&W system architecture (Figure 8.7), organized following the JDL data-fusion structure, accepts incoming news feed text reports of current situations and encodes the events into a common format (by human or automated coding). The event data is encoded into time-tagged actions (assault, kidnap, flee, assassinate), proclamations (threaten, appeal, comment), and other pertinent events from relevant actors (governments, NGOs, terror groups). The level 1 fusion process correlates and combines similar reports to produce a single set of current events organized in time series for structural analysis of background conditions and sequential analysis of behavioral trends by groups and interactions between groups. This statistical analysis is an automatic target-recognition process, comparing current state and trends with known clusters of unstable behaviors. The level 2 process correlates and aggregates individual events into larger patterns of behavior (situations). A dynamic simulation tracks the current situation (and is refined by the tracking loop shown) to enable the analyst to explore future excursions from the present condition. By analysis of the dynamics of the situation, the analyst can explore a wide range of feasible futures, including those that may reveal surprising behavior that is not intuitive—increasing the analyst’s awareness of unstable regions of behavior or the potential of subtle but potent triggering events.

8.3.2 Modeling Complex Situations and Human Behavior

The complex behavior noted in the prior example may result from random events, human free will, or the nonlinearity introduced by the interactions of many actors. The most advanced applications of M&S are those that seek to model environments (introduced in Section 4.4.2) that exhibit complex behaviors—emergent behaviors (surprises) that are not predictable from the individual contributing actors within the system. Complexity is the property of a system that prohibits the description of its overall behavior even when all of the components are described completely. Complex environments include social behaviors of significant interest to intelligence organizations: populations of nation states, terrorist organizations, military commands, and foreign leaders [32]. Perhaps the grand challenge of intelligence analysis is to understand an adversary’s cognitive behavior to provide both warning and insight into the effects of alternative preemptive actions that may avert threats.

Nonlinear mathematical solutions are intractable for most practical problems, and the research community has applied dynamic systems modeling and agent-based simulation (ABS) to represent systems that exhibit complex behavior [34]. ABS research is being applied to the simulation of a wide range of organizations to assess intent, decision making and planning (cognitive), command and finances (symbolic), and actions (physical). The applications of these simulations include national policies [35], military C2 [36], and terrorist organizations [37].

9
The Intelligence Enterprise Architecture

The processing, analysis, and production components of intelligence operations are implemented by enterprises—complex networks of people and their business processes, integrated information and communication systems and technology components organized around the intelligence mission. As we have emphasized throughout this text, an effective intelligence enterprise requires more than just these components; the people require a collaborative culture, integrated electronic networks require content and contextual compatibility, and the implementing components must constantly adapt to technology trends to remain competitive. The effective implementation of KM in such enterprises requires a comprehensive requirements analysis and enterprise design (synthesis) approach to translate high-level mission statements into detailed business processes, networked systems, and technology implementations.

9.1 Intelligence Enterprise Operations

In the early 1990s the community implemented Intelink, a communitywide network to allow the exchange of intelligence between agencies that maintained internal compartmented networks [2]. The DCI vision for “a unified IC optimized to provide a decisive information advantage…” in the mid-1990s led the IC CIO to establish an IC Operational Network (ICON) office to perform enterprise architecture analysis and engineering to define the system and communication architectures in order to integrate the many agency networks within the IC [3]. This architecture is required to provide the ability to collaborate securely and synchronously from the users’ desktops across the IC and with customers (e.g., federal government intelligence consumers), partners (component agencies of the IC), and suppliers (intelligence data providers within and external to the IC).

The undertaking illustrates the challenge of implementing a mammoth intelligence enterprise that comprises four components:

  1. Policies. These are the strategic vision and derivative policies that explicitly define objectives and the approaches to achieve the vision.
  2. Operational processes. These are collaborative and operationally secure processes to enable people to share knowledge and assets securely and freely across large, diverse, and in some cases necessarily compartmented organizations. This requires processes for dynamic modification of security controls, public key infrastructure, standardized intelligence product markup, the availability of common services, and enterprisewide search, collaboration, and application sharing.
  3. System (network). This is an IC system for information sharing (ICSIS) that includes an agreed set of databases and applications hosted within shared virtual spaces within agencies and across the IC. The system architecture (Figure 9.1) defines three virtual collaboration spaces, one internal to each organization and a second that is accessible across the community (an intranet and extranet, respectively). The internal space provides collaboration at the Special Compartmented Intelligence (SCI) level within the organization; owners tightly control their data holdings (that are organizationally sensitive). The community space enables IC-wide collaboration at the SCI level; resource protection and control is provided by a central security policy. A separate collateral community space provides a space for data shared with DoD and other federal agencies.
  4. Technology. The enterprise requires the integration of large installed bases of legacy components and systems with new technologies. The integration requires definition of standards (e.g., metadata, markup languages, protocols, and data schemas) and the plans for incremental technology transitions.

9.2 Describing the Enterprise Architecture

Two major approaches to architecture design that are immediately applicable to the intelligence enterprise have been applied by the U.S. DoD and IC for intelligence and related applications. Both approaches provide an organizing methodology to assure that all aspects of the enterprise are explicitly defined, analyzed, and described to assure compatibility, completeness, and traceability back to the mission objectives. The approaches provide guidance to develop a comprehensive abstract model to describe the enterprise; the model may be understood from different views, in which the model is observed from a particular perspective (i.e., the perspectives of the user or developer) and described by specific products that make up the viewpoint.

The first methodology is the Zachman Architecture Framework™, developed by John Zachman in the late 1980s while at IBM. Zachman pioneered a concept of multiple perspectives (views) and descriptions (viewpoints) to completely define the information architecture [6]. This framework is organized as a matrix of 30 perspective products, defined by the cross product of two dimensions:

  1. Rows of the matrix represent the viewpoints of architecture stakeholders: the owner, planner, designer, builder (e.g., prime contractor), and subcontractor. The rows progress from higher level (greater degree of abstraction) descriptions by the owner toward lower level (details of implementation) by the subcontractor.
  2. Columns represent the descriptive aspects of the system across the dimensions of data handled, functions performed, network, people involved, time sequence of operations, and motivation of each stakeholder.

Each cell in the framework matrix represents a descriptive product required to describe an aspect of the architecture.
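The cross-product structure of the framework can be illustrated with a short sketch. The row and column names follow the two dimensions described above, but representing the matrix as a Python dictionary is purely illustrative and not part of Zachman’s method; the placeholder cell contents are invented.

```python
from itertools import product

# Stakeholder viewpoints (rows) and descriptive aspects (columns) as named in the text.
ROWS = ["owner", "planner", "designer", "builder", "subcontractor"]
COLUMNS = ["data", "function", "network", "people", "time", "motivation"]

# Each cell of the framework holds one descriptive product (here just a placeholder name).
framework = {
    (row, col): f"{row} view of {col}"
    for row, col in product(ROWS, COLUMNS)
}

print(len(framework))                      # 30 perspective products (5 rows x 6 columns)
print(framework[("owner", "motivation")])  # e.g., mission needs or value proposition product
```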

 

This framework identifies a single descriptive product per view, but permits a wide range of specific descriptive approaches to implement the products in each cell of the framework:

  • Mission needs statements, value propositions, balanced scorecard, and organizational model methods are suitable to structure and define the owner’s high-level view.
  • Business process modeling, the object-oriented Unified Modeling Language (UML), or functional decomposition using Integrated Definition Models (IDEF) explicitly describe entities and attributes, data, functions, and relationships. These methods also support enterprise functional simulation at the owner and designer level to permit evaluation of expected enterprise performance.
  • Detailed functional standards (e.g., IEEE and DoD standards specification guidelines) provide guidance to structure detailed builder- and subcontractor-level descriptions that define component designs.

The second descriptive methodology is the U.S. DoD Architecture Framework (formerly the C4ISR Architecture Framework), which defines three interrelated perspectives or architectural views, each with a number of defined products [7]. The three interrelated views (Figure 9.2), related in a brief sketch that follows the list, are as follows:

    1. Operational architecture is a description (often graphical) of the operational elements, intelligence business processes, assigned tasks, workflows, and information flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and what tasks are supported by these information exchanges.
    2. Systems architecture is a description, including graphics, of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users and specifies system and component performance parameters. It is constructed to satisfy operational architecture requirements per standards defined in the technical architecture. This architecture view shows how multiple systems within a subject area link and interoperate and may describe the internal construction or operations of particular systems within the architecture.
    3. Technical architecture is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. The technical architecture identifies the services, interfaces, standards, and their relationships. It provides the technical guidelines for implementation of systems upon which engineering specifications are based, common building blocks are built, and product lines are developed.
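As a rough illustration of how the three views relate, the sketch below links an operational element to a system node and checks that the standards the system cites exist in a technical-view standards set. The class names, standards identifiers, and nodes are assumptions for illustration, not taken from the framework documents.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationalElement:
    """An operational node and the information exchanges it needs (operational view)."""
    name: str
    information_needs: List[str] = field(default_factory=list)

@dataclass
class SystemNode:
    """A system that supports one or more operational elements (systems view)."""
    name: str
    supports: List[OperationalElement] = field(default_factory=list)
    standards: List[str] = field(default_factory=list)  # identifiers from the technical view

# Technical view: a minimal set of governing standards (identifiers are illustrative).
TECHNICAL_STANDARDS = {"XML-metadata", "PKI", "IP-network"}

analyst_cell = OperationalElement("CI analyst cell", ["competitor financials", "EEI status"])
portal = SystemNode("intelligence portal", supports=[analyst_cell],
                    standards=["XML-metadata", "PKI"])

# Traceability check: every standard a system cites must exist in the technical view.
assert all(s in TECHNICAL_STANDARDS for s in portal.standards)
print(f"{portal.name} supports: {[e.name for e in portal.supports]}")
```

The point of the sketch is the traceability the frameworks demand: systems-view elements satisfy operational-view requirements and conform to technical-view standards.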

 

 

Both approaches provide a framework to decompose the enterprise into a comprehensive set of perspectives that must be defined before building; following either approach introduces the necessary discipline to structure the enterprise architecture design process.

The emerging foundation for enterprise architecting using framework models is distinguished from the traditional systems engineering approach, which focuses on optimization, completeness, and a build-from-scratch originality [11]. Enterprise (or system) architecting recognizes that most enterprises will be constructed from a combination of existing and new integrating components:

  • Policies, based on the enterprise strategic vision;
  • People, including current cultures that must change to adopt new and changing value propositions and business processes;
  • Systems, including legacy data structures and processes that must work with new structures and processes until retirement;
  • IT, including legacy hardware and software that must be integrated with new technology and scheduled for planned retirement.

The adoption of architecture framework models and system architecting methodologies is developed in greater detail in a number of foundational papers and texts [12].

9.3 Architecture Design Case Study: A Small Competitive Intelligence Enterprise

The enterprise architecture design principles can be best illustrated by developing the architecture description for a fictional small-scale intelligence enterprise: a typical CI unit for a Fortune 500 business. This simple example defines the introduction of a new CI unit, deliberately avoiding the challenges of introducing significant culture change across an existing organization and integrating numerous legacy systems.

The CI unit provides legal and ethical development of descriptive and inferential intelligence products for top management to assess the state of competitors’ businesses and estimate their future actions within the current marketplace. The unit is not the traditional marketing function (which addresses the marketplace of customers) but focuses specifically on the competitive environment, especially competitors’ operations, their business options, and likely decision-making actions.

The enterprise architect recognizes the assignment as a corporate KM project that should be evaluated against O’Dell and Grayson’s four-question checklist for KM projects [14]:

  1. Select projects to advance your business performance. This project will enhance competitiveness and allow FaxTech to position and adapt its product and services (e.g., reduce cycle time and enhance product development to remain competitive).
  2. Select projects that have a high success probability. This project is small, does not confront integration with legacy systems, and has a high probability of technical success. The contribution of KM can be articulated (to deliver competitive intelligence for executive decision making), there is a champion on the board (the CIO), and the business case (to deliver decisive competitor knowledge) is strong. The small CI unit implementation does not require culture change in the larger FaxTech organization—and it may set an example of the benefits of collaborative knowledge creation to set the stage for a larger organization-wide transformation.
  3. Select projects appropriate for exploring emerging technologies. The project is an ideal opportunity to implement a small KM enterprise in FaxTech that can demonstrate intelligence product delivery to top management and can support critical decision making.
  4. Select projects with significant potential to build KM culture and discipline within the organization. The CI enterprise will develop reusable processes and tools that can be scaled up to support the larger organization; the lessons learned in implementation will be invaluable in planning for an organization-wide KM enterprise.

9.3.1 The Value Proposition

The CI value proposition must define the value of competitive intelligence.

The quantitative measures may be difficult to define; the financial return on CI investment measure, for example, requires a careful consideration of how the derived intelligence couples with strategy and impacts revenue gains. Kilmetz and Bridge define a top-level measure of CI return on investment (ROI) that considers the time frame of the payback period (t, usually updated quarterly and accumulated to measure the long-term return on strategic decisions) and applies the traditional ROI formula, which subtracts the cost of the CI investment (C_CI+I, the initial implementation cost plus accumulating quarterly operations costs, in net present value) from the revenue gain [17]:

ROI_CI = Σ_t [(P × Q) − C_CI+I]

The expected revenue gain is estimated by the increase in sales (units sold, Q, multiplied by price, P, in this case) that is attributable to CI-induced decisions. Of course, the difficulty in defining such quantities is the issue of assuring that the gains are uniquely attributable to decisions possible only by CI information [18].
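A back-of-the-envelope version of the Kilmetz and Bridge calculation can be sketched as follows. Only the formula structure comes from the text above; the quarterly prices, quantities, and costs are hypothetical placeholders.

```python
# Quarterly figures are hypothetical: P = unit price, Q = units sold attributable to
# CI-induced decisions, C = CI implementation plus operating cost (net present value).
quarters = [
    {"P": 150.0, "Q": 400, "C": 80_000.0},   # quarter 1: includes initial implementation cost
    {"P": 150.0, "Q": 650, "C": 25_000.0},   # quarter 2: operations cost only
    {"P": 155.0, "Q": 700, "C": 25_000.0},   # quarter 3
]

# ROI_CI = sum over t of [(P x Q) - C_CI+I], accumulated across the payback period.
roi_ci = sum(q["P"] * q["Q"] - q["C"] for q in quarters)
print(f"Cumulative CI return after {len(quarters)} quarters: ${roi_ci:,.0f}")
```

The hard part, as noted above, is not the arithmetic but defending the assumption that Q (the CI-attributable sales increase) is really attributable to CI-informed decisions.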

In building the scorecard, the enterprise architect should seek the lessons learned from others, using sources such as the Society of Competitive Intelligence Professionals or the American Productivity and Quality Center.

9.3.2 The CI Business Process

The Society of Competitive Intelligence Professionals has defined a CI business cycle that corresponds to the intelligence cycle; the cycle differs by distinguishing primary and published source information, while eliminating the automated processing of technical intelligence sources. The five stages, or business processes, of this high-level business model include:

  1. Planning and direction. The cycle begins with the specific identification of management needs for competitive intelligence. Management defines the specific categories of competitors (companies, alliances) and threats (new products or services, mergers, market shifts, technology discontinuities) for focus and the specific issues to be addressed. The priorities of intelligence needed, routine reporting expectations, and schedules for team reporting enable the CI unit manager to plan specific tasks for analysts, establish collection and reporting schedules, and direct day-to-day operations.
  2. Published source collection. The collection of articles, reports, and financial data from open sources (Internet, news feeds, clipping services, commercial content providers) includes both manual searches by analysts and active, automated searches by software agents that explore (crawl) the networks and cue analysts to rank-ordered findings. This collection provides broad, background knowledge of CI targets; the results of these searches provide cues to support deeper, more focused primary source collection.
  3. Primary source collection. The primary sources of deep competitor information are humans with expert knowledge; the ethical collection process includes the identification, contact, and interview of these individuals. Such collections range from phone interviews, formal meetings, and consulting assignments to brief discussions with competitor sales representatives at trade shows. The results of all primary collections are recorded on standard format reports (date, source, qualifications, response to task requirement, results, further sources suggested, references learned) for subsequent analysis.
  4. Analysis and production. Once indexed and organized, the corpus of data is analyzed to answer the questions posed by the initial tasks. Collected information is placed in a framework that includes organizational, financial, and product-service models that allow analysts to estimate the performance and operations of the competitor and predict likely strategies and planned activities. This process relies on a synoptic view of the organized information, experience, and judgment. SMEs may be called in from within FaxTech or from the outside (consultants) to support the analysis of data and synthesis of models.
  5. Reporting. Once approved by the CI unit manager, these quantitative models and more qualitative estimative judgments of competitor strategies are published for presentation in a secure portal or for formal presentation to management. As a result of this reporting, management provides further refining direction and the cycle repeats. (A minimal sketch of the full cycle follows this list.)
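The five stages can be read as a simple pipeline with a feedback loop back to planning. The sketch below illustrates only that flow: the stage names come from the list above, while the placeholder functions and sample needs are invented.

```python
# Stage names follow the cycle described above; the handler functions are placeholders.
def planning_and_direction(needs):      return [f"task: {n}" for n in needs]
def published_source_collection(tasks): return [f"open-source findings for {t}" for t in tasks]
def primary_source_collection(tasks):   return [f"interview notes for {t}" for t in tasks]
def analysis_and_production(evidence):  return f"assessment based on {len(evidence)} evidence items"
def reporting(assessment):              print(f"published to portal: {assessment}")

def ci_cycle(management_needs):
    tasks = planning_and_direction(management_needs)
    evidence = published_source_collection(tasks) + primary_source_collection(tasks)
    assessment = analysis_and_production(evidence)
    reporting(assessment)   # management feedback would refine the needs and restart the cycle

ci_cycle(["competitor X pricing strategy", "pending merger rumors"])
```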

9.3.4 The CI Unit Organizational Structure and Relationships

The CI unit manager accepts tasking from executive management, issues detailed tasks to the analytic team, and then reviews and approves results before release to management. The manager also manages the budget, secures consultants for collection or analysis support, manages special collections, and coordinates team training and special briefings by SMEs.

9.3.5 A Typical Operational Scenario

For each of the five processes, a number of use cases may be developed to describe specific actions that actors (CI team members or system components) perform to complete the process. In object-oriented design processes, the development of such use cases drives the design process by first describing the many ways in which actors interact to perform the business process [22]. A scenario or process thread provides a view of one completed sequence through one or more use cases to complete an enterprise task. A typical crisis response scenario is summarized in Table 9.3 to illustrate the sequence of interactions between the actors (management, CI manager, deputy, knowledge-base manager and analysts, system, portal, and sources) to complete a quick response thread. The scenario can be further modeled by an activity diagram [23] that models the behavior between objects.

The development of the operational scenario also raises nonfunctional performance issues that are identified and defined, generally in parametric terms, for example:

  • Rate and volume of data ingested daily;
  • Total storage capacity of the online and offline archived holdings;
  • Access time for online and offline holdings;
  • Number of concurrent analysts, searches, and portal users;
  • Information assurance requirements (access, confidentiality, and attack rejection).
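One convenient way to record such parameters is as a single configuration object that the designer can version and trace against the system architecture. The sketch below is illustrative only; the field names paraphrase the bullets above and the baseline values are invented.

```python
from dataclasses import dataclass

@dataclass
class EnterprisePerformanceRequirements:
    """Nonfunctional parameters for the CI enterprise; all values are illustrative."""
    daily_ingest_gb: float          # rate and volume of data ingested daily
    online_storage_tb: float        # online archive capacity
    offline_storage_tb: float       # offline archive capacity
    online_access_seconds: float    # access time for online holdings
    offline_access_hours: float     # access time for offline holdings
    concurrent_analysts: int        # concurrent analysts and searches
    concurrent_portal_users: int    # concurrent portal users
    availability_percent: float     # stand-in for broader information-assurance targets

baseline = EnterprisePerformanceRequirements(
    daily_ingest_gb=5.0, online_storage_tb=2.0, offline_storage_tb=20.0,
    online_access_seconds=2.0, offline_access_hours=4.0,
    concurrent_analysts=6, concurrent_portal_users=50, availability_percent=99.5,
)
print(baseline)
```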

9.3.6 CI System Abstraction

The purpose of use cases and narrative scenarios is to capture enterprise behavior and then to identify the classes of object-oriented design. The italicized text in the scenario identifies the actors, and the remaining nouns are candidates for objects (instantiated software classes). From these use cases, software designers can identify the objects of design, their attributes, and interactions. Based upon the use cases, object-oriented design proceeds to develop sequence diagrams that model messages passing between objects, state diagrams that model the dynamic behavior within each object, and object diagrams that model the static description of objects. Each object encapsulates state attributes and provides services to manipulate those internal attributes.

 

Based on the scenario of the last section, the enterprise designer defines the class diagram (Figure 9.7) that relates objects that accept the input CI requirements through the entire CI process to a summary of finished intelligence. This diagram does not include all objects; the objects presented illustrate those that acquire data related to specific competitors, and these objects are only a subset of the classes required to meet the full enterprise requirements defined earlier. (The objects in this diagram are included in the analysis package described in the next section.) The requirement object accepts new CI requirements for a defined competitor; requirements are specified in terms of essential elements of information (EEI), financial data, SWOT characteristics, and organization structure. In this object, key intelligence topics may be selected from predefined templates to specify specific intelligence requirements for a competitor or for a marketplace event [24]. The analyst translates the requirements to tasks in the task object; the task object generates search and collect objects that specify the terms for automated search and human collection from primary sources, respectively. The results of these activities generate data objects that organize and present accumulated evidence that is related to the corresponding search and collect objects.

The analyst reviews the acquired data, creating text reports and completing analysis templates (SWOT, EEI, financial) in the analysis object. Analysis entries are linked to the appropriate competitor in the competitor list and to the supporting evidence in data objects. As results are accumulated in the templates, the status (e.g., percentage of required information in template completed) is computed and reported by the status object. Summary of current intelligence and status are rolled up in the summary object, which may be used to drive the CI portal.
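Read as code, the class diagram described above might look like the following sketch. Only the object roles (requirement, task, search, collect, data, analysis, status) come from the text; the attributes, methods, and the EEI-based status calculation are assumptions for illustration, and the summary object is represented only by the final print.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataItem:
    """Evidence returned by a search or collection activity (the data object)."""
    source: str
    content: str

@dataclass
class Search:
    """Automated open-source search generated from a task."""
    terms: List[str]
    results: List[DataItem] = field(default_factory=list)

@dataclass
class Collect:
    """Human collection from primary sources generated from a task."""
    contacts: List[str]
    results: List[DataItem] = field(default_factory=list)

@dataclass
class Task:
    description: str
    searches: List[Search] = field(default_factory=list)
    collections: List[Collect] = field(default_factory=list)

@dataclass
class Requirement:
    """Accepts new CI requirements (EEIs, financial data, SWOT, organization) for a competitor."""
    competitor: str
    eeis: List[str]
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Analysis:
    """Analyst-completed templates linked to the competitor and supporting evidence."""
    requirement: Requirement
    completed_eeis: List[str] = field(default_factory=list)

    def status(self) -> float:
        """Percentage of required EEIs completed (the role of the status object)."""
        if not self.requirement.eeis:
            return 100.0
        return 100.0 * len(self.completed_eeis) / len(self.requirement.eeis)

req = Requirement("Competitor A", eeis=["pricing", "capacity", "alliances"])
analysis = Analysis(req, completed_eeis=["pricing"])
print(f"template completion: {analysis.status():.0f}%")  # would feed the summary object and portal
```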

9.3.7 System and Technical Architecture Descriptions

The abstractions that describe functions and data form the basis for partitioning packages of software services and the system hardware configuration. The system architecture description includes a network hardware view (Figure 9.8, top) and a comparable view of the packaged software objects (Figure 9.8, bottom).

The enterprise technical architecture is described by the standards for commercial and custom software packages (e.g., the commercial and developed software components with versions, as illustrated in Table 9.4) to meet the requirements developed in system model row of the matrix. Fuld & Company has published periodic reviews of software tools to support the CI process; these reviews provide a helpful evaluation of available commercial packages to support the CI enterprise [25]. The technical architecture is also described by the standards imposed on the implementing components—both software and hardware. These standards include general implementation standards [e.g., American National Standards Institute (ANSI), International Standards Organization (ISO), and Institute of Electrical and Electronics Engineers (IEEE)] and federal standards regulating workplace environments and protocols. The applicable standards are listed to identify applicability to various functions within the enterprise.

A technology roadmap should also be developed to project future transitions as new components are scheduled to be integrated and old components are retired. It is particularly important to plan for integration of new software releases and products to assure sustained functionality and compatibility across the enterprise.

10
Knowledge Management Technologies

IT has enabled the growth of organizational KM in business and government; it will continue to be the predominant influence on the progress in creating knowledge and foreknowledge within intelligence organizations.

10.1 Role of IT in KM

When we refer to technology (the application of science, through engineering principles, to solve practical problems), it is essential to distinguish among three categories of technologies that all contribute to our ability to create and disseminate knowledge (Table 10.1). We may view these as three technology layers, with basic computing and materials sciences providing the foundation for technology applications of increasing complexity and scale in communications and computing.

10.4.1 Explicit Knowledge Combination Technologies

Future explicit knowledge combination technologies include those that transform explicit knowledge into usable forms and those that perform combination processes to create new knowledge.

  • Multimedia content-context tagged knowledge bases. Knowledge-base technology will support the storage of multimedia data (structured and unstructured) with tagging of both content and context to allow comprehensive searches for knowledge across heterogeneous sources.
  • Multilingual natural language. Global natural language technologies will allow accurate indexing, tagging, search, linking, and reasoning about multilingual text (and recognized human speech) at both the content level and the concept level. This technology will allow analysts to conduct multilingual searches by topic and concept at a global scale.
  • Integrated deductive-inductive reasoning. Data-fusion and data-mining technologies will become integrated to allow interactive deductive and inductive reasoning for structured and unstructured (text) data sources. Data-fusion technology will develop level 2 (situation) and level 3 (impact, or explanation) capabilities using simulations to represent complex and dynamic situations for comparison with observed situations.
  • Purposeful deductive-inductive reasoning. Agent-based intelligence will coordinate inductive (learning and generalization) and deductive (decision and detection) reasoning processes (as well as abductive explanatory reasoning) across unstructured multilingual natural language, common sense, and structured knowledge bases. This reasoning will be goal-directed based upon agent awareness of purpose, values, goals, and beliefs.
  • Automated ontology creation. Agent-based intelligence will learn the structure of content and context, automatically populating knowledge bases under configuration management by humans.

 

10.4.3 Knowledge-Based Organization Technologies

Technologies that support the socialization processes of tacit knowledge exchange will enhance the performance and effectiveness of organizations; these technologies will increasingly integrate intelligence agents into the organization as aids, mentors, and ultimately as collaborating peers.

  • Tailored naturalistic collaboration. Collaboration technologies will provide environments with automated capabilities that will track the con- text of activities (speech, text, graphics) and manage the activity toward defined goals. These environments will also recognize and adapt to individual personality styles, tailoring the collaborative process (and the mix of agents-humans) to the diversity of the human-team composition.
  • Intimate tacit simulations. Simulation and game technologies will enable human analysts to be immersed in the virtual physical, symbolic, and cognitive environments they are tasked to understand. These technologies will allow users to explore data, information, and complex situations in all three domains of reality to gain tacit experience and to be able to share the experience with others.
  • Human-like agent partners. Multiagent system technologies will enable the formation of agent communities of practice and teams—and the creation of human-agent organizations. Such hybrid organizations will enable new analytic cultures and communities of problem-solving.
  • Combined human-agent learning. Personal agent tutors, mentors, and models will shadow their human partners, share experiences and observations, and show what they are learning. These agents will learn to monitor subtle human cues, supporting the capture and use of tacit knowledge in collaborative analytic processes.
  • Direct brain tacit knowledge. Direct brain biological-to-machine connections will allow monitors to provide awareness, tracking, articulation, and capture of tacit experiences to augment human cognitive performance.

10.5 Summary

KM technologies are built upon materials and information technologies that enable the complex social (organizational) and cognitive processes of collaborative knowledge creation and dissemination to occur across large organizations and at massive scales of knowledge. Technologists, analysts, and developers of intelligence enterprises must monitor these fast-paced technology developments to continually reinvent the enterprise to remain competitive in the global competition for knowledge. This continual reinvention process requires a wise application of technology in three modes. The first mode is the direct adoption of technologies by upgrade and integration of COTS and GOTS products. This process requires the continual monitoring of industry standards, technologies, and the marketplace to project the lifecycle of products and forecast adoption transitions. The second application mode is adaptation, in which a commercial product component may be adapted for use by wrapping, modifying, and integrating with commercial or custom components to achieve a desired capability. The final mode is custom development of a technology unique to the intelligence application. Often, such technologies may be classified to protect the unique investment in, the capability of, and in some cases even the existence of the technology.

Technology is enabling, but it is not sufficient; intelligence organizations must also have the vision to apply these technologies while transforming the intelligence business in a rapidly changing world.

 

Notes on The Threat Closer to Home: Hugo Chavez and the War Against America

Michael Rowan is the author of The Threat Closer to Home: Hugo Chavez and the War Against America and is a political consultant for U.S. and Latin American leaders. He has advised former Bolivian president Jaime Paz Zamora and Costa Rican president Oscar Arias. Mr. Rowan has also counseled winning Democratic candidates in 30 U.S. states. He is a former president of the International Association of Political Consultants.

(2)

Hugo Chavez, the president of Venezuela, is a much more dangerous individual than the famously elusive leader of al-Qaeda. He has made the United States his sworn enemy, and the sad truth is that few people are really listening.

“I’m still a subversive,” Chavez has admitted. “I think the entire world should be subverted.”

 

Hugo Chavez to Jan James of the Associated Press, September 23, 2007

 

 

(4)

 

One cannot discount how much Castro’s aura has shaped Chavez’s thoughts and actions.

 

(5)

 

There are many who harbor bad intentions towards the United States, but only a few who possess the capability to do anything about it. Chavez is one of these few because:

 

His de facto dictatorship gives him absolute control over Venezuela’s military, oil production, and treasury.

He harbors oil reserves second only to those of Saudi Arabia; Venezuela’s annual windfall profits exceed the net worth of Bill Gates.

He has a strategic military and oil alliance with a major American foe and terrorism sponsor, the Islamic Republic of Iran.

He has more soldiers on active and reserve duty and more modern weapons – mostly from Russia and China – than any other nation in Latin America.

Fulfilling Castro’s dream, he has funded a Communist insurgency against the United States, effectively annexing Bolivia, Nicaragua, Dominica, and Ecuador as surrogate states, and is developing cells in dozens of countries to create new fronts in this struggle.

He is allied with the narcotics-financed guerrillas against the government of Colombia, which the United States supports in its war against drug trafficking.

He has numerous associations with terrorists, money launderers, kidnappers, and drug traffickers.

He has more hard assets (the Citgo oil company) and soft assets (Hollywood stars, politicians, lobbyists, and media connections) than any other foreign power.

 

 

(6)

 

Chavez longs for the era when there will be no liberal international order to constrain his dream of a worldwide “socialist” revolution: no World Bank, no International Monetary Fund, no Organization for Economic Cooperation and Development, no World Trade Organization, no international law, no economic necessity for modernization and globalization. And perhaps more important, he longs for the day when the United States no longer polices the world’s playing fields. Chavez has spent more than $100 billion trying to minimize the impact of each international institution on Latin America. He is clearly opposed to international cooperation that does not endorse the Cuba-Venezuela government philosophy.

 

(10)

 

According to reports from among its 2,400 former members, the FARC resembles a mafia crime gang more than a Communist guerrilla army, but Chavez disagrees, calling the FARC, “insurgent forces that have a political project.” They “are not terrorists, they are true armies… they must be recognized.”

 

(11)

 

Chavez’s goals in life are to complete Simon Bolivar’s dream to unite Latin America and Castro’s dream to communize it.

 

(13)

 

Since he was elected, Chavez’s public relations machinery has spent close to a billion dollars in the United States to convince Americans that he alone is telling the true story.

 

(14)

 

There are a number of influential Americans who have been attracted by Chavez’s money. These include the 1996 Republican vice-presidential candidate Jack Kemp, who has reaped large fees trying to sell Chavez’s oil to the U.S. government; Tom Boggs, one of the most powerful lobbyists in Washington, D.C.; Giuliani Partners, the lobbying arm of the former New York mayor and presidential hopeful (principal lobbyists for Chavez’s CITGO oil company in Texas); former Massachusetts governor Mitt Romney’s Bain Associates, which prospered by handling Chavez’s oil and bond interests; and Joseph P. Kennedy II of Massachusetts, who advertises Chavez’s oil discounts to low-income Americans, a program that reaches more than a million American families (Kennedy and Chavez cast this program as nonpolitical philanthropy).

 

(19)

 

Chavez’s schoolteacher parents could not afford to raise all of their six children at home, so the two older boys, Adan and Hugo, were sent to live with their grandmother, Rosa Ines. Several distinguished Chavez-watchers, including Alvaro Vargas Llosa, have theorized that his being locked in closets at home and then sent away by his parents to grow up elsewhere constituted a seminal rejection that gave rise to what Vargas Llosa called Chavez’s “messianic inferiority complex” – his overarching yearning to be loved and his irrepressible need to act out.

(26)

Chavez began living the life of a Communist double agent. “During the day I’m a career military officer who does his job,” he told his lover Herma Marksman, “but at night I work on achieving the transformations this country needs.” His nights were filled with secret meetings of Communist subversives and co-conspirators, often in disguises, planning the armed overthrow of the government.

 

(27)

 

In 1979, he was transferred to Caracas to teach at his former military academy. It was the perfect perch from which to build a network of officers sympathetic to his revolutionary cause.

Chavez also expanded the circle of his ideological mentors. By far the most important of these was Douglas Bravo, an unreconstructed communist who disobeyed Moscow’s orders after détente to give up the armed struggle against the United States. Bravo was the leader of the Party of the Venezuelan Revolution (PVR) and the Armed Forces of National Liberation. Chavez actively recruited his military friends to the PVR, couching it in the rhetoric of Bolivarianism to make it more palatable to their sensibilities.

 

(32)

 

From 1981 to 1984, a determined Chavez began secretly converting his students at the military academy to co-conspirators; ironically his day job was to teach Venezuelan military history with an emphasis on promoting military professionalism and noninvolvement in politics.

 

(45)

 

Chavez emerged from jail in 1994 a hero to Venezuela’s poor. He had also, while imprisoned, assiduously courted the international left, who helped him build an impressive war-chest – including, it was recently revealed, $150,000 from the FARC guerrillas of Colombia.

 

(46)

 

John Maisto, the US ambassador to Venezuela, at one point called Chavez a “terrorist” because of his coup attempt and denied him a visa to visit the United States. In reply, Chavez mocked Maisto by taking his Visa credit card from his wallet and waving it about, saying, “I already have a Visa!”

 

(48)

 

Corruption made a good campaign issue for Chavez, but when it came time to do something about it, he balked. Chavez initially appointed Jesus Urdaneta – one of the four saman tree oath takers – as anticorruption czar. But Urdaneta was too energetic and effective for the president; within five months he had identified forty cases of corruption within Chavez’s own administration. Chavez refused to back his czar, who was eventually pushed out of office by the very people he was investigating. Chavez did nothing to save him.

 

In 1999 Chavez started a give-away project called “Plan Bolivar 2000.” Implemented by Chavez loyalists organized in groups known as Bolivarian Circles, the project was modeled after the Communist bloc committees in Castro’s Cuba. The plan was basically a social welfare program that mirrored the populist ethic…. In eighteen months, Bolivar 2000 had become so corrupt that it had to be disbanded.

 

(49)

 

Independent studies estimate that the amounts taken from Venezuelan poverty and development funds by middlemen, brokers, and subcontractors – all of whom charge an “administrative” cost for passing on the funds – range as high as 80 percent to 90 percent. By contrast, the U.S. government, the World Bank, nongovernmental organizations, and international charities limit their administrative costs to 20 percent of project funds; the Nobel Peace Prize-winning Doctors Without Borders, for example, spends only 16 percent on administration.

 

(52)

 

Between 1999 and 2009, Chavez has spent some 20,000 hours on television.

 

(69)

 

Hugo Chavez is implementing a sophisticated oil war against the United States. To understand this you have to look back to 1999, when he asked the Venezuelan Congress for emergency executive powers and got them, whereupon he consolidated government power to his advantage. His big move was to take full control over the national oil company PDVSA. Chavez replaced PDVSA’s directors and managers with military or political loyalists, many of whom knew little to nothing about the oil business. This action rankled the company’s professional and technical employees – some 50,000 of them – who enjoyed the only true meritocracy in the country. Citgo… later received similar treatment.

 

Chavez in effect demodernized and de-Americanized PDVSA, which had adopted organizational efficiency cultures similar to those of its predecessors ExxonMobil and Shell, by claiming that they were ideologically incorrect. Chavez compared this to Haiti’s elimination of French culture under Toussaint L’Ouverture in the early 1800s.

 

The president’s effort to dumb down the business was evident early on. In 1999 Chavez fired Science Applications International Corporation (known as SAIC), an enormous U.S.-based global information technology firm that had served as PDVSA’s back office since 1995 (as it had for British Petroleum and other energy companies).

 

SAIC appealed to an international court and got a judgment against Chavez for stealing SAIC’s knowledge without compensation. Chavez ignored the judgment, refusing to pay “one penny.”

 

Stripped of SAIC technology and thousands of oil professionals who quit out of frustration, PDVSA steadily lost operational capacity from 1999 to 2001. Well maintenance suffered; production investment was slashed; oil productivity declined; environmental standards were ignored; and safety accidents proliferated. After the 2002 strike that led to Chavez’s brief removal from power, PDVSA sacked some 18,000 more of its knowledge workers. Its production fell to 2.4 million barrels per day.

 

(68)

 

After Venezuela’s 2006 presidential election, Chavez…told three American oil companies – ExxonMobil, ConocoPhillips, and Chevron – to turn over 60 percent of their heavy oil exploration [which they had spent a decade and nearly $20 billion developing] or leave Venezuela.

 

(72)

 

Oil has caused a massive shift in the wealth of nations. All told, $12 trillion has been transferred from the oil consumers to the oil producers since 2002. This is a very large figure – it is comparable to the 2006 GDP of the United States – and it has contributed greatly to our unprecedented trade deficit; a weakening of the dollar; and the weakness of the U.S. financial system in surviving the housing mortgage crisis.

 

Two decades ago, private companies controlled half the world’s oil reserves, but today they only control 13 percent… While many Americans believe that big oil is behind the high prices at the gas pump, the fact is that the national oil companies controlled by Chavez of Venezuela, Ahmadinejad of Iran, and Putin of Russia are the real culprits.

 

(73)

 

When Chavez’s plane first landed in Havana in 1994, Fidel Castro greeted him at the airport. What made Hugo Chavez important to Castro then was the same thing that makes him important to the United States now: oil. Castro’s plan to weaken America – which he had to shelve when the Soviet Union collapsed and Cuba lost its USSR oil and financial subsidy – was dusted off.

The Chavez-Castro condominium was a two-way street. Chavez soon began delivering from 50,000 to 90,000 barrels of oil per day to Castro, a subsidy eventually worth $3 billion to $4 billion per year, which far exceeded the sugar subsidy Castro once received from the Soviet Union until Gorbachev ended it. Castro used the huge infusion of Chavez’s cash to solidify his absolute control in Cuba and to crack down on political dissidents.

 

 

(79)

 

Chavez’s predatory, undemocratic, and destabilizing actions are not limited to Venezuela.

 

Chavez is striving to remake Latin America in his own image, and for his own purposes – purposes that mirror Fidel Castro’s half-aborted but never abandoned plans for hemispheric revolution hatched half a century ago.

 

(81)

 

Hugo Chavez sees himself as leading the revolutionary charge that Fidel Castro always wanted to mount but was never able to spread beyond the shores of the island prison he created in the Caribbean. Yet four decades after taking power, Castro found a surrogate, a right arm who could carry on the work that he could not.

 

(82)

 

[Chavez] routinely uses oil to bribe Latin American states into lining up against the United States, either by subsidizing oil in the surrogate state or by using oil to interfere in other countries’ elections.

 

For instance, in 1999 Chavez created Petrocaribe, a company that provides oil discounts with delayed payments to thirteen Caribbean nations. It was so successful at fulfilling its real purpose – buying influence and loyalty – that two years later Chavez created PetroSur, which does the same for twenty Central and South American nations, at an annual cost to Venezuela’s treasury of an estimated $1 billion.

 

(83)

 

From 2005 to 2007 alone, Chavez gave away a total of $39 billion in oil and cash; $9.9 billion to Argentina, $7.5 billion to Cuba, $4.9 billion to Ecuador, and $4.9 billion to Nicaragua were the largest sums Chavez gave…

 

At a time when U.S. influence is waning – in part owing to Washington’s preoccupation with Iraq and the Middle East – Chavez has filled the void. The United States provides less than $1 billion in foreign economic aid to the entire region, a figure that rises to only $1.6 billion… Chavez, meanwhile, spends nearly $9 billion in the region every single year. And his money is always welcome because it comes with no strings. The World Bank and IMF, by contrast, require concomitant reforms – for instance, efforts to fight corruption, drug trafficking, and money laundering – in return for grants and loans.

Consequently, over the course of a handful of years, virtually all the Latin American countries have wound up dependent on Venezuela’s oil or money or both. These include not just resource-poor nations; in Latin America only Mexico and Peru are fully independent of Chavez’s money.

One consequence: at the Organization of American States (OAS), which serves as a mini-United Nations for Latin America, Venezuela has assumed the position of the “veto” vote that once belonged to the United States.

 

(84)

Since Chavez has been president of Venezuela, the OAS has not passed one substantive resolution supported by the United States when Chavez was on the opposite side.

In all, since coming to power in 1999, Chavez has spent or committed an estimated $110 billion – some say twice the amount needed to eliminate poverty in Venezuela forever – in more than thirty countries to advance his anti-American agenda. Since 2005, Chavez’s total foreign aid budget for Latin America has been more than $50 billion – much more than the amount of U.S. foreign aid for the region over the same period.

Many of these expenditures have been hidden from the Venezuelan public in secret off-budget slush funds. The result is that Chavez is now, by any measure, the most powerful figure in Latin America.

(85)

During Morales’s first year in office, 2006, Chavez contributed a whopping $1 billion in aid to Bolivia (equivalent to 12 percent of the country’s GDP). He also provided access to one of Venezuela’s presidential jets, sent a forty-soldier personal guard to accompany Morales at all times, subsidized the pay of Bolivia’s military, and paid to send thousands of Cuban doctors to Bolivia’s barrio health clinics.

(86)

After his political success in Bolivia, Chavez has aggressively supported every anti-American presidential candidate in the region. U.S. policymakers console themselves by claiming that Chavez’s favorites have mostly been defeated by pro-American centrists. The truth is more complex. Chavez came close to winning every one of those contests, and lost only when he overplayed his hand. More troubling, U.S. influence and prestige in Latin America are at perhaps their lowest ebb ever; today, being considered America’s ally is the political kiss of death.

 

(91)

 

Since turning unabashedly criminal, the FARC has imported arms, exported drugs, recruited minors, kidnapped thousands for ransom, executed hostages, hijacked planes, planted land mines, operated an extortion and protection racket in peasant communities, committed atrocities against innocent civilians, and massacred farmers as traitors…

 

A long-held ambition of the FARC’s leadership is to have the group officially recognized as a belligerent force, a legitimate army in rebellion. Such a designation – conferred by individual nations and under international law – would give the FARC rights normally accorded only to sovereign powers.

(93)

Uribe, a calm and soft-spoken attorney, set out methodically to finish what Pastrana had begun.

 

To Chavez, any friend of the United States is his enemy, and any enemy of a friend of the United States is his friend – even a terrorist organization working to destabilize one of his country’s most important neighbors.

 

(94)

The relationship [between Chavez and the FARC] began more than a decade and a half ago, in the wake of Chavez’s failed coup. In 1992, the FARC gave a jailed Chavez $150,000, money that launched him to the presidency.

(95)

Perhaps the most sinister aspect of Chavez’s relationship with the FARC is the help he has provided to maximize its cocaine sales to the United States and Europe. British journalist John Carlin, who writes for The Guardian, a newspaper generally supportive of Chavez, secured interviews with several of the 2,400 FARC guerrillas who deserted the group in 2007. One of his subjects told him that “the guerrillas have a non-aggression pact with the Venezuelan military. The Venezuelan government lets FARC operate freely because they share the same left-wing, Bolivarian ideals, and because FARC bribes their people.” Without cocaine revenues, the FARC would disappear, its former members assert. “If it were not for cocaine, the fuel that feeds the Colombian war, FARC would long ago have disbanded.”

(104)

Iran and Venezuela are working together to drive up the price of oil in hopes of crippling the American economy and enhancing their hegemonies in the Middle East and Latin America. They are using their windfall petro-revenues to finance a simmering war – sometimes cold, sometimes hot, sometimes covert, sometimes overt – against the United States.

(105)

As Chavez told Venezuelans repeatedly, Saddam’s fate was also what he feared for himself.

 

(119)

Hugo Chavez’s first reaction after the attack on the camp of narcoterrorist Raul Reyes was to accuse Colombia of behaving like Israel. “We’re not going to allow an Israel in the region,” he said.

 

Actually the parallel is not far off. Like Colombia, Israel is a state that wishes to live in peace with its neighbors, but they insist on destroying it. Israel’s fondest wish would be for the Palestinians to be capable of building a peaceful and prosperous nation with which Israel could establish normal relations.

 

(123)

American officials have also submitted some 130 written requests for basic biographical or immigration-related information, such as entry and exit dates into and out of Venezuela, for suspected terrorists. Not one of the requests has generated a substantive response.

(126)

***

 

Michael Rowan talked about the book he co-wrote, The Threat Closer to Home: Hugo Chavez and the War Against America, on C-SPAN. Former U.S. Ambassador to Venezuela Otto Reich joined him to comment on the book. Ray Walser moderated. Discussion topics included the global geopolitical impact of Venezuela’s decreasing economic and personal freedoms and what the U.S. can do. Then both men responded to questions from members of the audience.

Notes on Intelligence Analysis: A Target-Centric Approach

A major contribution of the 9/11 Commission and the Iraqi WMD Commission was their focus on a failed process, specifically on that part of the process where intelligence analysts interact with their policy customers.

“Thus, this book has two objectives:

The first objective is to redefine the intelligence process to help make all parts of what is commonly referred to as the “intelligence cycle” run smoothly and effectively, with special emphasis on both the analyst-collector and the analyst-customer relationships.

The second goal is to describe some methodologies that make for better predictive analysis.”

 

“An intelligence process should accomplish three basic tasks. First, it should make it easy for customers to ask questions. Second, it should use the existing base of intelligence information to provide immediate responses to the customer. Third, it should manage the expeditious creation of new information to answer remaining questions. To do these things, intelligence must be collaborative and predictive: collaborative to engage all participants while making it easy for customers to get answers; predictive because intelligence customers above all else want to know what will happen next.”

“the target-centric process outlines a collaborative approach for intelligence collectors, analysts, and customers to operate cohesively against increasingly complex opponents. We cannot simply provide more intelligence to customers; they already have more information than they can process, and information overload encourages intelligence failures. The community must provide what is called “actionable intelligence”—intelligence that is relevant to customer needs, is accepted, and is used in forming policy and in conducting operations.”

“The second objective is to clarify and refine the analysis process by drawing on existing prediction methodologies. These include the analytic tools used in organizational planning and problem solving, science and engineering, law, and economics. In many cases, these are tools and techniques that have endured despite dramatic changes in information technology over the past fifty years. All can be useful in making intelligence predictions, even in seemingly unrelated fields.”

“This book, rather, is a general guide, with references to lead the reader to more in-depth studies and reports on specific topics or techniques. The book offers insights that intelligence customers and analysts alike need in order to become more proactive in the changing world of intelligence and to extract more useful intelligence.”

“The common theme of these and many other intelligence failures discussed in this book is not the failure to collect intelligence. In each of these cases, the intelligence had been collected. Three themes are common in intelligence failures: failure to share information, failure to analyze collected material objectively, and failure of the customer to act on intelligence.”

 

“ though progress has been made in the past decade, the root causes for the failure to share remain, in the U.S. intelligence community as well as in almost all intelligence services worldwide:

Sharing requires openness. But any organization that requires secrecy to perform its duties will struggle with and often reject openness. Most governmental intelligence organizations, including the U.S. intelligence community, place more emphasis on secrecy than on effectiveness. The penalty for producing poor intelligence usually is modest. The penalty for improperly handling classified information can be career-ending. There are legitimate reasons not to share; the U.S. intelligence community has lost many collection assets because details about them were too widely shared. So it comes down to a balancing act between protecting assets and acting effectively in the world. ”

 

“Experts on any subject have an information advantage, and they tend to use that advantage to serve their own agendas. Collectors and analysts are no different. At lower levels in the organization, hoarding information may have job security benefits. At senior levels, unique knowledge may help protect the organizational budget. ”

 

“Finally, both collectors of intelligence and analysts find it easy to be insular. They are disinclined to draw on resources outside their own organizations.12 Communication takes time and effort. It has long-term payoffs in access to intelligence from other sources, but few short-term benefits.”

 

Failure to Analyze Collected Material Objectively

In each of the cases cited at the beginning of this introduction, intelligence analysts or national leaders were locked into a mindset—a consistent thread in analytic failures. Falling into the trap that Louis Pasteur warned about in the observation that begins this chapter, they believed because, consciously or unconsciously, they wished it to be so. ”

 

 

 

  • Ethnocentric bias involves projecting one’s own cultural beliefs and expectations on others. It leads to the creation of a “mirror-image” model, which looks at others as one looks at oneself, and to the assumption that others will act “rationally” as rationality is defined in one’s own culture.”
  • Wishful thinking involves excessive optimism or avoiding unpleasant choices in analysis.
  • Parochial interests cause organizational loyalties or personal agendas to affect the analysis process.
  • Status quo biases cause analysts to assume that events will proceed along a straight line. The safest weather prediction, after all, is that tomorrow’s weather will be like today’s.
  • Premature closure results when analysts make early judgments about the answer to a question and then, often because of ego, defend the initial judgments tenaciously. This can lead the analyst to select (usually without conscious awareness) subsequent evidence that supports the favored answer and to reject (or dismiss as unimportant) evidence that conflicts with it.

 

Summary

 

Intelligence, when supporting policy or operations, is always concerned with a target. Traditionally, intelligence has been described as a cycle: a process starting from requirements, to planning or direction, collection, processing, analysis and production, dissemination, and then back to requirements. That traditional view has several shortcomings. It separates the customer from the process and intelligence professionals from one another. A gap exists in practice between dissemination and requirements. The traditional cycle is useful for describing structure and function and serves as a convenient framework for organizing and managing a large intelligence community. But it does not describe how the process works or should work.”

 

 

 

Intelligence is in practice a nonlinear and target-centric process, operated by a collaborative team of analysts, collectors, and customers collectively focused on the intelligence target. The rapid advances in information technology have enabled this transition.

All significant intelligence targets of this target-centric process are complex systems in that they are nonlinear, dynamic, and evolving. As such, they can almost always be represented structurally as dynamic networks—opposing networks that constantly change with time. In dealing with opposing networks, the intelligence network must be highly collaborative.

 

“Historically, however, large intelligence organizations, such as those in the United States, provide disincentives to collaboration. If those disincentives can be removed, U.S. intelligence will increasingly resemble the most advanced business intelligence organizations in being both target-centric and network-centric.”

 

 

“Having defined the target, the first question to address is, What do we need to learn about the target that our customers do not already know? This is the intelligence problem, and for complex targets, the associated intelligence issues are also complex. ”

 

 

 

 

 

 

 

Chapter 4

Defining the Intelligence Issue

A problem well stated is a problem half solved.

Inventor Charles Franklin Kettering

“all intelligence analysis efforts start with some form of problem definition.”

“The initial guidance that customers give analysts about an issue, however, almost always is incomplete, and it may even be unintentionally misleading.”

“Therefore, the first and most important step an analyst can take is to understand the issue in detail. He or she must determine why the intelligence analysis is being requested and what decisions the results will support. The success of analysis depends on an accurate issue definition. As one senior policy customer noted in commenting on intelligence failures, “Sometimes, what they [the intelligence officers] think is important is not, and what they think is not important, is.”

 

“The poorly defined issue is so common that it has a name: the framing effect. It has been described as “the tendency to accept problems as they are presented, even when a logically equivalent reformulation would lead to diverse lines of inquiry not prompted by the original formulation.”

 

 

“veteran analysts go about the analysis process quite differently than do novices. At the beginning of a task, novices tend to attempt to solve the perceived customer problem immediately. Veteran analysts spend more time thinking about it to avoid the framing effect. They use their knowledge of previous cases as context for creating mental models to give them a head start in addressing the problem. Veterans also are better able to recognize when they lack the necessary information to solve a problem,6 in part because they spend enough time at the beginning, in the problem definition phase. In the case of the complex problems discussed in this chapter, issue definition should be a large part of an analyst’s work.

Issue definition is the first step in a process known as structured argumentation.”

 

 

“structured argumentation always starts by breaking down a problem into parts so that each part can be examined systematically.”

 

Statement of the Issue

 

In the world of scientific research, the guidelines for problem definition are that the problem should have “a reasonable expectation of results, believing that someone will care about your results and that others will be able to build upon them, and ensuring that the problem is indeed open and underexplored.”8 Intelligence analysts should have similar goals in their profession. But this list represents just a starting point. Defining an intelligence analysis issue begins with answering five questions:

 

When is the result needed? Determine when the product must be delivered. (Usually, the customer wants the report yesterday.) In the traditional intelligence process, many reports are delivered too late—long after the decisions have been made that generated the need—in part because the customer is isolated from the intelligence process… The target-centric approach can dramatically cut the time required to get actionable intelligence to the customer because the customer is part of the process.”

 

Who is the customer? Identify the intelligence customers and try to understand their needs. The traditional process of communicating needs typically involves several intermediaries, and the needs inevitably become distorted as they move through the communications channels.

 

What is the purpose? Intelligence efforts usually have one main purpose. This purpose should be clear to all participants when the effort begins and also should be clear to the customer in the result…Customer involvement helps to make the purpose clear to the analyst.”

 

 

What form of output, or product, does the customer want? Written reports (now in electronic form) are standard in the intelligence business because they endure and can be distributed widely. When the result goes to a single customer or is extremely sensitive, a verbal briefing may be the form of output.”

 

“Studies have shown that customers never read most written intelligence. Subordinates may read and interpret the report, but the message tends to be distorted as a result. So briefings or (ideally) constant customer interaction with the intelligence team during the target-centric process helps to get the message through.”

 

What are the real questions? Obtain as much background knowledge as possible about the problem behind the questions the customer asks, and understand how the answers will affect organizational decisions. The purpose of this step is to narrow the problem definition. A vaguely worded request for information is usually misleading, and the result will almost never be what the requester wanted.”

 

Be particularly wary of a request that has come through several “nodes” in the organization. The layers of an organization, especially those of an intelligence bureaucracy, will sometimes “load” a request as it passes through with additional guidance that may have no relevance to the original customer’s interests. A question that travels through several such layers often becomes cumbersome by the time it reaches the analyst.

 

“The request should be specific and stripped of unwanted excess. ”

 

“The time spent focusing the request saves time later during collection and analysis. It also makes clear what questions the customer does not want answered—and that should set off alarm bells, as the next example illustrates.”

 

“After answering these five questions, the analyst will have some form of problem statement. On large (multiweek) intelligence projects, this statement will itself be a formal product. The issue definition product helps explain the real questions and related issues. Once it is done, the analyst will be able to focus more easily on answering the questions that the customer wants answered.”

 

The Issue Definition Product

 

“When the final intelligence product is to be a written report, the issue definition product is usually in précis (summary, abstract, or terms of reference) form. The précis should include the problem definition or question, notional results or conclusions, and assumptions. For large projects, many intelligence organizations require the creation of a concept paper or outline that provides the stakeholders with agreed terms of reference in précis form.”

 

“Whether the précis approach or the notional briefing is used, the issue definition should conclude with an issue decomposition view.”

 

Issue Decomposition

 

“taking a seemingly intractable problem and breaking it into a series of manageable subproblems.”

 

 

“Glenn Kent of RAND Corporation uses the name strategies-to-tasks for a similar breakout of U.S. Defense Department problems.12 Within the U.S. intelligence community, it is sometimes referred to as problem decomposition or “decomposition and visualization.”

 

 

 

“Whatever the name, the process is simple: Deconstruct the highest level abstraction of the issue into its lower-level constituent functions until you arrive at the lowest level of tasks that are to be performed or subissues to be dealt with. In intelligence, the deconstruction typically details issues to be addressed or questions to be answered. Start from the problem definition statement and provide more specific details about the problem.”

 

“The process defines intelligence needs from the top level to the specific task level via taxonomy—a classification system in which objects are arranged into natural or related groups based on some factor common to each object in the group.”

 

“At the top level, the taxonomy reflects the policymaker’s or decision maker’s view and reflects the priorities of that customer. At the task level, the taxonomy reflects the view of the collection and analysis team. These subtasks are sometimes called key intelligence questions (KIQs) or essential elements of information (EEIs).”

 

“Issue decomposition follows the classic method for problem solving. It results in a requirements, or needs, hierarchy that is widely used in intelligence organizations. ”
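
To make the needs hierarchy concrete, here is a minimal sketch in Python of how an issue decomposition might be held as a nested structure and then flattened into its task-level questions (the KIQs or EEIs mentioned above). The issue, subissues, and questions are invented for illustration and are not drawn from the text.

```python
# A minimal sketch of an issue decomposition hierarchy.
# The issue, subissues, and questions below are hypothetical examples.

decomposition = {
    "issue": "What threat does Cartel X pose to Region Y?",
    "subissues": [
        {
            "issue": "Leadership and organization",
            "subissues": [
                {"issue": "Who makes operational decisions?", "subissues": []},
                {"issue": "How are orders communicated?", "subissues": []},
            ],
        },
        {
            "issue": "Finances",
            "subissues": [
                {"issue": "Which banks move the cartel's funds?", "subissues": []},
            ],
        },
    ],
}

def leaf_questions(node, path=()):
    """Walk the hierarchy and return the task-level questions (leaves),
    each paired with the chain of parent issues above it."""
    if not node["subissues"]:
        return [(path, node["issue"])]
    leaves = []
    for child in node["subissues"]:
        leaves.extend(leaf_questions(child, path + (node["issue"],)))
    return leaves

for parents, question in leaf_questions(decomposition):
    print(" > ".join(parents), "::", question)
```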

 

it is difficult to evaluate how well an intelligence organization is answering the question, “What is the political situation in Region X?” It is much easier to evaluate the intelligence unit’s performance in researching the transparency, honesty, and legitimacy of elections, because these are very specific issues.

 

“Obviously there can be several different issues associated with a given intelligence target or several different targets associated with a given issue.”

 

Complex Issue Decomposition

 

We have learned that the most important step in the intelligence process is to understand the issue accurately and in detail. Equally true, however, is that intelligence problems today are increasingly complex—often described as nonlinear, or “wicked.” They are dynamic and evolving, and thus their solutions are, too. This makes them difficult to deal with—and almost impossible to address within the traditional intelligence cycle framework. A typical example of a wicked issue is that of a drug cartel—the cartel itself is dynamic and evolving and so are the questions being posed by intelligence consumers who have an interest in it.”

“A typical real-world customer’s issue today presents an intelligence officer with the following challenges:

 

It represents an evolving set of interlocking issues and constraints.


“There are many stakeholders—people who care about or have something at stake in how the issue is resolved. (Again, this makes the problem-solving process a fundamentally social one, in contrast to the antisocial traditional intelligence cycle.) ”

 

The constraints on the solution, such as limited resources and political ramifications, change over time. The target is constantly changing, as the Escobar example illustrates, and the customers (stakeholders) change their minds, fail to communicate, or otherwise change the rules of the game.”

 

Because there is no final issue definition, there is no definitive solution. The intelligence process often ends when time runs out, and the customer must act on the most currently available information.”

 

“Harvard professor David S. Landes summarized these challenges nicely when he wrote, “The determinants of complex processes are invariably plural and interrelated.”15 Because of this—because complex or wicked problems are an evolving set of interlocking issues and constraints, and because the introduction of new constraints cannot be prevented—the decomposition of a complex problem must be dynamic; it will change with time and circumstances. ”

 

 

“As intelligence customers learn more about the targets, their needs and interests will shift.

Ideally, a complex issue decomposition should be created as a network because of the interrelationship among the elements.

 

 

Although the hierarchical decomposition approach may be less than ideal for complex problems, it works well enough if it is constantly reviewed and revised during the analysis process. It allows analysts to define the issue in sufficient detail and with sufficient accuracy so that the rest of the process remains relevant. There may be redundancy in a linear hierarchy, but the human mind can usually recognize and deal with the redundancy. To keep the decomposition manageable, analysts should continue to use the hierarchy, recognizing the need for frequent revisions, until information technology comes up with a better way.

 

 

 

Structured Analytic Methodologies for Issue Definition

 

Throughout the book we discuss a class of analytic methodologies that are collectively referred to as structured analytic techniques, or SATs.

 

 

“a relevancy check needs to be done. To be “key,” an assumption must be essential to the analytic reasoning that follows it. That is, if the assumption turns out to be invalid, then the conclusions also probably are invalid. CIA’s Tradecraft Primer identifies several questions that need to be asked about key assumptions:

 

How much confidence exists that this assumption is correct?

What explains the degree of confidence in the assumption?

What circumstances or information might undermine this assumption?

Is a key assumption more likely a key uncertainty or key factor?

Could the assumption have been true in the past but less so now?

If the assumption proves to be wrong, would it significantly alter the analytic line? How?

Has this process identified new factors that need further analysis?”
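
Purely as an illustration, the Tradecraft Primer questions above can be treated as a checklist that is walked through for every key assumption. The sketch below assumes a simple worksheet structure of my own invention; the example assumption is hypothetical, and only the questions themselves come from the list above.

```python
# Hypothetical sketch: applying the key-assumptions-check questions
# from the list above to each recorded assumption.

KEY_ASSUMPTION_QUESTIONS = [
    "How much confidence exists that this assumption is correct?",
    "What explains the degree of confidence in the assumption?",
    "What circumstances or information might undermine this assumption?",
    "Is a key assumption more likely a key uncertainty or key factor?",
    "Could the assumption have been true in the past but less so now?",
    "If the assumption proves to be wrong, would it significantly alter the analytic line? How?",
    "Has this process identified new factors that need further analysis?",
]

def key_assumptions_check(assumptions):
    """Pair every assumption with every checklist question so the
    answers can be recorded during a review session."""
    return {a: {q: None for q in KEY_ASSUMPTION_QUESTIONS} for a in assumptions}

worksheet = key_assumptions_check([
    "The opposing service relies primarily on open sources.",  # invented example
])
print(worksheet)
```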

 

Example: Defining the Counterintelligence Issue

 

Counterintelligence (CI) in government usually is thought of as having two subordinate problems: security (protecting sources and methods) and catching spies (counterespionage).

 

 

If the issue is defined this way—security and counterespionage—the response in both policy and operations is defensive. Personnel background security investigations are conducted. Annual financial statements are required of all employees. Profiling is used to detect unusual patterns of computer use that might indicate computer espionage. Cipher-protected doors, badges, personal identification numbers, and passwords are used to ensure that only authorized persons have access to sensitive intelligence. The focus of communications security is on denial, typically by encryption. Leaks of intelligence are investigated to identify their source.

 

But whereas the focus on security and counterespionage is basically defensive, the first rule of strategic conflict is that the offense always wins. So, for intelligence purposes, you’re starting out on the wrong path if the issue decomposition starts with managing security and catching spies.

 

A better issue definition approach starts by considering the real target of counterintelligence: the opponent’s intelligence organization. Good counterintelligence requires good analysis of the hostile intelligence services. As we will see in several examples later in this book, if you can model an opponent’s intelligence system, you can defeat it. So we start with the target as the core of the problem and begin an issue decomposition.

 

If the counterintelligence issue is defined in this fashion, then the counterintelligence response will be forward-leaning and will focus on managing foreign intelligence perceptions through a combination of covert action, denial, and deception. The best way to win the CI conflict is to go on the offensive (model the target, anticipate the opponent’s actions, and defeat him or her). Instead of denying information to the opposing side’s intelligence machine, for example, you feed it false information that eventually degrades the leadership’s confidence in its intelligence services.

 

To do this, one needs a model of the opponent’s intelligence system that can be subjected to target-centric analysis, including its communications channels and nodes, its requirements and targets, and its preferred sources of intelligence.

 

Summary

Before beginning intelligence analysis, the analyst must understand the customer’s issue. This usually involves close interaction with the customer until the important issues are identified. The problem then has to be deconstructed in an issue decomposition process so that collection, synthesis, and analysis can be effective.”

 

All significant intelligence issues, however, are complex and nonlinear. The complex problem is a dynamic set of interlocking issues and constraints with many stakeholders and no definitive solution. Although the linear issue decomposition process is not an optimal way to approach such problems, it can work if it is reviewed and updated frequently during the analysis process.

 

 

“Issue definition is the first step in a process known as structured argumentation. As an analyst works through this process, he or she collects and evaluates relevant information, fitting it into a target model (which may or may not look like the issue decomposition); this part of the process is discussed in chapters 5–7. The analyst identifies information gaps in the target model and plans strategies to fill them. The analysis of the target model then provides answers to the questions posed in the issue definition process. The next chapter discusses the concept of a model and how it is analyzed.”

Chapter 5

Conceptual Frameworks for Intelligence Analysis

 

“If we are to think seriously about the world, and act effectively in it, some sort of simplified map of reality . . . is necessary.”

Samuel P. Huntington, The Clash of Civilizations and the Remaking of World Order

 

 

“Balance of power,” for example, was an important conceptual framework used by policymakers during the Cold War. A different conceptual framework has been proposed for assessing the influence that one country can exercise over another.”

 

Analytic Perspectives—PMESII

 

In chapter 2, we discussed the instruments of national power—an actions view that defines the diplomatic, information, military, and economic (DIME) actions that executives, policymakers, and military or law enforcement officers can take to deal with a situation.

 

The customer of intelligence may have those four “levers” that can be pulled, but intelligence must be concerned with the effects of pulling those levers. Viewed from an effects perspective, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII.

 

Political. Describes the distribution of responsibility and power at all levels of governance—formally constituted authorities, as well as informal or covert political powers. (Who are the tribal leaders in the village? Which political leaders have popular support? Who exercises decision-making or veto power in a government, insurgent group, commercial entity, or criminal enterprise?)

 

Military. Explores the military and/or paramilitary capabilities or other ability to exercise force of all relevant actors (enemy, friendly, and neutral) in a given region or for a given issue. (What is the force structure of the opponent? What weaponry does the insurgent group possess? What is the accuracy of the rockets that Hamas intends to use against Israel? What enforcement mechanisms are drug cartels using to protect their territories?)

 

Economic. Encompasses individual and group behaviors related to producing, distributing, and consuming resources. (What is the unemployment rate? Which banks are supporting funds laundering? What are Egypt’s financial reserves? What are the profit margins in the heroin trade?)

 

Social. Describes the cultural, religious, and ethnic makeup within an area and the beliefs, values, customs, and behaviors of society members. (What is the ethnic composition of Nigeria? What religious factions exist there? What key issues unite or divide the population?)

Infrastructure. Details the composition of the basic facilities, services, and installations needed for the functioning of a community, business enterprise, or society in an area. (What are the key modes of transportation? Where are the electric power substations? Which roads are critical for food supplies?)

 

Information. Explains the nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information. (How much access does the local population have to news media or the Internet? What are the cyber attack and defense capabilities of the Saudi government? How effective would attack ads be in Japanese elections?)

 

The typical intelligence problem seldom must deal with only one of these factors or systems. Complex issues are likely to involve them all. The events of the Arab Spring in 2011, the Syrian uprising that began that year, and the Ukrainian crisis of 2014 involved all of the PMESII factors. But PMESII is also relevant in issues that are not necessarily international. Law enforcement must deal with them all (in this case, “military” refers to the use of violence or armed force by criminal elements).
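
Because the PMESII breakout is essentially an organizing template, one simple way to use it is to group a target's outstanding questions by factor so that neglected factors stand out. The sketch below is a hypothetical illustration only; the questions echo the style of the examples above and are not an official format.

```python
# Hypothetical PMESII worksheet: outstanding questions grouped by factor.
pmesii_questions = {
    "political": ["Who exercises veto power in the regime?"],
    "military": ["What is the opponent's force structure?"],
    "economic": ["Which banks move the group's funds?"],
    "social": ["What key issues unite or divide the population?"],
    "infrastructure": ["Which roads are critical for food supplies?"],
    "information": ["How much Internet access does the population have?"],
}

# Factors with no open questions may indicate an unexamined part of the target.
uncovered = [factor for factor, qs in pmesii_questions.items() if not qs]
print("Factors with no open questions:", uncovered or "none")
```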

 

Modeling the Intelligence Target

 

Models are used so extensively in intelligence that analysts seldom give them much thought, even as they use them.

 

The model paradigm is a powerful tool in many disciplines.

 

“Former national intelligence officer Paul Pillar described them as “guiding images” that policymakers rely on in making decisions. We’ve discussed one guiding image—that of the PMESII concept. The second guiding image—that of a map, theory, concept, or paradigm—is merged in this book into a single entity called a model. Or, as the CIA’s Tradecraft Primer puts it succinctly:

 

“all individuals assimilate and evaluate information through the medium of “mental models…”

 

Modeling is usually thought of as being quantitative and using computers. However, all models start in the human mind. Modeling does not always require a computer, and many useful models exist only on paper. Models are used widely in fields such as operations research and systems analysis. With modeling, one can analyze, design, and operate complex systems. Simulation models can be used to evaluate real-world processes that are too complex to analyze with spreadsheets or flowcharts (which are themselves models, of course), and to test hypotheses at a fraction of the cost of undertaking the actual activities. Models are an efficient communication tool for showing how the target functions and stimulating creative thinking about how to deal with an opponent.

 

Models are essential when dealing with complex targets (Analysis Principle 5-1). Without a device to capture the full range of thinking and creativity that occurs in the target-centric approach to intelligence, an analyst would have to keep in mind far too many details. Furthermore, in the target-centric approach, the customer of intelligence is part of the collaborative process. Presented with a model as an organizing construct for thinking about the target, customers can contribute pieces to the model from their own knowledge—pieces that the analyst might be unaware of. The primary suppliers of information (the collectors) can do likewise.

 

The Concept of a Model

 

A model, as used in intelligence, is an organizing construct. It is a combination of facts, hypotheses, and assumptions about a target, developed in a form that is useful for analyzing the target and for customer decision making (producing actionable intelligence). The type of model used in intelligence typically comprises facts, hypotheses, and assumptions, so it’s important to distinguish them here:

 

Fact. Something that is indisputably the case.

Hypothesis. A proposition that is set forth to explain developments or observed phenomena. It can be posed as conjecture to guide research (a working hypothesis) or accepted as a highly probable conclusion from established facts.

Assumption. A thing that is accepted as true or as certain to happen, without proof.

 

These are the things that go into a model. But, it is important to distinguish them when you present the model. Customers should never wonder whether they are hearing facts, hypotheses, or assumptions.
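
One way to honor that distinction in practice is to tag every element of the target model as a fact, a hypothesis, or an assumption so the label survives into the presented product. The following is a minimal sketch of that idea; the statements, the source field, and the class name are invented for illustration.

```python
# Minimal sketch: tagging each model element as fact, hypothesis, or assumption
# so the distinction survives into the presented product.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ModelElement:
    statement: str
    kind: Literal["fact", "hypothesis", "assumption"]
    source: str = ""          # where the statement came from (hypothetical field)

model = [
    ModelElement("The facility has two production lines.", "fact", "imagery report"),
    ModelElement("Line 2 was added to expand export capacity.", "hypothesis"),
    ModelElement("Reported output figures are roughly accurate.", "assumption"),
]

for kind in ("fact", "hypothesis", "assumption"):
    print(kind.upper())
    for element in model:
        if element.kind == kind:
            print("  -", element.statement)
```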

 

A model is a replica or representation of an idea, an object, or an actual system. It often describes how a system behaves. Instead of interacting with the real system, an analyst can create a model that corresponds to the actual one in certain ways.

 

 

Physical models are a tangible representation of something. A map, a globe, a calendar, and a clock are all physical models. The first two represent the Earth or parts of it, and the latter two represent time. Physical models are always descriptive.

 

Conceptual models—inventions of the mind—are essential to the analytic process. They allow the analyst to describe things or situations in abstract terms both for estimating current situations and for predicting future ones.”

 

 


A conceptual model may be either descriptive, describing what it represents, or normative. A normative model may contain some descriptive segments, but its purpose is to describe a best, or preferable, course of action. A decision-support model—that is, a model used to choose among competing alternatives—is normative.

In intelligence analysis, the models of most interest are conceptual and descriptive rather than normative. Some common traits of these conceptual models follow.

 

Descriptive models can be deterministic or stochastic.

In a deterministic model the relationships are known and specified explicitly. A model that has any uncertainty incorporated into it is a stochastic model (meaning that probabilities are involved), even though it may have deterministic properties.

 

Descriptive models can be linear or nonlinear.

Linear models use only linear equations (for example, x = Ay + B) to describe relationships.

 

Nonlinear models use any type of mathematical function. Because nonlinear models are more difficult to work with and are not always capable of being analyzed, the usual practice is to make some compromises so that a linear model can be used.

 

Descriptive models can be static or dynamic.

A static model assumes that a specific time period is being analyzed and the state of nature is fixed for that time period. Static models ignore time-based variances. For example, one cannot use them to determine the impact of an event’s timing in relation to other events. Returning to the example of a combat model, a snapshot of the combat that shows where opposing forces are located and their directions of movement at that instant is static. Static models do not take into account the synergy of the components of a system, where the actions of separate elements can have a different effect on the system than the sum of their individual effects would indicate. Spreadsheets and most relationship models are static.

 

Dynamic modeling (also known as simulation) is a software representation of the time-based behavior of a system. Where a static model involves a single computation of an equation, a dynamic model is iterative; it constantly recomputes its equations as time changes.
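
To make the static/dynamic distinction concrete, the sketch below is a small time-stepped (dynamic) model in the sense just described: it recomputes its equations at every iteration as time advances. It uses simple Lanchester-style attrition equations as a stand-in for the combat model mentioned above; the coefficients and starting strengths are invented.

```python
# Hypothetical dynamic (time-stepped) model: Lanchester-style attrition.
# A static model would be a single snapshot of blue/red strength;
# the dynamic model iterates the equations as time advances.

def simulate(blue, red, blue_eff=0.05, red_eff=0.04, dt=1.0, steps=50):
    history = [(0.0, blue, red)]
    for step in range(1, steps + 1):
        blue_losses = red_eff * red * dt    # losses proportional to opposing strength
        red_losses = blue_eff * blue * dt
        blue = max(blue - blue_losses, 0.0)
        red = max(red - red_losses, 0.0)
        history.append((step * dt, blue, red))
        if blue == 0.0 or red == 0.0:
            break
    return history

for t, b, r in simulate(blue=1000, red=1200)[::10]:
    print(f"t={t:5.1f}  blue={b:7.1f}  red={r:7.1f}")
```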

 

Descriptive models can be solvable or simulated.

A solvable model is one in which there is an analytic way of finding the answer. The performance model of a radar, a missile, or a warhead is a solvable problem. But other problems require such a complicated set of equations to describe them that there is no way to solve them. Worse still, complex problems typically cannot be described in a manageable set of equations. In complex cases—such as the performance of an economy or a person—one can turn to simulation.

 

Using Target Models for Analysis

 

Operations

Intelligence services prefer specific sources of intelligence, shaped in part by what has worked for them in the past; by their strategic targets; and by the size of their pocketbooks. The poorer intelligence services rely heavily on open source (including the web) and HUMINT, because both are relatively inexpensive. COMINT also can be cheap, unless it is collected by satellites. The wealthier services also make use of satellite-collected imagery intelligence (IMINT) and COMINT, and other types of technical collection.

 

“China relies heavily on HUMINT, working through commercial organizations, particularly trading firms, students, and university professors far more than most other major intelligence powers do.

 

In addition to being acquainted with opponents’ collection habits, CI also needs to understand a foreign intelligence service’s analytic capabilities. Many services have analytic biases, are ethnocentric, or handle anomalies poorly. It is important to understand their intelligence communications channels and how well they share intelligence within the government. In many countries, the senior policymaker or military commander is the analyst. That provides a prime opportunity for “perception management,” especially if a narcissistic leader like Hitler, Stalin, or Saddam Hussein is in charge and doing his own analysis. Leaders and policymakers find it difficult to be objective; they are people of action, and they always have an agenda. They have lots of biases and are prone to wishful thinking.

 

Linkages

Almost all intelligence services have liaison relationships with foreign intelligence or security services. It is important to model these relationships because they can dramatically extend the capabilities of an intelligence service.

 

Summary

Two conceptual frameworks are invaluable for doing intelligence analysis. One deals with the instruments of national or organizational power and the effects of their use. The second involves the use of target models to produce analysis.

 

The intelligence customer has four instruments of national or organizational power, as discussed in chapter 2. Intelligence is concerned with how opponents will use those instruments and the effects that result when customers use them. Viewed from both the opponent’s actions and the effects perspectives, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII:

 

 

Political. The distribution of power and control at all levels of governance.

 

Military. The ability of all relevant actors (enemy, friendly, and neutral) to exercise force.

 

Economic. Behavior relating to producing, distributing, and consuming resources.

 

Social. The cultural, religious, and ethnic composition of a region and the beliefs, values, customs, and behaviors of people.

 

Infrastructure. The basic facilities, services, and installations needed for the functioning of a community or society.

 

Information. The nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information.”

 

 

Models in intelligence are typically conceptual and descriptive. The easiest ones to work with are deterministic, linear, static, solvable, or some combination. Unfortunately, in the intelligence business the target models tend to be stochastic, nonlinear, dynamic, and simulated.

 

From an existing knowledge base, a model of the target is developed. Next, the model is analyzed to extract information for customers or for additional collection. The “model” of complex targets will typically be a collection of associated models that can serve the purposes of intelligence customers and collectors.

 

Chapter 6

Overview of Models in Intelligence

 

One picture is worth more than ten thousand words.

Chinese proverb

 

“The process of populating the appropriate model is known as synthesis, a term borrowed from the engineering disciplines. Synthesis is defined as putting together parts or elements to form a whole—in this case, a model of the target. It is what intelligence analysts do, and their skill at it is a primary measure of their professional competence.”

 

 

Creating a Conceptual Model

 

 

The first step in creating a model is to define the system that encompasses the intelligence issues of interest, so that the resulting model answers any problem that has been defined by using the issue definition process.

 

few questions in strategic intelligence or in-depth research can be answered by using a narrowly defined target.

 

For the complex targets that are typical of in-depth research, an analyst usually will deal with a complete system, such as an air defense system that will use the new fighter aircraft.

 

In law enforcement, analysis of an organized crime syndicate involves consideration of people, funds, communications, operational practices, movement of goods, political relationships, and victims. Many intelligence problems will require consideration of related systems as well. The energy production system, for example, will give rise to intelligence questions about related companies, governments, suppliers and customers, and nongovernmental organizations (such as environmental advocacy groups). The questions that customers pose should be answerable by reference only to the target model, without the need to reach beyond it.

 

A major challenge in defining the relevant system is to use restraint. The definition must include essential subsystems or collateral systems, but nothing more. Part of an analyst’s skill lies in being able to include in a definition the relevant components, and only the relevant components, that will address the issue.

 

The systems model can therefore be structural, functional, process oriented, or any combination thereof. Structural models include actors, objects, and the organization of their relationships to each other. Process models focus on interactions and their dynamics. Functional models concentrate on the results achieved, for example, a model that simulates the financial consequences of a proposed trade agreement.

 

After an analyst has defined the relevant system, the next step is to select the generic models, or model templates, to be used. These model templates then will be made specific, or “populated,” using evidence (discussed in chapter 7). Several types of generic models are used in intelligence. The three most basic types are textual, mathematical, and visual.

 

Textual Models

 

Almost any model can be described using written text. The CIA’s World Factbook is an example of a set of textual models—actually a series of models (political, military, economic, social, infrastructure, and information)—of a country. Some common examples of textual models that are used in intelligence analysis are lists, comparative models, profiles, and matrix models.

 

 

 

 

Lists

 

Lists and outlines are the simplest examples of a model.

 

The list continues to be used by analysts today for much the same purpose—to reach a yes-or-no decision.

 

Comparative Models

 

Comparative techniques, like lists, are a simple but useful form of modeling that typically does not require a computer simulation. Comparative techniques are used in government, mostly for weapons systems and technology analyses. Both governments and businesses use comparative models to evaluate a competitor’s operational practices, products, and technologies. This is called benchmarking.

 

A powerful tool for analyzing a competitor’s developments is to compare them with your own organization’s developments. Your own systems or technologies can provide a benchmark for comparison.

 

Comparative models have to be culture specific to help avoid mirror imaging.

 

A keiretsu is a network of businesses, usually in related industries, that own stakes in one another and have board members in common as a means of mutual security. A network of essentially captive (because they are dependent on the keiretsu) suppliers provides the raw material for the keiretsu manufacturers, and the keiretsu trading companies and banks provide marketing services. Keiretsu have their roots in prewar Japan.

 

Profiles

 

Profiles are models of individuals—in national intelligence, of leaders of foreign governments; in business intelligence, of top executives in a competing organization; in law enforcement, of mob leaders and serial criminals.

 

 

Profiles depend heavily on understanding the pattern of mental and behavioral traits that are shared by adult members of a society—referred to as the society’s modal personality. Several modal personality types may exist in a society, and their common elements are often referred to as national character.

 

Defining the modal personality type is beyond the capabilities of the journeyman intelligence analyst, and one must turn to experts.

 

 

The modal personality model usually includes at least the following elements:

 

Concept of self—the conscious ideas of what a person thinks he or she is, along with the frequently unconscious motives and defenses against ego-threatening experiences such as withdrawal of love, public shaming, guilt, or isolation.

 

Relation to authority—how an individual adapts to authority figures

Modes of impulse control and expressing emotion

Processes of forming and manipulating ideas”

 

 

Three model types are often used for studying modal personalities and creating behavioral profiles:

 

Cultural pattern models are relatively straightforward to analyze and are useful in assessing group behavior.

 

 

Child-rearing systems can be studied to allow the projection of adult personality patterns and behavior. They may allow more accurate assessments of an individual than a simple study of cultural patterns, but they cannot account for the wide range of possible pattern variations occurring after childhood.

 

Individual assessments are probably the most accurate starting points for creating a behavioral model, but they depend on detailed data about the specific individual. Such data are usually gathered from testing techniques; the Rorschach (or “Inkblot”) test—a projective personality assessment based on the subject’s reactions to a series of ten inkblot pictures—is an example.

 

Interaction Matrices

A textual variant of the spreadsheet (discussed later) is the interaction matrix, a valuable analytic tool for certain types of synthesis. It appears in various disciplines and under different names and is also called a parametric matrix or a traceability matrix.

 

Mathematical Models

The most common modeling problem involves solving an equation. Most problems in engineering or technical intelligence involve a single equation that expresses a quantity of interest as a function of several variables and constants.

 

Most analysis involves fixing all of the variables and constants in such an equation or system of equations, except for two variables. The equation is then solved repetitively to obtain a graphical picture of one variable as a function of another. A number of software packages perform this type of solution very efficiently. For example, as a part of radar performance analysis, the radar range equation is solved for signal-to-noise ratio as a function of range, and a two-dimensional curve is plotted. Then, perhaps, signal-to-noise ratio is fixed and a new curve plotted for radar cross-section as a function of range.
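
As a worked illustration of that practice, the sketch below fixes the radar parameters, solves a standard form of the radar range equation for signal-to-noise ratio, and evaluates it repetitively over range to produce the ordered pairs that would be plotted as a curve. All parameter values are invented and purely illustrative.

```python
# Hypothetical sketch: radar range equation solved for signal-to-noise ratio
# as a function of range:
#   SNR = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4 * k*T*B*F*L)
import math

K_BOLTZMANN = 1.380649e-23   # J/K

def snr_db(range_m, pt=1.0e6, gain=10**3.5, wavelength=0.03,
           rcs=1.0, temp=290.0, bandwidth=1.0e6, noise_fig=10**0.3, losses=10**0.4):
    """Signal-to-noise ratio (in dB) at a given range, with all other
    radar parameters held fixed.  Parameter values are illustrative only."""
    numerator = pt * gain**2 * wavelength**2 * rcs
    denominator = ((4 * math.pi)**3 * range_m**4 *
                   K_BOLTZMANN * temp * bandwidth * noise_fig * losses)
    return 10 * math.log10(numerator / denominator)

# Ordered pairs (range, SNR) that would be plotted as a performance curve.
for r_km in (50, 100, 200, 400):
    print(f"{r_km:4d} km  ->  SNR {snr_db(r_km * 1000):6.1f} dB")
```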

 

Often the requirement is to solve an equation, get a set of ordered pairs, and plug those into another equation to get a graphical picture rather than solving simultaneous equations.

 

Spreadsheets

 

The computer is a powerful tool for handling the equation-solution type of problem. Spreadsheet software has made it easy to create equation-based models. The rich set of mathematical functions that can be incorporated in it, and its flexibility, make the spreadsheet a widely used model in intelligence.

 

Simulation Models

 

A simulation model is a mathematical model of a real object, a system, or an actual situation. It is useful for estimating the performance of its real-world analogue under different conditions. We often wish to determine how something will behave without actually testing it in real life. So simulation models are useful for helping decision makers choose among alternative actions by determining the likely outcomes of those actions.

 

In intelligence, simulation models also are used to assess the performance of opposing weapons systems, the consequences of trade embargoes, and the success of insurgencies.

 

Simulation models can be challenging to build. The main challenge usually is validation: determining that the model accurately represents what it is supposed to represent, under different input conditions.

 

Visual Models

 

Models can be described in written text, as noted earlier. But the models that have the most impact for both analysts and customers in facilitating understanding take a visual form.

 

Visualization involves transforming raw intelligence into graphical, pictorial, or multimedia forms so that our brains can process and understand large amounts of data more readily than is possible from simply reading text. Visualization lets us deal with massive quantities of data and identify meaningful patterns and structures that otherwise would be incomprehensible.

 

 

Charts and Graphs

 

Graphical displays, often in the form of curves, are a simple type of model that can be synthesized both for analysis and for presenting the results of analysis.

 

 

Pattern Models

 

Many types of models fall under the broad category of pattern models. Pattern recognition is a critical element of all intelligence

 

Most governmental and industrial organizations (and intelligence services) also prefer to stick with techniques that have been successful in the past. An important aspect of intelligence synthesis, therefore, is recognizing patterns of activities and then determining in the analysis phase whether (a) the patterns represent a departure from what is known or expected and (b) the changes in patterns are significant enough to merit attention. The computer is a valuable ally here; it can display trends and allow the analyst to identify them. This capability is particularly useful when trends would be difficult or impossible to find by sorting through and mentally processing a large volume of data. Pattern analysis is one way to effectively handle complex issues.

 

One type of pattern model used by intelligence analysts relies on statistics. In fact, a great deal of pattern modeling is statistical. Intelligence deals with a wide variety of statistical modeling techniques. Some of the most useful techniques are easy to learn and require no previous statistical training.

 

Histograms, which are bar charts that show a frequency distribution, are one example of a simple statistical pattern.
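
As a small example of such a statistical pattern model, the sketch below bins a set of invented event times by hour of day into a frequency distribution—a text rendering of a histogram.

```python
# Hypothetical sketch: a frequency distribution (histogram) of event times
# by hour of day, the kind of simple statistical pattern model described above.
from collections import Counter

event_hours = [2, 3, 3, 4, 14, 15, 15, 15, 16, 22, 23, 23]   # invented observations

histogram = Counter(event_hours)
for hour in range(24):
    count = histogram.get(hour, 0)
    if count:
        print(f"{hour:02d}:00  {'#' * count}  ({count})")
```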

 

Advanced Target Models

 

The example models introduced so far are frequently used in intelligence. They’re fairly straightforward and relatively easy to create. Intelligence also makes use of four model types that are more difficult to create and to analyze, but that give more in-depth analysis. We’ll briefly introduce them here.

 

Systems Models

 

Systems models are well known in intelligence for their use in assessing the performance of weapons systems.

 

 

Systems models have been created for all of the following examples:

 

A republic, a dictatorship, or an oligarchy can be modeled as a political system.

 

Air defense systems, carrier strike groups, special operations teams, and ballistic missile systems all are modeled as military systems.

 

Economic systems models describe the functioning of capitalist or socialist economies, international trade, and informal economies.

 

Social systems include welfare or antipoverty programs, health care systems, religious networks, urban gangs, and tribal groups.

 

Infrastructure systems could include electrical power, automobile manufacturing, railroads, and seaports.

 

A news gathering, production, and distribution system is an example of an information system.

Creating a systems model requires an understanding of the system, developed by examining the linkages and interactions between the elements that compose the system as a whole.

 

 

A system has structure. It is composed of parts that are related (directly or indirectly). It has a defined boundary physically, temporally, and spatially, though it can overlap with or be a part of a larger system.

 

A system has a function. It receives inputs from, and sends outputs into, an outside environment. It is autonomous in fulfilling its function. A main battle tank standing alone is not a system. A tank with a crew, fuel, ammunition, and a communications subsystem is a system.

 

A system has a process that performs its function by transforming inputs into outputs.

 

 

Relationship Models

 

Relationships among entities—people, places, things, and events—are perhaps the most common subject of intelligence modeling. There are four levels of such relationship models, each using increasingly sophisticated analytic approaches: hierarchy, matrix, link, and network models. The four are closely related, representing the same fundamental idea at increasing levels of complexity.

 

Relationship models require a considerable amount of time to create, and maintaining the model (known to those who do it as “feeding the beast”) demands much effort. But such models are highly effective in analyzing complex problems, and the associated graphical displays are powerful in persuading customers to accept the results.

 

Hierarchy Models

 

The hierarchy model is a simple tree structure. Organizational modeling naturally lends itself to the creation of a hierarchy, as anyone who ever drew an organizational chart is aware. A natural extension of such a hierarchy is to use a weighting scheme to indicate the importance of individuals or suborganizations in it.

 

Matrix Models

 

The interaction matrix was introduced earlier. The relationship matrix model is different. It portrays the existence of an association, known or suspected, between individuals. It usually portrays direct connections such as face-to-face meetings and telephone conversations. Analysts can use association matrices to identify those personalities and associations needing a more in-depth analysis to determine the degree of relationships, contacts, or knowledge between individuals.
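
A minimal sketch of an association matrix built from reported direct contacts might look like the following; the names and contact reports are invented.

```python
# Hypothetical sketch: building an association (relationship) matrix from
# reported direct contacts (meetings, calls) between individuals.
people = ["A. Adams", "B. Barnes", "C. Cruz", "D. Diaz"]
contacts = [("A. Adams", "B. Barnes"),   # invented contact reports
            ("A. Adams", "C. Cruz"),
            ("B. Barnes", "C. Cruz"),
            ("C. Cruz", "D. Diaz")]

index = {name: i for i, name in enumerate(people)}
matrix = [[0] * len(people) for _ in people]
for a, b in contacts:
    matrix[index[a]][index[b]] += 1
    matrix[index[b]][index[a]] += 1    # associations are symmetric

print("          " + "  ".join(f"{p[:8]:>8}" for p in people))
for name, row in zip(people, matrix):
    print(f"{name:>10} " + "  ".join(f"{c:>8}" for c in row))
```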

 

Link Models

 

A link model allows the view of relationships in more complex tree structures. Though it physically resembles a hierarchy model (both are trees), a link model differs in that it shows different kinds of relationships but does not indicate subordination.

 

Network Models

 

A network model can be thought of as a flexible interrelationship of multiple tree structures at multiple levels. The key limitation of the matrix model discussed earlier is that although it can deal with the interaction of two hierarchies at a given level, because it is a two-dimensional representation, it cannot deal with interactions at multiple levels or with more than two hierarchies. Network synthesis is an extension of the link or matrix synthesis concept that can handle such complex problems. There are several types of network models. Two are widely used in intelligence:

 

Social network models show patterns of human relationships. The nodes are people, and the links show that some type of relationship exists.

 

Target network models are most useful in intelligence. The nodes can be any type of entity—people, places, things, concepts—and the links show that some type of relationship exists between entities.
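
As a sketch of a target network model, the snippet below stores entities of mixed types as nodes and their relationships as links, then uses the number of links per node as a crude indicator of which entities sit at the center of the network. The entities, relationships, and the degree measure are illustrative choices, not a prescribed method.

```python
# Hypothetical sketch: a target network model with mixed node types.
from collections import defaultdict

nodes = {                      # invented entities: people, places, things
    "K. Lee": "person",
    "Front Company Z": "organization",
    "Warehouse 12": "place",
    "Shipment 0417": "thing",
}
links = [("K. Lee", "Front Company Z", "directs"),
         ("Front Company Z", "Warehouse 12", "leases"),
         ("Shipment 0417", "Warehouse 12", "stored at"),
         ("K. Lee", "Shipment 0417", "arranged")]

adjacency = defaultdict(list)
for a, b, relation in links:
    adjacency[a].append((b, relation))
    adjacency[b].append((a, relation))

# Degree (number of links) as a rough measure of how central each entity is.
for node in sorted(nodes, key=lambda n: -len(adjacency[n])):
    print(f"{node:16} ({nodes[node]:12}) degree={len(adjacency[node])}")
```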

 

Spatial and Temporal Models

 

Another way to examine data and to search for patterns is to use spatial modeling—depicting locations of objects in space. Spatial modeling can be used effectively on a small scale. For example, within a building, computer-aided design/computer-aided modeling, known as CAD/CAM, can be a powerful tool for intelligence synthesis. Layouts of buildings and floor plans are valuable in physical security analysis and in assessing production capacity.


 

Spatial modeling on larger scales is usually called geospatial modeling.

 

Patterns of activity over time are important for showing trends. Pattern changes are often used to compare how things are going now with how they went last year (or last decade). Estimative analysis often relies on chronological models.

 

Scenarios

Arguably the most important model for estimative intelligence purposes is the scenario, a very sophisticated model.

 

Alternative scenarios are used to model future situations. These scenarios increasingly are produced as virtual reality models because they are powerful ways to convey intelligence and are very persuasive.

Target Model Combinations

Almost all target models are actually combinations of many models. In fact, most of the models described in the previous sections can be merged into combination models. One simple example is a relationship-time display.

This is a dynamic model where link or network nodes and links (relationships) change, appear, and disappear over time.

We also typically want to have several distinct but interrelated models of the target in order to be able to answer different customer questions.

Submodels

One type of component model is a submodel, a more detailed breakout of the top-level model. It is typical, for complex targets, to have many such submodels of a target that provide different levels of detail.

Participants in the target-centric process then can reach into the model set to pull out the information they need. The collectors of information can drill down into more detail to refine collection targeting and to fill specific gaps.

The intelligence customer can drill down to answer questions, gain confidence in the analyst’s picture of the target, and understand the limits of the analyst’s work. The target model is a powerful collaborative tool.

Collateral Models

In contrast to the submodel, a collateral model may show particular aspects of the overall target model, but it is not simply a detailed breakout of a top-level model. A collateral model typically presents a different way of thinking about the target for a specific intelligence purpose.

The collateral models in Figures 6-7 to 6-9 are examples of the three general types—structural, functional, and process—used in systems analysis. Figures 6-7 and 6-8 are structural models. Figure 6-9 is both a process model and a functional model. In analyzing complex intelligence targets, all three types are likely to be used.

These models, taken together, allow an analyst to answer a wide range of customer questions.

More complex intelligence targets can require a combination of several model types. They may have system characteristics, take a network form, and have spatial and temporal characteristics.

Alternative and Competitive Target Models

Alternative and competitive models are somewhat different things, though they are frequently confused with each other.

Alternative Models

Alternative models are an essential part of the synthesis process. It is important to keep more than one possible target model in mind, especially as conflicting or contradictory intelligence information is collected.

 

“The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.” As law professor David Schum has noted, “the generation of new ideas in fact investigation usually rests upon arranging or juxtaposing our thoughts and evidence in different ways.” To do that we need multiple alternative models.

And, the more inclusive you can be when defining alternative models, the better…

In studies listing the analytic pitfalls that hampered past assessments, one of the most prevalent is failure to consider alternative scenarios, hypotheses, or models.
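
One widely cited way to force alternatives onto the table is a consistency matrix in the spirit of analysis of competing hypotheses: every piece of evidence is scored against every candidate model, not just the favored one. The sketch below is a simplified, hypothetical version of that idea; the hypotheses, evidence, and scores are invented.

```python
# Hypothetical sketch: scoring evidence against several alternative target
# models (hypotheses) instead of fitting it to a single favored model.
hypotheses = ["H1: facility is civilian",
              "H2: facility is dual-use",
              "H3: facility is military"]

# +1 = consistent with hypothesis, -1 = inconsistent, 0 = neutral (invented scores)
evidence_scores = {
    "Night-time truck deliveries observed": [-1, +1, +1],
    "Published civilian production figures": [+1, +1, -1],
    "Security perimeter upgraded":           [ 0, +1, +1],
}

totals = [0] * len(hypotheses)
for scores in evidence_scores.values():
    totals = [t + s for t, s in zip(totals, scores)]

# The point is not the arithmetic but that every hypothesis stays on the table.
for hypothesis, total in sorted(zip(hypotheses, totals), key=lambda x: -x[1]):
    print(f"{total:+3d}  {hypothesis}")
```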

Analysts have to guard against allowing three things to interfere with their need to develop alternative models:

  • Ego. Former director of national intelligence Mike McConnell once observed that analysts inherently dislike alternative, dissenting, or competitive views. But, the opposite becomes true of analysts who operate within the target-centric approach—the focus is not on each other anymore, but instead on contributing to a shared target model.
  • Time. Analysts are usually facing tight deadlines. They must resist the temptation to go with the model that best fits the evidence without considering alternatives. Otherwise, the result is premature closure that can cost dearly in the end result.
  • The customer. Customers can view a change in judgment as evidence that the original judgment was wrong, not that new evidence forced the change. Furthermore, when presented with two or more target models, customers will tend to pick the one that they like best, which may or may not be the most likely model. Analysts know this.

 

It is the analyst’s responsibility to establish a tone of setting egos aside and of conveying to all participants in the process, including the customer, that time spent up front developing alternative models is time saved at the end if it keeps them from committing to the wrong model in haste.

Competitive Models

It is well established in intelligence that, if you can afford the resources, you should have independent groups providing competing analyses. This is because we’re dealing with uncertainty. Different analysts, given the same set of facts, are likely to come to different conclusions.

It is important to be inclusive when defining alternative or competitive models.

Summary

Creating a target model starts with defining the relevant system. The system model can be a structural, functional, or process model, or any combination. The next step is to select the generic models or model templates.

Lists and curves are the simplest form of model. In intelligence, comparative models or benchmarks are often used; almost any type of model can be made comparative, typically by creating models of one’s own system side by side with the target system model.

Pattern models are widely used in the intelligence business. Chronological models allow intelligence customers to examine the timing of related events and plan a way to change the course of these events. Geospatial models are popular in military intelligence for weapons targeting and to assess the location and movement of opposing forces.

Relationship models are used to analyze the relationships among elements of the target—organizations, people, places, and physical objects—over time. Four general types of relationship models are commonly used: hierarchy, matrix, link, and network models. The most powerful of these, network models, are increasingly used to describe complex intelligence targets.

 

Competitive and alternative target models are an essential part of the process. Properly used, they help the analyst deal with denial and deception and avoid being trapped by analytic biases. But they take time to create, analysts find it difficult to change or challenge existing judgments, and alternative models give policymakers the option to select the conclusion they prefer—which may or may not be the best choice.

Chapter 7

 

Creating the Model

Believe nothing you hear, and only one half that you see.

Edgar Allan Poe

This chapter describes the steps that analysts go through in populating the target model. Here, we focus on the synthesis part of the target-centric approach, often called collation in the intelligence business.

We discuss the importance of existing pieces of intelligence, both finished and raw, and how best to think about sources of new raw data.

We talk about how credentials of evidence must be established, introduce widely used informal methods of combining evidence, and touch on structured argumentation as a formal methodology for combining evidence.

Analysts generally go through the actions described here in service to collation. They may not think about them as separate steps and in any event aren’t likely to do them in the order presented. They nevertheless almost always do the following:

 

  • Review existing finished intelligence about the target and examine existing raw intelligence
  • Acquire new raw intelligence
  • Evaluate the new raw intelligence
  • Combine the intelligence from all sources into the target model

 

Existing Intelligence

Existing finished intelligence reports typically define the current target model. So information gathering to create or revise a model begins with the existing knowledge base. Before starting an intelligence collection effort, analysts should ensure that they are aware of what has already been found on a subject.

Finished studies or reports on file at an analyst’s organization are the best place to start any research effort. There are few truly new issues.

The databases of intelligence organizations include finished intelligence reports as well as many specialized data files on specific topics. Large commercial firms typically have comparable facilities in-house, or they depend on commercially available databases.

a literature search should be the first step an analyst takes on a new project. The purpose is to both define the current state of knowledge—that is, to understand the existing model(s) of the intelligence target—and to identify the major controversies and disagreements surrounding the target model.

The existing intelligence should not be accepted automatically as fact. Few experienced analysts would blithely accept the results of earlier studies on a topic, though they would know exactly what the studies found. The danger is that, in conducting the search, an analyst naturally tends to adopt a preexisting target model.

In this case, premature closure, or a bias toward the status quo, leads the analyst to keep the existing model even when evidence indicates that a different model is more appropriate.

To counter this tendency, it’s important to do a key assumptions check on the existing model(s).

Do the existing analytic conclusions appear to be valid?

What are the premises on which these conclusions rest, and do they appear to be valid as well?

Has the underlying situation changed so that the premises may no longer apply?

Once the finished reports are in hand, the analyst should review all of the relevant raw intelligence data that already exist. Few things can ruin an analyst’s career faster than sending collectors after information that is already in the organization’s files.

Sources of New Raw Intelligence

Raw intelligence comes from a number of sources, but they typically are categorized as part of the five major “INTs” shown in this section.

 

 

 

The definitions of each INT follow:

  • Open source (OSINT). Information of potential intelligence value that is available to the general public
  • Human intelligence (HUMINT). Intelligence derived from information collected and provided by human sources
  • Measurements and signatures intelligence (MASINT). Scientific and technical intelligence obtained by quantitative and qualitative analysis of data (metric, angle, spatial, wavelength, time dependence, modulation, plasma, and hydromagnetic) derived from specific technical sensors
  • Signals intelligence (SIGINT). Intelligence comprising either individually or in combination all communications intelligence, electronics intelligence, and foreign instrumentation signals intelligence
  • Imagery intelligence (IMINT). Intelligence derived from the exploitation of collection by visual photography, infrared sensors, lasers, electro-optics, and radar sensors such as synthetic aperture radar wherein images of objects are reproduced optically or electronically on film, electronic display devices, or other media

 

The taxonomy approach in this book is quite different. It strives for a breakout that focuses on the nature of the material collected and processed, rather than on the collection means.

Traditional COMINT, HUMINT, and open-source collection are concerned mainly with literal information, that is, information in a form that humans use for communication. The basic product and the general methods for collecting and analyzing literal information are usually well understood by intelligence analysts and the customers of intelligence. It requires no special exploitation after the processing step (which includes translation) to be understood. It literally speaks for itself.

Nonliteral information, in contrast, usually requires special processing and exploitation in order for analysts to make use of it.

 

The logic of this division has been noted by other writers in the intelligence business. British author Michael Herman observed that there are two basic types of collection: One produces evidence in the form of observations and measurements of things (nonliteral), and one produces access to human thought processes.

 

The automation of data handling has been a major boon to intelligence analysts. Information collected from around the globe arrives at the analyst’s desk through the Internet or in electronic message form, ready for review and often presorted on the basis of keyword searches. A downside of this automation, however, is the tendency to treat all information in the same way. In some cases the analyst does not even know what collection source provided the information; after all, everything looks alike on the display screen. However, information must be treated differently depending on its source. And, no matter the source, all information must be evaluated before it is synthesized into the model—the subject to which we now turn.

Evaluating Evidence

The fundamental problem in weighing evidence is determining its credibility—its completeness and soundness.

Checking the quality of information used in intelligence analysis is an ongoing, continuous process. Having multiple sources on an issue is not a substitute for having good information that has been thoroughly examined. Analysts should perform periodic checks of the information base for their analytic judgments.

Evaluating the Source

  • Is the source competent (knowledgeable about the information being given)?
  • Did the source have the access needed to get the information?
  • Does the source have a vested interest or bias?

Competence

The Anglo-American judicial system deals effectively with competence: It allows people to describe what they observed with their senses because, absent disability, we are presumed competent to sense things. The judicial system does not allow the average person to interpret what he or she sensed unless the person is qualified as an expert in such interpretation.

Access

The issue of source access typically does not arise because it is assumed that the source had access. When there is reason to be suspicious about the source, however, check whether the source might not have had the claimed access.

In the legal world, checks on source access come up regularly in witness cross-examinations. One of the most famous examples was the “Almanac Trial” of 1858, where Abraham Lincoln conducted the cross-examination. It was the dying wish of an old friend that Lincoln represent his friend’s son, Duff Armstrong, who was on trial for murder. Lincoln gave his client a tough, artful, and ultimately successful defense; in the trial’s highlight, Lincoln consulted an almanac to discredit a prosecution witness who claimed that he saw the murder clearly because the moon was high in the sky. The almanac showed that the moon was lower on the horizon, and the witness’s access—that is, his ability to see the murder—was called into question.

Vested Interest or Bias

In HUMINT, analysts occasionally encounter the “professional source” who sells information to as many bidders as possible and has an incentive to make the information as interesting as possible. Even the densest sources quickly realize that more interesting information gets them more money.

An intelligence organization faces a problem in using its own parent organization’s (or country’s) test and evaluation results: Many have been contaminated. Some of the test results are fabricated; some contain distortions or omit key points. An honestly conducted, objective test may be a rarity. Several reasons for this problem exist. Tests are sometimes conducted to prove or disprove a preconceived notion and thus unconsciously are slanted. Some results are fabricated because they would show the vulnerability or the ineffectiveness of a system and because procurement decisions often depend on the test outcomes.

Although the majority of contaminated cases probably are never discovered, history provides many examples of this issue.

In examining any test or evaluation results, begin by asking two questions:

  • Did the testing organization have a major stake in the outcome (such as the threat that a program would be canceled due to negative test results or the possibility that it would profit from positive results)?
  • Did the reported outcome support the organization’s position or interests?

If the answer to both questions is yes, be wary of accepting the validity of the test. In the pharmaceutical testing industry, for example, tests have been fraudulently conducted or the results skewed to support the regulatory approval of the pharmaceutical.

A very different type of bias can occur when collection is focused on a particular issue. This bias comes from the fact that, when you look for something in the intelligence business, you may find what you are looking for, whether or not it’s there. In looking at suspected Iraqi chemical facilities prior to 2003, analysts concluded from imagery reporting that the level of activity had increased at the facilities. But the appearance of an increase in activity may simply have been a result of an increase in imagery collection.

David Schum and Jon Morris have published a detailed treatise on human sources of intelligence analysis. They pose a set of twenty-five questions divided into four categories: source competence, veracity, objectivity, and observational sensitivity. Their questions cover in more explicit detail the three questions posed in this section about competence, access, and vested interest.

Evaluating the Communications Channel

A second basic rule of weighing evidence is to look at the communications channel through which the evidence arrives.

The accuracy of a message through any communications system decreases with the length of the link or the number of intermediate nodes.

Large and complex systems tend to have more entropy. The result is often cited as the “poor communication” problem in large organizations.
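To make the channel-length point concrete, here is a minimal sketch (my own illustration with made-up numbers, not from the text) that treats each relay node as passing the message on faithfully with some independent probability; end-to-end fidelity then decays with the number of hops.

```python
# Minimal sketch (illustrative numbers, not from the text): message fidelity
# through a chain of relay nodes, assuming each node independently passes the
# content on without distortion with probability p_per_hop.

def end_to_end_fidelity(p_per_hop: float, hops: int) -> float:
    """Probability the message survives every hop unaltered, assuming independence."""
    return p_per_hop ** hops

if __name__ == "__main__":
    for hops in (1, 3, 6, 10):
        print(f"{hops:2d} hops: {end_to_end_fidelity(0.95, hops):.2f}")
    # Roughly 0.95, 0.86, 0.74, and 0.60: the longer the chain, the lower the
    # odds that the original content arrives intact.
```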

In the business intelligence world, analysts recognize the importance of the communications channel by using the differentiating terms primary sources for firsthand information, acquired through discussions or other interaction directly with a human source, and secondary sources for information learned through an intermediary, a publication, or online. This division does not consider the many gradations of reliability, and national intelligence organizations commonly do not use the primary/secondary source division. Some national intelligence collection organizations use the term collateral to refer to intelligence gained from other collectors, but it does not have the same meaning as the terms primary and secondary as used in business intelligence.

It’s not unheard of (though fortunately not common) for the raw intelligence to be misinterpreted or misanalyzed as it passes through the chain. Organizational or personal biases can shape the interpretation and analysis, especially of literal intelligence. It’s also possible for such biases to shape the analysis of nonliteral intelligence, but that is a more difficult product for all-source analysts to challenge, as noted earlier.

Entropy has another effect in intelligence. An intelligence assertion that “X is a possibility” very often, over time and through diverse communications channels, can become “X may be true,” then “X probably is the case,” and eventually “X is a fact,” without a shred of new evidence to support the assertion. In intelligence, we refer to this as the “creeping validity” problem.

 

 

Evaluating the Credentials of Evidence

The major credentials of evidence, as noted earlier, are credibility, reliability, and inferential force. Credibility refers to the extent to which we can believe something. Reliability means consistency or replicability. Inferential force means that the evidence carries weight, or has value, in supporting a conclusion.

 

 

U.S. government intelligence organizations have established a set of definitions to distinguish levels of credibility of intelligence:

  • Fact. Verified information, something known to exist or to have happened.
  • Direct information. The content of reports, research, and reflection on an intelligence issue that helps to evaluate the likelihood that something is factual and thereby reduces uncertainty. This is information that can be considered factual because of the nature of the source (imagery, signal intercepts, and similar observations).
  • Indirect information. Information that may or may not be factual because of some doubt about the source’s reliability, the source’s lack of direct access, or the complex (non-concrete) character of the contents (hearsay from clandestine sources, foreign government reports, or local media accounts).

In weighing evidence, the usual approach is to ask three questions that are embedded in the oath that witnesses take before giving testimony in U.S. courts:

  • Is it true?
  • Is it the whole truth?
  • Is it nothing but the truth? (Is it relevant or significant?)

 

Is It True?

Is the evidence factual or opinion (someone else’s analysis)? If it is opinion, question its validity unless the source quotes evidence to support it.

How does it fit with other evidence? The relating of evidence—how it fits in—is best done in the synthesis phase. The data from different collection sources are most valuable when used together.

The synergistic effect of combining data from many sources both strengthens the conclusions and increases the analyst’s confidence in them.

 

 

 

  • HUMINT and OSINT are often melded together to give a more comprehensive picture of people, programs, products, facilities, and research specialties. This is excellent background information to interpret data derived from COMINT and IMINT.
  • Data on environmental conditions during weapons tests, acquired through specialized technical collection, can be used with ELINT and COMINT data obtained during the same test event to evaluate the capabilities of the opponent’s sensor systems.
  • Identification of research institutes and their key scientists and researchers can be initially made through HUMINT, COMINT, or OSINT. Once the organization or individual has been identified by one intelligence collector, the other ones can often provide extensive additional information.
  • Successful analysis of COMINT data may require correlating raw COMINT data with external information such as ELINT and IMINT, or with knowledge of operational or technical practices.

Is It the Whole Truth?

When asking this question, it is time to do source analysis.

An incomplete picture can mislead as much as an outright lie.

 

 

 

Is It Nothing but the Truth?

It is worthwhile at this point to distinguish between data and evidence. Data become evidence only when the data are relevant to the problem or issue at hand. The simple test of relevance is whether it affects the likelihood of a hypothesis about the target.

Does it help answer a question that has been asked?

Or does it help answer a question that should be asked?

The preliminary or initial guidance from customers seldom tells what they really need to know—an important reason to keep them in the loop through the target-centric process.

Doctors encounter difficulties when they must deal with a patient who has two pathologies simultaneously. Some of the symptoms are relevant to one pathology, some to the other. If the doctor tries to fit all of the symptoms into one diagnosis, he or she is apt to make the wrong call. This is a severe enough problem for doctors, who must deal with relatively few symptoms. It is a much worse problem for intelligence analysts, who typically deal with a large volume of data, most of which is irrelevant.

Pitfalls in Evaluating Evidence

Vividness Weighting

In general, the channel for communication of intelligence should be as short as possible; but when could a short channel become a problem? If the channel is too short, the result is vividness weighting—the phenomenon that evidence that is experienced directly is strongest (“seeing is believing”). Customers place the most weight on evidence that they collect themselves—a dangerous pitfall that senior executives fall into repeatedly and that makes them vulnerable to deception.

Michael Herman tells how Churchill, reading Field Marshal Erwin Rommel’s decrypted cables during World War II, concluded that the Germans were desperately short of supplies in North Africa. Basing his interpretation on this raw COMINT traffic, Churchill pressed his generals to take the offensive against Rommel. Churchill did not realize what his own intelligence analysts could have readily told him: Rommel consistently exaggerated his shortages in order to bolster his demands for supplies and reinforcements.

Statistics are the least persuasive form of evidence; abstract (general) text is next; concrete (specific, focused, exemplary) text is a more persuasive form still; and visual evidence, such as imagery or video, is the most persuasive.

Weighing Based on the Source

One of the most difficult traps for an analyst to avoid is that of weighing evidence based on its source.

Favoring the Most Recent Evidence

Analysts often give the most recently acquired evidence the most weight.

The freshest intelligence—crisp, clear, and the focus of the analyst’s attention—often gets more weight than the fuzzy and half-remembered (but possibly more important) information that has had to travel down the long lines of time. The analyst has to remember this tendency and compensate for it. It sometimes helps to go back to the original (older) intelligence and reread it to bring it more freshly to mind.

Favoring or Disfavoring the Unknown

It is hard to decide how much weight to give to answers when little or no information is available for or against each one.

Trusting Hearsay

The chief problem with much of HUMINT (not including documents) is that it is hearsay evidence; and as noted earlier, the judiciary long ago learned to distrust hearsay for good reasons, including the biases of the source and the collector. Sources may deliberately distort or misinform because they want to influence policy or increase their value to the collector.

Finally, and most important, people can be misinformed or lie. COMINT can only report what people say, not the truth about what they say. So intelligence analysts have to use hearsay, but they must also weigh it accordingly.

Unquestioning Reliance on Expert Opinions

Expert opinion is often used as a tool for analyzing data and making estimates. Any intelligence community must often rely on its nation’s leading scientists, economists, and political and social scientists for insights into foreign developments.

Outside experts often have issues with objectivity. With experts, an analyst gets not only their expertise but also their biases; there are those experts who have axes to grind or egos that convince them there is only one right way to do things (their way).

British counterintelligence officer Peter Wright once noted that “on the big issues, the experts are very rarely right.”

Analysts should treat expert opinion as HUMINT and be wary when the expert makes extremely positive comments (“that foreign development is a stroke of genius!”) or extremely negative ones (“it can’t be done”).

Analysis Principle 7-3

Many experts, particularly scientists, are not mentally prepared to look for deception, as intelligence officers should be. It is simply not part of the expert’s training. A second problem, as noted earlier, is that experts often are quite able to deceive themselves without any help from opponents.

Varying the way expert opinion is used is one way to attempt to head off the problems cited here. Using a panel of experts to make analytic judgments is a common method of trying to reach conclusions or to sort through a complex array of interdisciplinary data.

Such panels have had mixed results. One former CIA office director observed that “advisory panels of eminent scientists are usually useless. The members are seldom willing to commit the time to studying the data to be of much help.”

The quality of the conclusions reached by such panels depends on several variables, including the panel’s:

  • Expertise
  • Motivation to produce a quality product
  • Understanding of the problem area to be addressed
  • Effectiveness in working as a group

A major advantage of the target-centric approach is that it formalizes the process of obtaining independent opinions.

Both single-source and all-source analysts have to guard against falling into the trap of reaching conclusions too early.

Premature closure also has been described as “affirming conclusions,” based on the observation that people are inclined to verify or affirm their existing beliefs rather than modify or discredit those beliefs.

The primary danger of premature closure is not that one might make a bad assessment because the evidence is incomplete. Rather, the danger is that when a situation is changing quickly or when a major, unprecedented event occurs, the analyst will become trapped by the judgments already made. Chances increase that he or she will miss indications of change, and it becomes harder to revise an initial estimate.

The counterintelligence technique of deception thrives on this tendency to ignore evidence that would disprove an existing assumption.

Denial and deception succeed if one opponent can get the other to make a wrong initial estimate.

Combining Evidence

In almost all cases, intelligence analysis involves combining disparate types of evidence.

Analysts need methods for weighing the combined data to help them make qualitative judgments about which conclusions the various data best support.

Convergent and Divergent Evidence

Two items of evidence are said to be conflicting or divergent if one item favors one conclusion and the other item favors a different conclusion.

Two items of evidence are contradictory if they say logically opposing things.

Redundant Evidence

Convergent evidence can also be redundant. To understand the concept of redundancy, it helps to understand its importance in communications theory.

Redundant, or duplicative, evidence can have corroborative redundancy or cumulative redundancy. In both types, the weight of the evidence piles up to reinforce a given conclusion. A simple example illustrates the difference.
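The book’s own example is not reproduced in these notes. As a stand-in, the sketch below (with made-up reliability figures) shows the flavor of corroborative redundancy: when independent sources report the same fact, the chance that all of them are wrong shrinks quickly. Cumulative redundancy would instead combine different items of evidence that each point toward the same conclusion.

```python
# Hypothetical sketch, not the book's example: corroborative redundancy among
# independent sources reporting the same fact. Reliability values are made up.

def corroborated_confidence(reliabilities):
    """Probability the reported fact is right, i.e., that not every independent
    source is wrong (a simplification that assumes independence)."""
    p_all_wrong = 1.0
    for r in reliabilities:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong

print(corroborated_confidence([0.8]))            # about 0.80 - a single source
print(corroborated_confidence([0.8, 0.7]))       # about 0.94 - an independent second source corroborates
print(corroborated_confidence([0.8, 0.7, 0.6]))  # about 0.98 - the weight of evidence piles up
```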

Formal Methods for Combining Evidence

The preceding sections describe some informal methods for evidence combination. It often is important to combine evidence and demonstrate the logical process of reaching a conclusion based on that evidence by careful argument. The formal process of making that argument is called structured argumentation. Such formal structured argumentation approaches have been around at least since the seventeenth century.

Structured Argumentation

Structured argumentation is an analytic process that relies on a framework to make assumptions, reasoning, rationales, and evidence explicit and transparent. The process begins with breaking down and organizing a problem into parts so that each one can be examined systematically, as discussed in earlier chapters.

As analysts work through each part, they identify the data requirements, state their assumptions, define any terms or concepts, and collect and evaluate relevant information. Potential explanations or hypotheses are formulated and evaluated with empirical evidence, and information gaps are identified.

Formal graphical or numerical processes for combining evidence are time consuming to apply and are not widely used in intelligence analysis. They are usually reserved for cases in which the customer requires them because the issue is critically important, because the customer wants to examine the reasoning process, or because the exact probabilities associated with each alternative are important to the customer.

Wigmore’s Charting Method

John Henry Wigmore was the dean of the Northwestern University School of Law in the early 1900s and author of a ten-volume treatise commonly known as Wigmore on Evidence. In this treatise he defined some principles for rational inquiry into disputed facts and methods for rigorously analyzing and ordering possible inferences from those facts.

Wigmore argued that structured argumentation brings into the open and makes explicit the important steps in an argument, and thereby makes it easier to judge both their soundness and their probative value. One of the best ways to recognize any inherent tendencies one may have in making biased or illogical arguments is to go through the body of evidence using Wigmore’s method.

  • Different symbols are used to show varying kinds of evidence: explanatory, testimonial, circumstantial, corroborative, undisputed fact, and combinations.
  • Relationships between symbols (that is, between individual pieces of evidence) are indicated by their relative positions (for example, evidence tending to prove a fact is placed below the fact symbol).
  • The connections between symbols indicate the probative effect of their relationship and the degree of uncertainty about the relationship.

Even proponents admit that Wigmore’s method is too time-consuming for most practical uses, especially in intelligence analysis, where the analyst typically has limited time.

Nevertheless, making Wigmore’s approach, or something like it, widely usable in intelligence analysis would be a major contribution.

Bayesian Techniques for Combining Evidence

By the early part of the eighteenth century, mathematicians had solved what is called the “forward probability” problem: When all of the facts about a situation are known, what is the probability of a given event happening?

Intelligence analysts find the inverse problem of far more interest: they often must make judgments about an underlying situation from observing the events that the situation causes. Thomas Bayes developed a formula for this inverse problem that bears his name: Bayes’ rule.

One advantage claimed for Bayesian analysis is its ability to blend the subjective probability judgments of experts with historical frequencies and the latest sample evidence.
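As a reminder of the mechanics, a minimal sketch of a single Bayes’ rule update follows; the prior and likelihood values are purely illustrative numbers, not drawn from the text.

```python
# Minimal sketch of one Bayes' rule update (illustrative numbers only):
#   P(H | E) = P(E | H) * P(H) / [ P(E | H) * P(H) + P(E | not H) * P(not H) ]

def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E."""
    numerator = p_e_given_h * prior_h
    denominator = numerator + p_e_given_not_h * (1.0 - prior_h)
    return numerator / denominator

# A subjective prior of 0.3 for the hypothesis, updated with a report judged
# four times as likely to appear if the hypothesis is true than if it is false:
posterior = bayes_update(prior_h=0.3, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 2))  # 0.63 - the evidence raises, but does not settle, the question
```

Repeated updates of this kind are how subjective expert judgments, historical frequencies, and new sample evidence can be blended, which is the advantage claimed above.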

Bayes seems difficult to teach. It is generally considered to be “advanced” statistics and, given the problem that many people (including intelligence analysts) have with traditional elementary probabilistic and statistical techniques, applying it seems to require expertise that is not currently resident in the intelligence community or that is available only through expensive software.

A Note about the Role of Information Technology

It may be impossible for new analysts today to appreciate the markedly different work environment that their counterparts faced 40 years ago. Incoming intelligence arrived at the analyst’s desk in hard copy, to be scanned, marked up, and placed in file drawers. Details about intelligence targets—installations, persons, and organizations—were often kept on 5” × 7” cards in card catalog boxes. Less tidy analysts “filed” their most interesting raw intelligence on their desktops and cabinet tops, sometimes in stacks over 2 feet high.

IT systems allow analysts to acquire raw intelligence material of interest (incoming classified cable traffic and open source) and to search, organize, and store it electronically. Such IT capabilities have been eagerly accepted and used by analysts because of their advantages in dealing with the information explosion.

A major consequence of this information explosion is that we must deal with what is called “big data” in collating and analyzing intelligence. Big data has been defined as “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

Analysts, inundated by the flood, have turned to IT tools for extracting meaning from the data. A wide range of such tools exists, including ones for visualizing the data and identifying patterns of intelligence interest, ones for conducting statistical analysis, and ones for running simulation models. Analysts with responsibility for counterterrorism, organized crime, counternarcotics, counterproliferation, or financial fraud can choose from commercially available tools such as Palantir, CrimeLink, Analyst’s Notebook, NetMap, Orion, or VisuaLinks to produce matrix and link diagrams, timeline charts, telephone toll charts, and similar pattern displays.

Tactical intelligence units, in both the military and law enforcement, find geospatial analysis tools to be essential.

Some intelligence agencies also have in-house tools that replicate these capabilities. Depending on the analyst’s specialty, some tools may be more relevant than others. All, though, have definite learning curves and their database structures are generally not compatible with each other. The result is that these tools are used less effectively than they might be, and the absence of a single standard tool hinders collaborative work across intelligence organizations.

Summary

In gathering information for synthesizing the target model, analysts should start by reviewing existing finished and raw intelligence. This provides a picture of the current target model. It is important to do a key assumptions check at this point: Do the premises that underlie existing conclusions about the target seem to be valid?

Next, the analyst must acquire and evaluate raw intelligence about the target and fit it into the target model—a step often called collation. Raw intelligence is viewed and evaluated differently depending on whether it is literal or nonliteral. Literal sources include open source, COMINT, HUMINT, and cyber collection. Nonliteral sources involve several types of newer and highly focused collection techniques that depend heavily on processing, exploitation, and interpretation to turn the material into usable intelligence.

Once a model template has been selected for the target, it becomes necessary to fit the relevant information into the template. Fitting the information into the model template requires a three-step process:

  • Evaluating the source, by determining whether the source (a) is competent, that is, knowledgeable about the information being given; (b) had the access needed to get the information; and (c) had a vested interest or bias regarding the information provided.
  • Evaluating the communications channel through which the information arrived. Information that passes through many intermediate points becomes distorted. Processors and exploiters of collected information can also have a vested interest or bias.
  • Evaluating the credentials of the evidence itself. This involves evaluating (a) the credibility of evidence, based in part on the previously completed source and communications channel evaluations; (b) the reliability; and (c) the relevance of the evidence. Relevance is a particularly important evaluation step; it is too easy to fit evidence into the wrong target model.

As evidence is evaluated, it must be combined and incorporated into the target model. Multiple pieces of evidence can be convergent (favoring the same conclusion) or divergent (favoring different conclusions and leading to alternative target models). Convergent evidence can also be redundant, reinforcing a conclusion.

Tools to extract meaning from data, for example, by relationship, pattern, and geospatial analysis, are used by analysts where they add value that offsets the cost of “care and feeding” of the tool. Tools to support structured argumentation are available and can significantly improve the quality of the analytic product, but whether they will find serious use in intelligence analysis is still an open question.

Denial, Deception, and Signaling

There is nothing more deceptive than an obvious fact.

Sherlock Holmes, in “The Boscombe Valley Mystery”

In evaluating evidence and developing a target model, an analyst must constantly take into account the fact that evidence may have been deliberately shaped by an opponent.

Denial and deception are major weapons in the counterintelligence arsenal of a country or organization.

 

They may be the only weapons available for many countries to use against highly sophisticated technical intelligence.

At the opposite extreme, the opponent may intentionally shape what the analyst sees, not to mislead but rather to send a message or signal. It is important to be able to recognize signals and to understand their meaning.

Denial

Denial and deception come in many forms. Denial is somewhat more straightforward.

Deception

Deception techniques are limited only by our imagination. Passive deception might include using decoys or having the intelligence target emulate an activity that is not of intelligence interest—making a chemical or biological warfare plant look like a medical drug production facility, for example. Decoys that have been widely used in warfare include dummy ships, missiles, and tanks.

Active deception includes misinformation (false communications traffic, signals, stories, and documents), misleading activities, and double agents (agents who have been discovered and “turned” to work against their former employers), among others.

Illicit groups (for example, terrorists) conduct most of the deception that intelligence must deal with. Illicit arms traffickers (known as gray arms traffickers) and narcotics traffickers have developed an extensive set of deceptive techniques to evade international restrictions. They use intermediaries to hide financial transactions. They change ship names or aircraft call signs en route to mislead law enforcement officials. One airline changed its corporate structure and name overnight when its name became linked to illicit activities.1 Gray arms traffickers use front companies and false end-user certificates.

Defense against Denial and Deception: Protecting Intelligence Sources and Methods

In the intelligence business, it is axiomatic that if you need information, someone will try to keep it from you. And we have noted repeatedly that if an opponent can model a system, he can defeat it. So your best defense is to deny your opponent an understanding of your intelligence capabilities. Without such understanding, the opponent cannot effectively conduct D&D.

For small governments, and in the business intelligence world, protection of sources and methods is relatively straightforward. Selective dissemination of and tight controls on intelligence information are possible. But a major government has too many intelligence customers to justify such tight restrictions. Thus these bureaucracies have established an elaborate system to simultaneously protect and disseminate intelligence information. This protection system is loosely called compartmentation, because it puts information in “compartments” and restricts access to the compartments.

In the U.S. intelligence community, the intelligence product, sources, and methods are protected by the sensitive compartmented information (SCI) system. The SCI system uses an extensive set of compartments to protect sources and methods. Only the collectors and processors have access to many of the compartmented materials. Much of the product, however, is protected only by standard markings such as “Secret,” and access is granted to a wide range of people.

Open-source intelligence has little or no protection because the source material is unclassified. However, the techniques for exploiting open-source material, and the specific material of interest for exploitation, can tell an opponent much about an intelligence service’s targets. For this reason, intelligence agencies that translate open source often restrict its dissemination, using markings such as “Official Use Only.”

Higher Level Denial and Deception

A few straightforward examples of denial and deception were cited earlier. But sophisticated deception must follow a careful path; it has to be very subtle (too-obvious clues are likely to tip off the deception) yet not so subtle that your opponent misses it. It is commonly used in HUMINT, but today it frequently requires multi-INT participation or a “swarm” attack to be effective. Increasingly, carefully planned and elaborate multi-INT D&D is being used by various countries. Such efforts even have been given a different name—perception management—that focuses on the end result that the effort is intended to achieve.

Perception management can be effective against an intelligence organization that, through hubris or bureaucratic politics, is reluctant to change its initial conclusions about a topic. If the opposing intelligence organization makes a wrong initial estimate, then long-term deception is much easier to pull off. If D&D are successful, the opposing organization faces an unlearning process: its predispositions and settled conclusions have to be discarded and replaced.

The best perception management results from highly selective targeting, intended to get a specific message to a specific person or organization. This requires knowledge of that person’s or organization’s preferences in intelligence—a difficult feat to accomplish, but the payoff of a successful perception management effort is very high. It can result in an opposing intelligence service making a miscall or causing it to develop a false sense of security. If you are armed with a well-developed model of the three elements of a foreign intelligence strategy —targets, operations, and linkages—an effective counterintelligence counterattack in the form of perception management or covert action is possible, as the following examples show.

The Farewell Dossier

Detailed knowledge of an opponent is the key to successful counterintelligence, as the “Farewell” operation shows. In 1980 the French internal security service Direction de la Surveillance du Territoire (DST) recruited a KGB lieutenant colonel, Vladimir I. Vetrov, codenamed “Farewell.” Vetrov gave the French some four thousand documents, detailing an extensive KGB effort to clandestinely acquire technical know-how from the West, primarily from the United States.

In 1981 French president François Mitterrand shared the source and the documents (which DST named “the Farewell Dossier”) with U.S. president Ronald Reagan.

In early 1982 the U.S. Department of Defense, the Federal Bureau of Investigation, and the CIA began developing a counterattack. Instead of simply improving U.S. defenses against the KGB efforts, the U.S. team used the KGB shopping list to feed back, through CIA-controlled channels, the items on the list—augmented with “improvements” that were designed to pass acceptance testing but would fail randomly in service. Flawed computer chips, turbines, and factory plans found their way into Soviet military and civilian factories and equipment. Misleading information on U.S. stealth technology and space defense flowed into the Soviet intelligence reporting. The resulting failures were a severe setback for major segments of Soviet industry. The most dramatic single event resulted when the United States provided gas pipeline management software that was installed in the trans-Siberian gas pipeline. The software had a feature that would, at some time, cause the pipeline pressure to build up to a level far above its fracture pressure. The result was the Soviet gas pipeline explosion of 1982, described as the “most monumental non-nuclear explosion and fire ever seen from space.”

Countering Denial and Deception

In recognizing possible deception, an analyst must first understand how deception works. Four fundamental factors have been identified as essential to deception: truth, denial, deceit, and misdirection.

Truth—All deception works within the context of what is true. Truth establishes a foundation of perceptions and beliefs that are accepted by an opponent and can then be exploited in deception. Supplying the opponent with real data establishes the credibility of future communications that the opponent then relies on.

Denial—It’s essential to deny the opponent access to some parts of the truth. Denial conceals aspects of what is true, such as your real intentions and capabilities. Denial often is used when no deception is intended; that is, the end objective is simply to deny knowledge. One can deny without intent to deceive, but not the converse.

Deceit—Successful deception requires the practice of deceit.

Misdirection—Deception depends on manipulating the opponent’s perceptions. You want to redirect the opponent away from the truth and toward a false perception. In operations, a feint is used to redirect the adversary’s attention away from where the real operation will occur.

 

The first three factors allow the deceiver to present the target with desirable, genuine data while reducing or eliminating signals that the target needs to form accurate perceptions. The fourth provides an attractive alternative that commands the target’s attention.

The effectiveness of hostile D&D is a direct reflection of the predictability of collection.

Collection Rules

The best way to defeat D&D is for all of the stakeholders in the target-centric approach to work closely together. The two basic rules for collection, described here, form a complementary set. One rule is intended to provide incentive for collectors to defeat D&D. The other rule suggests ways to defeat it.

The first rule is to establish an effective feedback mechanism.

Relevance of the product to intelligence questions is the correct measure of collection effectiveness, and analysts and customers—not collectors—determine relevance. The system must enforce a content-oriented evaluation of the product, because content is used to determine relevance.

The second rule is to make collection smarter and less predictable. There exist several tried-and-true tactics for doing so:

  • Don’t optimize systems for quality and quantity; optimize for content.
  • Apply sensors in new ways. Analysis groups often can help with new sensor approaches in their areas of responsibility.
  • Consider provocative techniques against D&D targets.

Probing an opponent’s system and watching the response is a useful tactic for learning more about the system. Even so, probing may have its own set of undesirable consequences: The Soviets would occasionally chase and shoot down the reconnaissance aircraft to discourage the probing practice.

  • Hit the collateral or inferential targets. If an opponent engages in D&D about a specific facility, then supporting facilities may allow inferences to be made or may expose the deception. Security measures around a facility and the nature and status of nearby communications, power, or transportation facilities may provide a more complete picture.
  • Finally, use deception to protect a collection capability.

The best weapon against D&D is to mislead or confuse opponents about intelligence capabilities, disrupt their warning programs, and discredit their intelligence services.

 

An analyst can often beat D&D simply by using several types of intelligence—HUMINT, COMINT, and so on—in combination, simultaneously, or successively. It is relatively easy to defeat one sensor or collection channel. It is more difficult to defeat all types of intelligence at the same time.

Increasingly, opponents can be expected to use “swarm” D&D, targeting several INTs in a coordinated effort like that used by the Soviets in the Cuban missile crisis and the Indian government in the Pokhran deception.

The Information Instrument

Analysts, whether single- or all-source, are the focal points for identifying D&D. In the types of conflicts that analysts now deal with, opponents have made effective use of a weapon that relies on deception: using both traditional media and social media to paint a misleading picture of their adversaries. Nongovernmental opponents (insurgents and terrorists) in particular have used this information instrument effectively.

 

The prevalence of media reporters in all conflicts, and the easy access to social media, have given the information instrument more utility. Media deception has been used repeatedly by opponents to portray U.S. and allied “atrocities” during military campaigns in Kosovo, Iraq, Afghanistan, and Syria.

Signaling

Signaling is the opposite of denial and deception. It is the process of deliberately sending a message, usually to an opposing intelligence service.

Its use depends on a good knowledge of how the opposing intelligence service obtains and analyzes knowledge. Recognizing and interpreting an opponent’s signals is one of the more difficult challenges an analyst must face. Depending on the situation, signals can be made verbally, by actions, by displays, or by very subtle nuances that depend on the context of the signal.

In negotiations, signals can be both verbal and nonverbal.

True signals often are used in place of open declarations, to provide information while preserving the right of deniability.

Analyzing signals requires examining the content of the signal and its context, timing, and source. Statements made to the press are quite different from statements made through diplomatic channels—the latter usually carry more weight.

Signaling between members of the same culture can be subtle, with high success rates of the signal being understood. Two U.S. corporate executives can signal to each other with confidence; they both understand the rules. A U.S. executive and an Indonesian executive would face far greater risks of misunderstanding each other’s signals. The cultural differences in signaling can be substantial. Cultures differ in their reliance on verbal and nonverbal signals to communicate their messages. The more people rely on nonverbal or indirect verbal signals and on context, the more complex their signaling is to interpret.

  • In July 1990 the U.S. State Department unintentionally sent several signals that Saddam Hussein apparently interpreted as a green light to attack Kuwait. State Department spokesperson Margaret Tutwiler said, “[W]e do not have any defense treaties with Kuwait. . . .” The next day, Ambassador April Glaspie told Saddam Hussein, “[W]e have no opinion on Arab-Arab conflicts like your border disagreement with Kuwait.” And two days before the invasion, Assistant Secretary of State John Kelly testified before the House Foreign Affairs Committee that there was no obligation on our part to come to the defense of Kuwait if it were attacked.

 

Analytic Tradecraft in a World of Denial and Deception

Writers often use the analogy that intelligence analysis is like the medical profession.

Analysts and doctors weigh evidence and reach conclusions in much the same fashion. In fact, intelligence analysis, like medicine, is a combination of art, tradecraft, and science. Different doctors can draw different conclusions from the same evidence, just as different analysts do.

But intelligence analysts have a different type of problem than doctors do. Scientific researchers and medical professionals do not routinely have to deal with denial and deception. Though patients may forget to tell them about certain symptoms, physicians typically don’t have an opponent who is trying to deny them knowledge. In medicine, once doctors have a process for treating a pathology, it will in most cases work as expected. The human body won’t develop countermeasures to the treatment. But in intelligence, your opponent may be able to identify the analysis process and counter it. If analysis becomes standardized, an opponent can predict how you will analyze the available intelligence, and then D&D become much easier to pull off.

One cannot establish a process and retain it indefinitely.

Intelligence analysis within the context of D&D is in fact analogous to being a professional poker player, especially in the games of Seven Card Stud or Texas Hold ’em. You have an opponent. Some of the opponent’s resources are in plain sight, some are hidden. You have to observe the opponent’s actions (bets, timing, facial expressions, all of which incorporate art and tradecraft) and do pattern analysis (using statistics and other tools of science).

Summary

In evaluating raw intelligence, analysts must constantly be aware of the possibility that they may be seeing material that was deliberately provided by the opposing side. Most targets of intelligence efforts practice some form of denial. Deception—providing false information—is less common than denial because it takes more effort to execute, and it can backfire.

Defense against D&D starts with your own denial of your intelligence capabilities to opposing intelligence services.

Where one intelligence service has extensive knowledge of another service’s sources and methods, more ambitious and elaborate D&D efforts are possible. Often called perception management, these involve developing a coordinated multi-INT campaign to get the opposing service to make a wrong initial estimate. Once this happens, the opposing service faces an unlearning process, which is difficult. A high level of detailed knowledge also allows for covert actions to disrupt and discredit the opposing service.

A collaborative target-centric process helps to stymie D&D by bringing together different perspectives from the customer, the collector, and the analyst. Collectors can be more effective in a D&D environment with the help of analysts. Working as a team, they can make more use of deceptive, unpredictable, and provocative collection methods that have proven effective in defeating D&D.

Intelligence analysis is a combination of art, tradecraft, and science. In large part, this is because analysts must constantly deal with denial and deception, and dealing with D&D is primarily a matter of artfully applying tradecraft.

Systems Modeling and Analysis

Believe what you yourself have tested and found to be reasonable.

Buddha

In chapter 3, we described the target as three things: as a complex system, as a network, and as having temporal and spatial attributes.

Any entity having the attributes of structure, function, and process can be described and analyzed as a system, as noted in previous chapters.

The basic principles apply in modeling political and economic systems as well. Systems analysis can be applied to analyze both existing systems and those under development.

A government can be considered a system and analyzed in much the same way—by creating structural, functional, and process models.

Analyzing an Existing System: The Mujahedeen Insurgency

A single weapon can be defeated, as in this case, by tactics. But the proper mix of antiair weaponry could not. The mix here included surface-to-air missiles (SA-7s, British Blowpipes, and Stinger missiles) and machine guns (Oerlikons and captured Soviet Dashika machine guns). The Soviet helicopter operators could defend against some of these, but not all simultaneously. SA-7s were vulnerable to flares; Blowpipes were not. The HINDs could stay out of range of the Dashikas, but then they would be at an effective range for the Oerlikons.3 Unable to know what they might be hit with, Soviet pilots were likely to avoid attacking or rely on defensive maneuvers that would make them almost ineffective—which is exactly what happened.

Analyzing a Developmental System: Methodology

In intelligence, we also are concerned about modeling a system that is under development. The first step in modeling a developmental system, and particularly a future weapons system, is to identify the system(s) under development. Two approaches traditionally have been applied in weapons systems analysis, both based on reasoning paradigms drawn from the writings of philosophers: deductive and inductive.

  • The deductive approach to prediction is to postulate desirable objectives, in the eyes of the opponent; identify the system requirements; and then search the incoming intelligence for evidence of work on the weapons systems, subsystems, components, devices, and basic research and development (R&D) required to reach those objectives.
  • The opposite, an inductive or synthesis approach, is to begin by looking at the evidence of development work and then synthesize the advances in systems, subsystems, and devices that are likely to follow.

A number of writers in the intelligence field have argued that intelligence uses a different method of reasoning—abduction, which seeks to develop the best hypothesis or inference from a given body of evidence. Abduction is much like induction, but its stress is on integrating the analyst’s own thoughts and intuitions into the reasoning process.

The deductive approach can be described as starting from a hypothesis and using evidence to test the hypothesis. The inductive approach is described as evidence-based reasoning to develop a conclusion.7 Evidence- based reasoning is applied in a number of professions. In medicine, it is known as evidence-based practice—applying a combination of theory and empirical evidence to make medical decisions.

Both (or all three) approaches have advantages and drawbacks. In practice, though, deduction has some advantages over induction or abduction in identifying future systems development.

The problem arises when two or more systems are under development at the same time. Each system will have its R&D process, and it can be very difficult to separate the processes out of the mass of incoming raw intelligence. This is the “multiple pathologies” problem: When two or more pathologies are present in a patient, the symptoms are mixed together, and diagnosing the separate illnesses becomes very difficult. Generally, the deductive technique works better for dealing with the multiple pathologies issue in future systems assessments.

Once a system has been identified as being in development, analysis proceeds to the second step: answering customers’ questions about it. These questions usually are about the system’s functional, process, and structural characteristics—that is, about performance, schedule, risk, and cost.

As the system comes closer to completion, a wider group of customers will want to know what specific targets the system has been designed against, in what circumstances it will be used, and what its effectiveness will be. These matters typically require analysis of the system’s performance, including its suitability for operating in its environment or in accomplishing the mission for which it has been designed. The schedule for completing development and fielding the system, as well as associated risks, also become important. In some cases, the cost of development and deployment will be of interest.

Performance

Performance analyses are done on a wide range of systems, varying from simple to highly complex multidisciplinary systems. Determining the performance of a narrowly defined system is straightforward. More challenging is assessing the performance of a complex system such as an air defense network or a narcotics distribution network. Most complex system performance analysis is now done by using simulation, a topic to which we will return.

Comparative Modeling

Comparative modeling is similar to benchmarking, but the focus is on comparing the performance of one group’s system or product with an opponent’s.

Comparing your country’s or organization’s developments with those of an opponent can involve four distinct fact patterns. Each pattern poses challenges that the analyst must deal with.

In short, the possibilities can be described as follows:

  • We did it—they did it.
  • We did it—they didn’t do it.
  • We didn’t do it—they did it.
  • We didn’t do it—they didn’t do it.

There are many examples of the “we did it—they did it” sort of intelligence problem, especially in industries in which competitors typically develop similar products.

In the second case, “we did it—they didn’t do it,” the intelligence officer runs into a real problem: It is almost impossible to prove a negative in intelligence. The fact that no intelligence information exists about an opponent’s development cannot be used to show that no such development exists.

The third pattern, “we didn’t do it—they did it,” is the most dangerous type that we en- counter. Here the intelligence officer has to overcome opposition from skeptics in his country, because he has no model to use for comparison.

The “we didn’t do it—they did it” case presents analysts with an opportunity to go off in the wrong direction analytically.

This sort of transposition of cause and effect is not uncommon in human source reporting. Part of the skill required of an intelligence analyst is to avoid the trap of taking sources too literally. Occasionally, intelligence analysts must spend more time than they should on problems that are even more fantastic or improbable than that of the German engine killer.

 

 

Simulation

Performance simulation typically is a parametric, sensitivity, or “what if” type of analysis; that is, the analyst needs to try a relationship between two variables (parameters), run a computer analysis and examine the results, change the input constants, and run the simulation again.

The case also illustrates the common systems analysis problem of presenting the worst-case estimate: National security plans often are made on the basis of a systems estimate; out of fear that policymakers may become complacent, an analyst will tend to make the worst case that is reasonably possible.
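Returning to the parametric “what if” approach described above, here is a hedged sketch of what such a sweep might look like in code; the model (a simplified radar-range proportionality with an arbitrary constant) and all numbers are assumptions for illustration, not anything from the text.

```python
# Hypothetical parametric ("what if") sweep, not from the text: vary two input
# parameters and examine how a modeled output responds. The toy model scales
# detection range with the fourth root of (power x radar cross-section).

def detection_range_km(power_kw: float, cross_section_m2: float, k: float = 40.0) -> float:
    """Toy performance model; k is an arbitrary scaling constant."""
    return k * (power_kw * cross_section_m2) ** 0.25

for power_kw in (50, 100, 200):           # parameter 1: transmitter power
    for sigma_m2 in (0.1, 1.0, 10.0):     # parameter 2: target cross-section
        r = detection_range_km(power_kw, sigma_m2)
        print(f"P={power_kw:3d} kW, sigma={sigma_m2:4.1f} m^2 -> ~{r:5.0f} km")

# Changing the constants (or the model itself) and rerunning the sweep is the
# "change the input constants and run the simulation again" loop described above.
```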

The Mirror-Imaging Challenge

Both comparative modeling and simulation have to deal with the risks of mirror imaging. The opponent’s system or product (such as an airplane, a missile, a tank, or a supercomputer) may be designed to do different things or to serve a different market than expected.

The risk in all systems analysis is one of mirror imaging, which is much the same as the mirror-imaging problem in decision-making.

Unexpected Simplicity

In effect, the Soviets applied a version of Occam’s razor (choose the simplest explanation that fits the facts at hand) in their industrial practice. Because they were cautious in adopting new technology, they tended to keep everything as simple as possible. They liked straightforward, proven designs. When they copied a design, they simplified it in obvious ways and got rid of the extra features that the United States tends to put on its weapons systems. The Soviets made maintenance as simple as possible, because the hardware was going to be maintained by people who did not have extensive training.

In a comparison of Soviet and U.S. small jet engine technology, the U.S. model engine was found to have 2.5 times as much materials cost per pound of weight. It was smaller and lighter than the Soviet engine, of course, but it had 12 times as many maintenance hours per flight-hour as the Soviet model, and overall the Soviet engine had a life cycle cost half that of the U.S. engine.10 The ability to keep things simple was the Soviets’ primary advantage over the United States in technology, especially military technology.

Quantity May Replace Quality

U.S. analysts often underestimated the number of units that the Soviets would produce. The United States needed fewer units of a given system to perform a mission, since each unit had more flexibility, quality, and performance ability than its Soviet counterpart. The United States forgot a lesson that it had learned in World War II—U.S. Sherman tanks were inferior to the German Tiger tanks in combat, but the United States deployed a lot of Shermans and overwhelmed the Tigers with numbers.

Schedule

The intelligence customer’s primary concern about systems under development usually centers on performance, as discussed previously.

Estimating the schedule, however, requires understanding the systems development process, which is one of the many types of processes we deal with in intelligence.

Process Models

The functions of any system are carried out by processes. The processes will be different for different systems. That’s true whether you are describing an organization, a weapons system, or an industrial system. Different types of organizations, for example—civil government, law enforcement, military, and commercial organizations—will have markedly different processes. Even similar types of organizations will have different processes, especially in different cultures.

Political, military, economic, and weapons systems analysts all use specialized process-analysis techniques.

Most processes and most process models have feedback loops. Feedback allows the system to be adaptive, that is, to adjust its inputs based on the output. Even simple systems such as a home heating/air conditioning system provide feedback via a thermostat. For complex systems, feedback is essential to prevent the process from producing undesirable output. Feedback is an important part of both synthesis and analysis.

Development Process Models

In determining the schedule for a systems development, we concentrate on examining the development process and identifying the critical points in that process.

An example development process model is shown in Figure 9-2. In this display, the process nodes are separated by function into “swim lanes” to facilitate analysis.

 

The Program Cycle Model

Beginning with the system requirement and progressing to production, deployment, and operations, each phase bears unique indicators and opportunities for collection and synthesis/analysis. Customers of intelligence often want to know where a major system is in this life cycle.

Different types of systems may evolve through different versions of the cycle, and product development differs somewhat from systems development. It is therefore important for the analyst to first determine the specific names and functions of the cycle phases for the target country, industry, or company and then determine exactly where the target program is in that cycle. With that information, analytic techniques can be used to predict when the program might become operational or begin producing output.

It is important to know where a program is in the cycle in order to make accurate predictions.

A general rule of thumb is that the more phases in the program cycle, the longer the process will take, all other things being equal. Countries and organizations with large, stable bureaucracies typically have many phases, and the process, whatever it may be, takes that much longer.

Program Staffing

The duration of any stage of the cycle shown in the Generic Program Cycle is determined by the type of work involved and the number and expertise of workers assigned.

 

Fred Brooks, one of the premier figures in computer systems development, defined four types of projects in his book The Mythical Man-Month. Each type of project has a unique relationship between the number of workers needed (the project loading) and the time it takes to complete the effort.

The first type of project is a perfectly partitionable task—that is, one that can be completed in half the time by doubling the number of workers.

A second type of project involves the unpartitionable task…The profile is referred to here as the “baby production curve,” because no matter how many women are assigned to the task, it takes nine months to produce a baby.

Most small projects fit the curve shown in the lower left of the figure, which is a combination of the first two curves. In this case a project can be partitioned into subtasks, but the time it takes for people working on different subtasks to communicate with one another will eventually balance out the time saved by adding workers, and the curve levels off.

Large projects tend to be dominated by communication. At some point, shown as the bottom point of the lower right curve, adding additional workers begins to slow the project because all workers have to spend more time in communication.
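A toy numeric rendering of these curves (my own formulation, not Brooks’s) treats total time as partitionable work divided among workers plus a coordination cost that grows with the number of worker pairs:

```python
# Partitionable work shrinks as workers are added, but pairwise coordination
# overhead grows with the number of worker pairs. All constants are hypothetical.

def completion_time(workers, work_months=60.0, comm_per_pair=0.06):
    partitioned = work_months / workers                      # ideal division of labor
    coordination = comm_per_pair * workers * (workers - 1) / 2.0
    return partitioned + coordination

for n in (1, 2, 5, 10, 20, 40):
    print(f"{n:3d} workers -> {completion_time(n):6.1f} months")
# Past some staffing level the total time rises again: the communication-dominated
# curve at the lower right of Brooks's figure.
```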

The Technology Factor

Technology is another important factor in any development schedule; and technology is neither available nor applied in the same way everywhere. An analyst in a technologically advanced country, such as the United States, tends to take for granted that certain equipment—test equipment, for example—will be readily available and will be of a certain quality.

There is also a definite schedule advantage to not being the first to develop a system. A country or organization that is not a leader in technology development has the advantage of learning from the leader’s mistakes, an advantage that entails being able to keep research and development costs low and avoid wrong paths.

A basic rule of engineering is that you are halfway to a solution when you know that there is a solution, and you are three-quarters there when you know how a competitor solved the problem. It took much less time for the Soviets to develop atomic and hydrogen bombs than U.S. intelligence had predicted. The Soviets had no principles of impotence or doubts to slow them down.

Risk

Analysts often assume that the programs and projects they are evaluating will be completed on time and that the target system will work perfectly. They would seldom be so foolish in evaluating their own projects or the performance of their own organizations. Risk analysis needs to be done in any assessment of a target program.

It is typically difficult to do and, once done, difficult to get the customer to accept. But it is important to do because intelligence customers, like many analysts, also tend to assume that an opponent’s program will be executed perfectly.

One fairly simple but often overlooked approach to evaluating the probability of success is to examine the success rate of similar ventures.

Known risk areas can be readily identified from past experience and from discussions with technical experts who have been through similar projects. The risks fall into four major categories—programmatic, technical, production, and engineering. Analyzing potential problems requires identifying specific potential risks from each category. Some of these risks include the following:

 

  • Programmatic: funding, schedule, contract relationships, political issues
  • Technical: feasibility, survivability, system performance
  • Production: manufacturability, lead times, packaging, equipment
  • Engineering: reliability, maintainability, training, operations

Risk assessment quantifies risks and ranks them to establish those of most concern. A typical ranking is based on the risk factor, which is a mathematical combination of the probability of failure and the consequence of failure. This assessment requires a combination of expertise and software tools in a structured and consistent approach to ensure that all risk categories are considered and ranked.
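A minimal ranking sketch along these lines follows; the chapter does not give the formula, so the product of probability and consequence used here is just one common way to combine the two, and all entries are hypothetical.

```python
# Rank hypothetical program risks by a simple risk factor = probability x consequence.

risks = [
    # (description, category, probability of failure, consequence 0-1)
    ("funding shortfall",        "programmatic", 0.30, 0.70),
    ("guidance system accuracy", "technical",    0.50, 0.90),
    ("long-lead parts delay",    "production",   0.40, 0.40),
    ("low field reliability",    "engineering",  0.20, 0.60),
]

ranked = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)
for name, category, p_f, c_f in ranked:
    print(f"{name:25s} {category:13s} risk factor = {p_f * c_f:.2f}")
```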

Risk management is the definition of alternative paths to minimize risk and set criteria on which to initiate or terminate these activities. It includes identifying alternatives, options, and approaches to mitigation.

Examples are initiation of parallel developments (for example, funding two manufacturers to build a satellite, where only one satellite is needed), extensive development testing, addition of simulations to check performance predictions, design reviews by consultants, or focused management attention on specific elements of the program. A number of decision analysis tools are useful for risk management. The most widely used tool is the Program Evaluation and Review Technique (PERT) chart, which shows the interrelationships and dependencies among tasks in a program on a timeline.
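The sketch below illustrates the critical-path idea behind a PERT-style dependency chart, using the networkx library; the task names and expected durations are hypothetical.

```python
# Tasks are edges of a dependency DAG weighted by expected duration; the critical
# path is the longest path from start to finish.

import networkx as nx

g = nx.DiGraph()
g.add_weighted_edges_from([
    ("start", "design", 4),          # weight = expected duration in months
    ("design", "prototype", 6),
    ("design", "tooling", 3),
    ("prototype", "testing", 5),
    ("tooling", "testing", 2),
    ("testing", "production", 4),
])

critical_path = nx.dag_longest_path(g, weight="weight")
duration = nx.dag_longest_path_length(g, weight="weight")
print("critical path:", " -> ".join(critical_path))
print("minimum program duration:", duration, "months")
```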

Cost

Systems analysis usually doesn’t focus heavily on cost estimates. The usual assumption is that costs will not keep the system from being completed. Sometimes, though, the costs are important because of their effect on the overall economy of a country.

Estimating the cost of a system usually starts with comparative modeling. That is, you begin with an estimate of what it would cost your organization or an industry in your country to build something. You multiply that number by a factor that accounts for the difference in costs of the target organization (and they will always be different).

When several system models are being considered, cost-utility analysis may be necessary. Cost-utility analysis is an important part of decision prediction. Many decision-making processes, especially those that require resource allocation, make use of cost-utility analysis. For an analyst assessing a foreign military’s decision whether to produce a new weapons system, it is a useful place to start. But the analyst must be sure to take “rationality” into account. As noted earlier, what is “rational” is different across cultures and from one individual to the next. It is important for the analyst to understand the logic of the decision maker—that is, how the opposing decision maker thinks about topics such as cost and utility.

In performing cost-utility analysis, the analyst must match cost figures to the same time horizon over which utility is being assessed. This becomes difficult when the horizon extends more than a few years out. Life-cycle costs should be considered for new systems, and many new systems have life cycles in the tens of years.

Operations Research

A number of specialized methodologies are used to do systems analysis. Operations research is one of the more widely used ones.

Operations research has a rigorous process for defining problems that can be usefully applied in intelligence. As one specialist in the discipline has noted, “It often occurs that the major contribution of the operations research worker is to decide what is the real problem.” Understanding the problem often requires understanding the environment and/or system in which an issue is embedded, and operations researchers do that well.

After defining the problem, the operations research process requires representing the system in mathematical form. That is, the operations researcher builds a computational model of the system and then manipulates or solves the model, using computers, to come up with an answer that approximates how the real-world system should function. Systems of interest in intelligence are characterized by uncertainty, so probability analysis is a commonly used approach.

Two widely used operations research techniques are linear programming and network analysis. They are used in many fields, such as network planning, reliability analysis, capacity planning, expansion capability determination, and quality control.

Linear Programming

Linear programming involves planning the efficient allocation of scarce resources, such as material, skilled workers, machines, money, and time.

Linear programs are simply systems of linear equations or inequalities that are solved in a manner that yields as its solution an optimum value—the best way to allocate limited resources, for example. The optimum value is based on some single-goal statement (provided to the program in the form of what is called a linear objective function). Linear programming is often used in intelligence for estimating production rates, though it has applicability in a wide range of disciplines.
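A minimal linear-programming sketch, assuming SciPy and entirely hypothetical coefficients, shows the flavor of such a production estimate:

```python
# Estimate the output mix a plant could achieve given limits on labor,
# machine time, and material.

from scipy.optimize import linprog

# Maximize 3*x1 + 2*x2 (weighted units of two products); linprog minimizes, so negate.
objective = [-3.0, -2.0]

A_ub = [[2.0, 1.0],    # labor hours per unit
        [1.0, 3.0],    # machine hours per unit
        [1.0, 1.0]]    # material per unit
b_ub = [100.0, 90.0, 60.0]

result = linprog(objective, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimal production mix:", result.x)
print("maximum weighted output:", -result.fun)
```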

Network Analysis

In chapter 10 we’ll investigate the concept of network analysis as applied to relationships among entities. Network analysis in an operations research sense is not the same. Here, networks are interconnected paths over which things move. The things can be automobiles (in which case we are dealing with a network of roads), oil (with a pipeline system), electricity (with wiring diagrams or circuits), information signals (with communication systems), or people (with elevators or hallways).

In intelligence against networks, we frequently are concerned with things like maximum throughput of the system, the shortest (or cheapest) route between two or more locations, or bottlenecks in the system.
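A small sketch of those questions, assuming networkx and a hypothetical pipeline network:

```python
# Throughput and bottleneck questions for a toy pipeline network.

import networkx as nx

pipeline = nx.DiGraph()
pipeline.add_edge("field", "pump_a", capacity=10)    # capacities are hypothetical
pipeline.add_edge("field", "pump_b", capacity=7)
pipeline.add_edge("pump_a", "refinery", capacity=6)
pipeline.add_edge("pump_b", "refinery", capacity=9)

flow_value, flow = nx.maximum_flow(pipeline, "field", "refinery")
cut_value, partition = nx.minimum_cut(pipeline, "field", "refinery")
print("maximum throughput:", flow_value)
print("bottleneck (min cut) capacity:", cut_value)

# Shortest or cheapest-route questions use the same graph with a cost attribute.
print(nx.shortest_path(pipeline, "field", "refinery"))
```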

Summary

Any entity having the attributes of structure, function, and process can be described and analyzed as a system. Systems analysis is used in intelligence extensively for assessing foreign weapons systems performance. But it also is used to model political, economic, infrastructure, and social systems.

Modeling the structure of a system can rely on an inductive, a deductive, or an abductive approach.

Functional assessments typically require analysis of a system’s performance. Comparative performance analysis is widely used in such assessments. Simulations are used to prepare more sophisticated predictions of a system’s performance.

Process analysis is important for assessing organizations and systems in general. Organizational processes vary by organization type and across cultures. Process analysis also is used to determine systems development schedules and in looking at the life cycle of a program. Program staffing and the technologies involved are other factors that shape development schedules.

10

Network Modeling and Analysis

Future conflicts will be fought more by networks than by hierarchies, and whoever masters the network form will gain major advantages.

John Arquilla and David Ronfeldt, RAND Corporation

In intelligence, we’re concerned with many types of networks: communications, social, organizational, and financial networks, to name just a few. The basic principles of modeling and analysis apply across most types of networks.

intelligence has the job of providing an advantage in conflicts by reducing uncertainty.

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It has been used for years in the U.S. intelligence community against targets such as terrorist groups and narcotics traffickers. The netwar model of multidimensional conflict between opposing networks is more and more applicable to all intelligence, and network analysis is our tool for examining the opposing network.

a few definitions:

 

  • Network—that group of elements forming a unified whole, also known as a system
  • Node—an element of a system that represents a person, place, or physical thing
  • Cell—a subordinate organization formed around a specific process, capability, or activity within a designated larger organization
  • Link—a behavioral, physical, or functional relationship between nodes

 

Link Models

Link modeling has a long history; the Los Angeles police department reportedly used it first in the 1940s as a tool for assessing organized crime networks. Its primary purpose was to display relationships among people or between people and events. Link models demonstrated their value in discerning the complex and typically circuitous ties between entities.

some types of link diagrams are referred to as horizontal relevance trees. Their essence is the graphical representation of (a) nodes and their connection patterns or (b) entities and relationships.

Most humans simply cannot assimilate all the information collected on a topic over the course of several years. Yet a typical goal of intelligence synthesis and analysis is to develop precise, reliable, and valid inferences (hypotheses, estimations, and conclusions) from the available data for use in strategic decision-making or operational planning. Link models directly support such inferences.

The primary purpose of link modeling is to facilitate the organization and presentation of data to assist the analytic process. A major part of many assessments is the analysis of relationships among people, organizations, locations, and things. Once the relationships have been created in a database system, they can be displayed and analyzed quickly in a link analysis program.

To be useful in intelligence analysis, the links should not only identify relationships among data items but also show the nature of their ties. A subject-verb-object display has been used in the intelligence community for several decades to show the nature of such ties, and it is sometimes used in link displays.

Quantitative and temporal (date stamping) relationships have also been used when the display software has a filtering capability. Filters allow the user to focus on connections of interest and can simplify by several orders of magnitude the data shown in a link display.

Link modeling has been replaced almost completely by network modeling, discussed next, which offers a number of advantages in dealing with complex networks.

Network Models

Most modeling and analysis in intelligence today focuses on networks.

Some Network Types

A target network can include friendly or allied entities

It can include neutrals that your customer wishes to influence—either to become an ally or to remain neutral.

Social Networks

When intelligence analysts talk about network analysis, they often mean social network analysis (SNA). SNA involves identifying and assessing the relationships among people and groups—the nodes of the network. The links show relationships or transactions between nodes. So a social network model provides a visual display of relationships among people, and SNA provides a visual or mathematical analysis of the relationships. SNA is used to identify key people in an organization or social network and to model the flow of information within the network.

Organizational Networks

Management consultants often use SNA methodology with their business clients, referring to it as organizational network analysis. It is a method for looking at communication and social networks within a formal organization. Organizational network modeling is used to create statistical and graphical models of the people, tasks, groups, knowledge, and resources of organizations.

Commercial Networks

In competitive intelligence, network analysis tends to focus on networks where the nodes are organizations.

As Babson College professor and business analyst Liam Fahey noted, competition in many industries is now as much competition between networked enterprises

Fahey has described several such networks and defined five principal types:

  • Vertical networks. Networks organized across the value chain; for example, 3M Corporation goes from mining raw materials to delivering finished products.
  • Technology networks. Alliances with technology sources that allow a firm to maintain technological superiority, such as the CISCO Systems network.
  • Development networks. Alliances focused on developing new products or processes, such as the multimedia entertainment venture DreamWorks SKG.
  • Ownership networks. Networks in which a dominant firm owns part or all of its suppliers, as do the Japanese keiretsu.
  • Political networks. Those focused on political or regulatory gains for their members, for example, the National Association of Manufacturers.

Hybrids of the five are possible, and in some cultures such as in the Middle East and Far East, families can be the basis for a type of hybrid business network.

 

Financial Networks

Financial networks tend to feature links among organizations, though individuals can be important nodes, as in the Abacha family funds-laundering case. These networks focus on topics such as credit relationships, financial exposures between banks, liquidity flows in the interbank payment system, and funds-laundering transactions. The relationships among financial institutions, and the relationships of financial institutions with other organizations and individuals, are best captured and analyzed with network modeling.

Global financial markets are interconnected and therefore amenable to large-scale modeling. Analysis of financial system networks helps economists to understand systemic risk and is key to preventing future financial crises.

Threat Networks

Military and law enforcement organizations define a specific type of network, called a threat network. These are networks that are opposed to friendly networks.

Such networks have been defined as being “comprised of people, processes, places, and material—components that are identifiable, targetable, and exploitable.”

A premise of threat network modeling is that all such networks have vulnerabilities that can be exploited. Intelligence must provide an understanding of how the network operates so that customers can identify actions to exploit the vulnerabilities.

Threat networks, no matter their type, can access political, military, economic, social, infrastructure, and information resources. They may connect to social structures in multiple ways (kinship, religion, former association, and history)—providing them with resources and support. They may make use of the global information networks, especially social media, to obtain recruits and funding and to conduct information operations to gain recognition and international support.

Other Network Views

Target networks can be a composite of the types described so far. That is, they can have social, organizational, commercial, and financial elements, and they can be threat networks. But target networks can be labeled another way. They generally take one of the following relationship forms:

  • Functional networks. These are formed for a specific purpose. Individuals and organizations in this network come together to undertake activities based primarily on the skills, expertise, or particular capabilities they offer. Commercial networks, crime syndicates, and insurgent groups all fall under this label.
  • Family and cultural networks. Some members or associates have familial bonds that may span generations. Or the network shares bonds due to a shared culture, language, religion, ideology, country of origin, and/or sense of identity. Friendship networks fall into this category, as do proximity networks—where the network has bonds due to geographic or proximity ties (such as time spent together in correctional institutions).
  • Virtual networks. This is a relatively new phenomenon. In these networks, participants seldom (possibly never) physically meet, but work together through the Internet or some other means of communication. Networks involved in online fraud, theft, or funds laundering are usually virtual networks. Social media often are used to operate virtual networks.

Modeling the Network

Target networks can be modeled manually, or by using computer algorithms to automate the process. Using open-source and classified HUMINT or COMINT, an analyst typically goes through the following steps in manually creating a network model:

  • Understand the environment.

You should start by understanding the setting in which the network operates. That may require looking at all six of the PMESII factors that constitute the environment, and almost certainly at more than one of these factors. This approach applies to most networks of intelligence interest, again recognizing that “military” refers to that part of the network that applies force (usually physical force) to serve network interests. Street gangs and narcotics traffickers, for example, typically have enforcement arms.

  • Select or create a network template.

Pattern analysis, link analysis, and social network analysis are the foundational analytic methods that enable intelligence analysts to begin templating the target network. To begin with, are the networks centralized or decentralized? Are they regional or transnational? Are they virtual, familial, or functional? Are they a combination? This information provides a rough idea of their structure, their adaptability, and their resistance to disruption.

  • Populate the network.

If you don’t have a good idea what the network template looks like, you can apply a technique that is sometimes called “snowballing.” You begin with a few key members of the target network. Then add nodes and linkages based on the information these key members provide about others. Over time, COMINT and other collection sources (open source, HUMINT) allow the network to be fleshed out. You identify the nodes, name them, and determine the linkages among them. You also typically need to determine the nature of the link. For example, is it a familial link, a transactional link, or a hostile link?
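A minimal sketch of this snowballing step, assuming networkx and a toy table of reported contacts standing in for actual source reporting:

```python
# Start from a few known members, then add the contacts each new node reports,
# out to a fixed number of hops. All names and link types are hypothetical.

import networkx as nx

reported_contacts = {
    "ali":   [("omar", "familial"), ("bank_x", "transactional")],
    "omar":  [("karim", "operational")],
    "karim": [("bank_x", "transactional")],
}

network = nx.Graph()
frontier = ["ali"]                           # seed: known key member(s)
for _hop in range(2):                        # expand two hops from the seeds
    next_frontier = []
    for person in frontier:
        for contact, link_type in reported_contacts.get(person, []):
            if contact not in network:
                next_frontier.append(contact)
            network.add_edge(person, contact, nature=link_type)
    frontier = next_frontier

print(network.edges(data=True))
```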

Computer-Assisted and Automated Modeling

Although manual modeling is still used, commercially available network tools such as Analyst’s Notebook and Palantir are now available to help. One option for using these tools is to enter the data manually but to rely on the tool to create and manipulate the network model electronically.

Analyzing the Network

Analyzing a network involves answering the classic questions—who-what-where-when-how-why—and placing the answers in a format that the customer can understand and act upon, what is known as “actionable intelligence.” Analysis of the network pattern can help identify the what, when, and where. Social network analysis typically identifies who. And nodal analysis can tell how and why.

Nodal Analysis

As noted throughout this book, nodes in a target network can include persons, places, objects, and organizations (which also could be treated as separate networks). Where the node is an organization, it may be appropriate to assess the role of the organization in the larger network—that is, to simply treat it as a node.

The usual purpose of nodal analysis is to identify the most critical nodes in a target network. This requires analyzing the properties of individual nodes, and how they affect or are affected by other nodes in the network. So the analyst must understand the behavior of many nodes and, where the nodes are organizations, the activities taking place within the nodes.

Social Network Analysis

Social network analysis, in which all of the network nodes are persons or groups, is widely used in the social sciences, especially in studies of organizational behavior. In intelligence, as noted earlier, we more frequently use target network analysis, in which almost anything can be a node.

 

To understand a social network, we need a full description of the social relationships in the network. Ideally, we would know about every relationship between each pair of actors in the network.

In summary, SNA is a tool for understanding the internal dynamics of a target network and how best to attack, exploit, or influence it. Instead of assuming that taking out the leader will disrupt the network, SNA helps to identify the distribution of power in the network and the influential nodes—those that can be removed or influenced to achieve a desired result. SNA also is used to describe how a network behaves and how its connectivity shapes its behavior.

Several analytic concepts that come along with SNA also apply to target network analysis. The most useful concepts are centrality and equivalence. These are used today in the analysis of intelligence problems related to terrorism, arms networks, and illegal narcotics organizations.

the extent to which an actor can reach others in the network is a major factor in determining the power that the actor wields. Three basic sources of this advantage are high degree, high closeness, and high betweenness.

Actors who have many network ties have greater opportunities because they have choices. Their rich set of choices makes them less dependent than those with fewer ties and hence more powerful.

The network centrality of the individuals removed will determine the extent to which the removal impedes continued operation of the activity. Thus centrality is an important ingredient (but by no means the only one) in considering the identification of network vulnerabilities.
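A short sketch of the three measures named above, assuming networkx and a hypothetical network:

```python
# Compute degree, closeness, and betweenness centrality for a toy target network.

import networkx as nx

g = nx.Graph([
    ("leader", "financier"), ("leader", "recruiter"), ("leader", "planner"),
    ("planner", "bomb_maker"), ("recruiter", "courier"), ("courier", "financier"),
])

for name, scores in (("degree", nx.degree_centrality(g)),
                     ("closeness", nx.closeness_centrality(g)),
                     ("betweenness", nx.betweenness_centrality(g))):
    top = max(scores, key=scores.get)
    print(f"highest {name:12s}: {top:10s} ({scores[top]:.2f})")
```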

A second analytic concept that accompanies SNA is equivalence. The disruptive effectiveness of removing one individual or a set of individuals from a network (such as by making an arrest or hiring a key executive away from a business competitor) depends not only on the individual’s centrality but also on some notion of his uniqueness, that is, on whether or not he has equivalents.

The notion of equivalence is useful for strategic targeting and is tied closely to the concept of centrality. If nodes in the social network have a unique role (no equivalents), they will be harder to replace.

Network analysis literature offers a variety of concepts of equivalence. Three in particular are quite distinct and, between them, seem to capture most of the important ideas on the subject. The three concepts are substitutability, stochastic equivalence, and role equivalence. Each can be important in specific analysis and targeting applications.

Substitutability is easiest to understand; it can best be described as interchangeability. Two objects or persons in a category are substitutable if they have identical relationships with every other object in the category.

Individuals who have no network substitutes usually make the most worthwhile targets for removal.

Substitutability also has relevance to detecting the use of aliases. The use of an alias by a criminal will often show up in a network analysis as the presence of two or more substitutable individuals (who are in reality the same person with an alias). The interchangeability of the nodes actually indicates the interchangeability of the names.

Stochastic equivalence is a slightly more sophisticated idea. Two network nodes are stochastically equivalent if the probabilities of their being linked to any other particular node are the same. Narcotics dealers working for one distribution organization could be seen as stochastically equivalent if they, as a group, all knew roughly 70 percent of the group, did not mix with dealers from any other organizations, and all received their narcotics from one source.

Role equivalence means that two individuals play the same role in different organizations, even if they have no common acquaintances at all. Substitutability implies role equivalence, but not the converse.

Stochastic equivalence and role equivalence are useful in creating generic models of target organizations and in targeting by analogy—for example, the explosives expert is analogous to the biological expert in planning collection, analyzing terrorist groups, or attacking them.
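As one way to make the substitutability test concrete, the sketch below treats two nodes as interchangeable when they have identical ties to every other node; the names are hypothetical, and the alias application is the one described above.

```python
# Flag pairs of nodes with identical relationships to all other nodes
# (possible aliases or directly interchangeable members).

import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("abu_k", "courier"), ("abu_k", "financier"),
    ("k_alias", "courier"), ("k_alias", "financier"),   # same ties as abu_k
    ("courier", "financier"),
])

def substitutable(graph, a, b):
    """True if a and b connect to exactly the same other nodes."""
    return set(graph[a]) - {b} == set(graph[b]) - {a}

print(substitutable(g, "abu_k", "k_alias"))   # True -> possible alias
print(substitutable(g, "abu_k", "courier"))   # False
```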

Organizational Network Analysis

Organizational network analysis is a well-developed discipline for analyzing organizational structure. The traditional hierarchical description of an organizational structure does not sufficiently portray entities and their relationships.

the typical organization also is a system that can be viewed (and analyzed) from the same three perspectives previously discussed: structure, function, and process.

Structure here refers to the components of the organization, especially people and their relationships; this chapter deals with that.

Function refers to the outcome or results produced and tends to focus on decision making.

Process describes the sequences of activities and the expertise needed to produce the results or outcome. Fahey, in his assessment of organizational infrastructure, described four perspectives: structure, systems, people, and decision-making processes. Whatever their names, all three (or four, following Fahey’s example) perspectives must be considered.

Depending on the goal, an analyst may need to assess the network’s mission, its power distribution, its human resources, and its decision-making processes. The analyst might ask questions such as, Where is control exercised? Which elements provide support services? Are their roles changing? Network analysis tools are valuable for this sort of analysis.

Threat Network Analysis

We want to develop a detailed understanding of how a threat network functions by identifying its constituent elements, learning how its internal processes work to carry out operations, and seeing how all of the network components interact.

assessing threat networks requires, among other things, looking at the

  • Command-and-control structure. Threat networks can be decentralized, or flat. They can be centralized, or hierarchical. The structures will vary, but they are all designed to facilitate the attainment of the network’s goals and continued survival.
  • Closeness. This is a measure of the members’ shared objectives, kinship, ideology, religion, and personal relations that bond the network and facilitate recruiting new members.
  • Expertise. This includes the knowledge, skills, and abilities of group leaders and members.
  • Resources. These include weapons, money, social connections, and public support.
  • Adaptability. This is a measure of the network’s ability to learn and adjust behaviors and modify operations in response to opposing actions.
  • Sanctuary. These are locations where the network can safely conduct planning, training, and resupply.

Chief among these is the ability to adapt over time, specifically to blend into the local population and to quickly replace losses of key personnel and recruit new members. The networks also tend to be difficult to penetrate because of their insular nature and the bonds that hold them together. They typically are organized into cells in a loose network where the loss of one cell does not seriously degrade the entire network.

To carry out their functions, however, networks must engage in activities that expose parts of the network to countermeasures.

They must communicate between cells and with their leadership, exposing the network to discovery and mapping of links.

Target Network Analysis

As we have said, in intelligence work we usually apply an extension of social network analysis that retains its basic concepts. So the techniques described earlier for SNA work for almost all target networks. But whereas all of the entities in SNA are people, again, in target network analysis they can be anything.

Automating the Analysis

Target network analysis has become one of the principal tools for dealing with complex systems, thanks to new, computer-based analytic methods. One tool that has been useful in assessing threat networks is the Organization Risk Analyzer (called *ORA) developed by the Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University. *ORA is able to group nodes and identify patterns of analytic significance. It has been used to identify key players, groups, and vulnerabilities, and to model network changes over space and time.

Intelligence analysis relies heavily on graphical techniques to represent the descriptions of target networks compactly. The underlying mathematical techniques allow us to use computers to store and manipulate the information quickly and more accurately than we could by hand.

Summary

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It is widely used in analysis disciplines. It is derived from link modeling, which organizes and presents raw intelligence in a visual form such that relationships among nodes (which can be people, places, things, organizations, or events) can be analyzed to extract finished intelligence.

We prefer to have network models created and updated automatically from raw intelligence data by software algorithms. Although some software tools exist for doing that, the analyst still must evaluate the sources and validate the results.

 

 

11 Geospatial and Temporal Modeling and Analysis

GEOINT is the professional practice of integrating and interpreting all forms of geospatial data to create historical and anticipatory intelligence products used for planning or that answer questions posed by decision-makers.

This definition incorporates the key ideas of an intelligence mission: all-source analysis and modeling in both space and time (from “historical and anticipatory”). These models are frequently used in analysis; insights about networks are often obtained by examining them in spatial and temporal ways.

  • During World War II, although the Germans maintained censorship as effectively as anyone else, they did publish their freight tariffs on all goods, including petroleum products. Working from those tariffs, a young U.S. Office of Strategic Services analyst, Walter Levy, conducted geospatial modeling based on the German railroad network to pinpoint the exact location of the refineries, which were subsequently targeted by allied bombers.

Static Geospatial Models

In the most general case, geospatial modeling is done in both space and time. But sometimes only a snapshot in time is needed.

Human Terrain Modeling

U.S. ground forces in Iraq and Afghanistan in the past few years have rediscovered and refined a type of static geospatial model that was used in the Vietnam War, though its use dates far back in history. Military forces now generally consider what they call “human terrain mapping” as an essential part of planning and conducting operations in populated areas.

In combating an insurgency, military forces have to develop a detailed model of the local situations that includes political, economic, and sociological information as well as military force information.

It involves acquiring the following details about each village and town:

  • The boundaries of each tribal area (with specific attention to where they adjoin or overlap)
  • Location and contact information for each sheik or village mukhtar and for government officials
  • Locations of mosques, schools, and markets
  • Patterns of activity such as movement into and out of the area; waking, sleeping, and shopping habits
  • Nearest locations and checkpoints of security forces
  • Economic driving forces including occupation and livelihood of inhabitants; employment and unemployment levels
  • Anti-coalition presence and activities
  • Access to essential services such as fuel, water, emergency care, and fire response
  • Particular local population concerns and issues

Human terrain mapping, or more correctly human terrain modeling, is an old intelligence technique.

Though Moses’s HUMINT mission failed because of poor analysis by the spies, it remains an excellent example of specific collection tasking as well as of the history of human terrain mapping.

1919 Paris Peace Conference

In 1917 President Woodrow Wilson established a study group to prepare materials for peace negotiations that would conclude World War I. He eventually tapped geographer Isaiah Bowman to head a group of 150 academics to prepare the study. It covered the languages, ethnicities, resources, and historical boundaries of Europe. With support from the American Geographical Society, Bowman directed the production of over three hundred maps per week during January 1919.

The Tools of Human Terrain Modeling

Today, human terrain modeling is used extensively to support military operations in Syria, Iraq, and Afghanistan. Many tools have been developed to create and analyze such models. The ability to do human terrain mapping and other types of geospatial modeling has been greatly expanded and popularized by Google Earth and by Microsoft’s Virtual Earth. These geospatial modeling tools provide multiple layers of information.

This unclassified online material has a number of intelligence applications. For intelligence analysts, it permits planning HUMINT and COMINT operations. For military forces, it supports precise targeting. For terrorists, it facilitates planning of attacks.

Temporal Models

Pure temporal models are used less frequently than the dynamic geospatial models discussed next, because we typically want to observe activity in both space and time—sometimes over very short times. Timing shapes the consequences of planned events.

There are a number of different temporal model types; this chapter touches on two of them—timelines and pattern-of-life modeling and analysis.

Timelines

An opponent’s strategy often becomes apparent only when seemingly disparate events are placed on a timeline.

Event-time patterns tell analysts a great deal; they allow analysts to infer relationships among events and to examine trends. Activity patterns of a target network, for example, are useful in determining the best time to collect intelligence. An example is a plot of total telephone use over twenty-four hours—the plot peaks about 11 a.m., which is the most likely time for a person to be on the telephone.
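A minimal sketch of that kind of event-time pattern, binning hypothetical call timestamps by hour of day to find the peak:

```python
# Bin time-stamped events by hour of day and report the peak hour.

from collections import Counter
from datetime import datetime

call_times = [                                # hypothetical call records
    "2024-03-01 09:12", "2024-03-01 10:47", "2024-03-01 11:03",
    "2024-03-02 11:21", "2024-03-02 11:55", "2024-03-02 14:30",
]

by_hour = Counter(datetime.strptime(t, "%Y-%m-%d %H:%M").hour for t in call_times)
peak_hour, count = by_hour.most_common(1)[0]
print(f"peak activity around {peak_hour}:00 ({count} events)")
```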

Pattern-of-Life Modeling and Analysis

Pattern-of-life (POL) analysis is a method of modeling and understanding the behavior of a single person or group by establishing a recurrent pattern of actions over time in a given situation. It has similarities to the concept of activity-based intelligence.

 

Dynamic Geospatial Models

A dynamic variant of the geospatial model is the space-time model. Many activities, such as the movement of a satellite, a vehicle, a ship, or an aircraft, can best be shown spatially—as can population movements. A combination of geographic and time synthesis and analysis can show movement patterns, such as those of people or of ships at sea.

Dynamic geospatial modeling and analysis has been described using a number of terms. Three that are commonly used in intelligence are described in this section: movement intelligence, activity-based intelligence, and geographic profiling. Though they are similar, each has a somewhat different meaning. Dynamic modeling is also applied in understanding intelligence enigmas.

Movement Intelligence

Intelligence practitioners sometimes describe space-time models as movement intelligence, or “MOVINT,” as if it were a collection “INT” instead of a target model. The name “movement intelligence” for a specialized intelligence product dates roughly to the wide use of two sensors for area surveillance.

One was the moving target indicator (MTI) capability for synthetic aperture radars. The other was the deployment of video cameras on intelligence collection platforms. MOVINT has been defined as “an intelligence gathering method by which images (IMINT), non-imaging products (MASINT), and signals (SIGINT) produce a movement history of objects of interest.”

Activity-Based Intelligence

Activity-based intelligence, or ABI, has been defined as “a discipline of intelligence where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, population, or area of interest.”

ABI is a form of situational awareness that focuses on interactions over time. It has three characteristics:

  • Raw intelligence information is constantly collected on activities in a given region and stored in a database for later metadata searches.
  • It employs the concept of “sequence neutrality,” meaning that material is collected without advance knowledge of whether it will be useful for any intelligence purpose.
  • It also relies on “data neutrality,” meaning that any source of intelligence may contribute; in fact, open source may be the most valuable.

ABI therefore is a variant of the target-centric approach, focused on the activity of a target (person, object, or group) within a specified target area. So it includes both spatial and temporal dimensions. At a higher level of complexity, it can include network relationships as well.

Though the term ABI is of recent origin and is tied to the development of surveillance methods for collecting intelligence, the concept of solving intelligence problems by monitoring activity over time has been applied for decades. It has been the primary tool for dealing with geographic profiling and intelligence enigmas.

Geographic Profiling

Geographic profiling is a term used in law enforcement for geospatial modeling, specifically a space-time model, that supports serial violent crime or sexual crime investigations. Such crimes, when committed by strangers, are difficult to solve. Their investigation can produce hundreds of tips and suspects, resulting in the problem of information overload.

Intelligence Enigmas

Geospatial modeling and analysis frequently must deal with unidentified facilities, objects, and activities. These are often referred to by the term intelligence enigmas. For such targets, a single image—a snapshot in time—is insufficient.

Summary

One of the most powerful combination models is the geospatial model, which combines all sources of intelligence into a visual picture (often on a map) of a situation. One of the oldest of analytic products, geospatial modeling today is the product of all-source analysis that can incorporate OSINT, IMINT, HUMINT, COMINT, and advanced technical collection methods.

Many GEOINT models are dynamic; they show temporal changes. This combination of geospatial and temporal models is perhaps the single most important trend in GEOINT. Dynamic GEOINT models are used to observe how a situation develops over time and to extrapolate future developments.

 

Part II

The Estimative Process

12 Predictive Analysis

“Your problem is that you are not able to see things before they happen.”

Wotan to Fricka, in Wagner’s opera Die Walküre

Describing a past event is not intelligence analysis; it is reciting history. The highest form of intelligence analysis requires structured thinking that results in an estimate of what is likely to happen.

True intelligence analysis is always predictive.

 

The value of a model of possible futures is in the insights that it produces. Those insights prepare customers to deal with the future as it unfolds. The analyst’s contribution lies in the assessment of the forces that will shape future events and the state of the target model. If an analyst accurately assesses the forces, she has served the intelligence customer well, even if the prediction derived from that assessment turns out to be wrong.

policymaking customers tend to be skeptical of predictive analysis unless they do it themselves. They believe that their own opinions about the future are at least as good as those of intelligence analysts. So when an analyst offers an estimate without a compelling supporting argument, he or she should not be surprised if the policymaker ignores it.

By contrast, policymakers and executives will accept and make use of predictive analysis if it is well reasoned, and if they can follow the analyst’s logical development. This implies that we apply a formal methodology, one that the customer can understand, so that he or she can see the basis for the conclusions drawn.

Former national security adviser Brent Scowcroft observed, “What intelligence estimates do for the policymaker is to remind him what forces are at work, what the trends are, and what are some of the possibilities that he has to consider.” Any intelligence assessment that does these things will be readily accepted.

Introduction to Predictive Analysis

Intelligence can usually deal with near-term developments. Extrapolation—the act of making predictions based solely on past observations—serves us reasonably well in the short term for situations that involve established trends and normal individual or organizational behaviors.

Adding to the difficulty, intelligence estimates can also affect the future that they predict. Often, the estimates are acted on by policymakers—sometimes on both sides.

The first step in making any estimate is to consider the phenomena that are involved, in order to determine whether prediction is even possible.

Convergent and Divergent Phenomena

In examining trends and possible future events, we use the same terminology: Convergent phenomena make prediction possible; divergent phenomena frustrate it.

a basic question to ask at the outset of any predictive attempt is, Does the principle of causation apply? That is, are the phenomena we are to examine and prepare estimates about governed by the laws of cause and effect?

A good example of a divergent phenomenon in intelligence is the coup d’état. Policymakers often complain that their intelligence organizations have failed to warn of coups. But a coup event is conspiratorial in nature, limited to a handful of people, and dependent on the preservation of secrecy for its success.

If a foreign intelligence service knows of the event, then secrecy has been compromised and the coup is almost certain to fail—the country’s internal security services will probably forestall it. The conditions that encourage a coup attempt can be assessed and the coup likelihood estimated by using probability theory, but the timing and likelihood of success are not “predictable.”

The Estimative Approach

The target-centric approach to prediction follows an analytic pattern long established in the sciences, in organizational planning, and in systems synthesis and analysis.

 

The synthesis and analysis process discussed in this chapter and the next is derived from an estimative approach that has been formalized in several professional disciplines. In management theory, the approach has several names, one of which is the Kepner-Tregoe Rational Management Process. In engineering, the formalization is called the Kalman Filter. In the social sciences, it is called the Box-Jenkins method. Although there are differences among them, all are techniques for combining complex data to create estimates. They all require combining data to estimate an entity’s present state and evaluating the forces acting on the entity to predict its future state.

This concept—to identify the forces acting on an entity, to identify likely future forces, and to predict the likely changes in old and new forces over time, along with some indicator of confidence in these judgments—is the key to successful estimation. It takes into account redundant and conflicting data as well as the analyst’s confidence in these data.

The key is to start from the present target model (and preferably, also with a past target model) and move to one of the future models, using an analysis of the forces involved as a basis. Other texts on estimative analysis describe these forces as issues, trends, factors, or drivers. All those terms have the same meaning: They are the entities that shape the future.

The methodology relies on three predictive mechanisms: extrapolation, projection, and forecasting. Those components and the general approach are defined here; later in the chapter, we delve deeper into “how-to” details of each mechanism.

An extrapolation assumes that these forces do not change between the present and future states, a projection assumes they do change, and a forecast assumes they change and that new forces are added.

The analysis follows these steps:

  1. Determine at least one past state and the present state of the entity. In intelligence, this entity is the target model, and it can be a model of almost anything—a terrorist organization, a government, a clandestine trade network, an industry, a technology, or a ballistic missile.
  2. Determine the forces that acted on the entity to bring it to its present state.

These same forces, acting unchanged, would result in the future state shown as an extrapolation (Scenario 1).

  3. To make a projection, estimate the changes in existing forces that are likely to occur. In the figure, a decrease in one of the existing forces (Force 1) is shown as causing a projected future state that is different from the extrapolation (Scenario 2).
  4. To make a forecast, start from either the extrapolation or the projection and then identify the new forces that may act on the entity, and incorporate their effect. In the figure, one new force is shown as coming to bear, resulting in a forecast future state that differs from both the extrapolated and the projected future states (Scenario 3).
  5. Determine the likely future state of the entity based on an assessment of the forces. Strong and certain forces are weighed most heavily in this prediction. Weak forces, and those in which the analyst lacks confidence (high uncertainty about the nature or effect of the force), are weighed least.

The process is iterative.
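As a purely illustrative rendering of the three mechanisms, the sketch below scores each scenario as a simple sum of force weights; the weights and the additive scoring are assumptions for the example, not the author’s method.

```python
# Toy numeric rendering of extrapolation, projection, and forecast, assuming the
# target's future state can be scored as the sum of hypothetical force weights.

present_forces = {"force_1": 5.0, "force_2": 3.0}

extrapolation = sum(present_forces.values())              # forces unchanged (Scenario 1)

projected_forces = dict(present_forces, force_1=2.0)      # an existing force weakens
projection = sum(projected_forces.values())               # Scenario 2

forecast_forces = dict(projected_forces, new_force=4.0)   # a new force appears
forecast = sum(forecast_forces.values())                  # Scenario 3

print(f"scenario 1 (extrapolation): {extrapolation}")
print(f"scenario 2 (projection):    {projection}")
print(f"scenario 3 (forecast):      {forecast}")
```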

In this figure, we are concerned with a target (technology, system, person, organization, country, situation, industry, or some combination) that changes over time. We want to describe or characterize the entity at some future point.

the basic analytic paradigm is to create a model of the past and present state of the target, followed by alternative models of its possible future states, usually created in scenario form.

A CIA assessment of Mikhail Gorbachev’s economic reforms in 1985–1987 correctly estimated that his proposed reforms risked “confusion, economic disruption, and worker discontent” that could embolden potential rivals to his power.17 This projection was based on assessing the changing forces in Soviet society along with the inertial forces that would resist change.

The process we’ve illustrated in these examples has many names—force field analysis and system dynamics are two.

for forecasting, the analyst must identify new forces that are likely to come into play. Most of the chapters that follow focus on identifying and measuring these forces.

An analyst can (wrongly) shape the outcome by concentrating on some forces and ignoring or downplaying the significance of others.

Force Analysis According to Sun Tzu

Factor or force analysis is an ancient predictive technique. Successful generals have practiced it in warfare for thousands of years, and one of its earliest known proponents was Sun Tzu. He described the art of war as being controlled by five factors, or forces, all of which must be taken into account in predicting the outcome of an engagement. He called the five factors Moral Law, Heaven, Earth, the Commander, and Method and Discipline. In modern terms, the five would be called social, environmental, geospatial, leadership, and organizational factors.

The simplest approach to both projection and forecasting is to do it qualitatively. That is, an analyst who is an expert in the subject area begins the process by answering the following questions:

  1. What forces have affected this entity (organization, situation, industry, technical area) over the past several years?19
  2. Which five or six forces had more impact than others?
  3. What forces are expected to affect this entity over the next several years?
  4. Which five or six forces are likely to have more impact than others?
  5. What are the fundamental differences between the answers to questions two and four?
  6. What are the implications of these differences for the entity being analyzed?

The answers to those questions shape the changes in direction of the extrapolation… At more sophisticated levels of qualitative synthesis and analysis, the analyst might examine adaptive forces (feedback forces) and their changes over time.

High-Impact/Low-Probability Analysis

Projections and forecasts focus on the most likely outcomes. But customers also need to be aware of the unlikely outcomes that could have severe adverse effects on their interests.

 

The CIA’s tradecraft manual describes the analytic process as follows:

  • Define the high-impact outcome clearly. This definition will justify examining what most analysts believe to be a very unlikely development.
  • Devise one or more plausible explanations for or “pathways” to the low-probability outcome. This should be as precise as possible, as it can help identify possible indicators for later monitoring.
  • Insert possible triggers or changes in momentum if appropriate. These can be natural disasters, sudden health problems of key leaders, or new economic or political shocks that might have occurred historically or in other parts of the world.
  • Brainstorm with analysts having a broad set of experiences to aid the development of plausible but unpredictable triggers of sudden change.
  • Identify for each pathway a set of indicators or “observables” that would help you anticipate that events were beginning to play out this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

The product of high-impact/low-probability analysis is a type of scenario called a demonstration scenario…

Two important types of bias can exist in predictive analysis: pattern, or confirmation, bias—looking for evidence that confirms rather than rejects a hypothesis; and heuristic bias—using inappropriate guidelines or rules to make predictions.

Two points are worth noting at the beginning of the discussion:

  • One must make careful use of the tools in synthesizing the model, as some will fail when applied to prediction. Expert opinion, for example, is often used in creating a target model; but experts’ biases, egos, and narrow focuses can interfere with their predictions. (A useful exercise for the skeptic is to look at trade press or technical journal predictions that were made more than ten years ago that turned out to be way off base. Stock market predictions and popular science magazine predictions of automobile designs are particularly entertaining.)
  • Time constraints work against the analyst’s ability to consistently employ the most elaborate predictive techniques. Veteran analysts tend to use analytic techniques that are relatively fast and intuitive. They can view scenario development, red teams (teams formed to take the opponent’s perspective in planning or assessments), competing hypotheses, and alternative analysis as being too time-consuming to use in ordinary circumstances. An analyst has to guard against using just extrapolation because it is the fastest and easiest to do. But it is possible to use shortcut versions of many predictive techniques and sometimes the situation calls for that. This chapter and the following one contain some examples of shortcuts.

Extrapolation

An extrapolation is a statement, based only on past observations, of what is expected to happen. Extrapolation is the most conservative method of prediction. In its simplest form, an extrapolation, using historical performance as the basis, extends a linear curve on a graph to show future direction.

Extrapolation also makes use of correlation and regression techniques. Correlation is a measure of the degree of association between two or more sets of data, or a measure of the degree to which two variables are related. Regression is a technique for predicting the value of some unknown variable based only on information about the current values of other variables. Regression makes use of both the degree of association among variables and the mathematical function that is determined to best describe the relationships among variables.
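
To make the mechanics concrete, here is a minimal sketch of extrapolation by linear regression in Python. The years and indicator values are invented for illustration, and only numpy's standard corrcoef and polyfit routines are used; this is a sketch of the general idea, not a method prescribed by the text.

```python
# A minimal sketch of extrapolation by linear regression; the years and the
# indicator values below are illustrative, not data from the text.
import numpy as np

years = np.array([2018, 2019, 2020, 2021, 2022])
observed = np.array([3.1, 3.4, 3.9, 4.1, 4.6])  # some indicator of interest

# Correlation: degree of association between time and the indicator.
r = np.corrcoef(years, observed)[0, 1]

# Regression: fit the line that best describes the relationship, then extend
# it beyond the observed data -- that extension is the extrapolation.
slope, intercept = np.polyfit(years, observed, deg=1)
future_years = np.array([2023, 2024, 2025])
extrapolated = slope * future_years + intercept

print(f"correlation r = {r:.2f}")
for year, value in zip(future_years, extrapolated):
    print(f"{year}: extrapolated value {value:.2f}")
```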

the more bureaucracy and red tape involved in doing business, the more corruption is likely in the country.

Projection

Before moving on to projection and forecasting, let’s reinforce the differentiation from extrapolation. An extrapolation is a simple assertion about what a future scenario will look like. In contrast, a projection or a forecast is a probabilistic statement about some future scenario.

Projection is more reliable than extrapolation. It predicts a range of likely futures based on the assumption that forces that have operated in the past will change, whereas extrapolation assumes the forces do not change.

Projection makes use of two major analytic techniques. One technique, force analysis, was discussed earlier in this chapter. After a qualitative force analysis has been completed, the next technique is to apply probabilistic reasoning to it. Probabilistic reasoning is a systematic attempt to make subjective estimates of probabilities more explicit and consistent. It can be used at any of several levels of complexity (each successive level of sophistication adds new capability and completeness). But even the simplest level of generating alternatives, discussed next, helps to prevent premature closure and adds structure to complicated problems.

Generating Alternatives

The first step in probabilistic reasoning is no more complicated than stating formally that more than one outcome is possible. One can generate alternatives simply by listing all possible outcomes to the issue under consideration. Remember that the possible outcomes can be defined as alternative scenarios.

The mere act of generating a complete, detailed list often provides a useful perspective on a problem.

Influence Trees or Diagrams

A list of alternative outcomes is the first step. A simple projection might not go beyond this level. But for more rigorous analysis, the next step typically is to identify the things that influence the possible outcomes and indicate the interrelationship of these influences. This process is frequently done by using an influence tree.

let’s assume that an analyst wants to assess the outcome of an ongoing African insurgency movement. There are three obvious possible outcomes: The insurgency will be crushed, the insurgency will succeed, or there will be a continuing stalemate. Other outcomes may be possible, but we can assume that they are so unlikely as not to be worth including. The three outcomes for the influence diagram are as follows:

  • Regime wins
  • Insurgency wins
  • Stalemate

The analyst now describes those forces that will influence the assessment of the relative likelihoods of each outcome. For instance, the insurgency’s success may depend on whether economic conditions improve, remain the same, or become worse during the next year. It also may depend on the success of a new government poverty relief program. The assumptions about these “driver” events are often described as linchpin premises in U.S. intelligence practice, and these assumptions need to be made explicit.

Having established the uncertain events that influence the outcome, the analyst proceeds to the first stage of an influence tree.
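
A minimal sketch of what that first stage might look like in code follows, using the three outcomes and the two driver events from the example. The driver states and all probabilities are illustrative assumptions, not values from the text.

```python
# A minimal sketch of the first stage of an influence tree for the insurgency
# example; the driver states and all probabilities are illustrative assumptions.
from itertools import product

outcomes = ["Regime wins", "Insurgency wins", "Stalemate"]

# Driver ("linchpin") events and their possible states.
drivers = {
    "Economic conditions": ["improve", "stay the same", "worsen"],
    "Poverty relief program": ["succeeds", "fails"],
}

# Subjective probabilities for each driver state (each set sums to 1).
driver_probs = {
    "Economic conditions": [0.3, 0.4, 0.3],
    "Poverty relief program": [0.5, 0.5],
}

print("Possible outcomes:", ", ".join(outcomes))

# Enumerate every branch of the tree with its prior probability.
names = list(drivers)
for states in product(*(drivers[n] for n in names)):
    p = 1.0
    for name, state in zip(names, states):
        p *= driver_probs[name][drivers[name].index(state)]
    branch = ", ".join(f"{n}: {s}" for n, s in zip(names, states))
    print(f"[p = {p:.2f}] {branch}")
    # At the next stage the analyst would assess P(outcome | branch)
    # for each of the three outcomes.
```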

The thought process that is invoked when generating the list of influencing events and their outcomes can be useful in several ways. It helps identify and document factors that are relevant to judging whether an alternative outcome is likely to occur.

The audit trail is particularly useful in showing colleagues what the analyst’s thinking has been, especially if he desires help in upgrading the diagram with things that may have been overlooked. Software packages for creating influence trees allow the inclusion of notes that create an audit trail.

In the process of generating the alternative lists, the analyst must address the issue of whether the event (or outcome) being listed actually will make a difference in his assessment of the relative likelihood of the outcomes of any of the events being listed.

For instance, in the economics example, if the analyst knew that it would make no difference to the success of the insurgency whether economic conditions improved or remained the same, then there would be no need to differentiate these as two separate outcomes. The analyst should instead simplify the diagram.

The second question, having to do with additional influences not yet shown on the diagram, allows the analyst to extend this pictorial representation of influences to whatever level of detail is considered necessary. Note, however, that the analyst should avoid adding unneeded layers of detail.

Probabilistic reasoning is used to evaluate outcome scenarios.

This influence tree approach to evaluating possible outcomes is more convincing to customers than would be an unsupported analytic judgment about the prospects for the insurgency. Human beings tend to do poorly at such complex assessments when they are approached in a totally unaided, subjective manner; that is, by the analyst mentally combining the force assessments in an unstructured way.

Influence Nets

Influence net modeling is an alternative to the influence tree.

To create an influence net, the analyst defines influence nodes, which depict events that are part of cause-effect relationships within the target model. The analyst also creates “influence links” between cause and effect that graphically illustrate the causal relation between the connected pair of events.

The influence can be either positive (supporting a given decision) or negative (decreasing the likelihood of the decision), as identified by the link “terminator.” The terminator is either an arrowhead (positive influence) or a filled circle (negative influence). The resulting graphical illustration is called the “influence net topology.”
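
The sketch below shows one way such a topology could be represented in code. The event names, the links, and the InfluenceNet class are invented for illustration; the sign on each link stands in for the arrowhead or filled-circle terminator described above.

```python
# A minimal sketch of an influence net topology; the events, links, and the
# InfluenceNet class are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class InfluenceNet:
    links: list = field(default_factory=list)  # (cause, effect, sign) triples

    def add_link(self, cause: str, effect: str, positive: bool) -> None:
        # Sign stands in for the terminator: +1 for an arrowhead (positive
        # influence), -1 for a filled circle (negative influence).
        self.links.append((cause, effect, +1 if positive else -1))

    def influences_on(self, effect: str):
        """Return the causes acting on an event and the sign of each link."""
        return [(cause, sign) for cause, eff, sign in self.links if eff == effect]

net = InfluenceNet()
net.add_link("Economic conditions worsen", "Insurgency gains support", positive=True)
net.add_link("Poverty relief program succeeds", "Insurgency gains support", positive=False)

for cause, sign in net.influences_on("Insurgency gains support"):
    terminator = "arrowhead (+)" if sign > 0 else "filled circle (-)"
    print(f"{cause} -> Insurgency gains support [{terminator}]")
```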

 

Making Probability Estimates

Probabilistic projection is used to predict the probability of future events for some time-dependent random process… A number of these probabilistic techniques are used in industry for projection.

Two techniques that we use in intelligence analysis are as follows:

  • Point and interval estimation. This method attempts to describe the probability of outcomes for a single event. An example would be a country’s economic growth rate, and the event of concern might be an economic depression (the point where the growth rate drops below a certain level).
  • Monte Carlo simulation. This method simulates all or part of a process by running a sequence of events repeatedly, with random combinations of values, until sufficient statistical material is accumulated to determine the probability distribution of the outcome.

Most of the predictive problems we deal with in intelligence use subjective probability estimates. We routinely use subjective estimates of probabilities in dealing with broad issues for which no objective estimate is feasible.
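
As a concrete illustration of the Monte Carlo technique listed above, the sketch below estimates the probability of the “economic depression” event by repeated random draws. The growth-rate distribution and its parameters are assumptions made only for the example.

```python
# A minimal sketch of Monte Carlo estimation for the point-estimation example:
# the probability that a growth rate falls below a "depression" threshold.
# The distribution and its parameters are assumptions made for illustration.
import random

def simulate_growth_rate() -> float:
    # Assumed model: growth is roughly normal around 2% with a 1.5% spread.
    return random.gauss(mu=2.0, sigma=1.5)

TRIALS = 100_000
DEPRESSION_THRESHOLD = 0.0  # growth below this level counts as a depression

hits = sum(1 for _ in range(TRIALS) if simulate_growth_rate() < DEPRESSION_THRESHOLD)
print(f"Estimated P(depression) = {hits / TRIALS:.3f}")
```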

Sensitivity Analysis

When a probability estimate is made, it is usually worthwhile to conduct a sensitivity analysis on the result. For example, the occurrence of false alarms in a security system can be evaluated as a probabilistic process.
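
A minimal sketch of such a sensitivity analysis follows, assuming a simple model of independent checks per night; the per-check false-alarm rates and the number of checks are illustrative assumptions, not figures from the text. The point is only to show how varying one input assumption moves the output estimate.

```python
# A minimal sketch of a sensitivity analysis on the false-alarm example:
# vary one input assumption and watch how the output estimate moves.
# The model (independent checks per night) and all values are assumptions.

def prob_at_least_one_false_alarm(p_single: float, checks: int) -> float:
    """Probability of one or more false alarms given independent checks."""
    return 1 - (1 - p_single) ** checks

CHECKS_PER_NIGHT = 1_000
for p_single in (0.0001, 0.0005, 0.001, 0.002):
    p = prob_at_least_one_false_alarm(p_single, CHECKS_PER_NIGHT)
    print(f"per-check rate {p_single:.4f} -> nightly false-alarm probability {p:.3f}")
```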

Forecasting

Projections often work out better than extrapolations over the medium term. But even the best-prepared projections often seem very conservative when compared to reality years later. New political, economic, social, technological, or military developments will create results that were not foreseen even by experts in a field.

Forecasting uses many of the same tools that projection relies on—force analysis and probabilistic reasoning, for example. But it presents a stressing intellectual challenge, because of the difficulty in identifying and assessing the effect of new forces.

The development of alternative futures is essential for effective strategic decision-making. Since there is no single predictable future, customers need to formulate strategy within the context of alternative future states of the target. To this end, it is necessary to develop a model that will make it possible to show systematically the interrelationships of the individually forecast trends and events.

A forecast is not a blueprint of the future, and it typically starts from extrapolations or projections. Forecasters then must expand their scope to admit and juggle many additional forces or factors. They must examine key technologies and developments that are far afield but that nevertheless affect the subject of the forecast.

The Nonlinear Approach to Forecasting

Obviously, a forecasting methodology requires analytic tools or principles. But for any forecasting methodology to be successful, analysts who have significant understanding of many PMESII (political, military, economic, social, infrastructure, and information) factors and the ability to think about issues in a nonlinear fashion are also required.

Futuristic thinking examines deeper forces and flows across many disciplines that have their own order and pattern. In predictive analysis, we may seem to wander about, making only halting progress toward the solution. This nonlinear process is not a flaw; rather it is the mark of a natural learning process when dealing with complex and nonlinear matters.

The sort of person who can do such multidisciplinary analysis of what is likely to happen in the future has a broad understanding of the principles that cause a physical phenomenon, a chemical reaction, or a social reaction to occur. People who are multidisciplinary in their knowledge and thinking can pull together concepts from several fields and assess political, economic, and social, as well as technical, factors. Such breadth of understanding recognizes the similarity of principles and the underlying forces that make them work. It might also be called “applied common sense,” but unfortunately it is not very common. Analysts instead tend to specialize, because in-depth expertise is highly valued by both intelligence management and the intelligence customer.

The failure to do multidisciplinary analysis is often tied closely to mindset.

Techniques and Analytic Tools of Forecasting

Forecasting is based on a number of assumptions, among them the following:

  • The future cannot be predicted, but by taking explicit account of uncertainty, one can make probabilistic forecasts.
  • Forecasts must take into account possible future developments in such areas as organizational changes, demography, lifestyles, technology, economics, and regulation.

For policymakers and executives, the aim of defining alternative futures is to try to determine how to create a better future than the one that would materialize if we merely keep doing what we’re currently doing. Intelligence analysis contributes to this definition of alternative futures, with emphasis on the likely actions of others—allies, neutrals, and opponents.

Forecasting starts through examination of the changing political, military, economic, and social environments.

We first select issues or concerns that require attention. These issues and concerns have component forces that can be identified using a variant of the strategies-to-task methodology.

If the forecast is done well, these scenarios stimulate the customer of intelligence—the executive—to make decisions that are appropriate for each scenario. The purpose is to help the customer make a set of decisions that will work in as many scenarios as possible.

Evaluating Forecasts

Forecasts are judged on the following criteria:

  • Clarity. Can the customer understand the forecast and the forces involved? Is it clear enough to be useful?
  • Credibility. Do the results make sense to the customer? Do they appear valid on the basis of common sense?
  • Plausibility. Are the results consistent with what the customer knows about the world outside the scenario and how this world really works or is likely to work in the future?
  • Relevance. To what extent will the forecasts affect the successful achievement of the customer’s mission?
  • Urgency. To what extent do the forecasts indicate that, if action is required, time is of the essence in developing and implementing the necessary changes?
  • Comparative advantage. To what extent do the results provide a basis for customer decision-making, compared with other sources available to the customer?
  • Technical quality. Was the process that produced the forecasts technically sound? Are the alternative forecasts internally consistent?

 

A “good” forecast is one that meets all or most of these criteria. A “bad” forecast is one that does not. The analyst has to make clear to customers that forecasts are transitory and need constant adjustment to be helpful in guiding thought and action.

Customers typically have a number of complaints about forecasts. Common complaints are that the forecast is obvious; it states nothing new; it is too optimistic, pessimistic, or naïve; or it is not credible because it overlooks obvious trends, events, causes, or consequences. Such objections are actually desirable; they help to improve the product. There are a number of appropriate responses to these objections: If something important is missing, add it. If something unimportant is included, get rid of it. If the forecast seems either obvious or counterintuitive, probe the underlying logic and revise the forecast as necessary.

Summary

Intelligence analysis, to be useful, must be predictive. Some events or future states of a target are predictable because they are driven by convergent phenomena. Some are not predictable because they are driven by divergent phenomena.

The analysis product—a demonstration scenario—describes how such a development might plausibly start and identifies its consequences. This provides indicators that can be monitored to warn that the improbable event is actually happening.

For analysts predicting systems developments as many as five years into the future, extrapolations work reasonably well; for those looking five to fifteen years into the future, projections usually fare better.

13 Estimative Forces

Estimating is what you do when you don’t know.

The factors or forces that have to be considered in estimation—primarily PMESII factors—vary from one intelligence problem to another. I do not attempt to catalog them in this book; there are too many. But an important aspect of critical thinking, discussed earlier, is thinking about the underlying forces that shape the future. This chapter deals with some of those forces.

The CIA’s tradecraft manual describes an analytic methodology that is appropriate for identifying and assessing forces. Called “outside in” thinking, it has the objective of identifying the critical external factors that could influence how a given situation will develop. According to the tradecraft manual, analysts should develop a generic description of the problem or the phenomenon under study. Then, analysts should:

  • List all the key forces (social, technological, economic, environmental, and political) that could have an impact on the topic, but over which one can exert little influence (e.g., globalization, social stress, the Internet, or the global economy).
  • Focus next on key factors over which an actor or policymaker can exert some influence. In the business world this might be the market size, customers, the competition, suppliers or partners; in the government domain it might include the policy actions or the behavior of allies or adversaries.
  • Assess how each of these forces could affect the analytic problem.
  • Determine whether these forces actually do have an impact on the particular issue based on the available evidence.

 

Political and military factors are often the focus of attention in assessing the likely outcome of conflicts. But the other factors can turn out to be dominant. In the developing conflict between the United States and Japan in 1941, Japan had a military edge in the Pacific. But the United States had a substantial edge in these factors:

  • Political. The United States could call on a substantial set of allies. Japan had Germany and Italy.
  • Economy. Japan lacked the natural resources that the United States and its allies controlled.
  • Social. The United States had almost twice the population of Japan. Japan initially had an edge in the solidarity of its population in support of the government, but that edge was matched within the United States after Pearl Harbor.
  • Infrastructure. The U.S. manufacturing capability far exceeded that of Japan and would be decisive in a prolonged conflict (as many Japanese military leaders foresaw).
  • Information. The prewar information edge favored Japan, which had more control of its news media, while a segment of the U.S. media strongly opposed involvement in war. That edge also evaporated after December 7, 1941.

Inertia

One force that has broad implications is inertia, the tendency to stay on course and resist change.

It has been observed that: “Historical inertia is easily underrated . . . the historical forces molding the outlook of Americans, Russians, and Chinese for centuries before the words capitalism and communism were invented are easy still to overlook.”

Opposition to change is a common reason for organizations’ coming to rest. Opposition to technology in general, for example, is an inertial matter; it results from a desire of both workers and managers to preserve society as it is, including its institutions and traditions.

A common manifestation of the law of inertia is the “not-invented-here,” or NIH, factor, in which the organization opposes pressures for change from the outside.

But all societies resist change to a certain extent. The societies that succeed seem able to adapt while preserving that part of their heritage that is useful or relevant.

From an analyst’s point of view, inertia is an important force in prediction. Established factories will continue to produce what they know how to produce. In the automobile industry, it is no great challenge to predict that next year’s autos will look much like this year’s. A naval power will continue to build ships for some time even if a large navy ceases to be useful.

Countervailing Forces

All forces are likely to have countervailing or resistive forces that must be considered.

The principle is summarized well by another of Newton’s laws of physics: For every action there is an equal and opposite reaction.

Applications of this principle are found in all organizations and groups, commercial, national, and civilizational. As Samuel P. Huntington noted, “[W]e know who we are . . . often only when we know who we are against.”

A predictive analysis will always be incomplete unless it identifies and assesses opposing forces. All forces eventually meet counterforces. An effort to expand free trade inevitably arouses protectionist reactions. One country’s expansion of its military strength always causes its neighbors to react in some fashion.

 

Counterforces need not be of the same nature as the force they are countering. A prudent organization is not likely to play to its opponent’s strengths. Today’s threats to U.S. national security are asymmetric; that is, there is little threat of a conventional force-on-force engagement by an opposing military, but there is a threat of an unconventional yet lethal attack by a loosely organized terrorist group, as the events of September 11, 2001, and more recently the Boston Marathon bombing, demonstrated. Asymmetric counterforces are common in industry as well. Industrial organizations try to achieve cost asymmetry by using defensive tactics that have a large favorable cost differential between their organization and that of an opponent.

Contamination

Contamination is the degradation of any of the six factors—political, military, economic, social, infrastructure, or information (PMESII factors)—through an infection-like process. Corruption is a form of political and social contamination. Funds laundering and counterfeiting are forms of economic contamination. The result of propaganda is information contamination.

Contamination phenomena can be found throughout organizations as well as in the scientific and technical disciplines. Once such an infection starts, it is almost impossible to eradicate.

Contamination phenomena have analogies in the social sciences, organization theory, and folklore.

At some point in organizations, contamination can become so thorough that only drastic measures will help—such as shutting down the glycerin plant or rebuilding the microwave tube plant. Predictive intelligence has to consider the extent of such social contamination in organizations, because contamination is a strong restraining force on an organization’s ability to deal with change.

The effects of social contamination are hard to measure, but they are often highly visible.

The contamination phenomenon has an interesting analogy in the use of euphemism in language. It is well known that if a word has or develops negative associations, it will be replaced by a succession of euphemisms. Such words have a half-life, or decay rate, that is shorter as the word association becomes more negative. In older English, the word stink meant “to smell.” The problem is that most of the strong impressions we get from scents are unpleasant ones; so each word for olfactory senses becomes contaminated over time and must be replaced. Smell has a generally unpleasant connotation now.

The renaming of a program or project is a good signal that the program or project is in trouble—especially in Washington, D.C., but the same rule holds in any culture.

Synergy

predictive intelligence analysis almost always requires multidisciplinary understanding. Therefore, it is essential that the analysis organization’s professional development program cultivate a professional staff that can understand a broad range of concepts and function in a multidisciplinary environment. One of the most basic concepts is that of synergy: The whole can be more than the sum of its parts due to interactions among the parts. Synergy is therefore, in some respects, the opposite of the countervailing forces discussed earlier.

Synergy is not really a force or factor as much as a way of thinking about how forces or factors interact. Synergy can result from cooperative efforts and alliances among organizations (synergy on a large scale).

Netwar is an application of synergy.

In electronic warfare, it is now well known that a weapons system may be unaffected by a single countermeasure; however, it may be degraded by a combination of countermeasures, each of which individually fails to defeat it. The same principle applies in a wide range of systems and technology developments: The combination may be much greater than the sum of the components taken individually.

Synergy is the foundation of the “swarm” approach that military forces have applied for centuries—the coordinated application of overwhelming force.

In planning a business strategy against a competitive threat, a company will often put in place several actions that, each taken alone, would not succeed. But the combination can be very effective. As a simple example, a company might use several tactics to cut sales of a competitor’s new product: start rumors of its own improved product release, circulate reports on the defects or expected obsolescence of the competitor’s product, raise buyers’ costs of switching from its own to the competitor’s product, and tie up suppliers by using exclusive contracts. Each action, taken separately, might have little impact, but the synergy—the “swarm” effect of the actions taken in combination—might shatter the competitor’s market.

Feedback

In examining any complex system, it is important for the analyst to evaluate the system’s feedback mechanism. Feedback is the mechanism whereby the system adapts—that is, learns and changes itself. The following discussion provides more detail about how feedback works to change a system.

Many of the techniques for prediction depend on the assumption that the process being analyzed can be described, using systems theory, as a closed-loop system. Under the mathematical theory of such systems, feedback is a controlling force in which the output is compared with the objective or standard, and the input process is corrected as necessary to bring the output toward a desired state.

The feedback function therefore determines the behavior of the total system over time. Only one feedback loop is shown in the figure, but many feedback loops can exist, and usually do in a complex system.
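
As a concrete illustration of the closed-loop idea, the sketch below corrects a notional output toward a desired state at each step. The gain, setpoint, and step count are illustrative assumptions, not values from the text; a real system would have many such loops interacting.

```python
# A minimal sketch of a closed-loop feedback system: the output is compared
# with the objective and the input is corrected each step to move the output
# toward the desired state. Gain, setpoint, and step count are illustrative.

def run_feedback_loop(setpoint: float, gain: float = 0.5, steps: int = 10) -> None:
    output = 0.0
    for step in range(1, steps + 1):
        error = setpoint - output      # compare output with the objective
        correction = gain * error      # feedback: correct the input process
        output += correction           # the system responds to the correction
        print(f"step {step:2d}: output = {output:.3f} (error {error:+.3f})")

run_feedback_loop(setpoint=1.0)
```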

Notes from Transnational Organized Crime, Terrorism, and Criminalized States in Latin America – An Emerging Tier-One National Security Priority

Douglas Farah is an American journalist, national security consultant, a Senior Fellow of Financial Investigations and Transparency at the International Assessment of Strategy Center and also an adjunct fellow at the Center for Strategic and International Studies.

Farah served as United Press International bureau chief in El Salvador from 1985 to 1987, and a freelance journalist for The Washington Post, Newsweek, and other publications until being hired as a staff correspondent for The Washington Post in 1992.

These are notes from his monograph, Transnational Organized Crime, Terrorism, and Criminalized States in Latin America – An Emerging Tier-One National Security Priority, published by the Strategic Studies Institute in August 2012.

NOTES

The emergence of new hybrid (state and nonstate) transnational criminal and terrorist franchises in Latin America poses a tier-one security threat for the United States. These organizations operate under broad state protection and undermine democratic governance, sovereignty, growth, trade, and stability.

Leaders of these organizations share a publicly articulated doctrine to employ asymmetric warfare against the United States and its allies that explicitly endorses the use of WMD as a legitimate tactic.

illicit forces in Latin America within criminalized states have begun using transnational organized crime as a means of pursuing their view of statecraft. That brings new elements to the “dangerous spaces” where nonstate actors intersect with regions characterized by weak sovereignty and alternative governance systems. This new dynamic fundamentally alters the structure underpinning global order.

Understanding and mitigating this threat requires a whole-of-government approach, including collection, analysis, law enforcement, policy, and programming. The traditional state/nonstate dichotomy no longer adequately illuminates these problems. Similarly, the historical divide between transnational organized crime and terrorism is becoming increasingly irrelevant.

TRANSNATIONAL ORGANIZED CRIME, TERRORISM, AND CRIMINALIZED STATES IN LATIN AMERICA: AN EMERGING TIER-ONE NATIONAL SECURITY PRIORITY

INTRODUCTION AND GENERAL FRAMEWORK

The Changing Nature of the Threat.

The purpose of this monograph is to identify and discuss the role played by transnational organized crime groups (TOCs) in Latin America, and the interplay of these groups with criminalizing state structures, “stateless” regions, extra-regional actors, and the multiple networks that exploit them. It particularly focuses on those areas that pose, or potentially pose, a threat to U.S. interests at home and abroad; it can also be used as a model for understanding similar threats in other parts of the world.

This emerging combination of threats comprises a hybrid of criminal-terrorist, and state and nonstate franchises, combining multiple nations acting in concert, and traditional TOCs and terrorist groups acting as proxies for the nation-states that sponsor them. These hybrid franchises should now be viewed as a tier-one security threat for the United States. Understanding and mitigating the threat requires a whole-of-government approach, including collection, analysis, law enforcement, policy, and programming. No longer is the state/nonstate dichotomy useful in illuminating these problems, just as the TOC/terrorism divide is increasingly disappearing.

These franchises operate in, and control, specific geographic territories which allow them to function in a relatively safe environment. These pipelines, or recombinant chains of networks, are highly adaptive and able to move a multiplicity of illicit products (cocaine, weapons, humans, and bulk cash) that ultimately cross U.S. borders undetected thousands of times each day. The actors along the pipeline form and dissolve alliances quickly, occupy both physical and cyber space, and use both highly developed and modern institutions, including the global financial system, as well as ancient smuggling routes and methods.

This totals some $6.2 trillion, fully 10 percent of the world’s GDP, placing it behind only the United States and the European Union (EU), but well ahead of China, in terms of global GDP ranking.1 Other estimates of global criminal proceeds range from a low of about 4 percent to a high of 15 percent of global GDP.

Latin American networks now extend not only to the United States and Canada, but outward to Sub-Saharan Africa, Europe, and Asia, where they have begun to form alliances with other networks. A clear understanding of how these relationships evolve, and the relative benefits derived from the relationships among and between state and nonstate actors, will greatly enhance the understanding of this new hybrid threat.

 

There is no universally accepted definition of “transnational organized crime.” Here it is defined as, at a minimum, serious crimes or offenses spanning at least one border, undertaken by self-perpetuating associations of individuals who cooperate transnationally, motivated primarily by the desire to obtain a financial or other material benefit and/or power and influence.3 This definition can encompass a number of vitally important phenomena not usually addressed by studies of TOC:

  • A spectrum or continuum of state participation in TOC, ranging from strong but “criminalized” states to weak and “captured” states, with various intermediate stages of state criminal behavior.
  • A nexus between TOCs on the one hand, and terrorist and insurgent groups on the other, with a shifting balance between terrorist and criminal activity on both sides of the divide.
  • Recombinant networks of criminal agents, potentially including not only multiple TOCs, but also terrorist groups as well as states and proxies.
  • Enduring geographical “pipelines” for moving various kinds of commodities and illicit profits in multiple directions, to and from a major destination.
  • We have also crafted this definition to be broadly inclusive: It can potentially encompass the virtual world of TOC, e.g., cybercrime;
  • It can be applied to other regions; the recombinant pipelines and networks model offers an analytical framework which can be applied to multiple regions and circumstances.

 

The term “criminalized state” as used in this monograph refers to states where the senior leadership is aware of and involved—either actively or through passive acquiescence—on behalf of the state in transnational criminal enterprises, where TOC is used as an instrument of statecraft, and where levers of state power are incorporated into the operational structure of one or more TOC groups.

New Actors in Latin American TOC-State Relations.

Significant TOC organizations, principally drug trafficking groups, have posed serious challenges for U.S. security since the rise of the Medellín cartel in the early 1980s, and the growth of the Mexican drug trafficking organizations in the 1990s. In addition, Latin America has a long history of revolutionary movements, from the earliest days of independence, to the Marxist movements that sprouted up across the region in the 1960s to 1980s. Within this context, these groups often served as elements of governance, primarily to advance or defeat the spread of Marxism in the region. These Marxist revolutions were victorious in Cuba and Nicaragua, which, in turn, became state sponsors of external revolutionary movements, themselves relying on significant economic and military support from the Soviet Union and its network of aligned states’ intelligence and security services.

With the end of the Cold War, the negotiated end to numerous armed conflicts (the Farabundo Martí National Liberation Front [FMLN] in El Salvador; the Contra rebels in Nicaragua; the Popular Liberation Army [EPL], M-19, and other small groups in Colombia), and the collapse of Marxism, most of the armed groups moved into the democratic process. However, this was not true for all groups, and armed nonstate groups are again being sponsored in Latin America under the banner of the “Bolivarian Revolution.”4

Other states that traditionally have had little interest or influence in Latin America have emerged over the past decade, primarily at the invitation of the self-described Bolivarian states seeking to establish 21st-century socialism. This bloc of nations—led by Hugo Chávez of Venezuela, also including Rafael Correa of Ecuador, Evo Morales of Bolivia, and Daniel Ortega of Nicaragua—seeks to break the traditional ties of the region to the United States. To this end, the Bolivarian alliance has formed numerous organizations and military alliances—including a military academy in Bolivia to erase the vestiges of U.S. military training—which explicitly exclude the United States.

Over the past decade, China’s trade with Latin America has jumped from $10 billion to $179 billion.11 With the increased presence has come a significantly enhanced Chinese intelligence capacity and access across Latin America. At the same time, Chinese Triads—modern remnants of ancient Chinese secret societies that evolved into criminal organizations—are now operating extensive money laundering services for drug trafficking organizations via Chinese banks.

China also has shown a distinct willingness to bail out financially strapped authoritarian governments if the price is right. For example, China lent Venezuela $20 billion, in the form of a joint venture with a company to pump crude oil that China then locked up for a decade at an average price of about $18 a barrel. The money came as Chávez was facing a financial crisis, rolling blackouts, and a severe liquidity shortage across the economy.12 Since then, China has extended several other significant loans to Venezuela, Ecuador, and Bolivia.

The dynamics of the relationship between China and the Bolivarian bloc and its nonstate proxies will be one of the key determinants of the future of Latin America and the survival of the Bolivarian project. Without significant material support from China, the economic model of the Bolivarian alliance will likely collapse under its own weight of statist inefficiency and massive corruption, despite being richly endowed with natural resources.

Chinese leaders likely understand that any real replacement of the Bolivarian structure leadership by truly democratic forces could result in a significant loss of access to the region, and a cancellation of existing contracts. This, in turn, gives China an incentive to continue to support some form of the Bolivarian project going forward, even if ailing leaders such as Chávez and Fidel Castro are no longer on the scene.

While there have been criminalized states in the past (the García Meza regime of “cocaine colonels” in Bolivia in 1980, and Desi Bouterse in Suriname in the 1980s, for example), what is new with the Bolivarian structure is the simultaneous and mutually supporting merger of state with TOC activities across multiple state and nonstate platforms. While García Meza, Bouterse, and others were generally treated as international pariahs with little outside support, the new criminalized states offer each other economic, diplomatic, political, and military support that shields them from international isolation and allows for mutually reinforcing structures to be built.

Rather than operating in isolation, these groups have complex but significant interaction with each other, based primarily on the ability of each actor or set of actors to provide a critical service while profiting mutually from the transactions.

While not directly addressing the threat from criminalized states, the Strategy notes that:

  • TOC penetration of states is deepening and leading to co-option in some states and weakening of governance in many others. TOC networks insinuate themselves into the political process through bribery and in some cases have become alternate providers of governance, security, and livelihoods to win popular support. The nexus in some states among TOC groups and elements of government—including intelligence services and personnel—and big business figures, threatens the rule of law.
  • TOC threatens U.S. economic interests and can cause significant damage to the world financial system by subverting legitimate markets. The World Bank estimates that about $1 trillion is spent each year to bribe public officials. TOC groups, through their state relationships, could gain influence over strategic markets.
  • Terrorists and insurgents increasingly are turning to crime and criminal networks for funding and logistics. In fiscal year (FY) 2010, 29 of the 63 top drug trafficking organizations identified by the Department of Justice had links to terrorist organizations. While many terrorist links to TOC are opportunistic, this nexus is dangerous, especially if it leads a TOC network to facilitate the transfer of WMD material to terrorists.17

Stewart Patrick and others correctly argue that, contrary to the predominant thinking that emerged immediately after September 11, 2001 (9/11) (i.e., failed states are a magnet for terrorist organizations), failed or nonfunctional states are actually less attractive to terrorist organizations and TOC groups than “weak but functional” states.18 But there is another category, perhaps the most attractive of all to TOC groups and the terrorist groups allied with them: strong and functional states that participate in TOC activities.

The Unrecognized Role of the Criminalized States.

While it is true that TOC penetration of the state threatens the rule of law, as the administration’s strategy notes, it also poses significant new threats to the homeland. Criminalized states frequently use TOCs as a form of statecraft, bringing new elements to the dangerous spaces where nonstate actors intersect with regions of weak sovereignty and alternative governance systems.19 This fundamentally alters the structure of global order.

As the state relationships consolidate, the recombinant criminal-terrorist pipelines become more rooted and thus more dangerous. Rather than being pursued by state law enforcement and intelligence services in an effort to impede their activities, TOC groups (and perhaps terrorist groups) are able to operate in a more stable, secure environment, something that most businesses, both licit and illicit, crave.

Rather than operating on the margins of the state or seeking to co-opt small pieces of the state machinery, the TOC groups in this construct operate in concert with the state on multiple levels. Within that stable environment, a host of new options open, from the sale of weapons, to the use of national aircraft and shipping registries, to easy use of banking structures, to the use of national airlines and shipping lines to move large quantities of unregistered goods, and the acquisition of diplomatic passports and other identification means.

Examples of the benefits of a criminal state can be seen across the globe. For example, the breakaway republic of Transnistria, in Moldova, known as “Europe’s Black Hole,” is a notorious weapons trafficking center from which dozens of surface-to-air missiles have disappeared; it is run by former Russian secret police (KGB) officials.

The FARC needs to move cocaine to U.S. and European markets in order to obtain the money necessary to maintain its army of some 9,000 troops. In order to do that, the FARC, with the help of traditional drug trafficking organizations, must move its product through Central America and Mexico to the United States—the same route used by those who want to move illegal aliens to the United States, and those who want to move bulk cash shipments, stolen cars, and weapons from the United States southward. All of these goods traverse the same territory, pass through the same gatekeepers, and are often interchangeable along the way. A kilo of cocaine can be traded for roughly one ton of AK-47 assault rifles before either of the goods reaches what would normally be its final destination.

Though the presence of a state government (as opposed to its absence) is ordinarily considered to be a positive situation, the presence of the state is beneficial or positive only if it meets the needs of its people. If the state, as it is in many parts of Latin America and many other parts of the world, is present but is viewed, with good reason, as corrupt, incompetent, and/or predatory, then its presence is not beneficial in terms of creating state strength or state capacity. In fact, where the state is strongest but least accountable for abuses, people often prefer nonstate actors to exercise authority.25

This has led to an underlying conceptual problem in much of the current literature describing regions or territories as “governed” or “ungoverned,” a framework that presents a false dichotomy suggesting that the lack of state presence means a lack of a governing authority. “Ungoverned spaces” connotes a lawless region with no controlling authority. In reality, the stateless regions in question almost always fall under the control of nonstate actors who have sufficient force or popular support (or a mixture of both), to impose their decisions and norms, thus creating alternate power structures that directly challenge the state, or that take the role of the state in its absence.

The notion of ungoverned spaces can be more broadly applied to legal, functional, virtual, and social arenas that either are not regulated by states or are contested by non-state actors and spoilers.26

THE NATURE OF THE THREAT IN THE AMERICAS

Old Paradigms Are Not Enough.

Control of broad swaths of land by these nonstate groups in Latin America not only facilitates the movement of illegal products, both northward and southward, through transcontinental pipelines, but also undermines the stability of an entire region of great strategic interest to the United States.

The traditional threat is broadly understood to be posed by the illicit movement of goods (drugs, money, weapons, and stolen cars), people (human traffic, gang members, and drug cartel enforcers), and the billions of dollars these illicit activities generate in an area where states have few resources and little legal or law enforcement capacity.

As Moisés Naim wrote:

Ultimately, it is the fabric of society which is at stake. Global illicit trade is sinking entire industries while boosting others, ravaging countries and sparking booms, making and breaking political careers, destabilizing some governments and propping up others.27

The threat increases dramatically with the nesting of criminal/terrorist groups within governments that are closely aligned ideologically, such as Iran and the Bolivarian states in Latin America, and that are identified sponsors of designated terrorist groups, including those that actively participate in the cocaine trafficking trade.

While Robert Killebrew28 and Max Manwaring29 make compelling cases that specific parts of this dangerous cocktail could be defined as insurgencies (narco-insurgency in Mexico and gangs in Central America, respectively), the new combination of TOC, criminalized states, and terrorist organizations presents a new reality that breaks the traditional paradigms.

While Mexico is not the focus of this monograph, the regional convulsions from Mexico through Central America are not viewed as a narco-insurgency. Instead, this hybrid mixture of groups with a variety of motives, including those engaged in TOC, insurgencies, and criminalized states with a declared hatred for the United States, is something new and in many ways more dangerous than a traditional insurgency.

The New Geopolitical Alignment.

The visible TOC threats are only a part of the geostrategic threats to the United States emerging from Latin America’s current geopolitical alignment. The criminalized states are already extending their grip on power through strengthened alliances with hostile outside state and quasi-state actors such as Iran and Hezbollah. The primary unifying theme among these groups is a deep hatred for the United States.

they have carried out a similar pattern of rewriting the constitution to concentrate powers in the executive and to allow for unlimited reelection; a systematic takeover of the judiciary by the executive and the subsequent criminalizing of the opposition through vaguely worded laws and constitutional amendments that make it illegal to oppose the revolution; systematic attacks on independent news media, and the use of criminal libel prosecutions to silence media critics; and, overall, the increasing criminalization of the state. These measures are officially justified as necessary to ensure the revolution can be carried out without U.S. “lackeys” sabotaging it.

The Model: Recombinant Networks and Geographical Pipelines.

To understand the full significance of the new geopolitical reality in Latin America, it is necessary to think in terms of the geopolitics of TOC. Because of the clandestine nature of the criminal and terrorist activities, designed to be as opaque as possible, one must start from the assumption that, whatever is known of specific operations along the criminal-terrorist pipeline, or whatever combinations of links are seen, represents merely a snapshot in time, not a video of continuing events. Moreover, it is often out of date by the time it is assessed.

Nonstate armed actors as treated in this mono- graph are defined as:

  • Terrorist groups, motivated by religion, politics, ethnic forces, or at times, even by financial considerations;
  • Transnational criminal organizations, both structured and disaggregated, including third generation gangs as defined by Manwaring;35
  • Militias that control “black hole” or “stateless” sectors of one or more national territories;
  • Insurgencies, which have more well-defined and specific political aims within a particular national territory, but may operate from outside of that national territory.

“In some cases, the terrorists simply imitate the criminal behavior they see around them, borrowing techniques such as credit card fraud and extortion in a phenomenon we refer to as activity appropriation. This is a shared approach rather than true interaction, but it often leads to more intimate connections within a short time.” This can evolve into a more symbiotic relationship, which in turn can (but many do not) turn into hybrid groups.38

While the groups that overlap in different networks are not necessarily allies, and in fact occasionally are enemies, they often can and do make alliances of convenience that are short-lived and shifting. Even violent drug cartels, which regularly engage in bloody turf battles, also frequently engage in truces and alliances, although most end when they are no longer mutually beneficial or the balance of power shifts among them.

Another indication of the scope of the emerging alliances is the dramatic rise of Latin American drug trafficking organizations operating in West Africa, for onward shipment to Western Europe. Among the drug trafficking organizations found to be working on the ground in West Africa are the FARC, Mexican drug cartels, Colombian organizations, and Italian organized crime. It is worth bearing in mind that almost every major load of cocaine seized in West Africa in recent years has been traced to Venezuela as the point of origin.

This overlapping web of networks was described in a July 2010 federal indictment from the Southern District of New York, which showed that drug trafficking organizations in Colombia and Venezuela, including the FARC, had agreed to move several multi-ton loads of cocaine through Liberia en route to Europe.

The head of Liberian security forces, who is also the son of the president, negotiated the transshipment deals with a Colombian, a Russian, and three West Africans.

On December 8, 2011, a television documentary aired footage of the Iranian ambassador in Mexico urging a group of Mexican university students who were hackers to launch broad cyber attacks against U.S. defense and intelligence facilities, claiming such an attack would be “bigger than 9/11.”

Geographical “Pipelines.”

The central feature binding together these disparate organizations and networks which, in aggregate, make up the bulk of nonstate armed actors, is the informal (meaning outside legitimate state control and competence) “pipeline” or series of overlapping pipelines that these operations need to move products, money, weapons, personnel, and goods. The pipelines often form well-worn, customary, geographical routes and conduits developed during past conflicts, or traditionally used to smuggle goods without paying taxes to the state. Their exploitation by various communities, organizations, and networks yields recognizable patterns of activity.

The geography of the pipelines may be seen as both physical (i.e., terrain and topography), and human (i.e., historical and sociological patterns of local criminal activity).

These regions may develop their own cultures that accept what the state considers to be illicit activities as normal and desirable. This is especially true in areas where the state has been considered an enemy for generations.

The criminal pipeline itself is often a resource in dispute, and one of the primary sources of violence. Control of the pipeline can dramatically alter the relative power among different trafficking groups, as has been seen in the ongoing war between the Juarez and Sinaloa cartels in Mexico.47 Because of the lucrative nature of control of the actual physical space of the pipeline, these types of conflicts are increasingly carried out in gruesome fashion in Guatemala, Honduras, and El Salvador.

These states are not collapsing. They risk becoming shell-states: sovereign in name, but hollowed out from the inside by criminals in collusion with corrupt officials in the government and the security services. This not only jeopardizes their survival, it poses a serious threat to regional security because of the transnational nature of the crimes.

CRIMINALIZING STATES AS NEW REGIONAL ACTORS

While nonstate actors make up the bulk of criminal agents engaged in illicit activities, state actors play an increasingly important yet under-reported role. That role pertains in part to the availability of pipeline territory, and in part to the sponsorship and even direction of criminal activity. TOC groups can certainly exploit the geographical vulnerabilities of weak or failing states, but they also thrive on the services provided by stronger states.

There are traditional categories for describing state performance as developed by Robert Rotberg and others in the wake of state failures at the end of the Cold War. The premise is that “nation-states fail because they are convulsed by internal violence and can no longer deliver positive political goods to their inhabitants.”52 These categories are:

  • Strong, i.e., able to control its territory and offer quality political goods to its people;
  • Weak, i.e., filled with social tensions, the state has only a limited monopoly on the use of force;
  • Failed, i.e., in a state of conflict with a predatory ruler, with no state monopoly on the use of force;
  • Collapsed, i.e., no functioning state institutions and a vacuum of authority.

This conceptualization, while useful, is extremely limited, as is the underlying premise. It fails to make a critical distinction between countries where the state has little or no power in certain areas and may be fighting to assert that control, and countries where the government, in fact, has a virtual monopoly on power and the use of force, but turns the state into a functioning criminal enterprise for the benefit of a small elite.

The 4-tier categorization also suffers from a significant omission regarding geographical areas of operation, as opposed to criminal actors. The model presupposes that stateless regions are largely confined within the borders of a single state.

 

State absence can be the product of a successful bid for local dominance by TOC groups, but it can also result from a perception on the part of the local population that the state poses a threat to their communities, livelihoods, or interests.

 

 

A 2001 Naval War College report insightfully described some of the reasons in terms of “commercial” and “political” insurgencies. These are applicable to organized criminal groups as well and have grown in importance since then:

The border zones offer obvious advantages for political and economic insurgencies. Political insurgents prefer to set up in adjacent territories that are poorly integrated, while the commercial insurgents favor active border areas, preferring to blend in amid business and government activity and corruption. The border offers a safe place to the political insurgent and easier access to communications, weapons, provisions, transport, and banks.

For the commercial insurgency, the frontier creates a fluid, trade-friendly environment. Border controls are perfunctory in ‘free trade’ areas, and there is a great demand for goods that are linked to smuggling, document fraud, illegal immigration, and money laundering.

For the political insurgency, terrain and topography often favor the narco-guerilla. Jungles permit him to hide massive bases and training camps, and also laboratories, plantations, and clandestine runways. The Amazon region, huge and impenetrable, is a clear example of the shelter that the jungle areas give. On all of Colombia’s borders—with Panama, Ecuador, Brazil, and Venezuela—jungles cloak illegal activity.

The Weak State-Criminal State Continuum.

One may array the degree of state control of, or participation in, criminal activity along a spectrum (see Figure 2). At one end are strong but criminal states, with the state acting as a TOC element or an important component of a TOC group.

In Latin America, the government of Suriname (formerly Dutch Guiana) in the 1980s and early 1990s under Desi Bouterse, a convicted drug trafficker with strong ties to the FARC, was (and perhaps still is) an operational player in an ongoing criminal enterprise and benefited from it.

Bouterse’s only public defender in the region is Hugo Chávez of Venezuela.

Again, the elements of TOC as statecraft can be seen. Chávez reportedly funded Bouterse’s improbable electoral comeback in Suriname, funneling money to his campaign and hosting him in Venezuela on several visits.60 While no other heads of state accepted Bouterse’s invitation to attend his inauguration, Chávez did, although he had to cancel at the last minute. In recompense, he promised to host Bouterse on a state visit to Venezuela.

One of the key differences between the Bolivarian alliance and earlier criminalized states in the region is the mutually reinforcing structure of the alliance. While other criminalized states have been widely viewed as international pariahs and broadly shunned, thus hastening their demise, the new Bolivarian structures unite several states in a joint, if loosely-knit, criminal enterprise. This ensures these mutually supporting regimes can endure for much longer.

At the other end are weak and captured states, where certain nodes of governmental authority, whether local or central, have been seized by TOCs, who in turn are the primary beneficiaries of the proceeds from the criminal activity. Penetration of the state usually centers on one or more of three functions: the judiciary (to ensure impunity), border control and customs (to ensure the safe passage of persons and goods), and the legislature (to codify the structures necessary to TOC organizations, such as a ban on extradition, weak asset forfeiture laws, etc.). Such penetration also tends to be local rather than national in focus.

Typically, TOC elements aim at dislodging the state from local territory, rather than assuming the role of the state in overall political authority across the country. As Shelley noted, “Older crime groups, often in long-established states, have developed along with their states and are dependent on existing institutional and financial structures to move their products and invest their profits.”

By definition, insurgents aim to wrest political control from the state and transfer it to their own leadership.

“Captured states” are taken hostage by criminal organizations, often through intimidation and threats, giving the criminal enterprise access to some parts of the state apparatus. Guatemala would be an example: the government lacks control of roughly 60 percent of the national territory, with the cartels enjoying local power and free access to the border; but the central government itself is not under siege.

In the middle range between the extremes, more criminalized cases include participation in criminal activity by state leaders, some acting out of personal interest, others in the interest of financing the services or the ideology of the state. A variant of this category occurs when a functioning state essentially turns over, or “franchises out” part of its territory to non- state groups to carry out their own agenda with the blessing and protection of the central government or a regional power. Both state and nonstate actors share in the profits and proceeds from criminal activity thus generated. Venezuela under Hugo Chávez is perhaps the clearest example of this model in the region, given his relationship with the FARC.

Hugo Chávez and the FARC: The Franchising Model.

Chávez’s most active support for the FARC came after the FARC had already become primarily a drug trafficking organization rather than a political insurgency. The FARC has also traditionally earned considerable income (and wide international condemnation) from the kidnapping for ransom of hundreds of individuals, in violation of the Geneva Convention and other international conventions governing armed conflicts. It was impossible, by the early part of the 21st century, to separate support for the FARC from support for TOC, as these two activities were the insurgent group’s primary sources of income.

Chávez had cultivated a relationship with the FARC long before becoming president. As one recent study of internal FARC documents noted:

When Chávez became president of Venezuela in February 1999, FARC had not only enjoyed a relationship with him for at least some of the previous seven years but had also penetrated and learned how to best use Venezuelan territory and politics, manipulating and building alliances with new and traditional Venezuelan political sectors, traversing the Colombia-Venezuela border in areas ranging from coastal desert to Amazonian jungle and building cooperative relationships with the Venezuelan armed forces. Once Chávez was inaugurated, Venezuelan border security and foreign policies shifted in the FARC’s favor.67

Perhaps the strongest public evidence of the importance of Venezuela to the FARC is the public fingering of three of Chávez’s closest advisers and senior government officials by the U.S. Treasury Department’s Office of Foreign Assets Control (OFAC).

OFAC said the three—Hugo Armando Carvajál, director of Venezuelan Military Intelligence; Henry de Jesus Rangél, director of the Venezuelan Directorate of Intelligence and Prevention Services; and Ramón Emilio Rodriguez Chacín, former minister of justice and former minister of interior—were responsible for “materially supporting the FARC, a narco-terrorist organization.” It specifically accused Carvajál and Rangél of protecting FARC cocaine shipments moving through Venezuela, and said Rodriguez Chacín, who resigned his government position just a few days before the designations, was the “Venezuelan government’s main weapons contact for the FARC.”

According to the U.S. indictment against him, Makled exported at least 10 tons of cocaine a month to the United States by keeping more than 40 Venezuelan generals and senior government officials on his payroll. “All my business associates are generals. The highest,” Makled said. “I am telling you, we dispatched 300,000 kilos of coke. I couldn’t have done it without the top of the government.”75 What added credibility to Makled’s claims were the documents he presented showing what appear to be the signatures of several generals and senior Ministry of Interior officials accepting payment from Makled. “I have enough evidence to justify the invasion of Venezuela” as a criminal state, he said.76

The FARC and Bolivia, Ecuador, and Nicaragua.

Since the electoral victories of Correa in Ecuador and Morales in Bolivia, and the re-election of Daniel Ortega in Nicaragua, their governments have actively supported FARC rebels in their war of more than 4 decades against the Colombian state, as well as significant drug trafficking activities.77 While Ecuador and Venezuela have allowed their territory to be used for years as rear guard and transshipment stations for the FARC and other drug trafficking organizations, Bolivia has become a recruitment hub and safe haven; and Nicaragua, a key safe haven and weapons procurement center. In addition, several senior members of both the Correa and Morales administrations have been directly implicated in drug trafficking incidents, showing the complicity of the state in the criminal enterprises.

In Bolivia, the Morales government, which has maintained cordial ties with the FARC at senior levels,78 has, as noted, faced an escalating series of drug trafficking scandals at the highest levels.79 It is worth noting that Alvaro García Linera, the nation’s vice president and a major power center in the Morales administration, was a member of the armed Tupac Katari Revolutionary Movement (Movimiento Revolucionario Tupak Katari [MRTK]), an ally of the FARC, and served several years in prison.

An analysis of the Reyes computer documents concluded that the FARC donated several hundred thousand dollars to Correa’s campaign,84 a conclusion drawn by other national and international investigations.85 The Reyes documents show senior Ecuadoran officials meeting with FARC commanders and offering to remove certain commanders in the border region so the FARC would not be under so much pressure on the Ecuadoran side.

A closer friend, at least for a time, was Hugo Moldiz, who helped found the MAS, has been one of the movement’s intellectual guides, and was seriously considered for senior cabinet positions. Instead, he was given the job of leader of the government-backed confederation of unions and social groups called the “People’s High Command” (Estado Mayor del Pueblo [EMP]),93 and he maintains a fairly high profile as a journalist and writer for several Marxist publications.

The EMP was one of the principal vehicles of the MAS and its supporters in forcing the 2003 resignation of the government of Gonzalo Sánchez de Lozada, and Morales, as president, named it the organization responsible for giving social movements a voice in the government.

Moldiz told the group that “our purpose is to defend the government, defend the political process of change, which we have conquered with blood, strikes, marches, sacrifice, and pain. Our main enemy is called United States imperialism and the Bolivian oligarchy.”95

The Regional Infrastructure.

Brazil and Peru, while not actively supporting the FARC, have serious drug trafficking issues to contend with on their own and exercise little real control over their border regions. Despite this geographic and geopolitical reality, Colombia has undertaken a costly and somewhat successful effort to reestablish state control in many long-abandoned regions of its own national territory. Yet the Colombian experience offers an object lesson in the limits of what can be done even if the political will exists and if significant national treasure is invested in reestablishing a positive state presence. Once nonstate actors have established uncontested authority over significant parts of the national territory, the cost of recouping control and establishing a functional state presence is enormous.

It becomes even more costly when criminal/terrorist groups such as the FARC become instruments of regional statecraft. The FARC has been using its ideological affinity with Correa, Morales, Chávez, and Nicaragua’s Ortega to press for a change in status to “belligerent group” in lieu of terrorist entity or simple insurgency. “Belligerent” status is a less pejorative term and brings certain international protections.

the FARC and its political arm, the Continental Bolivarian Movement (Movimiento Continental Bolivariano [MCB], discussed below), have become a vehicle for a broader-based alliance of nonstate armed groups seeking to end the traditional model of democratic representative government and replace it with an ideology centered on Marxism, anti-globalization, and opposition to the United States.

…not all states are criminal, not all TOCs are engaged in terrorism or collude with terrorist groups, and not all terrorist groups conduct criminal activities. The overlap between all three groups constitutes a small but highly dangerous subset of cases, and applies most particularly to the Bolivarian states.

The TOC-Terrorist State Alliance.

At the center of the nexus of the Bolivarian movement with TOC, terrorism, and armed revolution is the FARC, and its political wing, the Continental Bolivarian Coordinator (Coordinadora Continental Bolivariana [CCB]), a continental political movement founded in 2003, funded and directed by the FARC. In 2009, the CCB officially changed its name to the MCB to reflect its growth across Latin America. For purposes of consistency, we refer to the organization as the CCB throughout this monograph.

In a November 24, 2004, letter from Raúl Reyes, the FARC’s second-in-command, to another member of the FARC General Secretariat, he laid out the FARC’s role in the CCB, as well as the Chávez government’s role, in the following unambiguous terms:

The CCB has the following structure: an executive, some chapters by region . . . and a “foreign legion.” Headquarters: Caracas. It has a newspaper called “Correo Bolivariano,” [Bolivarian Mail] and Internet site and an FM radio station heard throughout Caracas. . . . This is an example of coordinated struggle for the creation of the Bolivarian project. We do not exclude any forms of struggle. It was founded in Fuerte Tiuna in Caracas. [Author’s Note: Fuerte Tiuna is the main government military and intelligence center in Venezuela, and this is a clear indication that the Venezuelan government fully supported the founding of the organization.] The political ammunition and the leadership is provided by the FARC. 97

According to an internal FARC report dated March 11, 2005, on the CCB’s activities in 2004, there were already active groups in Mexico, Dominican Republic, Ecuador, Venezuela, and Chile. International brigades from the Basque region of Spain, Italy, France, and Denmark were operational. Work was underway in Argentina, Guatemala, and Brazil. The number of organizations that were being actively coordinated by the CCB was listed at 63, and there were “political relations” with 45 groups and 25 institutions. The CCB database contained 500 e-mails.

Numerous other documents show that different Bolivarian governments directly supported the CCB, whose president is always the FARC leader.

The government of Rafael Correa in Ecuador officially hosted the second congress of the organization in Quito in late February 2008. The meeting was attended by members of Peru’s Tupac Amaru Revolutionary Movement (Movimiento Revolucionario Tupac Amaru [MRTA]); the Mapuches and MIR of Chile; Spain’s ETA; and other terrorist and insurgent groups.

The 2009 meeting at which the CCB became the MCB was held in Caracas, and the keynote address was given by Alfonso Cano, the current FARC leader. Past FARC leaders are honorary presidents of the organization.101 This places the FARC—a well-identified drug trafficking organization with significant ties to the major Mexican drug cartels102 and a designated terrorist entity with a broad-based alliance that spans the globe—directly in the center of a state-sponsored project to fundamentally reshape Latin America and its political structure and culture.

The importance of the increase in cocaine transiting Venezuela was documented by the U.S. Government Accountability Office, which estimated that transit rose fourfold from 2004 to 2007, from 60 metric tons to 240 metric tons.

Finally, the CCB, as a revolutionary meeting house for “anti-imperialist” forces around the world, provides the political and ideological underpinning and justification for the growing alliance among the Bolivarian states, again led by Chávez, and Iran, led by Mahmoud Ahmadinejad.

Hezbollah’s influence extends to the nature of the war and diplomacy pursued by Chávez and his Bolivarian comrades. The franchising model strongly resembles the template pioneered by Hezbollah.

THE BOLIVARIAN AND IRANIAN REVOLUTIONS: THE TIES THAT BIND

The most common assumption among those who view the Iran-Bolivarian alliance as troublesome (many do not view it as a significant threat at all) is that there are two points of convergence between the radical and reactionary theocratic Iranian government and the self-proclaimed socialist and progressive Bolivarian revolution.

These assumed points of convergence are: 1) an overt and often stated hatred for the United States and a shared belief in how to destroy a common enemy; and 2) a shared acceptance of authoritarian state structures that tolerate little dissent and encroach on all aspects of a citizen’s life.

While Iran’s revolutionary rulers view the 1979 revolution in theological terms as a miracle of divine intervention in which the United States, the Great Satan, was defeated, the Bolivarians view it from a secular point of view as a roadmap to defeat the United States as the Evil Empire. To both, it has strong political connotations and serves as a model for how asymmetrical leverage, whether applied by Allah or humans, can conjure the equivalent of a David defeating a Goliath on the world stage.

Ortega has declared the Iranian and Nicaraguan revolutions to be “twin revolutions, with the same objectives of justice, liberty, sovereignty and peace . . . despite the aggressions of the imperialist policies.” Ahmadinejad couched the alliances as part of “a large anti-imperialist movement that has emerged in the region.”

Among the first to articulate the possible merging of radical Shi’ite Islamic thought with Marxist aspirations of destroying capitalism and U.S. hegemony was Illich Sánchez Ramirez, better known as the terrorist leader, “Carlos the Jackal,” a Venezuelan citizen who was, until his arrest in 1994, one of the world’s most wanted terrorists.

The emerging military doctrine of the “Bolivarian Revolution,” officially adopted in Venezuela and rapidly spreading to Bolivia, Nicaragua, and Ecuador, explicitly embraces the radical Islamist model of asymmetrical or “fourth generation warfare,” and its heavy reliance on suicide bombings and different types of terrorism, including the use of nuclear weapons and other WMD.

Chávez has adopted as his military doctrine the concepts and strategies articulated in Peripheral Warfare and Revolutionary Islam: Origins, Rules and Ethics of Asymmetrical Warfare (Guerra Periférica y el Islam Revolucionario: Orígenes, Reglas y Ética de la Guerra Asimétrica) by the Spanish politician and ideologue, Jorge Verstrynge (see Figure 4).110 The tract is a continuation of and exploration of Sánchez Ramirez’s thoughts, incorporating an explicit endorsement of the use of WMD to destroy the United States. Verstrynge argues for the destruction of the United States through a series of asymmetrical attacks like those of 9/11, in the belief that the United States will simply crumble when its vast military strength cannot be used to combat its enemies.

Central to Verstrynge’s idealized view of terrorists is the belief in the sacredness of fighters sacrificing their lives in pursuit of their goals.

An Alliance of Mutual Benefit.

This ideological framework of a combined Marxism and radical Islamic methodology for successfully attacking the United States is an important, though little examined, underpinning for the greatly enhanced relationships among the Bolivarian states and Iran. These relationships are being expanded, absorbing significant resources despite the fact that there is little economic rationale to the ties and little in terms of legitimate commerce.

One need only look at how rapidly Iran has increased its diplomatic, economic, and intelligence presence in Latin America to see the priority it places on this emerging axis, given that it is an area where it has virtually no trade, no historic or cultural ties, and no obvious strategic interests. The gains, in financial institutions, bilateral trade agreements, and state visits (eight state visits between Chávez and Ahmadinejad alone since 2006), are almost entirely within the Bolivarian orbit; and, as noted, the Bolivarian states have jointly declared their intention to help Iran break international sanctions.

The most recent salvo by Iran is the launching of a Spanish-language satellite TV station, HispanTV, aimed at Latin America. Bolivia and Venezuela are collaborating in producing documentaries for the station. Mohammed Sarafraz, deputy director of international affairs, said Iran was “launching a channel to act as a bridge between Iran and the countries of Latin America [there being] a need to help familiarize Spanish-speaking citizens with the Iranian nation.” He said that HispanTV was launched with the aim of reinforcing cultural ties with the Spanish-speaking nations and helping to introduce the traditions, customs, and beliefs of the Iranian people.

What is of particular concern is that many of the bilateral and multilateral agreements signed between Iran and Bolivarian nations, such as the creation of a dedicated shipping line between Iran and Ecuador, or the deposit of $120 million by an internationally sanctioned Iranian bank into the Central Bank of Ecuador, are based on no economic rationale.

Iran, whose banks, including its central bank, are largely barred from the Western financial systems, benefits from access to the international financial market through Venezuelan, Ecuadoran, and Bolivian financial institutions, which act as proxies by moving Iranian money as if it originated in their own legal financial systems.120 Venezuela also agreed to provide Iran with 20,000 barrels of gasoline per day, leading to U.S. sanctions against the state petroleum company.

CONCLUSIONS

Latin America, while not generally viewed as part of the stateless regions phenomenon, or part of the failed state discussion, presents multiple threats that center on criminalized states, their hybrid alliance with extra-regional sponsors of terrorism, and nonstate TOC actors. The groups within this hybrid threat—often rivals, but willing to work in temporary alliances—are part of the recombinant criminal/terrorist pipeline, and their violence is often aimed at gaining control of specific territory or parts of that pipeline, either from state forces or other nonstate groups.

pipelines are seldom disrupted for more than a minimal amount of time, in part because the critical human nodes in the chain, and the key chokepoints in the pipelines, are not identified, and the relationships among the different actors and groups are not adequately understood. As noted, pipelines are adaptable and versatile as to product, the epitome of modern management systems. They often intersect with formal commercial institutions (banks, commodity exchanges, legitimate companies, etc.), both physically and in the virtual/cyber realm, in ways that are difficult to determine, to collect intelligence on, or to disaggregate from protected commercial activities, which may be both domestic and international in nature and carry built-in legal and secrecy protections.

While the situation is already critical, it is likely to get worse quickly. There is growing evidence of Russian and Chinese organized crime penetration of the region, particularly in Mexico and Central America, greatly strengthening the criminal organizations and allowing them to diversify their portfolios and supply routes—a particular example being precursor chemicals for the manufacture of methamphetamines and cocaine. The Chinese efforts to acquire ports, resources, and intelligence-gathering capacity in the region demonstrate just how quickly the situation can develop, given that China was not a major player in the region 5 years ago.

This is a new type of alliance of secular (self-proclaimed socialist and Marxist) and radical Islamist organizations with a common goal directly aimed at challenging and undermining the security of the United States and its primary allies in the region (Colombia, Chile, Peru, Panama, and Guatemala). This represents a fundamental change because both primary state allies in the alliance (the governments of Venezuela and Iran) host and support nonstate actors, allowing the nonstate actors to thrive in ways that would be impossible without state protection.

Understanding how these groups develop, and how they relate to each other and to groups from outside the region, is vital, particularly given the rapid pace with which they are expanding their control across the continent, across the hemisphere, and beyond. A predictive capacity can be developed only on the basis of a more realistic understanding of the shifting networks of actors exploiting the pipelines; the nature and location of the geographic space in which they operate; the critical nodes where these groups are most vulnerable; and their behaviors in adapting to new political and economic developments, market opportunities and setbacks, internal competition, and the countering actions of governments.

In turn, an effective strategy for combating TOC must rest on a solid foundation of regional intelligence which, while cognizant of the overarching transnational connections, remains sensitive to unique local realities behind seemingly ubiquitous behaviors. A one-size-fits-all policy will not suffice.

It is not a problem that is only, or primarily, a matter of state or regional security, narcotics, money laundering, terrorism, human smuggling, weakening governance, democracy reversal, trade and energy, counterfeiting and contraband, immigration and refugees, hostile states seeking advantage, or alterations in the military balance and alliances. It is increasingly a combination of all of these. It is a comprehensive threat that requires analysis and management within a comprehensive, integrated whole-of-government approach. At the same time, however expansive in global terms, a strategy based on geopolitics—the fundamental understanding of how human behavior relates to geographic space—must always be rooted in the local.

ENDNOTES

  1. “Fact Sheet: Strategy to Combat Transnational Organized Crime,” Washington, DC: Office of the Press Secretary, the White House, July 25, 2011.
  2. On the lower end, the United Nations (UN) Office of Drugs and Crime estimates transnational organized crime (TOC) earnings for 2009 at $2.1 trillion, or 3.6 percent of global gross domestic product (GDP). Of that, typical TOC activities such as drug trafficking, counterfeiting, human trafficking, weapons trafficking, and oil smuggling account for about $1 trillion, or 1.5 percent of global GDP. For details, see “Estimating Illicit Financial Flows Resulting from Drug Trafficking and other Transnational Organized Crimes,” Washington, DC: UN Office of Drugs and Crime, September 2011. On the higher end, in a speech to Interpol in Singapore in 2009, U.S. Deputy Attorney General Ogden cited 15 percent of world GDP as total annual turnover of TOC. See Josh Meyer, “U.S. attorney general calls for global effort to fight organized crime,” Los Angeles Times, October 13, 2009, available from articles.latimes.com/print/2009/oct/13/nation/na-crime13.
  3. This definition is adapted from the 1998 UN Convention on Transnational Organized Crime and Protocols Thereto, UNODC, Vienna, Austria; and the 2011 Strategy to Combat Transnational Organized Crime, available from www.whitehouse.gov/administration/eop/nsc/transnational-crime/definition.
  4. The self-proclaimed “Bolivarian” states (Venezuela, Ecuador, Bolivia, and Nicaragua) take their name from Simón Bolivar, the revered 19th-century leader of South American independence from Spain. They espouse 21st-century socialism, a vague notion that is deeply hostile to free market reforms, to the United States as an imperial power, and toward traditional liberal democratic concepts, as will be described in detail.
  5. One of the most detailed cases involved the 2001 weapons transfers among Hezbollah operatives in Liberia, a retired Israeli officer in Panama, and a Russian weapons merchant in Guatemala. A portion of the weapons, mostly AK-47 assault rifles, ended up with the United Self Defense Forces of Colombia (Autodefensas Unidas de Colombia [AUC]), a designated terrorist organization heavily involved in cocaine trafficking. The rest of the weapons, including anti-tank systems and anti-aircraft weapons, likely ended up with Hezbollah. For details, see Douglas Farah, Blood From Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.
  6. For a detailed look at this development, see Antonio L. Mazzitelli, “The New Transatlantic Bonanza: Cocaine on Highway 10,” North Miami, FL: Western Hemisphere Security Analysis Center, Florida International University, March 2011.
  7. The FARC is the oldest insurgency in the Western hemisphere, launched in 1964 by Colombia’s Liberal Party militias, and enduring to the present as a self-described Marxist revolutionary movement. For a more detailed look at the history of the FARC, see Douglas Farah, “The FARC in Transition: The Fatal Weakening of the Western Hemisphere’s Oldest Guerrilla Movement,” NEFA Foundation, July 2, 2008, available from www.nefafoundation.org/miscellaneous/nefafarc0708.pdf.
  8. These include the recently founded Community of Latin American and Caribbean States (Comunidad de Estados Latinoamericanos y Caribeños [CELAC]), and the Bolivarian Alliance for the Peoples of Our America (Alianza Bolivariana para los Pueblos de Nuestra América [ALBA]).
  9. James R. Clapper, Director of National Intelligence, “Unclassified Statement for the Record: Worldwide Threat Assessment of the US Intelligence Community for the Senate Select Committee on Intelligence,” January 31, 2012, p. 6.
  10. For the most comprehensive look at Russian Organized Crime in Latin America, see Bruce Bagley, “Globalization, Ungoverned Spaces and Transnational Organized Crime in the Western Hemisphere: The Russian Mafia,” paper prepared for International Studies Association, Honolulu, HI, March 2, 2005.
  11. Ruth Morris, “China: Latin America Trade Jumps,” Latin American Business Chronicle, May 9, 2011, available from www.latinbusinesschronicle.com/app/article.aspx?id=4893.
  12. Daniel Cancel, “China Lends Venezuela $20 Billion, Secures Oil Supply,” Bloomberg News Service, April 18, 2010. By the end of August 2011, Venezuela’s publicly acknowledged debt to China stood at some $36 billion, equal to the rest of its outstanding international debt. See Benedict Mander, “More Chinese Loans for Venezuela,” FT Blog, September 16, 2011, available from blogs.ft.com/beyond-brics/2011/09/16/more-chinese-loans-4bn-worth-for-venezuela/#axzz1Z3km4bdg.
  13. “Quito y Buenos Aires, Ciudades preferidas para narcos nigerianos” (“Quito and Buenos Aires, Favorite Cities of Narco Nigerians”), El Universo Guayaquil, Ecuador, January 3, 2011.
  14. Louise Shelley, “The Unholy Trinity: Transnational Crime, Corruption and Terrorism,” Brown Journal of World Affairs, Vol. XI, Issue 2, Winter/Spring 2005.
  15. National Security Council, “Strategy to Combat Transnational Organized Crime: Addressing Converging Threats to National Security,” Washington, DC: Office of the President, July 2011. The Strategy grew out of a National Intelligence Estimate initiated by the Bush administration and completed in December 2008, and is a comprehensive government review of transnational organized crime, the first since 1995.
  16. “Fact Sheet: Strategy to Combat Transnational Organized Crime,” Washington, DC: Office of the Press Secretary, the White House, July 25, 2011.
  17. Ibid.
  18. Stewart Patrick, Weak Links: Fragile States, Global Threats and International Security, Oxford, UK: Oxford University Press, 2011.
  19. The phrase “dangerous spaces” was used by Phil Williams to describe 21st-century security challenges in terms of spaces and gaps, including geographical, functional, social, economic, legal, and regulatory holes. See Phil Williams, “Here Be Dragons: Dangerous Spaces and International Security,” Anne L. Clunan and Harold A. Trinkunas eds., Ungoverned Spaces: Alternatives to State Authority in an Era of Softened Sovereignty, Stanford, CA: Stanford University Press, 2010, pp. 34-37.
  20. For a more complete look at Transnistria and an excellent overview of the global illicit trade, see Misha Glenny, McMafia: A Journey Through the Global Criminal Underworld, New York: Alfred A. Knopf, 2008.
  21. For a complete look at the operations of Taylor, recently convicted in the Special Court for Sierra Leone in the Hague for crimes against humanity, see Douglas Farah, Blood From Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.
  22. For a look at the weapons transfers, see “Los ‘rockets’ Venezolanos,” Semana, Colombia, July 28, 2009. For a look at documented financial and logistical support of Chávez and Correa for the FARC, see “The FARC Files: Venezuela, Ecuador, and the Secret Archives of ‘Raúl Reyes,’” An IISS Strategic Dossier, Washington, DC: International Institute for Strategic Studies, May 2011. To see FARC connections to Evo Morales, see Douglas Farah, “Into the Abyss: Bolivia Under Evo Morales and the MAS,” Alexandria, VA: International Assessment and Strategy Center, 2009.
  23. Douglas Farah, “Iran in Latin America: Strategic Security Issues,” Alexandria, VA: International Assessment and Strategy Center, Defense Threat Reduction Agency Advanced Systems and Concept Office, May 2011.
  24. Rem Korteweg and David Ehrhardt, “Terrorist Black Holes: A Study into Terrorist Sanctuaries and Governmental Weakness,” The Hague, The Netherlands: Clingendael Centre for Strategic Studies, November 2005, p. 22.
  25. Robert H. Jackson, Quasi-states: Sovereignty, International Relations and the Third World, Cambridge, UK: Cambridge University Press, 1990. Jackson defines negative sovereignty as freedom from outside interference, the ability of a sovereign state to act independently, both in its external relations and internally, towards its people. Positive sovereignty is the acquisition and enjoyment of capacities, not merely immunities. In Jackson’s definition, it presupposes “capabilities which enable governments to be their own masters” (p. 29). The absence of either type of sovereignty can lead to the collapse of or absence of state control.
  26. Anne L. Clunan and Harold A. Trinkunas eds., Ungoverned Spaces: Alternatives to State Authority in an Era of Softened Sovereignty, Stanford, CA: Stanford University Press, 2010, p. 19.
  27. Moises Naim, Illicit: How Smugglers, Traffickers, and Copycats are Hijacking the Global Economy, New York: Anchor Books, 2006, p. 33.
  28. Robert Killebrew and Jennifer Bernal, “Crime Wars: Gangs, Cartels and U.S. National Security,” Washington, DC: Center for New American Security, September 2010, available from www.cnas.org/files/documents/publications/CNAS_CrimeWars_KillebrewBernal_3.pdf.
  29. Max G. Manwaring, Street Gangs: The New Urban Insurgency, Carlisle, PA: Strategic Studies Institute, U.S. Army War College, March 2005.
  30. As is true in much of Central America and Colombia, in Mexico there are centuries-old sanctuaries used by outlaws where the state had little authority. For a more complete explanation, see Gary Moore, “Mexico, the Un-failed State: A Geography Lesson,” InsightCrime, November 9, 2011, available from insightcrime.com/insight-latest-news/item/1820-mexico-the-un-failed-state-a-geography-lesson.
  31. For a look at the factors that led to the rise of the Bolivarian leaders, see Eduardo Gamarra, “Bolivia on the Brink,” Center for Preventative Action, Council on Foreign Relations, February 2007; Cynthia J. Arnson et al., La Nueva Izquierda en América Latina: Derechos Humanos, Participación Política y Sociedad Civil (The New Left in Latin America: Human Rights, Political Participation and Civil Society), Washington, DC: The Woodrow Wilson International Center for Scholars, January 2009; Farah, “Into the Abyss: Bolivia Under Evo Morales and the MAS”; Farah and Simpson.
  32. See “Iran to Help Bolivia Build Peaceful Nuclear Power Plant,” Xinhua, October 31, 2010; Russia Izvestia Information, September 30, 2008; and Agence France Presse, “Venezuela Wants to Work With Russia on Nuclear Energy: Chávez,” September 29, 2008.
  33. Author interview with IAEA member in November 2011. The official said the agency had found that Iran possessed enough uranium stockpiled to last a decade. Moreover, he said the evidence pointed to acquisition of minerals useful in missile production. He also stressed that dual-use technologies or items specifically used in the nuclear program had often been shipped to Iran as automotive or tractor parts. Some of the principal investments Iran has made in the Bolivarian states have been in a tractor factory that is barely operational, a bicycle factory that does not seem to produce bicycles, and automotive factories that have yet to be built.
  34. “Venezuela/Iran ALBA Resolved to Continue Economic Ties with Iran,” Financial Times Information Service, July 15, 2010.
  35. Manwaring.
  36. These typologies were developed and discussed more completely, including the national security implications of their growth, in Richard Shultz, Douglas Farah, and Itamara V. Lochard, “Armed Groups: A Tier-One Security Priority,” USAF Academy, CO: USAF Institute for National Security Studies, Occasional Paper 57, September 2004.
  37. Louise I. Shelley, John T. Picarelli et al., Methods and Motives: Exploring Links between Transnational Organized Crime and International Terrorism, Washington, DC: Department of Justice, September 2005.
  38. Ibid., p. 5.
  39. While much of Operation TITAN remains classified, there has been significant open source reporting, in part because the Colombian government announced the most important arrests. For the most complete look at the case, see Jo Becker, “Investigation into bank reveals links to major South American cartels,” International Herald Tribune, December 15, 2011. See also Chris Kraul and Sebastian Rotella, “Colombian Cocaine Ring Linked to Hezbollah,” Los Angeles Times, October 22, 2008; and “Por Lavar Activos de Narcos y Paramilitares, Capturados Integrantes de Organización Internacional” (“Members of an International Organization Captured for Laundering Money for Narcos and Paramilitaries”), Fiscalía General de la Republica (Colombia) (Attorney General’s Office of Colombia), October 21, 2008.
  40. Among the reasons for the increase in cocaine trafficking to Western Europe is the price. While the cost of a kilo of cocaine averages about $17,000 in the United States, it is $37,000 in the EU. Shipping via Africa is relatively inexpensive and relatively attractive, given the enhanced interdiction efforts in Mexico and the Caribbean. See Antonio Mazzitelli, “The Drug Trade: Africa’s Expanding Role,” United Nations Office on Drugs and Crime, presentation at the Woodrow Wilson Center for International Scholars, May 28, 2009.
  41. Benjamin Weiser and William K. Rashbaum, “Liberian Officials Worked with U.S. Agency to Block Drug Traffic,” New York Times, June 2, 2010.
  42. For a history of AQIM, see “Algerian Group Backs al Qaeda,” BBC News, October 23, 2003, available from news.bbc.co.uk/2/hi/africa/3207363.stm. For an understanding of the relationship among the different ethnic groups, particularly the Tuareg, and AQIM, see Terrorism Monitor, “Tuareg Rebels Joining Fight Against AQIM?” Jamestown Foundation, Vol. 8, Issue 40, November 4, 2010.
  43. Evan Perez, “U.S. Accuses Iran in Plot: Two Charged in Alleged Conspiracy to Enlist Drug Cartel to Kill Saudi Ambassador,” The Wall Street Journal, October 12, 2011.
  44. “La Amenaza Iraní” (“The Iranian Threat”), Univision Documentales, aired December 8, 2011.
  45. Sebastian Rotella, “Government says Hezbollah Profits From U.S. Cocaine Market via Link to Mexican Cartel,” ProPublica, December 11, 2011.
  46. For an examination of the “cultures of contraband” and their implications in the region, see Rebecca B. Galemba, “Cultures of Contraband: Contesting the Illegality at the Mexico-Guatemala Border,” Ph.D. dissertation, Brown University Department of Anthropology, May 2009. For a look at the use of traditional smuggling routes in TOC structures in Central America, see Douglas Farah, “Mapping Transnational Crime in El Salvador: New Trends and Lessons From Colombia,” North Miami, FL: Western Hemisphere Security Analysis Center, Florida International University, August 2011.
  47. For a more complete look at that conflict and other conflicts over plazas, see Samuel Logan and John P. Sullivan, “The Gulf-Zeta Split and the Praetorian Revolt,” International Relations and Security Network, April 7, 2010, available from www.isn.ethz.ch/isn/Security-Watch/Articles/Detail/?ots591=4888caa0-b3db-1461-98b9-e20e7b9c13d4&lng=en&id=114551.
  48. Mazzitelli.
  49. “Drug Trafficking as a Security Threat in West Africa,” New York: UN Office on Drugs and Crime, October 2008.
  50. For a look at the chaos in Guinea-Bissau, see “Guinea-Bissau president shot dead,” BBC News, March 2, 2009, available from news.bbc.co.uk/2/hi/7918061.stm.
  51. Patrick.
  52. See, for example, Robert I. Rotberg, “Failed States, Collapsed States, Weak States: Causes and Indicators,” State Failure and State Weakness in a Time of Terror, Washington, DC: Brookings Institution, January 2003.
  52. Rotberg.
  53. Rem Korteweg and David Ehrhardt, “Terrorist Black Holes: A Study into Terrorist Sanctuaries and Governmental Weakness,” The Hague, The Netherlands: Clingendael Centre for Strategic Studies, November 2005, p. 26.
  54. “The Failed States Index 2009,” Foreign Policy Magazine, July/August 2009, pp. 80-93, available from www.foreignpolicy.com/articles/2009/06/22/2009_failed_states_index_interactive_map_and_rankings.
  55. Julio A. Cirino et al., “Latin America’s Lawless Areas and Failed States,” in Paul D. Taylor, ed., Latin American Security Challenges: A Collaborative Inquiry from North and South, Newport, RI: Naval War College, Newport Papers 21, 2004. Commercial insurgencies are defined as engaging in “for-profit organized crime without a predominate political agenda,” leaving unclear how that differs from groups defined as organized criminal organizations.
  56. For details of Taylor’s activities, see Douglas Farah, Blood From Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.
  57. Hannah Stone, “The Comeback of Suriname’s ‘Narco-President’,” Insightcrime.org, March 4, 2011, available from insightcrime.org/insight-latest-news/item/865-the-comeback-of-surinames-narco-president.
  58. Simon Romero, “Returned to Power, a Leader Celebrates a Checkered Past,” The New York Times, May 2, 2011.
  59. “Wikileaks: Chávez funded Bouterse,” The Nation (Barbados), February 2, 2011.
  60. Harmen Boerboom, “Absence of Chávez a blessing for Suriname,” Radio Netherlands Worldwide, August 12, 2010.
  61. For a look at the Zetas in Guatemala, see Steven Dudley, “The Zetas in Guatemala,” InSight Crime, September 8, 2011. For a look at Los Perrones in El Salvador, see Douglas Farah, “Organized Crime in El Salvador: Homegrown and Transnational Dimensions,” Organized Crime in Central America: The Northern Triangle, Woodrow Wilson Center Reports on the Americas #29, Washington, DC: Woodrow Wilson International Center for Scholars, November 2011, pp. 104-139, available from www.wilsoncenter.org/sites/default/files/LAP_single_page.pdf.
  62. Louise Shelley, “The Unholy Trinity: Transnational Crime, Corruption and Terrorism,” Brown Journal of World Affairs, Vol. XI, Issue 2, Winter/Spring 2005, p. 101.
  63. See Bill Lahneman and Matt Lewis, “Summary of Proceedings: Organized Crime and the Corruption of State Institutions,” College Park, MD: University of Maryland, November 18, 2002, available from www.cissm.umd.edu/papers/files/organizedcrime.pdf.
  64. Author interviews with Drug Enforcement Administration and National Security Council officials; for example, two aircraft carrying more than 500 kgs of cocaine were stopped in Guinea-Bissau after arriving from Venezuela. See “Bissau Police Seize Venezuelan cocaine smuggling planes,” Agence France Presse, July 19, 2008.
  65. “FARC Terrorist Indicted for 2003 Grenade Attack on Americans in Colombia,” Department of Justice Press Release, September 7, 2004, available from www.usdoj.gov/opa/pr/2004/September/04_crm_599.htm; and Official Journal of the European Union, Council Decision of December 21, 2005, available from eur-lex.europa.eu/LexUriServ/site/en/oj/2005/l_340/l_34020051223en00640066.pdf.
  66. “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’,” Washington, DC: International Institute for Strategic Studies, May 2011.
  67. The strongest documentary evidence of Chávez’s support for the FARC comes from the Reyes documents, which contained the internal communications of senior FARC commanders with senior Venezuelan officials. These documents discuss everything from security arrangements in hostage exchanges to the possibility of joint training exercises and the purchasing of weapons. For full details of these documents and their interpretation, see Ibid.
  68. “Treasury Targets Venezuelan Government Officials Support of the FARC,” Washington, DC: U.S. Treasury Department, Office of Public Affairs, September 12, 2008. The designations came on the heels of the decision of the Bolivian government of Evo Morales to expel the U.S. ambassador, allegedly for supporting armed movements against the Morales government. In solidarity, Chávez then expelled the U.S. ambassador to Venezuela. In addition to the citations of the Venezuelan officials, the United States also expelled the Venezuelan and Bolivian ambassadors to Washington.
  69. “Chávez Shores up Military Support,” Stratfor, November 12, 2010.
  70. “Venezuela: Asume Nuevo Ministro De Defensa Acusado de Narco por EEUU” (“Venezuela: New Minister Accused by the United States of Drug Trafficking Takes Office”), Agence France Presse, January 17, 2012.
  71. Robert M. Morgenthau, “The Link Between Iran and Venezuela: A Crisis in the Making,” speech at the Brookings Institution, Washington, DC, September 8, 2009.
  72. “Colombia, Venezuela: Another Round of Diplomatic Furor,” Stratfor, July 29, 2010.
  73. “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’.”
  74. The Colombian decision to extradite Makled to Venezuela rather than the United States caused significant tension between the two countries and probably means that the bulk of the evidence he claims to possess will never see the light of day. Among the documents he presented in prison were his checks cashed by senior generals and government officials and videos of what appear to be senior government officials in his home discussing cash transactions. For details of the case, see José de Córdoba and Darcy Crowe, “U.S. Losing Big Drug Catch,” The Wall Street Journal, April 1, 2011; “Manhattan U.S. Attorney Announces Indictment of one of World’s Most Significant Narcotics Kingpins,” United States Attorney, Southern District of New York, November 4, 2010.
  75. “Makled: Tengo suficientes pruebas sobre corrupción y narcotráfico para que intervengan a Venezuela” (“Makled: I have Enough Evidence of Corruption and Drug Trafficking to justify an invasion of Venezuela”), NTN24 TV (Colombia), April 11, 2011.
  76. For a more comprehensive look at the history of the FARC; its relations with Bolivia, Venezuela, and Ecuador; and its involvement in drug trafficking, see “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’”; Douglas Farah, “Into the Abyss: Bolivia Under Evo Morales and the MAS,” Alexandria, VA: International Assessment and Strategy Center, June 2009; Douglas Farah and Glenn Simpson, “Ecuador at Risk: Drugs, Thugs, Guerrillas and the ‘Citizens’ Revolution,” Alexandria, VA: International Assessment and Strategy Center, January 2010.
  77. Farah, “Into the Abyss: Bolivia Under Evo Morales and the MAS.”
  78. Martin Arostegui, “Smuggling Scandal Shakes Bolivia,” The Wall Street Journal, March 3, 2011.
  79. Farah, “Into the Abyss: Bolivia Under Evo Morales and the MAS.”
  80. “The FARC Files: Venezuela, Ecuador, and the Secret Archives of ‘Raúl Reyes’”; Farah and Simpson; and Francisco Huerta Montalvo et al., “Informe Comisión de Transparencia y Verdad: Caso Angostura” (“Report of the Commission on Transparency and Truth: The Angostura Case”), December 10, 2009, available from www.scribd.com/doc/24329223/informe-angostura.
  81. Farah and Simpson.
  82. For details of the relationships among these officials, the president’s sister, and the Ostaiza brothers, see Farah and Simpson.
  83. “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’.”
  84. See, for example, Farah and Simpson; Huerta Montalvo; Arturo Torres, Juego del Camaleón: Los secretos de Angostura (The Chameleon’s Game: The Secrets of Angostura), 2009.
  85. “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’.”
  86. Farah, “Into the Abyss.”
  87. Eugene Roxas, “Spiritual Guide who gave Evo Baton caught with 350 kilos of liquid cocaine,” The Achacachi Post (Bolivia), July 28, 2010.
  88. “Panama arrests Bolivia ex-drugs police chief Sanabria,” BBC News, February 26, 2011.
  89. “Las FARC Buscaron el Respaldo de Bolivia Para Lograr Su Expansión” (“The FARC Looked for Bolivian Support in Order to Expand”).
  90. The MAS is the coalition of indigenous and coca growing organizations that propelled Morales to his electoral victory. The movement, closely aligned with Chávez and funded by the Venezuelan government, has defined itself as Marxist, socialist, and anti-imperialist.
  91. It is interesting to note that Peredo’s brothers Roberto (aka Coco) and Guido (aka Inti) were the Bolivian contacts of Che Guevara, and died in combat with him. The two are buried with Guevara in Santa Clara, Cuba.
  92. A copy of the founding manifesto of the EMP and its adherents is available from bibliotecavirtual.clacso.org.ar/ar/libros/osal/osal10/documentos.pdf.
  93. Prensa Latina, “Presidente Boliviano Anuncia Creación de Estado Mayor Popular” (“Bolivian President Announces the Formation of a People’s High Command”), February 2, 2006.
  94. “Estado Mayor del Pueblo Convoca a Defender Al Gobierno de Evo” (“People’s High Command Calls for the Defense of Evo’s Government”), Agencia Boliviana de Informacion, April 17, 2006, available from www.bolpress.com/art.php?Cod=2006041721.
  95. The situation has changed dramatically with the election of Juan Manuel Santos as President of Colombia in 2010. Despite serving as Uribe’s defense minister during the most successful operations against the FARC and developing a deeply antagonistic relationship with Chávez in that capacity, relations between Santos and the Bolivarian heads of state have been surprisingly cordial since he took office. This is due in part to Santos’ agreeing to turn over copies of the Reyes hard drives to Correa, and his expressed desire to normalize relations with Chávez. A particularly sensitive concession was allowing Walid Makled, designated a drug kingpin by the United States, to be extradited to Venezuela rather than to stand trial in the United States.
  96. “The FARC Files: Venezuela, Ecuador and the Secret Archives of ‘Raúl Reyes’.”
  97. March 11, 2005, e-mail from Iván Ríos to Raúl Reyes, provided by Colombia officials, in possession of the author.
  98. April 1, 2006, e-mail from Raúl Reyes to Aleyda, provided by Colombia officials, in possession of the author.
  99. Following Ortega’s disputed electoral triumph in November 2011, the FARC published a congratulatory communiqué lauding Ortega and recalling their historically close relationship. “In this moment of triumph how can we fail to recall that memorable scene in Caguán when you gave the Augusto Cesar Sandino medal to our unforgettable leader Manuel Marulanda. We have always carried pride in our chests for that deep honor which speaks to us of the broad vision of a man who considers himself to be a spiritual son of Bolivar.” Available from anncol.info/index.php?option=com_content&view=article&id=695:saludo-a-daniel-ortega&catid=71:movies&Itemid=589.
  100. Reyes was killed a few days after the CCB assembly when the Colombian military bombed his camp, which was in Ecuadoran territory. The bombing of La Angostura caused a severe diplomatic rift between Colombia and Ecuador, but the raid also yielded several hundred gigabytes of data from the computers Reyes kept in the camp, where he lived in a hardened structure and had been stationary for several months.
  101. Farah and Simpson.
  102. “U.S. Counternarcotics Cooperation with Venezuela Has Declined,” Washington, DC: Government Accountability Office, July 2009, GAO-09-806.
  103. Ibid., p. 12.
  104. For a more detailed look at this debate, see Iran in Latin America: Threat or Axis of Annoyance? in which the author has a chapter arguing for the view that Iran is a significant threat.
  105. “‘Jackal’ book praises bin Laden,” BBC News, June 26, 2003.
  106. See, for example, Associated Press, “Chávez: ‘Carlos the Jackal’ a ‘Good Friend’,” June 3, 2006.
  107. Raúl Reyes (trans.) and Hugo Chávez, “My Struggle,” from a March 23, 1999, letter to Illich Ramirez Sánchez, the Venezuelan terrorist known as “Carlos the Jackal,” from Venezuelan president Hugo Chávez, in response to a previous letter from Ramirez, who is serving a life sentence in France for murder. Harper’s, October 1999, available from harpers.org/archive/1999/10/0060674.
  108. In addition to Operation TITAN, there have been numerous incidents in the past 18 months in which operatives directly linked to Hezbollah have been identified or arrested in Venezuela, Colombia, Guatemala, Aruba, and elsewhere in Latin America.
  109. Verstrynge, born in Morocco to Belgian and Spanish parents, began his political career on the far right of the Spanish political spectrum as a disciple of Manuel Fraga, and served in a national and several senior party posts with the Alianza Popular. By his own admission he then migrated to the Socialist Party, but never rose through the ranks. He is widely associated with radical anti-globalization views and anti-U.S. rhetoric, repeatedly stating that the United States is creating a new global empire and must be defeated. Although he has no military training or experience, he has written extensively on asymmetrical warfare.
  110. Verstrynge, pp. 56-57.
  111. Bartolomé. See also John Sweeny, “Jorge Verstrynge: The Guru of Bolivarian Asymmetric Warfare,” September 9, 2005, available from www.vcrisis.com; and “Troops Get Provocative Book,” Miami Herald, November 11, 2005.
  112. “Turkey holds suspicious Iran-Venezuela shipment,” Associated Press, June 1, 2009, available from www.ynetnews.com/articles/0,7340,L-3651706,00.html.
  113. For a fuller examination of the use of websites, see Douglas Farah, “Islamist Cyber Networks in Spanish-Speaking Latin America,” North Miami, FL: Western Hemisphere Security Analysis Center, Florida International University, September 2011.
  114. “Hispan TV begins with ‘Saint Mary’,” Tehran Times, December 23, 2011, available from www.tehrantimes.com/arts-and-culture/93793-hispan-tv-begins-with-saint-mary.
  115. For a more complete look at Iran’s presence in Latin America, see Douglas Farah, “Iran in Latin America: An Overview,” Washington, DC: Woodrow Wilson International Center for Scholars, Summer 2009 (to be published as a chapter in Iran in Latin America: Threat or Axis of Annoyance? Cynthia J. Arnson et al., eds., 2010). For a look at the anomalies in the economic relations, see also Farah and Simpson.
  116. “Treasury Targets Hizbullah in Venezuela,” Washington, DC: United States Department of Treasury Press Center, June 18, 2008, available from www.treasury.gov/press-center/press-releases/Pages/hp1036.aspx.
  117. Orlando Cuales, “17 arrested in Curacao on suspicion of drug trafficking links with Hezbollah,” Associated Press, April 29, 2009.
  118. United States District Court, Southern District of New York, The United States of America v Jamal Yousef, Indictment, July 6, 2009.
  119. For a look at how the Ecuadoran and Venezuelan banks function as proxies for Iran, particularly the Economic Development Bank of Iran, sanctioned for its illegal support of Iran’s nuclear program, and the Banco Internacional de Desarrollo, see Farah and Simpson.
  120. Office of the Spokesman, “Seven Companies Sanctioned Under Amended Iran Sanctions Act,” Washington, DC: U.S. Department of State, May 24, 2011, available from www.state.gov/r/pa/prs/ps/2011/05/164132.htm.
  121. Russia Izvestia Information, September 30, 2008, and Agence France Presse, “Venezuela Wants to Work With Russia on Nuclear Energy: Chávez,” September 29, 2008.
  122. Simon Romero, “Venezuela Says Iran is Helping it Look for Uranium,” New York Times, September 25, 2009.
  123. Nikolai Spassky, “Russia, Ecuador strike deal on nuclear power cooperation,” RIA Novosti, August 21, 2009.
  124. José R. Cárdenas, “Iran’s Man in Ecuador,” Foreign Pol- icy, February 15, 2011, available from shadow.foreignpolicy.com/ posts/2011/02/15/irans_man_in_ecuador.
  125. The primary problem has been the inability of the Colombian government to deliver promised services and infrastructure after the military has cleared the area. See John Otis, “Decades of Work but No Land Titles to Show for It,” GlobalPost, November 30, 2009. For a more complete look at the challenges posed by the reemergence and adaptability of armed groups, see Fundación Arco Iris, Informe 2009: El Declive de la Seguridad Democratica? (Report 2009: The Decline of Democratic Security?), available from www.nuevoarcoiris.org.co/sac/?q=node/605.

Notes from Methods and Motives: Exploring Links between Transnational Organized Crime & International Terrorism

In preparation for the work on this report, we reviewed a significant body of academic research on the structure and behavior of organized crime and terrorist groups. By examining how other scholars have approached the issues of organized crime or terrorism, we were able to refine our methodology. This novel approach combines a framework drawn from intelligence analysis with the tenets of a methodological approach devised by the criminologist Donald Cressey, who uses the metaphor of an archeological dig to systematize a search for information on organized crime. All the data and examples used to populate the model have been verified, and our findings have been validated through the rigorous application of case study methods.

While experts do not broadly accept any single definition of organized crime, a review of the numerous definitions offered identifies several central themes.8 There is consensus that at least two perpetrators are involved, but there is a variety of views about whether organized crime is typically organized as a hierarchy or as a network.

Organized crime is a continuing enterprise, so it does not include conspiracies whose members perpetrate a single crime and then go their separate ways. Furthermore, the overarching goals of organized crime groups are profit and power. Groups seek a balance between maximizing profits and minimizing their own risk, while striving for control by menacing certain businesses. Violence, or the threat of violence, is used to enforce obligations and maintain hegemony over rackets and enterprises such as extortion and narcotics smuggling. Corruption is a means of reducing the criminals’ own risk, maintaining control, and making profits.

few definitions challenge the common view of organized crime as a ‘parallel government’ that seeks power at the expense of the state but retains patriotic or nationalistic ties to the state. This report takes up that challenge by illustrating the rise of a new class of criminal groups with little or no national allegiance. These criminals are ready to provide services for terrorists, as has been observed in European prisons.10

We prefer the definition offered by the UN Convention Against Transnational Organized Crime, which defines an organized crime group as “a structured group [that is not randomly formed for the immediate commission of an offense] of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more serious crimes or offences [punishable by a deprivation of liberty of at least four years] established in accordance with this Convention, in order to obtain, directly or indirectly, a financial or other material benefit.”

we prefer the notion of a number of shadow economies, in the same way that macroeconomists use the global economy, comprising markets, sectors and national economies, as their basic unit of reference.

terrorism scholar Bruce Hoffman has offered a comprehensive and useful definition of terrorism as the deliberate creation and exploitation of fear through violence or the threat of violence in the pursuit of political change.15 Hoffman’s definition offers precise terms of reference while remaining comprehensive; he further notes that terrorism is ‘political in aims and motives,’ ‘violent,’ ‘designed to have far-reaching psychological repercussions beyond the immediate victim or target,’ and ‘conducted by an organization with an identifiable chain of command or conspiratorial cell structure.’ These elements encompass acts of terrorism by many different types of criminal groups, yet they clearly circumscribe which violent and other acts qualify as terrorist. The Hoffman definition can therefore be applied to both groups and activities, a crucial distinction for the methodology we propose in this report.

Early identification of terror-crime cooperation occurred in the 1980s and focused naturally on narcoterrorism, a phrase coined by Peru’s President Belaunde Terry to describe the terrorist attacks against anti-narcotics police in Peru.

the links between narcotics trafficking and terror groups exist in many regions of the world, but it is difficult to make generalizations about the terror-crime nexus.

International relations theorists have also produced a group of scholarly works that examine organized crime and terrorism (i.e., agents or processes) as objects of investigation for their paradigms. While in some cases the frames of reference employed by international relations scholars proved too general for the purposes of this report, the team found that these works illuminated the environmental and behavioral aspects of the interaction.

2.3 Data collection

Much of the information in the report that follows was taken from open sources, including government reports, private and academic journal articles, court documents and media accounts.

To ensure accuracy in the collection of data, we adopted standards and methods to form criteria for accepting data from open sources. First, to improve accuracy and reduce bias, we attempted to corroborate every piece of data collected from one secondary source with data from a further source that was independent of the original source — that is, the second source did not quote the first source. Second, particularly when using media sources, we checked subsequent reporting by the same publication to find out whether the subject was described in the same way as before. Third, we sought a more heterogeneous data set by examining foreign-language documents from non-U.S. sources. We also obtained primary-source materials, such as declassified intelligence reports from the Republic of Georgia, that helped to clarify and confirm the data found in secondary sources.
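
The corroboration rules above lend themselves to a simple mechanical check. The following is a minimal sketch, not part of the original report: the record fields, the example sources, and the two-independent-sources threshold are assumptions made purely for illustration.

```python
# Illustrative sketch of the source-corroboration rule described above.
# Field names and example data are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    language: str = "en"                     # foreign-language sources broaden the data set
    cites: set = field(default_factory=set)  # names of other sources this one quotes

@dataclass
class Claim:
    text: str
    sources: list = field(default_factory=list)

def is_corroborated(claim: Claim) -> bool:
    """A claim passes if at least two supporting sources exist and none of them
    merely quotes another supporting source (a rough independence check)."""
    names = {s.name for s in claim.sources}
    independent = [s for s in claim.sources if not (s.cites & (names - {s.name}))]
    return len(independent) >= 2

# Example: a wire report corroborated by an independent court document.
claim = Claim(
    "Group X laundered funds through front companies",
    sources=[Source("wire report"), Source("court filing", language="ka")],
)
print(is_corroborated(claim))   # True
```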

Since all these meetings were confidential, it was agreed in all cases that the information given was not for attribution by name.

For each of these studies, researchers traveled to the regions a number of times to collect information. Their work was combined with relevant secondary sources to produce detailed case studies presented later in the report. The format of the case studies followed the tenets outlined by Robert Yin, who proposes that case studies offer an advantage to researchers who present data illustrating complex relationships – such as the link between organized crime and terror.

2.4. Research goals

This project aimed to discover whether terrorist and organized crime groups borrow one another’s methods or cooperate outright, by what means they do so, and how investigators and analysts could locate and assess crime-terror interactions. This led to an examination of why this overlap or interaction takes place. Are the benefits merely logistical, or do both sides derive some long-term gains, such as undermining the capacity of the state to detect and curtail their activities?

preparation of the investigative environment (PIE), by adapting a long-held military practice called intelligence preparation of the battlespace (IPB). The IPB method anticipates enemy locations and movements in order to obtain the best position for a commander’s limited battlefield resources and troops. The goal of PIE is similar to that of IPB—to provide investigators and analysts a strategic and discursive analytical method to identify areas ripe for locating terror and crime interactions, confirm their existence and then assess the ramifications of these collaborations. The PIE approach provides twelve watch points within which investigators and analysts can identify those areas most likely to contain crime-terror interactions.

The PIE methodology was designed with the investigator and analyst in mind: it demonstrates how to set up investigations so that resources are expended most fruitfully, and how insights gained from analysts can help practitioners identify problems and organize their investigations more effectively.

2.5. Research challenges

Our first challenge in investigating the links between organized crime and terrorism was to obtain enough data to provide an accurate portrayal of that relationship. Given the secrecy of all criminal organizations, many traditional methods of quantitative and qualitative research were not viable. Nonetheless, we conducted numerous interviews and obtained statements from investigators and policy officials. Records of legal proceedings, criminal records, and terrorist incident reports were also important data sources.

The strategy underlying the collection of data was to focus on the sources of interaction wherever they were located (e.g., developing countries and urban areas), rather than on instances of interaction uncovered in developed countries, such as the September 11th or Madrid bombing investigations. In so doing, the project team hoped to avoid characterizing the problem “from out there.”

All three case studies highlight patterns of association that are particularly visible, frequent, and of lengthy duration. Because the conflict regions in the case studies also contribute to crime in the United States, our view was that these models were needed to perceive patterns of association that are less visible in other environments. A further element in the selection of these regions was practical: in each one, researchers affiliated with the project had access to reliable sources with first-hand knowledge of the subject matter. Our hypothesis was that some of the most easily detected relationships would be found in societies so corrupt, and with such limited enforcement, that the phenomena would be more open to analysis and disclosure than in environments where such interaction is more covert.

  3. A new analytical approach: PIE

Investigators seeking to detect terrorist activity before an incident takes place are overwhelmed by data.

A counterterrorist analyst at the Central Intelligence Agency took this further, noting that the discovery of crime-terror interactions was often the accidental result of analysis on a specific terror group, and thus rarely was connected to the criminal patterns of other terror groups.

IPB is an attractive basis for analyzing the behavior of criminal and terrorist groups because it focuses on evidence about their operational behavior as well as the environment in which they operate. This evidence is plentiful: communications, financial transactions, organizational forms and behavioral patterns can all be analyzed using a form of IPB.

the project team has devised a methodology based on IPB, which we have termed preparation of the investigation environment, or PIE. We define PIE as a concept in which investigators and analysts organize existing data to identify areas of high potential for collaboration between terrorists and organized criminals in order to focus next on developing specific cases of crime-terror interaction—thereby generating further intelligence for the development of early warning on planned terrorist activity.

While IPB is chiefly a method of eliminating data that is not likely to be relevant, our PIE method also provides positive indicators about where relevant evidence should be sought.

3.1 The theoretical basis for the PIE Method

Donald Cressey’s famous study of organized crime in the U.S., with its analogy of an archeological dig, was the starting point for our model of crime-terror cooperation.35 As Cressey describes it, archeologists first examine documentary sources to collect what is known and develop a map based on that knowledge. The map allows the investigator to focus on those areas that are not known—that is, the archeologist uses the map to decide where to dig. The map also serves as a context within which artifacts discovered during the dig can be evaluated for their significance. For example, discovery of a bowl at a certain depth and location can provide information to the investigator concerning the date of an encampment and who established it.

The U.S. Department of Defense defines IPB as an analytical methodology employed to reduce uncertainties concerning the enemy, environment, and terrain for all types of operations. Intelligence preparation of the battlespace builds an extensive database for each potential area in which a unit may be required to operate. The database is then analyzed in detail to determine the impact of the enemy, environment, and terrain on operations, and the results are presented in graphic form.36 Alongside Cressey’s approach, IPB was selected as the second basis of our methodological approach.

Territory outside the control of the central state, such as exists in failed or failing states, poorly regulated or border regions (especially those surrounding the intersection of multiple borders), and parts of otherwise viable states where law and order is absent or compromised, including urban quarters populated by diaspora communities or penal institutions, is a favored locale for crime-terror interactions.

3.2 Implementing PIE as an investigative tool

Organized crime and terrorist groups have significant differences in their organizational form, culture, and goals. Bruce Hoffman notes that terrorist organizations can be further categorized based on their organizational ideology.

In converting IPB to PIE, we defined a series of watch points based on organizational form, goals, culture and other aspects to ensure PIE is flexible enough to compare a transnational criminal syndicate or a traditional crime hierarchy with an ethno-nationalist terrorist faction or an apocalyptic terror group.

The standard operating procedures and means by which military units are expected to achieve their battle plan are called doctrine, which is normally spelled out in great detail in manuals and training regimens. The doctrine of an opposing force is thus an important part of an IPB analysis. Such information is equally important to PIE, but it is rarely found in manuals, nor is it as highly developed as military doctrine.

Once the organizational forms, terrain and behavior of criminal and terrorist groups were defined at this level of detail, we settled on 12 watch points to cover the three components of PIE. For example, the watch point entitled organizational goals examines what the goals of organized crime and terror groups can tell investigators about potential collaboration or overlap between the two.

Investigators using PIE will collect evidence systematically through the investigation of watch points and analyze the data through its application to one or more indicators. That in turn will enable them to build a case for making timely predictions about crime-terror cooperation or overlap. Conversely, PIE also provides a mechanism for ruling out such links.

The indicators are designed to reduce the fundamental uncertainty associated with seemingly disparate or unrelated pieces of information. They also serve as a way of constructing probable cause, with evidence triggering indicators.

Although some watch points may generate ambiguous indicators of interaction between terror and crime, providing investigators and analysts with negative evidence of collusion between criminals and terrorists also has the practical benefit of steering scarce resources toward higher pay-off areas for detecting cooperation between the groups.

3.3. PIE composition: Watch points and indicators

The first step for PIE is to identify those areas where terror-crime collaborations are most likely to occur. To prepare this environment, PIE asks investigators and analysts to engage in three preliminary analyses: first, to map where particular criminal and terrorist groups are likely to be operating, both in physical geographic terms and in the information space of traditional and electronic media; second, to develop typologies for the behavior patterns of the groups and, when possible, their broader networks (often represented chronologically as a timeline); and third, to detail the organizations of specific crime and terror groups and, as feasible, their networks.

The geographical areas where terrorists and criminals are highly likely to be cooperating are known in IPB parlance as named areas of interest, or localities that are highly likely to support military operations. In PIE they are referred to as watch points.

A critical function of PIE is to set sensible priorities for analysts.

The second step of a PIE analysis concentrates on the watch points to identify named areas of interaction where overlaps between crime and terror groups are most likely. The PIE method expresses areas of interest geographically but remains focused on the overlap between terrorism and organized crime.

the three preliminary analyses mentioned above are deconstructed into watch points, which are broad categories of potential crime-terror interactions.

the use of PIE leads to the early detection of named areas of interest through the analysis of watch points, providing investigators the means of concentrating their focus on terror-crime interactions and thereby enhancing their ability to detect possible terrorist planning.

The third and final step is the collection and analysis of information that indicates organizational, operational or other nodes whereby criminals and terrorists appear to interact. While watch points are broad categories, they are composed of specific indicators of how organized criminals and terrorists might cooperate. These specific patterns of behavior help to confirm or deny that a watch point is applicable.

If several indicators are present, or if the indicators are particularly clear, this bolsters the evidence that a particular type of terror-crime interaction is present. No single indicator is likely to provide ‘smoking gun’ evidence of a link, although examples of this have occasionally arisen. Instead, PIE is a holistic approach that collects evidence systematically in order to make timely predictions of an affiliation, or not, between specific criminal and terrorist groups.

For policy analysts and planners, indicators reduce the sampling risk that is unavoidable for anyone collecting seemingly disparate and unrelated pieces of evidence. For investigators, indicators serve as a means of constructing probable cause. Indeed, even negative evidence of interaction has the practical benefit of helping investigators and analysts manage their scarce resources more efficiently.
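
To make the relationship between watch points and indicators concrete, the sketch below encodes a few of the watch points named later in this report, each with a handful of indicators drawn loosely from the discussion that follows, and simply counts how many indicators a given evidence set triggers. The data structure and the counting rule are assumptions made for illustration; the report prescribes no particular automation, and several triggered indicators bolster rather than prove an interaction.

```python
# Illustrative sketch (not from the report): watch points as named sets of
# indicators, with a simple count of triggered indicators per watch point.
from dataclasses import dataclass

@dataclass
class WatchPoint:
    name: str
    indicators: tuple   # indicator labels paraphrased from the report's discussion

WATCH_POINTS = [
    WatchPoint("Open activities in the legitimate economy",
               ("shared travel patterns", "shared courier services", "shared front company")),
    WatchPoint("Shared illicit nodes",
               ("same document forger", "same arms supplier", "same money launderer")),
    WatchPoint("Financial transactions & money laundering",
               ("shared laundering method", "mutual use of a known dirty bank")),
]

def assess(observed: set) -> dict:
    """Return, for each watch point, how many of its indicators were observed."""
    return {wp.name: sum(i in observed for i in wp.indicators) for wp in WATCH_POINTS}

# Example evidence set for a hypothetical named area of interaction.
evidence = {"same document forger", "mutual use of a known dirty bank"}
for name, hits in assess(evidence).items():
    print(f"{name}: {hits} indicator(s) triggered")
```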

3.4 The PIE approach in practice: Two Cases

the process began with the collection of relevant information (scanning) that was then placed into the larger context of watch points and indicators (codification) in order to produce the aforementioned analytical insights (abstraction).

 

Each case will describe how the TraCCC team shared (diffusion) its findings in order to obtain validation and to have an impact on practitioners fighting terrorism and/or organized crime.

3.4.1 The Georgia Case

In 2003-4, TraCCC used the PIE approach to identify one of the largest money laundering cases ever successfully prosecuted. The PIE method helped close down a major international vehicle for money laundering. The ability to organize the financial records of a major money launderer allowed the construction of a detailed network map that revealed linkages among major criminal groups whose relationships had not previously been acknowledged.

Some of the information most pertinent to Georgia included, but was not limited to, the following:

  1. Corrupt Georgian officials held high law enforcement positions prior to the Rose Revolution and maintained ties to crime and terror groups that allowed them to operate with impunity;
  2. Similar patterns of violence were found among organized crime and terrorist groups operating in Georgia;
  3. Numerous banks, corrupt officials and other providers of illicit goods and services assisted both organized crime and terrorists; and
  4. Regions of the country supported criminal infrastructures useful to organized crime and terrorists alike, including Abkhazia, Ajaria and Ossetia.

Combined with numerous other pieces of information and placed into the PIE watch point structure, the resulting analysis triggered a sufficient number of indicators to suggest that further analysis was warranted to try to locate a crime-terror interaction.

 

The second step of the PIE analysis was to examine information within the watch points for connections that would suggest patterns of interaction between specific crime and terror groups. These points of interaction are identified in the Black Sea case study, but the most successful identification came from analysis of the watch point that specifically examined the financial environment facilitating the link between crime and terrorism.

The TraCCC team began its investigation within this watch point by identifying the sectors of the Georgian economy that were most conducive to economic crime and money laundering. This included such sectors as energy, railroads and banking. All of these sectors were found to be highly criminalized.

Only researchers with knowledge of the economic climate, the nature of the business community, and the banking sector could determine that investigative resources needed to be concentrated on the “G” bank. Knowing the terrain, the newly established financial investigative unit of the Central Bank focused its attention on the “G” bank. A six-month analysis of the bank and its transactions enabled the development of a massive network analysis that facilitated prosecution in Georgia and may lead to prosecutions in major financial centers that were previously unable to address certain crime groups, at least one of which was linked to a terrorist group.

Using PIE allowed a major intelligence breakthrough.

First, it located a large facilitator of dirty money. Second, the approach was able to map fundamental connections between crime and terror groups. Third, the analysis highlighted the enormous service that purely “dirty banks” housed in countries with small economies can provide to transnational crime and even terrorism.

While specific details must remain sealed in deference to ongoing legal proceedings, to date the PIE analysis has grown into investigations in Switzerland, the US, and Georgia.

the PIE approach is one that favors the construction and prosecution of viable cases.

the PIE approach is a platform for starting and later focusing investigations. When coupled with investigative techniques like network analysis, the PIE approach supports the construction and eventual prosecution of cases against organized crime and terrorist suspects.

3.4.2 Russian Closed Cities

In early 2005, a US government agency asked TraCCC to identify how terrorists are potentially trying to take advantage of organized crime groups and corruption to obtain fissile material in a specific region of Russia—one that is home to a number of sensitive weapons facilities located in so-called “closed cities.” The project team assembled a wealth of information concerning the presence and activities of both criminal and terror groups in the region in question, but was left with the question of how best to organize the data and develop significant conclusions.

The project’s information supported connections in 11 watch points, including:

  • A vast increase in the prevalence of violence in the region, especially in economic sectors with close ties to organized crime;
  • Commercial ties in the drug trade between crime groups in the region and Islamic terror groups formerly located in Afghanistan;
  • Rampant corruption in all levels of the regional government and law enforcement mechanisms, rendering portions of the region nearly ungovernable;
  • The presence of numerous regional and transnational crime groups as well as recruiters for Islamic groups on terrorist watch lists.

employment of the watch points generated leads to important connections that were not readily apparent until they were placed into the larger context of the PIE analytical framework. Specifically, the analysis might not have included evidence of trust links and cultural ties between crime and terror groups had the PIE approach not explained their utility.

When the TraCCC team applied PIE to the closed cities case, it found that using these technologies reduced the time spent analyzing data while improving the analytical rigor of the task. For example, structured queries of databases and online search engines provided information quickly. Likewise, network mapping improved analytical rigor by codifying the links between numerous actors (e.g., crime groups, terror groups, workers at weapons facilities and corrupt officials) in local, regional and transnational contexts.
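
The network mapping mentioned above can be approximated with standard graph tooling. The sketch below uses the networkx library to link a handful of actors and then lists connecting paths between a crime group and a terror-linked actor; the entities, relations, and choice of library are illustrative assumptions, not details drawn from the closed-cities data.

```python
# Illustrative link-analysis sketch using networkx; all entities are hypothetical.
import networkx as nx

G = nx.Graph()
edges = [
    ("crime group A", "corrupt official", "bribery"),
    ("crime group A", "facility employee", "extortion"),
    ("terror recruiter", "corrupt official", "document purchase"),
    ("terror recruiter", "diaspora courier", "recruitment"),
    ("facility employee", "diaspora courier", "family tie"),
]
for src, dst, relation in edges:
    G.add_edge(src, dst, relation=relation)

# Paths between a crime group and a terror actor suggest nodes worth examining.
for path in nx.all_simple_paths(G, "crime group A", "terror recruiter", cutoff=3):
    print(" -> ".join(path))
```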

3.5 Emergent behavior and automation

The dynamic nature of crime and terror groups complicates the IPB to PIE transition. The spectrum of cooperation demonstrates that crime-terror intersections are emergent phenomena.

PIE must have feedback loops to cope with the emergent behavior of crime and terror groups

when the project team spoke with analysts and investigators, the one deficiency they noted was their limited ability to conduct strategic intelligence analysis given their operational tempo.

  4. The terror-crime interaction spectrum

In formulating PIE, we recognized that crime and terrorist groups are more diverse in nature than military units. They may be networks or hierarchies, they have a variety of cultures rather than a disciplined code of behavior, and their goals are far less clear. Hoffman notes that terrorist groups can be further categorized based on their organizational ideology.

Other researchers have found significant evidence of interaction between terrorism and organized crime, often in support of the general observation that while their methods might converge, the basic motives of crime and terror groups would serve to keep them at arm’s length—thus the term “methods, not motives.”41 Indeed, the differences between the two are plentiful: terrorists pursue political or religious objectives through overt violence against civilians and military targets. They turn to crime for the money they need to survive and operate.

Criminal groups, on the other hand, are focused on making money. Any use of violence tends to be concealed, and is generally focused on tactical goals such as intimidating witnesses, eliminating competitors or obstructing investigators.

In a corrupt environment, the two groups find common cause.

Terrorists often find it expedient, even necessary, to deal with outsiders to get funding and logistical support for their operations. As such interactions are repeated over time, concerns arise that criminal and terrorist organizations will integrate and might even form new types of organizations.

Support for this point can be found in the seminal work of Sutherland, who has argued that the “intensity and duration” of an association with criminals makes an individual more likely to adopt criminal behavior. In conflict regions, where there is intensive interaction between criminals and terrorists, there is more shared behavior and an ongoing process of mutual learning.

The dynamic relationship between international terror and transnational crime has important strategic implications for the United States.

The result is a model known as the terror-crime interaction spectrum that depicts the relationship between terror and criminal groups and the different forms it takes.

Each form of interaction represents different, yet specific, threats, as well as opportunities for detection by law enforcement and intelligence agencies.

An interview with a retired member of the Chicago organized crime investigative unit revealed that it had investigated taxi companies and taxicab owners as cash-based money launderers. Logic suggests that terrorists may also be benefiting from the scheme. But this line of investigation was not pursued in the 9/11 investigations although two of the hijackers had worked as taxi drivers.

Within the spectrum, processes we refer to as activity appropriation, nexus, symbiotic relationship, hybrid, and transformation illustrate the different forms of interaction between a terrorist group and an organized crime group, as well as the behavior of a single group engaged in both terrorism and organized crime.

While activity appropriation does not represent organizational linkages between crime and terror groups, it does capture the merger of methods that was well documented in section 2. Activity appropriation is one way that terrorists are exposed to organized crime activities and, as Chris Dishman has noted, can lead to a transformation of terror cells into organized crime groups.

Applying the Sutherland principle of differential association, these activities are likely to bring a terror group into regular contact with organized crime. As terrorists attempt to acquire forged documents, launder money, or pay bribes, it is a natural step to draw on the support and expertise of a criminal group, which is likely to have more experience in these activities. This step is referred to here as a nexus.

terrorists first engage in “do it yourself” organized crime and then turn to organized crime groups for specialized services like document forgery or money laundering.

In most cases a nexus involves the criminals providing goods and services to terrorists for payment, although it can work in both directions. A nexus is typically a short-term relationship; it does not imply that the criminals share the ideological views of the terrorists, merely that the transaction offers benefits to both sides.

After all, they have many needs in common: safe havens, false documentation, evasive tactics, and other strategies to lower the risk of being detected. In Latin America, transnational criminal gangs have employed terrorist groups to guard their drug processing plants. In Northern Ireland, terrorists have provided protection for human smuggling operations by the Chinese Triads.

If the nexus continues to benefit both sides over a period of time, the relationship will deepen. More members of both groups will cooperate, and the groups will create structures and procedures for their business transactions, transfer skills and/or share best practices. We refer to this closer, more sustained cooperation as a symbiotic relationship, and define it as a relationship of mutual benefit or dependence.

In the next stage, the two groups continue to cooperate over a long period and members of the organized crime group begin to share the ideological goals of the terrorists. They grow increasingly alike and finally they merge. That process results in a hybrid or dark network49 that has been memorably described as terrorist by day and criminal by night.50 Such an organization engages in criminal acts but also has a political agenda. Both the criminal and political ends are furthered by the use of violence and corruption.
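
For reference, the stages just described can be written down as an ordered enumeration, as in the brief sketch below. The enumeration and its one-line glosses are an illustrative summary only; as the next paragraph stresses, movement along the spectrum is neither inevitable nor uniform.

```python
# Ordered sketch of the terror-crime interaction spectrum described above.
# The enumeration is illustrative; progression between stages is not automatic.
from enum import IntEnum

class InteractionStage(IntEnum):
    ACTIVITY_APPROPRIATION = 1   # a group borrows the other's methods on its own
    NEXUS = 2                    # short-term purchase of goods or services
    SYMBIOTIC_RELATIONSHIP = 3   # sustained cooperation of mutual benefit or dependence
    HYBRID = 4                   # one organization with both criminal and political agendas
    TRANSFORMATION = 5           # the group fully changes character

print(InteractionStage.NEXUS < InteractionStage.HYBRID)   # True: a nexus precedes a hybrid
```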

These developments are not inevitable, but result from a series of opportunities that can lead to the next stage of cooperation. It is important to recognize, however, that even once the two groups have reached the point of hybrid, there is no reason per se to suspect that transformation will follow. Likewise, a group may persist with borrowed methods indefinitely without ever progressing to cooperation. In Italy and elsewhere, crime groups that also engaged in terrorism never found a terrorist partner and thus remained at the activity appropriation stage. Eventually they ended their terrorist activities and returned to the exclusive pursuit of organized crime.

Interestingly, the TraCCC team found no example where a terrorist group engaging in organized crime, either through activity appropriation or through an organizational linkage, came into conflict with a criminal group.51 Neither archival sources nor our interviews revealed such a conflict over “turf,” though logic would suggest that organized crime groups would react to such forms of competition.

The spectrum does not create exact models of the evolution of criminal-terrorist cooperation. Indeed, the evidence presented both here and in prior studies suggests that a single evolutionary path for crime-terror interactions does not exist. Environmental factors outside the control of either organization and the varied requirements of specific organized crime or terrorist groups are but two of the reasons that interactions appear more idiosyncratic than generalizable.

Using the PIE method, investigators and analysts can gain an understanding of the terror-crime intersection by analyzing evidence sourced from communications, financial transactions, organizational charts, and behavior. They can also apply the methodology to analyze watch points where the two entities may interact. Finally, using physical, electronic, and data surveillance, they can develop indicators showing where watch points translate into practice.

  5. The significance of terror-crime interactions in geographic terms

Some shared characteristics arose from examining this case. First, both neighborhoods shared similar diaspora compositions and a lack of effective or interested policing. Second, both terror cells had strong connections to the shadow economy.

the case demonstrated that each cell shared three factors—poor governance, a sense of ethnic separation amongst the cell (supported by the nature of the larger diaspora neighborhoods), and a tradition of organized crime.

U.S. intelligence and law enforcement are naturally inclined to focus on manifestations of organized crime and terrorism in their own country, but they would benefit from studying and assessing patterns and behavior of crime in other countries as well as areas of potential relevance to terrorism.

When turning to the situation overseas, one can differentiate between longstanding crime groups and their more recently formed counterparts according to their relationship to the state. With the exception of Colombia, rarely do large, established (i.e., “traditional”) crime organizations link with terrorists. These groups possess long-held financial interests that would suffer should the structures of the state and the international financial community come to be undermined. Through corruption and movement into the lawful economy, these groups minimize the risk of prosecution and therefore do not fear the power of state institutions.

Developing countries with weak economies, a lack of social structures, many desperate, hungry people, and a history of unstable government are both relatively likely to provide ideological and economic foundations for both organized crime and terrorism within their borders and relatively unlikely to have much capacity to combat either of them. Conflict zones have traditionally provided tremendous opportunities for smuggling and corruption and reduced oversight capacities, as regulatory and enforcement resources become almost solely directed at military targets. They are therefore especially vulnerable to both serious organized crime and violent activity directed at civilian populations for political goals, as well as to cooperation between those engaging in purely criminal activities and those engaging in politically motivated violence.

Post-conflict zones are also likely to spawn such cooperation, as such areas often retain weak enforcement capacity for some time following an end to formal hostilities.

these patterns of criminal behavior and organization can arise in areas as diverse as overseas conflict zones (and can then replicate once participants arrive in the U.S.) and neighborhoods in U.S. cities. The problematic combination of poor governance, ethnic separation from the larger society, and a tradition of (frequently international) criminal activity is the primary concern behind this broad taxonomy of geographic locales for crime-terror interaction.

  6. Watch points and indicators

Taking the evidence of cooperation between organized crime and terrorism, we have generated 12 specific areas of interaction, which we refer to as watch points. In turn these watch points are subdivided into a number of indicators that point out where interaction between terror and crime may be taking place.

These watch points cover a variety of habits and operating modes of organized crime and terrorist groups.

We have organized our watch points into three categories: environmental, organizational, and behavioral. Each of the following sections details one of the twelve watch points.

 

Watch Point 1: Open activities in the legitimate economy

Watch Point 2: Shared illicit nodes

Watch Point 3: Communications

Watch Point 4: Use of information technology (IT)

Watch Point 5: Violence

Watch Point 6: Use of corruption

Watch Point 7: Financial transactions & money laundering

Watch Point 8: Organizational structures

Watch Point 9: Organizational goals

Watch Point 10: Culture

Watch Point 11: Popular support

Watch Point 12: Trust

 

6.1. Watch Point 1: Open activities in the legitimate economy

The many indicators of possible links include habits of travel, the use of mail and courier services, and the operation of fronts.

Organized crime and terror may be associated with subterfuge and secrecy, but both types of groups engage legitimate society quite openly for particular political purposes. Yet in the first instance, criminal groups are likely to leave greater “traces” than terrorist groups, especially when they operate in societies with functioning governments.

Terrorist groups usually seek to make common cause with segments of society that will support their goals, particularly the very poor and the disadvantaged. Terrorists usually champion repressed or disenfranchised ethnic and religious minorities, describing their terrorist activities as mechanisms to pressure the government for greater autonomy and freedom, even independence, for these minorities… they openly take responsibility for their attacks, but their operational mechanisms are generally kept secret, and any ongoing contacts they may have with legitimate organizations are carefully hidden.

Criminal groups, like terrorists, may have political goals. For example, such groups may seek to strengthen their legitimacy through donating some of their profits to charity. Colombian drug traffickers are generous in their support of schools and local sports teams.5

criminals of all types could scarcely carry out criminal activities, maintain their cover, and manage their money flows without conducting legal transactions with legitimate businesses.

Travel: Frequent use of passenger carriers and shipping companies is a potential indicator of illicit activity. Clues can be gleaned from almost any pattern of travel that can be identified as such.

Mail and courier services: Indicators of interaction are present in the tracking information on international shipments of goods, which also generate customs records. Large shipments require bills of lading and other documentation. Analysis of such transactions, cross-referenced with information on crime databases, can identify links between organized crime and terrorist groups.

Fronts: A shared front company or mutual connections to legitimate businesses are clearly also indicators of interaction.

Watch Point 2: Shared illicit nodes

 

The significance of overt operations by criminal groups should not be overstated. Transnational crime and terror groups alike carry out their operations for the most part with illegal and undercover methods. There are many similarities in these tactics. Both organized criminals and terrorists need forged passports, driver’s licenses, and other fraudulent documents. Dishonest accountants and bankers help criminals launder money and commit fraud. Arms and explosives, training camps and safe houses are other goods and services that terrorists obtain illicitly.

Fraudulent Documents. Groups of both types may use the same sources of false documents, or the same techniques, indicating cooperation or overlap. A criminal group often develops an expertise in false document production as a business, expanding production and building a customer base.

 

Some of the 9/11 hijackers fraudulently obtained legitimate driver’s licenses through a fraud ring based at a Department of Motor Vehicles (DMV) office in the Virginia suburbs of Washington, DC. According to an INS investigator, this ring was under investigation well before the 9/11 attacks, but there was insufficient political will inside the INS to take the case further.

Arms Suppliers. Both terror and organized crime might use the same supplier, or the same distinctive method of doing business, such as bartering weapons or drugs. In 2001 the Basque terror group ETA contracted with factions of the Italian Camorra to obtain missile launchers and ammunition in return for narcotics.

Financial experts. Bankers and financial professionals who assist organized crime might also have terrorist affiliations. The methods of money laundering long used by narcotics traffickers and other organized crime have now been adopted by some terrorist groups.

 

Drug Traffickers. Drug trafficking is the single largest source of revenues for international organized crime. Substantial criminal groups often maintain well-established smuggling routes to distribute drugs. Such an infrastructure would be valuable to terrorists who purchased weapons of mass destruction and needed to transport them.

 

Other Criminal Enterprises. An increasing number of criminal enterprises outside of narcotics smuggling are serving the financial or logistical ends of terror groups and thus serve as nodes of interaction. For example, piracy on the high seas, a growing threat to maritime commerce, often depends on the collusion of port authorities, which are controlled in many cases by organized crime.

These relationships are particularly characteristic of developed countries with effective law enforcement, since criminals there obviously need to be more cautious and often restrict their operations to covert activity. In conflict zones, however, criminals of all types feel far less restraint about flaunting their illegal activities, since there is little chance of being detected or apprehended.

Watch Point 3: Communications

 

The Internet, mobile phones and satellite communications enable criminals and terrorists to communicate globally in a relatively secure fashion. FARC, in concert with Colombian drug cartels, offered training on how to set up narcotics trafficking businesses and used secure websites and email to handle registration.

Such scenarios are neither hypothetical nor anecdotal. Interviews with an analyst at the US Drug Enforcement Administration revealed that narcotics cartels were increasingly using encryption in their digital communications. In turn, the agent interviewed stated that the same groups were frequently turning to information technology experts to provide them with encryption to help secure their communications.

Nodes of interaction therefore include:

  • Technical overlap: Examples exist where organized crime groups opened their illegal communications systems to any paying customer, thus providing a service to other criminals and terrorists among others. For example, a recent investigation found clandestine telephone exchanges in the Tri-Border region of South America that were connected to Jihadist networks. Most were located in Brazil, since calls between Middle Eastern countries and Brazil would elicit less suspicion and thus less chance of electronic eavesdropping.
  • Personnel overlap: Crime and terror groups may recruit the same high-tech specialists to their cause. Given their ability to encrypt messages, criminals of all kinds may rely on outsiders to carry the message. Smuggling networks all have operatives who can act as couriers, and terrorists have networks of sympathizers in ethnic diasporas who can also help.

Watch Point 4: Use of information technology (IT)

 

Organized crime has devised IT-based fraud schemes such as online gambling, securities fraud, and pirating of intellectual property. Such schemes appeal to terror groups, too, particularly given the relative anonymity that digital transactions offer. Investigators of the 2002 Bali disco bombing found that the laptop computer of the ringleader, Imam Samudra, contained a primer he authored on how to use online fraud to finance operations. Evidence of terror groups’ involvement in such schemes provides a significant set of indicators of cooperation or overlap.

Indicators of possible cooperation or nodes of interaction include:

  • Fundraising: Online fraud schemes and other uses of IT for obtaining ill-gotten gains are already well established among organized crime groups, and terrorists are following suit. Such IT-assisted criminal activities serve as another node of overlap for crime and terror groups, and thus expand the area of observation beyond the brick-and-mortar realm into cyberspace (i.e., investigators now expect to find evidence of collaboration on the Internet or in email as much as through telephone calls or postal services).

  • Use of technical experts: While no evidence exists that criminals and terrorists have directly cooperated to conduct cybercrime or cyberterrorism, they are often served by the same technical experts.

Watch Point 5: Violence

 

Violence is not so much a tactic of terrorists as their defining characteristic. These acts of violence are designed to obtain publicity for the cause, to create a climate of fear, or to provoke political repression, which they hope will undermine the legitimacy of the authorities. Terrorist attacks are deliberately highly visible in order to enhance their impact on the public consciousness. Indiscriminate violence against innocent civilians is therefore more readily ascribed to terrorism.

no examples exist where terrorists have engaged criminal groups for violent acts.

A more significant challenge lies in trying to discern generalities about organized crime’s patterns of violence. Categorizing patterns of violence according to their scope or their promulgation is suspect. In the past, crime groups have used violence selectively and quietly to achieve some goals, but have also used violence broadly and loudly to achieve others. Nor can one categorize organized crime’s violence according to goals, as social, political and economic considerations often overlap in every attack or campaign.

Violence is therefore an important watch point that may not yield specific indicators of crime-terror interaction per se but can serve to frame the likelihood that an area might support terror-crime interaction.

Watch Point 6: Use of corruption

 

Both terrorists and organized criminals bribe government officials to undermine the work of law enforcement and regulation. Corrupt officials assist criminals by exerting pressure on businesses that refuse to cooperate with organized crime groups, or by providing passports for terrorists. The methods of corruption are diverse on both sides and include payments, the provision of illegal goods, the use of compromising information to extort cooperation, and outright infiltration of a government agency or other target.

Many studies have demonstrated that organized crime groups often evolve in places where the state cannot guarantee law and order, or provide basic health care, education, and social services. The absence of effective law enforcement combines with rampant corruption to make well-organized criminals nearly invulnerable.

Colombia may be the only example of a conflict zone where a major transnational crime group with very large profits is directly and openly connected to terrorists. The interaction between the FARC and ELN terror groups and the drug syndicates provides crucial financial resources for the guerillas to operate against the Colombian state – and against one another. This is facilitated by universal corruption, from top government officials to local police. Corruption has served as the foundation for the growth of the narcotics cartels and insurgent/terrorist groups.

In the search for indicators, it would be simplistic to look for a high level of corruption, particularly in conflict zones. Instead, we should pose a series of questions:

  • Cooperation Are terrorist and criminal groups working together to minimize cost and maximize leverage from corrupt individuals and institutions?

  • Division of labor Are terrorist and criminal groups purposefully corrupting the areas they have most contact with? In the case of crime groups, that would be law enforcement and the judiciary; in the case of terrorists, the intelligence and security services.

  • Autonomy Are corruption campaigns carried out by one or both groups completely independent of the other?

These indicators can be applied to analyze a number of potential targets of corruption. Personnel who can provide protection or services are often mentioned as targets of corruption; examples include law enforcement, the judiciary, border guards, politicians and elites, internal security agents, and consular officials. Economic aid and foreign direct investment are also targeted by criminals and terrorists as sources of funds that they can access by means of corruption.

 

Watch Point 7: Financial transactions & money laundering

 

despite the different purposes that may be involved in their respective uses of financial institutions (organized crime seeking to turn illicit funds into licit funds; terrorists seeking to move licit funds to use them for illicit ends), the groups tend to share a common infrastructure for carrying out their financial activities. Both types of groups need reliable means of moving and laundering money in many different jurisdictions, and as a result, both use similar methods to move money internationally. Both use charities and front groups as a cover for money flows.

Possible indicators include:

  • Shared methods of money laundering
  • Mutual use of known front companies and banks, as well as financial experts.

Watch Point 8: Organizational structures

 

The traditional model of organized crime used by U.S. law enforcement is that of the Sicilian Mafia – a hierarchical, conservative organization embedded in the traditional social structures of southern Italy… among today’s organized crime groups the Sicilian mafia is more of an exception than the rule.

Most organized crime now operates not as a hierarchy but as a decentralized, loose-knit network – which is a crucial similarity to terror groups. Networks offer better security, make intelligence-gathering more efficient, cover geographic distances and span diverse memberships more effectively.

Membership dynamics Both terror and organized crime groups – with the exception of the Sicilian Mafia and other traditional crime groups (e.g., the Yakuza) – are made up of members with loose, relatively short-term affiliations to each other and even to the group itself. They can readily be recruited by other groups. By this route, criminals have become terrorists.

Scope of organization Terror groups need to make constant efforts to attract and recruit new members. Obvious attempts to attract individuals from crime groups are a clear indication of cooperation. A phone conversation intercepted in May 2004 involving a suspected terrorist named Rabei Osman Sayed Ahmed revealed his recruitment tactics: “You should also know that I have met other brothers, that slowly I have created with a few things. First, they were drug pushers, criminals, I introduced them to the faith and now they are the first ones who ask when the moment of the jihad will be…”

Need to buy, wish to sell Often the business transactions between the two sides operate in both directions. Terrorist groups are not just customers for the services of organized crime, but often act as suppliers, too. Arms supply by terrorists is particularly marked in certain conflict zones. Thus, any criminal group found to be supplying outsiders with goods or services should be investigated for its client base too.

Investigators who discovered the money laundering in the above example were able to find out more about the terrorists’ activities too. The Islamic radical cell that planned the Madrid train bombings of 2004 was required to support itself financially through a business venture despite its initial funding by Al Qaeda.

Watch Point 9: Organizational goals

 

In theory, their different goals are what set terrorists apart from the perpetrators of organized crime. Terrorist groups are most often associated with political ends, such as change in leadership regimes or the establishment of an autonomous territory for a subnational group. Even millenarian and apocalyptic terrorist groups, such as the science-fiction mystics of Aum Shinrikyo, often include some political objectives. Organized crime, on the other hand, is almost always focused on personal enrichment.

By cataloging the different – and shifting – goals of terror and organized crime groups, we can develop indicators of convergence or divergence. This will help identify shared aspirations or areas where these aims might bring the two sides into conflict. On this basis, investigators can ask what conditions might prompt either side to adopt new goals or to fall back to basic goals, such as self-preservation.

Long view or short-termism

Affiliations of protagonists

 

Watch Point 10: Culture

 

Both terror and criminal groups use ideologies to maintain their internal identity and provide external justifications for their activities. Religious terror groups adopt and may alter the teachings of religious scholars to suggest divine support for their cause, while Italian, Chinese, Japanese, and other organized crime groups use religious and cultural themes to win public acceptance. Both types use ritual and tradition to construct and maintain their identity. Tattoos, songs, language, and codes of conduct are symbolic to both.

Religious affiliations, strong nationalist sentiments and strong roots in the local community are often characteristics that cause organized criminals to shun any affiliation with terrorists. Conversely, the absence of such affiliations means that criminals have fewer constraints keeping them from a link with terrorists.

In any organization, culture connects and strengthens ties between members. For networks, cultural features can also serve as a bridge to other networks.

  • Religion Many criminal and terrorist groups feature religion prominently.
  • Nationalism Ethno-nationalist insurgencies and criminal groups with deep historical roots are particularly likely to play the nationalist card.
  • Society Many criminal and terrorist networks adapt cultural aspects of the local and regional societies in which they operate to include local tacit knowledge, as contained in narrative traditions. Manuel Castells notes the attachment of drug traffickers to their country, and to their regions of origin. “They were/are deeply rooted in their cultures, traditions, and regional societies. …they have also revived local cultures, rebuilt rural life, strongly affirmed their religious feeling, and their beliefs in local saints and miracles, supported musical folklore (and were rewarded with laudatory songs from Colombian bards)…”

Watch Point 11: Popular support

 

Both organized crime and terrorist groups engage legitimate society in furtherance of their own agendas. In conflict zones, this may be done quite openly, while under the rule of law they are obliged to do so covertly. One way of doing so is to pay lip service to the interests of certain ethnic groups or social classes. Organized crime is particularly likely to make an appeal to disadvantaged people or people in certain professions through paternalistic actions that make the criminals a surrogate for the state. For instance, the Japanese Yakuza crime groups provided much-needed assistance to the citizens of Kobe after the serious earthquake there. Russian organized crime habitually supports cultural groups and sports troupes.

 

Both crime and terror groups derive crucial power and prestige through the support of their members and of some segment of the public at large. This may reflect enlightened self-interest, when people see that the criminals are acting on their behalf and improving their well-being and personal security. But it is equally likely that people are simply afraid to resist a violent criminal group in their neighborhood.

This quest for popular support and common cause suggests various indicators:

  • Sources Terror groups seek and sometimes obtain the assistance of organized crime based on the perceived worthiness of the terrorist cause, or because of their common cause against state authorities or other sources of opposition. In testimony before the U.S. House Committee on International Relations, Interpol Secretary General Ronald Noble made this point. One of his examples was that Lebanese syndicates in South America send funds to Hezbollah.
  • Means Groups that cooperate may have shared activities for gaining popular support such as political parties, labor movements, and the provision of social services.
  • Places In conflict zones where the government has lost authority to criminal groups, social welfare and public order might be maintained by the criminal groups that hold power.

 

Watch Point 12: Trust

Like business corporations, terrorist and organized crime groups must attract and retain talented, dedicated, and loyal personnel. Such personnel are at an even greater premium than in the legitimate economy because criminals cannot recruit openly. A further challenge is that law enforcement and intelligence services are constantly trying to infiltrate and dismantle criminal networks. Members’ allegiance to any such group is constantly tested and demonstrated through rituals such as initiation rites…

We propose three forms of trust in this context, using as a basis Newell and Swan’s model for interpersonal trust within commercial and academic groups.94

Companion trust based on goodwill or personal friendships… In this context, indicators of terror-crime interaction would be when members of the two groups use personal bonds based on family, tribe, and religion to cement their working relationship. Efforts to recruit known associates of the other group, or in common recruiting pools such as diasporas, would be another indicator.

Competence trust, which Newell and Swan define as the degree to which one person depends upon another to perform the expected task.

Commitment or contract trust, where all actors understand the practical importance of their role in completing the task at hand.

  7. Case studies

7.1. The Tri-Border Area of Paraguay, Brazil, and Argentina

Chinese Triads such as the Fuk Ching, Big Circle Boys, and Flying Dragons are well established in Ciudad del Este (CDE) and are believed to be the main force behind organized crime there.

CDE is also a center of operations for several terrorist groups, including Al Qaeda, Hezbollah, Islamic Jihad, Gamaa Islamiya, and FARC.

Watch points

Crime and terrorism in the Tri-Border Area interact seamlessly, making it difficult to draw a clean line between the types of persons and groups involved in each of these two activities. There is no doubt, however, that the social and economic conditions allow groups that are originally criminal in nature and groups whose primary purpose is terrorism to function and interact freely.

Organizational structure

Evidence from CDE suggests that some of the local structures used by both groups are highly likely to overlap. There is no indication, however, of any significant organizational overlap between the criminal and terrorist groups. Their cooperation, when it exists, is ad hoc and without any formal or lasting agreements, i.e., activity appropriation and nexus forms only.

Organizational goals

In this region, the short-term goals of criminals and terrorists converge. Both benefit from easy border crossings and the networks necessary to raise funds.

Culture

Cultural affinities between criminal and terrorist groups in the Tri-Border Area include shared ethnicities, languages, and religions.

It emerged that 400 to 1000 kilograms of cocaine may have been shipped on a monthly basis through the Tri-Border Area on its way to Sao Paulo and thence to the Middle East and Europe.

Numerous arrests revealed the strong ties between entrepreneurs in CDE and criminal and potentially terrorist groups. From the evidence in CDE it seems that the two phenomena operate in rather separate cultural realities, focusing their operations within ethnic groups. Nor, however, does culture serve as a major hindrance to cooperation between organized crime and terrorists.

Illicit activities and subterfuge

The evidence in CDE suggests that terrorists see it as logical and cost-effective to use the skills, contacts, communications and smuggling routes of established criminal networks rather than trying to gain the requisite experience and knowledge themselves. Likewise, terrorists appear to recognize that to strike out on their own risks potential turf conflicts with criminal groups.

There is a clear link between Hong Kong-based criminal groups that specialize in large-scale trafficking of counterfeit products such as music albums and software, and the Hezbollah cells active in the Tri-Border Area. Within their supplier-customer relationship, the Hong Kong crime groups smuggle contraband goods into the region and deliver them to Hezbollah operatives, who in turn profit from their sale. The proceeds are then used to fund the terrorist groups.

Open activities in the legitimate economy

The knowledge and skills potential of CDE is tremendous. While no specific examples exist to connect terrorist and criminal groups through the purchase of legal goods and services, it is obvious that the likelihood of this is high, given how the CDE economy is saturated with organized crime.

Support or sustaining activities

The Tri-Border Area has an unusually large and efficient transport infrastructure, which naturally assists organized crime. In turn, the many criminals and terrorists operating under cover require a sophisticated and reliable document forgery industry. The ease with which these documents can be obtained in CDE is an indicator of cooperation between terrorists and criminals.

Brazilian intelligence services have evidence that Osama bin Laden visited CDE in 1995 and met with the members of the Arab community in the city’s mosque to talk about his experience as a mujahadeen fighter in the Afghan war against the Soviet Union.

Use of violence

Contract murder in CDE costs as little as one thousand dollars, and the frequent violence in CDE is directed at business people who refuse to bend to extortion by terror groups. Ussein Mohamed Taiyen, president of the CDE Chamber of Commerce, was one such victim—murdered because he refused to pay the tax.

Financial transactions and money laundering

In 2000, money laundering in the Tri-Border Area was estimated at 12 billion U.S. dollars annually.

As many as 261 million U.S. dollars annually has been raised in the Tri-Border Area and sent overseas to fund the terrorist activities of Hezbollah, Hamas, and Islamic Jihad.

Use of corruption

Most of the illegal activities in the Tri-Border Area bear the hallmark of corruption. In combination with the generally low effectiveness of state institutions, especially in Paraguay, and high level of corruption in that country, CDE appears to be a perfect environment for the logistical operations of both terrorists and organized criminals.

Even the few bona fide anti-corruption attempts made by the Paraguayan government have been undermined by the pervasive corruption, another example being the attempts to crack down on the Chinese criminal groups in CDE. The Consul General of Taiwan in CDE, Jorge Ho, stated that the Chinese groups were successful in bribing Paraguayan judges, effectively neutralizing law enforcement moves against the criminals.122

The other watch points described earlier – including fund raising and use of information technology – can also be illustrated with similar indicators of possible cooperation between terror and organized crime.

In sum, for the investigator or analyst seeking examples of perfect conditions for such cooperation, the Tri-Border Area is an obvious choice.

7.2. Crime and terrorism in the Black Sea region

Illicit or veiled operations

Cigarette, drug, and arms smuggling has been a major source of financing for all the terrorist groups in the region.

Cigarette and alcohol smuggling has fueled the Kurdish-Turkish conflict as well as the terrorist violence in both the Abkhaz and Ossetian conflicts.

From the very beginning, the Chechen separatist movement had close ties with the Chechen crime rings in Russia, mainly those operating in Moscow. These crime groups provided, and some of them still provide, financial support for the insurgents.

  8. Conclusion and recommendations

The many examples in this report of cooperation between terrorism and organized crime make clear that the links between these two potent threats to national and global security are widespread, dynamic, and dangerous. It is only rational to consider the possibility that an effective organized crime group may have a connection with terrorists that has gone unnoticed so far.

Our key conclusion is that crime is not a peripheral issue when it comes to investigating possible terrorist activity. Efforts to analyze the phenomenon of terrorism without considering the crime component undermine all counter-terrorist activities, including those aimed at protecting sites containing weapons of mass destruction.

Yet the staffs of intelligence and law enforcement agencies in the United States are already overwhelmed. Their common complaint is that they do not have the time to analyze the evidence they possess, or to eliminate unnecessary avenues of investigation. The problem is not so much a dearth of data, but the lack of suitable tools to evaluate that data and make optimal decisions about when, and how, to investigate further.

Scrutiny and analysis of the interaction between terrorism and organized crime will become a matter of routine best practice. Awareness of the different forms this interaction takes, and the dynamic relationship between them, will become the basis for crime investigations, particularly for terrorism cases.

In conclusion, our overarching recommendation is that crime analysis must be central to understanding the patterns of terrorist behavior and cannot be viewed as a peripheral issue.

For policy analysts:

  1. More detailed analysis of the operation of illicit economies where criminals and terrorists interact would improve understanding of how organized crime operates, and how it cooperates with terrorists. Domestically, more detailed analysis of the businesses where illicit transactions are most common would help investigation of organized crime – and its affiliations. More focus on the illicit activities within closed ethnic communities in urban centers and in prisons in developed countries would prove useful in addressing potential threats.
  2. Corruption overseas, which is so often linked to facilitating organized crime and terrorism, should be elevated to a U.S. national security concern with an operational focus. After all, many jihadists are recruited because they are disgusted with the corrupt governments in their home countries. Corruption has facilitated the commission of criminal acts such as the Chechen suicide bombers who bribed airport personnel to board aircraft in Moscow.
  3. Analysts must study patterns of organized crime-terrorism interaction as guidance for what may be observed subsequently in the United States.
  4. Intelligence and law enforcement agencies need more analysts with the expertise to understand the motivations and methods of criminal and terrorist groups around the globe, and with the linguistic and other skills to collect and analyze sufficient data.

For investigators:

  1. The separation of criminals and terrorists is not always as clear-cut as many investigators believe. Criminal and terrorist groups are often indistinguishable in conflict zones and in prisons.
  2. The hierarchical structure and conservative habits of the Sicilian Mafia no longer serve as an appropriate model for organized crime investigations. Most organized crime groups now operate as loose networked affiliations. In this respect they have more in common with terrorist groups.
  3. The PIE method provides a series of indicators that can result in superior profiles and higher-quality risk analysis for law enforcement agencies both in the United States and abroad. The approach can be refined with sensitive or classified information.
  4. Greater cooperation between the military and the FBI would allow useful sharing of intelligence, such as the substantial knowledge on crime and illicit transactions gleaned by the counterintelligence branch of the U.S. military that is involved in conflict regions where terror-crime interaction is most profound.
  5. Law enforcement personnel must develop stronger working relationships with the business sector. In the past, there has been too little cognizance of possible terrorist-organized crime interaction among the clients of private-sector business corporations and banks. Law enforcement must pursue evidence of criminal affiliations with high status individuals and business professionals who are often facilitators of terrorist financing and money laundering. In the spirit of public-private partnerships, corporations and banks should be placed under an obligation to watch for indications of organized crime or terrorist activity by their clients and business associates. Furthermore, they should attempt to analyze what they discover and to pass on their assessment to law enforcement.
  6. Law enforcement personnel posted overseas by federal agencies such as the DEA, the Department of Justice, the Department of Homeland Security, and the State Department’s Bureau of International Narcotics and Law Enforcement should be tasked with helping to develop a better picture of the geography of organized crime and its most salient features (i.e., the watch points of the PIE approach). This should be used to assist analysts in studying patterns of crime behavior that put American interests at risk overseas and alert law enforcement to crime patterns that may shortly appear in the U.S.
  7. Training for law enforcement officers at federal, state, and local level in identifying authentic and forged passports, visas, and other documents required for residency in the U.S. would eliminate a major shortcoming in investigations of criminal networks.

 

A.1. Defining the PIE Analytical Process

In order to begin identifying the tools to support the analytical process, the process of analysis itself first had to be captured. The TraCCC team adopted Max Boisot’s (2003) I-Space as a representation for describing the analytical process. As Figure A-1 illustrates, I-Space provides a three-dimensional representation of the cognitive steps that constitute analysis in general and the utilization of the PIE methodology in particular. The analytical process is reduced to a series of logical steps, with one step feeding the next until the process starts anew. The steps are:

  1. Scanning
  2. Codification
  3. Abstraction
  4. Diffusion
  5. Validation
  6. Impacting

Over time, repeated iterations of these steps result in more and more PIE indicators being identified, more information being gathered, more analytical product being generated, and more recommendations being made. Boisot’s I-Space is described below in terms of law enforcement and intelligence analytical processes.

A.1.1. Scanning

The analytical process begins with scanning, which Boisot defines as the process of identifying threats and opportunities in generally available but often fuzzy data. For example, investigators often scan available news sources, organizational data sources (e.g., intelligence reports) and other information feeds to identify patterns or pieces of information that are of interest. Sometimes this scanning is performed with a clear objective in mind (e.g., set up through profiles to identify key players). From a tools perspective, scanning with a focus on a specific entity like a person or a thing is called a subject-based query. At other times, an investigator is simply reviewing incoming sources for pieces of a puzzle that is not well understood at that moment. From a tools perspective, scanning with a focus on activities like money laundering or drug trafficking is called a pattern-based query. For this type of query, a specific subject is not the target, but a sequence of actors/activities that form a pattern of interest.

Many of the tools described herein focus on either:

o Helping an investigator build models for these patterns and then comparing those models against the data to find ‘matches’, or

o Supporting automated knowledge discovery where general rules about interesting patterns are hypothesized and then an automated algorithm is employed to search through large amounts of data based on those rules.

The choice between subject-based and pattern-based queries is dependent on several factors including the availability of expertise, the size of the data source to be scanned, the amount of time available and, of course, how well the subject is understood and anticipated. For example, subject-based queries are by nature more tightly focused and thus are often best conducted through keyword or Boolean searches, such as a Google search containing the string “Bin Laden” or “Abu Mussab al-Zarqawi.” Pattern-based queries, on the other hand, support a relationship/discovery process, such as an iterative series of Google searches starting at ‘with all of the words’ terrorist, financing, charity, and hawala, proceeding through ‘without the words’ Hezbollah and Iran and culminating in ‘with the exact phrase’ Al Qaeda Wahabi charities. Regardless of which is employed, the results provide new insights into the problem space. The construction, employment, evaluation, and validation of results from these various types of scanning techniques will provide a focus for our tool exploration.
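
To make the distinction concrete, the following minimal Python sketch contrasts a subject-based query (Boolean keyword matching) with a pattern-based query (require some terms, exclude others, optionally demand an exact phrase) over a toy document store. The documents, helper names, and search terms are invented for illustration and do not correspond to any tool described in this report.

```python
# Toy document store; contents are purely illustrative.
DOCS = [
    {"id": 1, "text": "charity wire transfers routed through a hawala broker"},
    {"id": 2, "text": "press report on terrorist financing via front charity"},
    {"id": 3, "text": "local court case on cigarette smuggling and extortion"},
]

def subject_query(docs, *keywords):
    """Subject-based scan: Boolean AND over keywords."""
    return [d for d in docs if all(k.lower() in d["text"].lower() for k in keywords)]

def pattern_query(docs, all_of=(), none_of=(), phrase=None):
    """Pattern-based scan: require some terms, exclude others,
    and optionally demand an exact phrase."""
    hits = []
    for d in docs:
        text = d["text"].lower()
        if all(t in text for t in all_of) and not any(t in text for t in none_of):
            if phrase is None or phrase.lower() in text:
                hits.append(d)
    return hits

print(subject_query(DOCS, "charity"))                       # subject focus
print(pattern_query(DOCS, all_of=("charity", "financing"),  # pattern focus
                    none_of=("cigarette",)))
```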

A.1.2. Codification

In order for the insights that result from scanning to be of use to the investigator, they must be placed into the context of the questions that the investigator is attempting to answer. This context provides structure through a codification process that turns disconnected patterns into coherent thoughts that can be more easily communicated to the community. The development of indicators is an example of this codification. Building up network maps from entities and their relationships is another example that could support indicator development. Some important tools will be described that support this codification step.

A.1.3. Abstraction

During the abstraction phase, investigators generalize the application of newly codified insights to a wider range of situations, moving from the specific examples identified during scanning and codification towards a more abstract model of the discovery (e.g., one that explains a large pattern of behavior or predicts future activities). Indicators are placed into the larger context of the behaviors that are being monitored. Tools that support the generation and maintenance of the models underlying this abstraction process will be key to making the analysis of an overwhelming number of possibilities and unlimited information manageable.

A.1.4. Diffusion

Many of the intelligence failures cited in the 9/11 Report were due to the fact that information and ideas were not shared. This was due to a variety of reasons, not the least of which were political. Technology also built barriers to cooperation, however. Information can only be shared if one of two conditions is met. Either the sender and receiver must share a context (a common language, background, understanding of the problem) or the information must be coded and abstracted (see steps 2 and 3 above) to extract it from the personal context of the sender to one that is generally understood by the larger community. Once this is done, the newly created insights of one investigator can be shared with investigators in sister groups.

The technology for the diffusion itself is available through any number of sources, ranging from repositories where investigators can share information to real-time on-line cooperation. Tools that take advantage of this technology include distributed databases, peer-to-peer cooperation environments and real-time meeting software (e.g., shared whiteboards).

A.1.5. Validation

In this step of the process, the hypotheses that have been formed and shared are now validated over time, either by a direct match of the data against the hypotheses (i.e., through automation) or by working towards a consensus within the analytical community. Some hypotheses will be rejected, while others will be retained and ranked according to probability of occurrence. In either case, tools are needed to help make this match and form this consensus.
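
As an illustration only, the sketch below shows one simple way the automated side of this matching could work: competing hypotheses carry prior weights, a piece of evidence adjusts them via a Bayesian-style update, and the results are ranked and retained or rejected. The hypotheses, priors, likelihoods, and retention threshold are invented; no tool discussed in this report is implied to work exactly this way.

```python
# Hedged sketch: ranking competing hypotheses as evidence arrives.
priors = {"crime-terror nexus active": 0.3,
          "independent criminal activity": 0.5,
          "no significant activity": 0.2}

# P(evidence | hypothesis) for one piece of evidence, e.g. a shared courier.
likelihoods = {"crime-terror nexus active": 0.8,
               "independent criminal activity": 0.3,
               "no significant activity": 0.05}

def update(priors, likelihoods):
    """Bayesian-style update: posterior proportional to prior * likelihood."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

posteriors = update(priors, likelihoods)
for hypothesis, p in sorted(posteriors.items(), key=lambda kv: -kv[1]):
    status = "retain" if p > 0.1 else "reject"
    print(f"{hypothesis}: {p:.2f} ({status})")
```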

A.1.6. Impacting

Simply validating a set of hypotheses is not enough. If the intelligence gathering community stops at that point, the result is a classified CNN feed to the policy makers and practitioners. The results of steps 1 through 5 must be mapped against the opposing landscape of terrorism and transnational crime in order to understand how the information impacts the decisions that must be taken. In this final step, investigators work to articulate how the information/hypotheses they are building impact the overall environment and make recommendations on actions (e.g., probes) that might be taken to clarify that environment. The consequences of the actions taken as a result of the impacting phase are then identified during the scanning phase and the cycle begins again.

A.1.7. An Example of the PIE Analytical Approach

While section 4 provided some real-life examples of the PIE approach in action, a retrodictive analysis of terror-crime cooperation in the extraction, smuggling, and sale of conflict diamonds provides a grounding example of Boisot’s six-step analytical process. Diamonds from West Africa had been a source of funding for various factions in the Lebanese civil war since the 1980s. Beginning in the late 1990s, intelligence, law enforcement, regulatory, non-governmental, and press reports suggested that individuals linked to transnational criminal smuggling and Middle Eastern terrorist groups were involved in Liberia’s illegal diamond trade. We would expect to see the following from an investigator assigned to track terrorist financing:

  1. Scanning: During this step investigators could have assembled fragmentary reports to reveal crude patterns that indicated terror-crime interaction in a specific region (West Africa), involving two countries (Liberia and Sierra Leone) and trade in illegal diamonds.
  2. Codification: Based on patterns derived from scanning, investigators could have codified the terror- crime interaction by developing explicit network maps that showed linkages between Russian arms dealers, Russian and South American organized crime groups, Sierra Leone insurgents, the government of Liberia, Al Qaeda, Hezbollah, Lebanese and Belgian diamond merchants, and banks in Cyprus, Switzerland, and the U.S.
  3. Abstraction: The network map developed via codification is essentially static at this point. Utilizing social network analysis techniques, investigators could have abstracted this basic knowledge to gain a dynamic understanding of the conflict diamond network. A calculation of degree, betweenness, and closeness centrality of the conflict diamond network would have revealed those individuals with the most connections within the network, those who were the links between various subgroups within the network, and those with the shortest paths to reach all of the network participants (a toy version of this calculation is sketched after this list). These calculations would have revealed that all the terrorist links in the conflict diamond network flowed through Ibrahim Bah, a Libyan-trained Senegalese who had fought with the mujahadeen in Afghanistan and whom Charles Taylor, then President of Liberia, had entrusted to handle the majority of his diamond deals. Bah arranged for terrorist operatives to buy all diamonds possible from the RUF, the Charles Taylor-supported rebel army that controlled much of neighboring civil-war-torn Sierra Leone. The same calculations would have delineated Taylor and his entourage as the key link to transnational criminals in the network, and the link between Bah and Taylor as the essential mode of terror-crime interaction for purchase and sale of conflict diamonds.
  4. Diffusion: Disseminating the results of the first three analytical steps in this process could have alerted investigators in other domestic and foreign law enforcement and intelligence agencies to the emergent terror-crime nexus involving conflict diamonds in West Africa. Collaboration between various security services at this juncture could have revealed Al Qaeda’s move into commodities such as diamonds, gold, tanzanite, emeralds, and sapphires in the wake of the Clinton Administration’s freezing of 240 million dollars belonging to Al Qaeda and the Taliban in Western banks in the aftermath of the August 1998 attacks on the U.S. embassies in Kenya and Tanzania. In particular, diffusion of the parameters of the conflict diamond network could have allowed investigators to tie Al Qaeda fund raising activities to a Belgian bank account that contained approximately 20 million dollars of profits from conflict diamonds.
  5. Validation: Having linked Al Qaeda, Hezbollah, and multiple organized crime groups to the trade in conflict diamonds smuggled into Europe from Sierra Leone via Liberia, investigators would have been able to draw operational implications from the evidence amassed in the previous steps of the analytical process. For example, Al Qaeda diamond purchasing behavior changed markedly. Prior to July 2001 Al Qaeda operatives sought to buy low in Africa and sell high in Europe so as to maximize profit. Around July they shifted to a strategy of buying all the diamonds they could and offering the highest prices required to secure the stones. Investigators could have contrasted these buying patterns and hypothesized that Al Qaeda was anticipating events which would disrupt other stores of value, such as financial instruments, as well as bring more scrutiny of Al Qaeda financing in general.
  6. Impacting: In the wake of the 9/11 attacks, the hypothesis that Al Qaeda engaged in asset shifting prior to those strikes similar to that undertaken in 1999 has gained significant validity. During this final step in the analytical process, investigators could have created a watch point involving a terror-crime nexus associated with conflict diamonds in West Africa, and generated the following indicators for use in future investigations:
  • Financial movements and expenditures as attack precursors;
  • Money as a link between known and unknown nodes;
  • Changes in the predominant patterns of financial activity;
  • Criminal activities of a terrorist cell for direct or indirect operational support;
  • Surge in suspicious activity reports.
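
The centrality calculations named in step 3 can be illustrated with a toy network. The sketch below assumes the open-source networkx library and uses placeholder nodes and edges rather than a reconstruction of the actual conflict diamond network.

```python
import networkx as nx

# Toy stand-in for the conflict diamond network; nodes and edges are
# placeholders, not case data.
G = nx.Graph()
G.add_edges_from([
    ("Bah", "RUF"), ("Bah", "Taylor"), ("Bah", "AQ buyer"),
    ("Bah", "Hezbollah buyer"), ("Taylor", "Arms dealer"),
    ("Taylor", "Diamond merchant"), ("Diamond merchant", "Bank"),
])

degree = nx.degree_centrality(G)            # who has the most connections
betweenness = nx.betweenness_centrality(G)  # who bridges the subgroups
closeness = nx.closeness_centrality(G)      # who can reach everyone fastest

for node in sorted(G, key=lambda n: -betweenness[n]):
    print(f"{node:17s} degree={degree[node]:.2f} "
          f"betweenness={betweenness[node]:.2f} closeness={closeness[node]:.2f}")
```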

A.2. The tool space

The key to successful tool application is understanding what type of tool is needed for the task at hand. In order to better characterize the tools for this study, we have divided the tool space into three dimensions:

  • An abstraction dimension: This continuum focuses on tools that support the movement of concepts from the concrete to the abstract. Building models is an excellent example of moving concrete, narrow concepts to a level of abstraction that can be used by investigators to make sense of the past and predict the future.
  • A codification dimension: This continuum attaches labels to concepts that are recognized and accepted by the analytical community to provide a common context for grounding models. One end of the spectrum is the local labels that individual investigators assign and perhaps only that they understand. The other end of the spectrum is the community-accepted labels (e.g., commonly accepted definitions that will be understood by the broader analytical community). As we saw earlier, concepts must be defined in community-recognizable labels before the community can begin to cooperate on those concepts.
  • The number of actors: This last continuum is defined in terms of the number of actors who are involved with a given concept within a certain time frame. Actors could include individual people, groups, and even automated software agents. Understanding the number of actors involved with the analysis will play a key role in determining what type of tool needs to be employed.

Although they may appear to be performing the same function, abstraction and codification are not the same. An investigator could build a set of models (moving from concrete to abstract concepts) but not take the step of changing his or her local labels. The result would be an abstracted model of use to the single investigator, but not to a community working from a different context. For example, one investigator could model a credit card theft ring as a petty crime network under the loose control of a traditional organized crime family, while another investigator could model the same group as a terrorist logistic support cell.

The analytical process described above can now be mapped into the three-dimensional tool space, represented graphically in Figure A-1. So, for example, scanning (step 1) is placed in the portion of the tool space that represents an individual working in concrete terms without those terms being highly codified (e.g., queries). Validation (step 5), on the other hand, requires the cooperation of a larger group working with abstract, highly codified concepts.

A.2.1. Scanning tools

Investigators responsible for constructing and monitoring a set of indicators could begin by scanning available data sources – including classified databases, unclassified archives, news archives, and internet sites – for information related to the indicators of interest. As can be seen from exhibit 6, all scanning tools will need to support requirements dictated by where these tools fall within the tool space. Scanning tools should focus on:

  • How to support an individual investigator as opposed to the collective analytical community. Investigators, for the most part, will not be performing these scanning functions as a collaborative effort;
  • Uncoded concepts: since the investigator is scanning for information that is directly related to a specific context (e.g., money laundering), he or she will need to be intimately familiar with the terms that are local (uncoded) to that context;
  • Concrete concepts or, in this case, specific examples of people, groups, and circumstances within the investigator’s local context. In other words, if the investigator attempts to generalize at this stage, much could be missed.

Using these criteria as a background, and leveraging state-of-the-art definitions for data mining, scanning tools fall into two basic categories:

  • Tools that support subject-based queries are used by investigators when they are searching for specific information about people, groups, places, events, etc.; and
  • Tools that support pattern-based queries are used by investigators who are less interested in individuals than in identifying patterns of activities.

This section briefly describes the functionality in general, as well as providing specific tool examples, to support both of these critical types of scanning.

A.2.1.1. Subject-based queries

Subject-based queries are the easiest to perform and the most popular. Examples of tools that are used to support subject-based queries are Boolean search tools for databases and internet search engines.

When selecting subject-based query tools, several functionalities should be evaluated. The tools should be easy to use and intuitive to the investigator: rather than facing a bewildering array of ‘ifs’, ‘ands’, and ‘ors’, the investigator should be presented with a query interface that matches his or her cognitive view of searching the data, the ideal being a natural language interface for constructing queries. Another benefit is a graphical interface wherever possible; one example might be an interface that allows the investigator to define subjects of interest and then uses overlapping circles to indicate the interdependencies among the search terms. Furthermore, query interfaces should support synonyms, have an ability to ‘learn’ from the investigator based on specific interests, and create an archive of queries so that the investigator can return and repeat them. Finally, they should provide a profiling capability that alerts the investigator when new information is found on the subject.

Subject-based query tools fall into three categories: queries against databases, internet searches, and customized search tools. Examples of tools for each of these categories include:

  • Queries from news archives: All major news groups provide web-based interfaces that support queries against their on-line data sources. Most allow you to select the subject, enter keywords, specify date ranges, and so on. Examples include the New York Times (at http://www.nytimes.com/ref/membercenter/nytarchive.html) and the Washington Post (at http://pqasb.pqarchiver.com/washingtonpost/search.html). Most of these sources allow you to read through the current issue, but charge a subscription for retrieving articles from past issues.
  • Queries from on-line references: There are a host of on-line references now available for query that range from the Encyclopedia Britannica (at http://www.eb.com/) to the CIA’s World Factbook (at http://www.cia.gov/cia/publications/factbook/). A complete list of such references is impossible to include, but the search capabilities provided by each are clear examples of subject-based queries.
  • Search engines: Just as with queries against databases, there are a host of commercial search engines available for free-format internet searching. The most popular is Google, which combines a technique called citation indexing with web crawlers that constantly search out and index new web pages. Google broke the mold of free-format text searching by not focusing on exact matches between the search terms and the retrieved information. Rather, Google assumes that the most popular pages (the ones that are referenced the most often) that include your search terms will be the pages of greatest interest to you. The commercial version of Google is available free of charge on the internet, and organizations can also purchase a version of Google for indexing pages on an intranet. Google also works in many languages. More information about Google as a business solution can be found at http://www.google.com/services/. Although the current version of Google supports many of the requirements for subject-based queries, its focus is quick search and it does not support sophisticated query interfaces, natural language queries, synonyms, or a managed query environment where queries can be saved. There are now numerous software packages available that provide this level of support, many of them as add-on packages to existing applications.

o Name Search®: This software enables applications to find, identify and match information. Specifically, Name Search finds and matches records based on personal and corporate names, social security numbers, street addresses and phone numbers even when those records have variations due to phonetics, missing words, noise words, nicknames, prefixes, keyboard errors or sequence variations. Name Search claims that searches using its rule-based matching algorithms are faster and more accurate than those based only on Soundex or similar techniques. Soundex, developed by Odell and Russell, uses codes based on the sound of each letter to translate a string into a canonical form of at most four characters, preserving the first letter (a simplified Soundex implementation is sketched after this entry).

Name Search also supports foreign languages, technical data, medical information, and other specialized information. Other problem-specific packages take advantage of the Name Search functionality through an Application Programming Interface (API) (i.e., Name Search is bundled). An example is ISTwatch. See http://www.search-software.com/.
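
For reference, the sketch below implements a common simplified variant of the Soundex coding just described: the first letter is preserved, the remaining letters are mapped to digits, adjacent duplicate codes are collapsed, and the result is padded to four characters. It is illustrative only and is not the matching algorithm used by Name Search.

```python
def soundex(name: str) -> str:
    """A common simplified Soundex variant: keep the first letter, encode
    the remaining letters as digits, collapse adjacent duplicate codes,
    and pad or truncate the result to four characters."""
    codes = {}
    for letters, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                           ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in letters:
            codes[ch] = digit
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    result = name[0]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        digit = codes.get(ch, "")   # vowels, H, W, Y contribute no digit
        if digit and digit != prev:
            result += digit
        prev = digit
    return (result + "000")[:4]

# Phonetically similar spellings map to the same code.
print(soundex("Mohammed"), soundex("Mohamed"))   # M530 M530
```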

o ISTwatch©: ISTwatch is a software component suite that was designed specifically to search and match individuals against the Office of Foreign Assets Control’s (OFAC’s) Specially Designated Nationals list and other denied parties lists. These include the FBI’s Most Wanted, Canada’s OSFI terrorist lists, the Bank of England’s consolidated lists and Financial Action Task Force data on money-laundering countries. See http://www.intelligentsearch.com/ofac_software/index.html.

All these tools are packages designed to be included in an application. A final set of subject-based query tools focuses on customized search environments. These are tools that have been customized to perform a particular task or operate within a particular context. One example is WebFountain.

o WebFountain: IBM’s WebFountain began as a research project focused on extending subject-based query techniques beyond free format text to target money-laundering activities identified through web sources. The WebFountain project, a product of IBM’s Almaden research facility in California, used advanced natural language processing technologies to analyze the entire internet – the search covered 256 terabytes of data in the process of matching a structured list of people who were indicted for money laundering activities in the past with unstructured information on the internet. If a suspicious transaction is identified and the internet analysis finds a relationship between the person attempting the transaction and someone on the list, then an alert is issued. WebFountain has now been turned into a commercially available IBM product. Robert Carlson, IBM WebFountain vice president, describes the current content set as over 1 petabyte in storage with over three billion pages indexed, two billion stored, and the ability to mine 20 million pages a day. The commercial system also works across multiple languages. Carlson stated in 2003 that it would cover 21 languages by the end of 2004 [Quint, 2003]. See: http://www.almaden.ibm.com/webfountain

o Memex: Memex is a suite of tools that was created specifically for law enforcement and national security groups. The focus of these tools is to provide integrated search capabilities against both structured (i.e., databases) and unstructured (i.e., documents) data sources. Memex also provides a graphical representation of the process the investigator is following, structuring the subject-based queries. Memex’s marketing literature states that over 30 percent of the intelligence user population of the UK uses Memex. Customers include the Metropolitan Police Service (MPS), whose Memex network includes over 90 dedicated intelligence servers providing access to over 30,000 officers; the U.S. Department of Defense; and numerous U.S. intelligence agencies, drug intelligence groups and law enforcement agencies. See http://www.memex.com/index.shtml.

A.2.1.2. Pattern queries

Pattern-based queries focus on supporting automated knowledge discovery (1) where the exact subject of interest is not known in advance and (2) where what is of interest is a pattern of activity emerging over time. In order for pattern queries to be formed, the investigator must hypothesize about the patterns in advance and then use tools to confirm or deny these hypotheses. This approach is useful when there is expertise available to make reasonable guesses with respect to the potential patterns. Conversely, when that expertise is not available or the potential patterns are unknown due to extenuating circumstances (e.g., new patterns are emerging too quickly for investigators to formulate hypotheses), then investigators can automate the construction of candidate patterns by formulating a set of rules that describe how potentially interesting, emerging patterns might appear. In either case, tools can help support the production and execution of the pattern queries. The degree of automation is dependent upon the expertise available and the dynamics of the situation being investigated.

As indicated earlier, pattern-based query tools fall into two general categories: those that support investigators in constructing patterns based on their expertise and then running those patterns against large data sets, and those that allow the investigator to build rules about patterns of interest and, again, run those rules against large data sets.
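
A minimal sketch of the rule-driven end of this spectrum: an investigator encodes a hypothesized pattern (here, several transfers just under a reporting threshold within a short window) as a rule and runs it against a data set. The records, threshold, and rule parameters are invented for illustration and do not describe any specific tool.

```python
from collections import defaultdict
from datetime import date

# Toy transfer records; values are purely illustrative.
TRANSFERS = [
    {"account": "A-17", "amount": 9500, "day": date(2004, 3, 1)},
    {"account": "A-17", "amount": 9800, "day": date(2004, 3, 2)},
    {"account": "A-17", "amount": 9900, "day": date(2004, 3, 4)},
    {"account": "B-02", "amount": 4000, "day": date(2004, 3, 1)},
]

def structuring_rule(records, threshold=10_000, margin=0.95,
                     window_days=7, min_count=3):
    """Flag accounts with several transfers just under the threshold
    inside a short window (the hypothesized pattern of interest)."""
    by_account = defaultdict(list)
    for r in records:
        if margin * threshold <= r["amount"] < threshold:
            by_account[r["account"]].append(r["day"])
    flagged = []
    for account, days in by_account.items():
        days.sort()
        if len(days) >= min_count and (days[-1] - days[0]).days <= window_days:
            flagged.append(account)
    return flagged

print(structuring_rule(TRANSFERS))   # ['A-17']
```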

Examples of tools for each of these categories include:

  1. Megaputer (PolyAnalyst 4.6): This tool falls into the first category of pattern-based query tools, helping the investigator hypothesize patterns and explore the data based on those hypotheses. PolyAnalyst is a tool that supports a particular type of pattern-based query called Online Analytical Processing (OLAP), a popular analytical approach for large amounts of quantitative data. Using PolyAnalyst, the investigator defines dimensions of interest to be considered in text exploration and then displays the results of the analysis across various combinations of these dimensions. For example, an investigator could search for mujahideen who had trained at the same Al Qaeda camp in the 1990s and who had links to Pakistani Intelligence as well as opium growers and smuggling networks into Europe. See http://www.megaputer.com/.
  2. Autonomy Suite: Autonomy’s search capabilities fall into the second category of pattern-based query tools. Autonomy has combined technologies that employ adaptive pattern-matching techniques with Bayesian inference and Claude Shannon’s principles of information theory. Autonomy identifies the patterns that naturally occur in text, based on the usage and frequency of words or terms that correspond to specific ideas or concepts as defined by the investigator. Based on the preponderance of one pattern over another in a piece of unstructured information, Autonomy calculates the probability that a document in question is about a subject of interest [Autonomy, 2002] (a toy illustration of this kind of frequency-based scoring appears after this list). See http://www.autonomy.com/content/home/
  3. Fraud Investigator Enterprise: The Fraud Investigator Enterprise Similarity Search Engine (SSE) from InfoGlide Software is another example of the second category of pattern search tools. SSE uses analytic techniques that dissect data values, looking for and quantifying partial matches in addition to exact matches. SSE scores and orders search results based upon a user-defined data model. See http://www.infoglide.com/composite/ProductsF_2_1.htm
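
The frequency-and-probability idea behind the Autonomy description in item 2 can be illustrated with a toy unigram model: the more a document's words resemble those seen in documents known to be about a concept, the higher its score for that concept. This is a simplified sketch with invented training data and smoothing, not Autonomy's actual algorithm.

```python
import math
from collections import Counter

def train(labeled_docs):
    """labeled_docs: list of (label, text). Returns per-label word counts
    and per-label word totals."""
    counts, totals = {}, Counter()
    for label, text in labeled_docs:
        words = text.lower().split()
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return counts, totals

def score(text, label, counts, totals, vocab_size=1000):
    """Log-probability of the text under a unigram model for `label`
    (add-one smoothing); higher means more likely about that concept."""
    logp = 0.0
    for w in text.lower().split():
        logp += math.log((counts[label][w] + 1) / (totals[label] + vocab_size))
    return logp

docs = [("laundering", "wire transfer shell company offshore account"),
        ("laundering", "cash deposit shell company layering"),
        ("other", "football match weekend weather report")]
counts, totals = train(docs)
query = "offshore shell company transfer"
print(score(query, "laundering", counts, totals) >
      score(query, "other", counts, totals))   # True
```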

Although an evaluation of data sources available for scanning is beyond the scope of this paper, one will serve as an example of the information available. It is hypothesized in this report that tools could be developed to support the search and analysis of Short Message Service (SMS) traffic for confirmation of PIE indicators. Often referred to as ‘text messaging’ in the U.S., SMS is an integrated message service that lets GSM cellular subscribers send and receive data using their handsets. A single short message can be up to 160 characters of text in length – words, numbers, or punctuation symbols. SMS is a store and forward service; this means that messages are not sent directly to the recipient but via a network SMS Center. This enables messages to be delivered to the recipient if their phone is not switched on or if they are out of a coverage area at the time the message was sent. This process, called asynchronous messaging, operates in much the same way as email. Confirmation of message delivery is another feature, meaning the sender can receive a return message notifying them whether the short message has been delivered or not. SMS messages can be sent to and received from any GSM phone, provided the recipient’s network supports text messaging. Text messaging is available to all mobile users and provides both consumers and business people with a discreet way of sending and receiving information.
Over 15 billion SMS text messages were sent around the globe in January 2001. Tools taking advantage of the stored messages in an SMS Center could (a brief sketch follows the list):

  • Perform searches of the text messages for keywords or phrases,
  • Analyze SMS traffic patterns, and
  • Search for people of interest in the Home Location Register (HLR) database that maintains information about the subscription profile of the mobile phone and also about the routing information for the subscriber.
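
A hedged sketch of the three capabilities listed above, run over a toy batch of stored messages; the message records, watch terms, and the stand-in for an HLR-style profile lookup are all invented.

```python
from collections import Counter

# Toy batch of stored messages; fields and contents are illustrative only.
MESSAGES = [
    {"sender": "+111", "recipient": "+222", "text": "transfer ready friday"},
    {"sender": "+111", "recipient": "+333", "text": "meet at the border crossing"},
    {"sender": "+444", "recipient": "+222", "text": "happy birthday"},
]

WATCH_TERMS = {"transfer", "border"}

# 1. Keyword search over message bodies.
hits = [m for m in MESSAGES if WATCH_TERMS & set(m["text"].lower().split())]

# 2. Simple traffic pattern: who sends to whom, and how often.
traffic = Counter((m["sender"], m["recipient"]) for m in MESSAGES)

# 3. Subscribers of interest, standing in for an HLR profile lookup.
subscribers_of_interest = {"+111"}
flagged = [m for m in MESSAGES if m["sender"] in subscribers_of_interest]

print(len(hits), traffic.most_common(1), len(flagged))
```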

A.2.2. Codification tools

As can be seen from exhibit 6, all codification tools will need to support requirements dictated by where these tools fall within the tool space. Codification tools should focus on:

  • Supporting individual investigators (or at best a small group of investigators) in making sense of the information discovered during the scanning process.
  • Moving the terms with which the information is referenced from a localized organizational context (uncoded, e.g., hawala banking) to a more global context (codified, e.g., informal value storage and transfer operations).
  • Moving that information from specific, concrete examples towards more abstract terms that could support identification of concepts and patterns across multiple situations, thus providing a larger context for the concepts being explored.

Using these criteria as a background, the codification tools reviewed fall into two major categories:

  1. Tools that help investigators label concepts and cluster different concepts into terms that are recognizable and used by the larger analytical community; and
  2. Tools that use this information to build up network maps identifying entities, relationships, missions, etc.

This section briefly describes codification functionality in general, as well as providing specific tool examples, to support both of these types of codification.

A.2.2.1. Labeling and clustering

The first step to codification is to map the context-specific terms used by individual investigators to a taxonomy of terms that are commonly accepted in a wider analytical context. This process is performed through labeling individual terms, clustering other terms and renaming them according to a community-accepted taxonomy.
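
A minimal sketch of this labeling step: local, analyst-specific terms are mapped onto a community taxonomy so that different desks end up using the same codes. The taxonomy entries and synonyms shown are invented placeholders.

```python
# Invented community taxonomy mapping accepted labels to local synonyms.
COMMUNITY_TAXONOMY = {
    "informal value transfer": {"hawala", "hundi", "fei chien"},
    "trade-based laundering": {"over-invoicing", "under-invoicing"},
}

def codify(local_term: str) -> str:
    """Return the community label for a local term, or flag it as uncoded."""
    term = local_term.lower()
    for label, synonyms in COMMUNITY_TAXONOMY.items():
        if term in synonyms:
            return label
    return f"UNCODED:{local_term}"

print(codify("hawala"))      # 'informal value transfer'
print(codify("smurfing"))    # 'UNCODED:smurfing'
```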

In general, labeling and clustering tools should:

  • Support the capture of taxonomies that are being developed by the broader analytical community;
  • Allow the easy mapping of local terms to these broader terms; and
  • Support the clustering process, either by providing algorithms for calculating the similarity between concepts or by enabling collaborative, consensus-based construction of clustered concepts.

Labeling and clustering functionality is typically embedded in applications that support analytical processes rather than provided separately as stand-alone tools.

Two examples of such products include:

COPLINK® – COPLINK began as a research project at the University of Arizona and has now grown into a commercially available application from Knowledge Computing Corporation (KCC). It is focused on providing tools for organizing vast quantities of structured and seemingly unrelated information in the law enforcement arena. See COPLINK’s commercial website at http://www.knowledgecc.com/index.htm and its academic website at the University of Arizona at http://ai.bpa.arizona.edu/COPLINK/.

Megaputer (PolyAnalyst 4.6) – In addition to supporting pattern queries, PolyAnalyst also provides a means for creating, importing and managing taxonomies, which could be useful in the codification step, and carries out automated categorization of text records against existing taxonomies.

A.2.2.2. Network mapping

Terrorists have a vested interest in concealing their relationships; they often emit confusing or intentionally misleading information, and they operate in self-contained and difficult-to-penetrate cells for much of the time. Criminal networks are also notoriously difficult to map, and the mapping more often happens after a crime has been committed than before. What is needed are tools and approaches that support the mapping of networks to represent agents (e.g., people, groups), environments, behaviors, and the relationships between all of these.

A large number of research efforts and some commercial products have been created to automate aspects of network mapping in general and link analysis specifically. In the past, however, these tools have provided only marginal utility in understanding either criminal or terrorist behavior (as opposed to espionage networks, for which this type of tool was initially developed). Often the linkages constructed by such tools are impossible to disentangle since all links have the same importance. PIE holds the potential to focus link analysis tools by clearly delineating watch points and allowing investigators to differentiate, characterize and prioritize links within an asymmetric threat network. This section focuses on the requirements dictated by PIE and some candidate tools that might be used in the PIE context.

In general, network mapping tools should:

  • Support the representation of people, groups, and the links between them within the PIE indicator framework;
  • Sustain flexibility for mapping different network structures;
  • Differentiate, characterize and prioritize links within an asymmetric threat network;
  • Focus on organizational structures to determine what kinds of network structures they use;
  • Provide a graphical interface that supports analysis;
  • Access and associate evidence with an investigator’s data sources.

Within the PIE context, investigators can use network mapping tools to identify the flows of information and authority within different types of network forms such as chains, hub and spoke, fully matrixed, and various hybrids of these three basic forms.
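
To illustrate how these basic forms differ structurally, the sketch below (assuming the open-source networkx library) builds a chain, a hub-and-spoke, and a fully matrixed network of arbitrary size and compares simple statistics an investigator might use to tell them apart.

```python
import networkx as nx

# Node counts are arbitrary; these graphs illustrate form, not a real case.
chain = nx.path_graph(6)           # chain
hub_and_spoke = nx.star_graph(5)   # hub and spoke (node 0 is the hub)
matrixed = nx.complete_graph(6)    # fully matrixed

for name, g in [("chain", chain), ("hub-and-spoke", hub_and_spoke),
                ("fully matrixed", matrixed)]:
    centrality = nx.degree_centrality(g)
    print(f"{name:15s} density={nx.density(g):.2f} "
          f"max degree centrality={max(centrality.values()):.2f}")
```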
Examples of network mapping tools that are available commercially include:

Analyst Notebook®: A PC-based package from i2 that supports network mapping/link analysis via network, timeline and transaction analysis. Analyst Notebook allows an investigator to capture link information between people, groups, activities, and other entities of interest in a visual format convenient for identifying relationships, dependencies and trends. Analyst Notebook facilitates this capture by providing a variety of tools to review and integrate information from a number of data sources. It also allows the investigator to make a connection between the graphical icons representing entities and the original data sources, supporting a drill-down feature. Some of the other useful features included with Analyst Notebook are the ability to: 1) automatically order and depict sequences of events even when exact date and time data is unknown and 2) use background visuals such as maps, floor plans or watermarks to place chart information in context or label it for security purposes. See http://www.i2.co.uk/Products/Analysts_Notebook/default.asp. Even though i2 Analyst Notebook is widely used by intelligence community, anti-terrorism and law enforcement investigators for constructing network maps, interviews with investigators indicate that it is more useful as a visual aid for briefing than in performing the analysis itself. Although some investigators indicated that they use it as an analytical tool, most seem to perform the analysis using either another tool or by hand, then enter the results into Analyst Notebook in order to generate a graphic for a report or briefing. Finally, few tools are available within Analyst Notebook to automatically differentiate, characterize and prioritize links within an asymmetric threat network.

Patterntracer TCA: Patterntracer Telephone Call Analysis (TCA) is an add-on tool for the Analyst Notebook intended to help identify patterns in telephone billing data. Patterntracer TCA automatically finds repeating call patterns in telephone billing data and graphically displays them using network and timeline charts. See http://www.i2.co.uk/Products/Analysts_Workstation/default.asp

Memex: Memex has already been discussed in the context of subject-based query tools. In addition to supporting such queries, however, Memex also provides a tool that supports automated link analysis on unstructured data and presents the results in graphical form.

Megaputer (PolyAnalyst 4.6): In addition to supporting pattern-based queries, PolyAnalyst was also designed to support a primitive form of link analysis, by providing a visual relationship of the results.

A.2.3. Abstraction tools

As can be seen from exhibit 6, all abstraction tools will need to support requirements dictated by where these tools fall within the tool space. Abstraction tools should focus on:

  • Functionalities that help individual investigators (or a small group of investigators) build abstract models;
  • Options to help share these models; the models should therefore be defined using terms that will be recognized by the larger community (i.e., codified as opposed to uncoded);
  • Highly abstract notions that encourage examination of concepts across networks, groups, and time.

The product of these tools should be hypotheses or models that can be shared with the community to support information exchange, encourage dialogue, and eventually be validated against both real-world data and by other experts. This section provides some examples of useful functionality that should be included in tools to support the abstraction process.

A.2.3.1. Structured argumentation tools

Structured argumentation is a methodology for capturing analytical reasoning processes designed to address a specific analytic task in a series of alternative constructs, or hypotheses, represented by a set of hierarchical indicators and associated evidence. Structured argumentation tools should:

  • Capture multiple, competing hypotheses of multi-dimensional indicators at both summary and/or detailed levels of granularity;
  • Develop and archive indicators and supporting evidence;
  • Monitor ongoing activities and assess the implications of new evidence;
  • Provide graphical visualizations of arguments and associated evidence;
  • Encourage a careful analysis by reminding the investigator of the full spectrum of indicators to be considered;
  • Ease argument comprehension by allowing the investigator to move along the component lines of reasoning to discover the basis and rationale of others’ arguments;
  • Invite and facilitate argument comparison by framing arguments within common structures; and
  • Support collaborative development and reuse of models among a community of investigators.

Within the PIE context, investigators can use structured argumentation tools to assess a terrorist group’s ability to weaponize biological materials, and to determine the parameters of a transnational criminal organization’s money laundering methodology.

Examples of structured argumentation tools that are available commercially include:

Structured Evidential Argument System (SEAS) from SRI International was initially applied to the problem of early warning for project management, and more recently to the problem of early crisis warning for the U.S. intelligence and policy communities. SEAS is based on the concept of a structured argument, which is a hierarchically organized set of questions (i.e., a tree structure). These are multiple-choice questions, with the different answers corresponding to discrete points or subintervals along a continuous scale, with one end of the scale representing strong support for a particular type of opportunity or threat and the other end representing strong refutation. Leaf nodes represent primitive questions, and internal nodes represent derivative questions. The links represent support relationships among the questions. A derivative question is supported by all the derivative and primitive questions below it. SEAS arguments move concepts from their concrete, local representations into a global context that supports PIE indicator construction. See http://www.ai.sri.com/~seas/.
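
The tree-of-questions idea can be sketched in a few lines: leaf (primitive) questions carry answers on a 0-to-1 support scale and derivative questions aggregate everything below them. The questions, answers, and the simple averaging rule are assumptions for illustration, not SEAS's actual scoring method.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Question:
    """A node in a structured argument: primitive if it has no children."""
    text: str
    answer: Optional[float] = None            # set on leaf (primitive) nodes
    children: List["Question"] = field(default_factory=list)

    def score(self) -> float:
        if not self.children:                 # primitive question
            return self.answer if self.answer is not None else 0.5
        # derivative question: supported by everything below it
        # (a simple mean is assumed here for illustration)
        return sum(c.score() for c in self.children) / len(self.children)

argument = Question("Is a terror-crime nexus forming in region X?", children=[
    Question("Shared smuggling routes observed?", answer=0.8),
    Question("Common financiers identified?", answer=0.6),
    Question("Evidence of joint use of forged documents?", answer=0.3),
])

print(f"overall support: {argument.score():.2f}")   # ~0.57
```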

A.2.3.2. Modeling

By capturing information about a situation (e.g., the actors, possible actions, influences on those actions, etc.) in a model, users can define a set of initial conditions, match these against the model, and use the results to support analysis and prediction. This process can either be performed manually or, if the model is complex, using an automated tool or simulator.

Utilizing modeling tools, investigators can systematically examine aspects of terror-crime interaction. Process models in particular can reveal linkages between the two groups and allow investigators to map these linkages to locations on the terror-crime interaction spectrum. Process models capture the dynamics of networks in a series of functional and temporal steps. Depending on the process being modeled, these steps must be conducted either sequentially or simultaneously in order for the process to execute as designed. For example, delivery of cocaine from South America to the U.S. can be modeled as a process that moves sequentially from the growth and harvesting of coca leaves through refinement into cocaine and then transshipment via intermediate countries into U.S. distribution points. Some of these steps are sequential (e.g., certain chemicals must be acquired and laboratories established before the coca leaves can be processed in bulk) and some can be conducted simultaneously (e.g., multiple smuggling routes can be utilized at the same time), as sketched below.
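
A minimal sketch of such a process model, using the cocaine-delivery example and assuming the open-source networkx library: steps are nodes in a directed graph, and steps that fall in the same topological "generation" can run simultaneously. Step names and edges are illustrative.

```python
import networkx as nx

# Each edge means "must happen before"; names are placeholders.
process = nx.DiGraph()
process.add_edges_from([
    ("harvest coca", "acquire chemicals"),
    ("harvest coca", "establish labs"),
    ("acquire chemicals", "refine cocaine"),
    ("establish labs", "refine cocaine"),
    ("refine cocaine", "smuggling route A"),   # routes A and B can run
    ("refine cocaine", "smuggling route B"),   # at the same time
    ("smuggling route A", "US distribution"),
    ("smuggling route B", "US distribution"),
])

# Topological generations group steps that can execute simultaneously.
for stage, steps in enumerate(nx.topological_generations(process)):
    print(stage, sorted(steps))
```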

Corruption, modeled as a process, should reveal useful indicators of cooperation between organized crime and terrorism. For example, one way to generate and validate indicators of terror-crime interaction is to place cases of corrupt government officials or private sector individuals in an organizational network construct utilizing a process model and determine if they serve as a common link between terrorist and criminal networks via an intent model with attached evidence. An intent model is a type of process model constructed by reverse engineering a specific end-state, such as the ability to move goods and people into and out of a country without interference from law enforcement agencies.

This end-state is reached by bribing certain key officials in groups that supply border guards, provide legitimate import-export documents (e.g., end-user certificates), monitor immigration flows, etc.

Depending on organizational details, a bribery campaign can proceed sequentially or simultaneously through various offices and individuals. This type of model allows analysts to ‘follow the money’ through a corruption network and link payments to officials with illicit sources. The model can be set up to reveal payments to officials that can be linked to both criminal and terrorist involvement (perhaps via individuals or small groups with known links to both types of network).

Thus, investigators can use a process model as a repository for numerous disparate data items that, taken together, reveal common patterns of corruption or sources of payments that can serve as indicators of cooperation between organized crime and terrorism (a minimal sketch of this kind of query follows the list below). Using these tools, investigators can explore multiple data dimensions by dynamically manipulating several elements of analysis:

  • Criminal and/or terrorist priorities, intent and factor attributes;
  • Characterization and importance of direct evidence;
  • Graphical representations and other multi-dimensional data visualization approaches.
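The following minimal sketch, which assumes the open-source networkx library and entirely hypothetical payment data, illustrates the kind of query such a repository supports: flagging officials whose payments trace back to both a criminal and a terrorist network.

    # Minimal sketch using networkx (hypothetical data) to flag corrupt officials
    # who receive payments traceable to both criminal and terrorist networks --
    # one possible indicator of terror-crime cooperation.

    import networkx as nx

    g = nx.DiGraph()
    # Edges represent payments or affiliations; 'kind' labels the source network.
    g.add_edge("crime_boss", "border_official", kind="criminal_payment")
    g.add_edge("terror_cell", "border_official", kind="terrorist_payment")
    g.add_edge("crime_boss", "customs_clerk", kind="criminal_payment")

    for official in ["border_official", "customs_clerk"]:
        sources = {g.edges[u, official]["kind"] for u in g.predecessors(official)}
        if {"criminal_payment", "terrorist_payment"} <= sources:
            print(official, "is a potential common link between the two networks")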

A large number of models have been built over the last several years focusing on counterterrorism and criminal activities. Some of the most promising are models that support agent-based execution of complex adaptive environments used for intelligence analysis and training. Some of the most sophisticated are now being developed to support the generation of more realistic environments and interactions for the commercial gaming market.

In general, modeling tools should:

  • Capture and present reasoning from evidence to conclusion;
  • Enable comparison of information across situation, time, and groups;
  • Provide a framework for challenging assumptions and exploring alternative hypotheses;
  • Facilitate information sharing and cooperation by representing hypotheses and analytical judgment, not just facts;
  • Incorporate the first principle of analysis—problem decomposition;
  • Track ongoing and evolving situations, collect analysis, and enable users to discover information and critical data relationships;
  • Make rigorous option space analysis possible in a distributed electronic context;
  • Warn users of potential cognitive bias inherent in analysis.

Although there are too many of these tools to list in this report, good examples of some that would be useful to support PIE include:

NETEST: This model estimates the size and shape of covert networks given multiple sources with omissions and errors. NETEST makes use of Bayesian updating techniques, communications theory and social network theory [Dombroski, 2002].

The Modeling, Virtual Environments and Simulation (MOVES) Institute at the Naval Postgraduate School in Monterey, California, is using a model of cognition formulated by Aaron T. Beck to build models capturing the characteristics of people willing to employ violence [Beck, 2002].

BIOWAR: This is a city-scale multi-agent model of weaponized bioterrorist attacks for intelligence and training. At present the model is running with 100,000 agents (this number will be increased). All agents have real social networks and the model contains real city data (hospitals, schools, etc.). Agents are as realistic as possible and contain a cognitive model [Carley, 2003a].

All of the models reviewed had similar capabilities:

  • Capture the characteristics of entities – people, places, groups, etc.;
  • Capture the relationships between entities at a level of detail that supports programmatic construction of processes, situations, actions, etc. These are usually “is a” and “a part of” representations drawn from object-oriented taxonomies, influence relationships, time relationships, and the like (a minimal sketch follows this list);
  • Represent this information in a format that supports using the model in simulations (the next section describes simulation tools in common use for running these types of models);
  • Provide user interfaces for defining the models, the best being graphical interfaces that allow the user to define the entities and their relationships through intuitive visual displays. For example, if the model involves defining networks or influences between entities, graphical displays with the ability to create connections and perform drag-and-drop actions become important.
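A minimal sketch of what such entity-and-relationship capture might look like in code is shown below; the entities, relationship types, and query are hypothetical and intended only to show the kind of structure a simulator could later ingest.

    # Minimal sketch of capturing entities and typed relationships ("is a",
    # "a part of", influence) in a form a simulator could later ingest.
    # Entity and relation names are illustrative only.

    entities = {
        "smuggling_cell": {"type": "group"},
        "courier":        {"type": "person"},
        "port_of_entry":  {"type": "place"},
    }

    relationships = [
        ("courier", "a part of", "smuggling_cell"),
        ("smuggling_cell", "is a", "criminal_network"),
        ("smuggling_cell", "influences", "port_of_entry"),
    ]

    # A simulator can query the model programmatically, e.g., list all members of a group:
    members = [s for s, rel, o in relationships if rel == "a part of" and o == "smuggling_cell"]
    print(members)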

A.2.4. Diffusion tools

As can be seen from exhibit 6, all diffusion tools will need to support requirements dictated by where these tools fall within the tool space. Diffusion tools should focus on:

  • Moving information from an individual or small group of investigators to the collective community;
  • Providing abstract concepts that are easily understood in a global context with little worry that the terms will be misinterpreted;
  • Supporting the representation of abstract concepts and encouraging dialogues about those concepts.

In general, diffusion tools should:

  • Provide a shared environment that investigators can access on the internet;
  • Support the ability for everyone to upload abstract concepts and their supporting evidence (e.g., documents);
  • Allow the person uploading the information to attach annotations and keywords;
  • Possess the ability to search concept repositories;
  • Be simple to set up and use.

Within the PIE context, investigators could use diffusion tools to:

  • Employ a collaborative environment to exchange information, results of analysis, hypotheses, models, etc.;
  • Utilize collaborative environments that might be set up between law enforcement groups and counterterrorism groups to exchange information on a continual and near real-time basis.

Examples of diffusion tools run from one end of the cooperation/dissemination spectrum to the other. One of the simplest to use is:
  • AskSam: The AskSam Web Publisher is an extension of the standalone AskSam capability that has been used by the analytical community for many years. The capabilities of AskSam Web Publisher include: 1) sharing documents with others who have access to the local network, 2) access to the AskSam archive for anyone on the network without the need for an expensive license, and 3) advanced searching capabilities, including the addition of keywords, which supports a group’s codification process (see step 2 in exhibit 6 in our analytical process). See http://www.asksam.com/.

There are some significant disadvantages to using AskSam as a cooperation environment. For example, each document included has to be ‘published’. The assumption is that there are only one or two people primarily responsible for posting documents and these people control all documents that are made available, a poor assumption for an analytical community where all are potential publishers of concepts. The result is expensive licenses for publishers. Finally, there is no web-based service for AskSam, requiring each organization to host its own AskSam server.

There are two leading commercial tools for cooperation now available and widely used. Which tool is chosen for a task depends on the scope of the task and the number of users.

  • Groove: Virtual office software that allows small teams of people to work together securely over a network on a constrained problem. Groove capabilities include: 1) the ability for investigators to set up a shared space, invite people to join, and give them permission to post documents to a document repository (i.e., file sharing), 2) security, including encryption that protects content (e.g., uploads and downloads of documents) and communications (e.g., email and text messaging), 3) the ability to work across firewalls without a virtual private network (VPN), which improves speed and makes the workspace accessible from outside an intranet, 4) the ability to work offline and then synchronize when coming back online, and 5) add-in tools to support cooperation such as calendars, email, text- and voice-based instant messaging, and project management.

Although Groove satisfies most of the basic requirements listed for this category, there are several drawbacks to using Groove for large projects. For example, there is no free-format search for text documents, and investigators cannot add their own keyword categories or attributes to the stored documents. This limits Groove’s usefulness as an information exchange archive. In addition, Groove is a fat-client, peer-to-peer architecture. This means that all participants are required to purchase a license and to download and install Groove on their individual machines. It also means that Groove requires high bandwidth for the information exchange portion of the peer-to-peer updates. See http://www.groove.net/default.cfm?pagename=Workspace.

  • SharePoint: Allows teams of people to work together on documents, tasks, contacts, events, and other information. SharePoint capabilities include: 1) text document loading and sharing, 2) free-format search capability, 3) cooperation tools including instant messaging, email, and a group calendar, and 4) security with individual and group level access control. The TraCCC team employed SharePoint for this project to facilitate distributed research and document generation. See http://www.microsoft.com/sharepoint/.

SharePoint has many of the same features as Groove, but there are fundamental underlying differences. SharePoint’s architecture is server based, with the client running in a web browser. One advantage of this approach is that each investigator is not required to download a personal version on a machine (Groove requires 60-80MB of space on each machine). In fact, an investigator can access the SharePoint space from any machine (e.g., at an airport). The disadvantage of this approach is that the investigator does not have a local version of the SharePoint information and is unable to work offline. With Groove, an investigator can work offline and then resynchronize with the remaining members of the group when the network once again becomes available. Finally, since peer-to-peer updates are not taking place, SharePoint does not necessarily require high-speed internet access, except perhaps when the investigator would like to upload large documents.

Another significant difference between SharePoint and Groove is linked to the search function. In Groove, the search capability is limited to information that is typed into Groove directly, not to documents that have been attached to Groove in an archive. SharePoint supports not only document searches but also allows the community of investigators to set up their own keyword categories to help with the codification of the shared documents (again see step 2 from exhibit 6). It should be noted, however, that SharePoint only supports searches of Microsoft documents (e.g., Word, PowerPoint, etc.) and not ‘foreign’ document formats such as PDF. This fact is not surprising given that SharePoint is a Microsoft tool.

SharePoint and Groove are commercially available cooperation solutions. There are also a wide variety of customized cooperation environments now appearing on the market. For example:

  • WAVE Enterprise Information Integration System: Modus Operandi’s Wide Area Virtual Environment (WAVE) provides tools to support real-time enterprise information integration, cooperation, and performance management. WAVE capabilities include: 1) collaborative workspaces for team-based information sharing, 2) security for controlled sharing of information, 3) an extensible enterprise knowledge model that organizes and manages all enterprise knowledge assets, 4) dynamic integration of legacy data sources and commercial off-the-shelf (COTS) tools, 5) document version control, 6) cooperation tools, including discussions, issues, action items, search, and reports, and 7) performance metrics. WAVE is not a COTS solution, however. An organization must work with Modus Operandi services to set up a custom environment. The main disadvantages of this approach, as opposed to Groove or SharePoint, are cost and the sharing of information across groups. See http://www.modusoperandi.com/wave.htm.

Finally, many of the tools previously discussed have add-ons available for extending their functionality to a group. For example:

  • iBase4: i2’s Analyst Notebook can be integrated with iBase4, an application that allows investigators to create multi-user databases for developing, updating, and sharing the source information being used to create network maps. It even includes security to restrict access or functionality by user, user groups and data fields. It is not clear from the literature, but it appears that this functionality is restricted to the source data and not the sharing of network maps generated by the investigators. See http://www.i2.co.uk/Products/iBase/default.asp

The main disadvantage of iBase4 is its proprietary format. This limitation might be somewhat mitigated by coupling iBase4 with i2’s iBridge product which creates a live connection between legacy databases, but there is no evidence in the literature that i2 has made this integration.

A.2.5. Validation tools

As can be seen from exhibit 6, all validation tools will need to support requirements dictated by where these tools fall within the tool space. Validation tools should focus on:

  • Providing a community context for validating the concepts put forward by the individual participants in the community;
  • Continuing to work within a codified realm in order to facilitate communication between different groups articulating different perspectives;
  • Matching abstract concepts against real world data (or expert opinion) to determine the validity of the concepts being put forward.

Using these criteria as background, one of the most useful toolsets available for validation is simulation. This section briefly describes that functionality in general and gives specific tool examples of simulations that ‘kick the tires’ of the abstract concepts.

Following are some key capabilities that any simulation tool must possess (a minimal driver sketch follows the list):

  • Ability to ingest the model information constructed in the previous steps of the analytical process;
  • Access to a data source for information that the model may require during execution;
  • Ability for users to define the initial conditions against which the model will be run;
  • Ability to “step through” the model execution, examining variables and resetting variable values in mid-execution (the more useful simulators support this);
  • Ability to print out step-by-step interim execution results and final results;
  • Ability to change the initial conditions and compare the results against prior runs.
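The following minimal Python sketch illustrates these capabilities in the simplest possible form; the model, its single parameter, and the comparison of runs are invented for illustration and are not drawn from any particular simulation product.

    # Minimal sketch of a generic simulation driver (hypothetical model and names).
    # It ingests a model, sets initial conditions, steps through execution while
    # logging interim state, and compares results across runs.

    from copy import deepcopy

    def run_simulation(model, initial_conditions, steps=10, inspect=None):
        """Step a model forward, logging interim state at every step."""
        state = deepcopy(initial_conditions)
        history = [deepcopy(state)]
        for t in range(steps):
            state = model(state, t)          # one execution step of the ingested model
            if inspect:
                state = inspect(t, state)    # optional mid-execution variable reset
            history.append(deepcopy(state))
        return history

    # A toy "model": money flows to an official at each step.
    def bribery_model(state, t):
        state["official_funds"] += state["payment_per_step"]
        return state

    baseline = run_simulation(bribery_model, {"official_funds": 0, "payment_per_step": 10})
    variant = run_simulation(bribery_model, {"official_funds": 0, "payment_per_step": 25})

    # Compare final results against the prior run.
    print("baseline final:", baseline[-1], "variant final:", variant[-1])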

Although there are many simulation tools available, following are brief descriptions of some of the most promising:

  • Online iLink: An optional application for i2’s Analyst Notebook that supports dynamic update of Analyst Notebook information from online data sources. Once a connection is made with an online source (e.g., LexisNexis or D&B), Analyst Notebook uses this connection to automatically check for any updated information and propagates those updates throughout the network map to support validation of the network map information. See http://www.i2inc.com.

One apparent drawback of this plug-in is that Online iLink appears to require that the online data provider deploy i2’s visualization technology.

  • NETEST: A research project from Carnegie Mellon University, which is developing tools that combine multi-agent technology with hierarchical Bayesian inference models and biased net models to produce accurate posterior representations of terrorist networks. Bayesian inference models produce representations of a network’s structure and informant accuracy by combining prior network and accuracy data with informant perceptions of a network. Biased net theory examines and captures the biases that may exist in a specific network or set of networks. Using NETEST, an investigator can estimate a network’s size, determine its membership and structure, determine areas of the network where data is missing, perform cost/benefit analysis of additional information, assess group-level capabilities embedded in the network, and pose “what if” scenarios to destabilize a network and predict its evolution over time [Dombroski, 2002].
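The following sketch, with made-up numbers and far simpler than NETEST itself, shows the basic Bayesian step being described: combining a prior belief that a network tie exists with one informant report, weighted by that informant’s assumed accuracy.

    # Minimal sketch of a single Bayesian update on whether a covert-network tie
    # exists, given one informant report (hypothetical numbers).

    def update_tie_probability(prior, informant_accuracy, reported_present=True):
        """Posterior P(tie exists | report), treating accuracy as P(correct report)."""
        if reported_present:
            likelihood_true = informant_accuracy          # informant correctly reports a real tie
            likelihood_false = 1.0 - informant_accuracy   # informant falsely reports a missing tie
        else:
            likelihood_true = 1.0 - informant_accuracy
            likelihood_false = informant_accuracy
        numerator = likelihood_true * prior
        return numerator / (numerator + likelihood_false * (1.0 - prior))

    p = 0.3                                   # prior belief that the tie exists
    p = update_tie_probability(p, 0.8, True)  # informant says the tie is present
    print(round(p, 3))                        # ~0.632 after one report

NETEST combines many such reports, along with biased-net and multi-agent models, to build a posterior picture of the whole network.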

  • Recursive Porous Agent Simulation Toolkit (REPAST): A good example of the free, open-source toolkits available for creating agent-based simulations. Begun by the University of Chicago’s social sciences research community and later maintained by groups such as Argonne National Laboratory, Repast is now managed by the non-profit volunteer Repast Organization for Architecture and Development (ROAD). Some of Repast’s features include: 1) a variety of agent templates and examples (however, the toolkit gives users complete flexibility as to how they specify the properties and behaviors of agents), 2) a fully concurrent discrete event scheduler (this scheduler supports both sequential and parallel discrete event operations), 3) built-in simulation results logging and graphing tools, 4) an automated Monte Carlo simulation framework, 5) the ability for users to dynamically access and modify agent properties, agent behavioral equations, and model properties at run time, 6) libraries for genetic algorithms, neural networks, random number generation, and specialized mathematics, and 7) built-in systems dynamics modeling.

More to the point for this investigation, Repast has social network modeling support tools. The Repast website claims that “Repast is at the moment the most suitable simulation framework for the applied modeling of social interventions based on theories and data” [Tobias, 2003]. See http://repast.sourceforge.net/.
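To show what agent-based simulation of this kind involves at its simplest, here is a generic Python sketch (not Repast’s own API) in which agents on a small, invented social network adopt a behavior spread by their neighbors; the network, adoption rule, and parameters are purely illustrative.

    # Generic agent-based simulation sketch (illustrative only; Repast provides
    # its own agent templates, scheduler, logging, and Monte Carlo support).
    # Agents adopt a behavior once enough of their neighbors have adopted it.

    import random

    random.seed(1)
    neighbors = {                       # a tiny, invented social network
        "a": ["b", "c"], "b": ["a", "c", "d"],
        "c": ["a", "b"], "d": ["b", "e"], "e": ["d"],
    }
    adopted = {"a"}                     # initial condition: one early adopter

    for step in range(5):               # time-stepped ticks
        for agent in neighbors:
            if agent in adopted:
                continue
            adopted_neighbors = sum(n in adopted for n in neighbors[agent])
            # Stochastic adoption rule: more adopting neighbors => higher chance.
            if random.random() < 0.4 * adopted_neighbors:
                adopted.add(agent)
        print(f"step {step}: adopted = {sorted(adopted)}")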

A.2.6. Impacting tools

As can be seen from exhibit 6, all impacting tools will need to support requirements dictated by where these tools fall within the tool space. Impacting tools should focus on:

  • Helping law enforcement and intelligence practitioners understand the implications of their validated models. For example, what portions of the terror-crime interaction spectrum are relevant in various parts of the world, and what is the likely evolutionary path of this phenomenon in each specific geographic area?

  • Support for translating abstracted knowledge into more concrete local execution strategies. The information flows feeding the scanning process, for example, should be updated based on the results of mapping local events and individuals to the terror-crime interaction spectrum. Watch points and their associated indicators should be reviewed, updated and modified. Probes can be constructed to clarify remaining uncertainties in specific situations or locations.

The following general requirements have been identified for impacting tools:

  • Probe management software to help law enforcement investigators and intelligence community analysts plan probes against known and suspected transnational threat entities, monitor their execution, map their impact, and analyze the resultant changes to network structure and operations.
  • Situational assessment software that supports transnational threat monitoring and projection, including data fusion and visualization algorithms that portray investigators’ current understanding of the nature and extent of terror-crime interaction and allow investigators to focus scarce collection and analytical resources on the most threatening regions and networks.

Impacting tools are only just beginning to exit the laboratory, and none of them can be considered ready for operational deployment. This type of functionality, however, is being actively pursued within the U.S. governmental and academic research communities. An example of an impacting tool currently under development is described below:

DyNet: A multi-agent network system designed specifically for assessing destabilization strategies on dynamic networks. A knowledge network (e.g., a hypothesized network resulting from Steps 1 through 5 of Boisot’s I-Space-driven analytical process) is given to DyNet as input. In this case, a knowledge network is defined as an individual’s knowledge about who they know, what resources they have, and what task(s) they are performing. The goal of an investigator using DyNet is to build stable, high-performance, adaptive networks and to conduct what-if analysis to identify successful strategies for destabilizing those networks. Investigators can run sensitivity tests examining how differences in the structure of the covert network would impact the overall ability of the network to respond to probes and attacks on constituent nodes [Carley, 2003b]. See the DyNet website hosted by Carnegie Mellon University at http://www.casos.cs.cmu.edu/projects/DyNet/.
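The sketch below, which assumes the networkx library and an invented covert network rather than DyNet itself, illustrates the flavor of such what-if analysis: remove the most central node and measure how much the network fragments.

    # Illustrative what-if destabilization test on an invented covert network
    # (networkx; not DyNet). Remove the most central node and compare how well
    # the remaining network holds together.

    import networkx as nx

    g = nx.Graph([("leader", "courier"), ("leader", "financier"),
                  ("courier", "cell_a"), ("financier", "cell_b")])

    centrality = nx.betweenness_centrality(g)
    target = max(centrality, key=centrality.get)      # candidate node to isolate

    before = nx.number_connected_components(g)
    g_after = g.copy()
    g_after.remove_node(target)
    after = nx.number_connected_components(g_after)

    print(f"removing {target}: components {before} -> {after}")

DyNet and similar research systems go much further, modeling how an adaptive network responds and recovers after such probes and attacks.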

A.3. Overall tool requirements

This appendix provides a high-level overview of PIE tool requirements:

  • Easy to put information into the system and get information out of it. The key to the successful use of many of these tools is the quality of the information that is put into them. User interfaces have to be easy to use, context based, intuitive, and customizable. Otherwise, investigators soon determine that the “care and feeding” of the tool does not justify the end product.
  • Reasonable response time: The response time of the tool needs to match the context. If the tool is being used in an operational setting, then the ability to retrieve results can be time-critical, perhaps a matter of minutes. In other cases, results may not be time-critical, and days can be taken to generate results.
  • Training: Some tools, especially those that have not been released as commercial products, may not have substantial training materials and classes available. When making a decision regarding tool selection, the availability and accessibility of training may be critical.

  • Ability to integrate with enterprise resources: There are many cases where the utility of the tool will depend on its ability to access and integrate information from the overall enterprise in which the investigator is working. Special-purpose tools that require re-keying of information or labor-intensive conversions of formats should be carefully evaluated to determine the manpower required to support such functions.

  • Support for integration with other tools: Tools that have standard interfaces will act as force multipliers in the overall analytical toolbox. At a minimum, tools should have some sort of a developer’s kit that allows the creation of an API. In the best case, a tool would support some generally accepted integration standard such as web services.
  • Security: Different situations will dictate different security requirements, but in almost all cases some form of security is required. Examples of security include different access levels for different user populations. The ability to track and audit transactions, linking them back to their sources, will also be necessary in many cases.
  • Customizable: Augmenting usability, most tools will need to support some level of customizability (e.g., customizable reporting templates).
  • Labeling of information: Information that is being gathered and stored will need to be labeled (e.g., for level of sensitivity or credibility).
  • Familiar to the current user base: One characteristic in favor of any tool selected is how well the current user base has accepted it. There could be a great deal of benefit to upgrading existing tools that are already familiar to the users.
  • Heavy emphasis on visualization: To the greatest extent possible, tools should provide the investigator with the ability to display different aspects of the results in a visual manner.
  • Support for cooperation: In many cases, the strength of the analysis is dependent on leveraging cross-disciplinary expertise. Most tools will need to support some sort of cooperation.

A.4. Bibliography and Further Reading

Autonomy Technology White Paper, Ref: [WP TECH] 07.02. This and other information documents about Autonomy may be downloaded after registration from http://www.autonomy.com/content/downloads/

Beck, Aaron T., “Prisoners of Hate,” Behavior Research and Therapy, 40, 2002: 209-216. A copy of this article may be found at http://mail.med.upenn.edu/~abeck/prisoners.pdf. Also see Dr. Beck’s website at http://mail.med.upenn.edu/~abeck/ and the MOVES Institute at http://www.movesinstitute.org/.

Boisot, Max and Ron Sanchez, “The Codification-Diffusion-Abstraction Curve in the I-Space,” Economic Organization and Nexus of Rules: Emergence and the Theory of the Firm, working paper, Universitat Oberta de Catalunya, Barcelona, Spain, May 2003.

Carley, K. M., D. Fridsma, E. Casman, N. Altman, J. Chang, B. Kaminsky, D. Nave, & Yahja, “BioWar: Scalable Multi-Agent Social and Epidemiological Simulation of Bioterrorism Events,” in Proceedings from the NAACSOS Conference, 2003. This document may be found at http://www.casos.ece.cmu.edu/casos_working_paper/carley_2003_biowar.pdf

Carley, Kathleen M., et al., “Destabilizing Dynamic Covert Networks,” in Proceedings of the 8th International Command and Control Research and Technology Symposium, 2003. Conference held at the National Defense War College, Washington, DC. This document may be found at http://www.casos.ece.cmu.edu/resources_others/a2c2_carley_2003_destabilizing.pdf

Collier, N., Howe, T., and North, M., “Onward and Upward: The Transition to Repast 2.0,” in Proceedings of the First Annual North American Association for Computational Social and Organizational Science Conference, Electronic Proceedings, Pittsburgh, PA, June 2003. Also, read about Repast 3.0 at the Repast website: http://repast.sourceforge.net/index.html

DeRosa, Mary, “Data Mining and Data Analysis for Counterterrorism,” CSIS Report, March 2004. This document may be purchased at http://csis.zoovy.com/product/0892064439

Dombroski, M. and K. Carley, “NETEST: Estimating a Terrorist Network’s Structure,” Journal of Computational and Mathematical Organization Theory, 8(3), October 2002: 235-241. http://www.casos.ece.cmu.edu/conference2003/student_paper/Dombroski.pdf

Farah, Douglas, Blood from Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.

Hall, P. and G. Dowling, “Approximate string matching,” Computing Surveys, 12(4), 1980: 381-402. For more information on phonetic string matching see http://www.cs.rmit.edu.au/~jz/fulltext/sigir96.pdf. A good summary of the inherent limitations of Soundex may be found at http://www.las-inc.com/soundex/?source=gsx.

Lowrance, J.D., Harrison, I.W., and Rodriguez, A.C., “Structured Argumentation for Analysis,” Proceedings of the 12th International Conference on Systems Research, Informatics, and Cybernetics, August 2000.

Quint, Barbara, “IBM’s WebFountain Launched – The Next Big Thing?” September 22, 2003, from the Information Today, Inc. website at http://www.infotoday.com/newsbreaks/nb030922-1.shtml. Also see IBM’s WebFountain website at http://www.almaden.ibm.com/webfountain/ and the WebFountain Application Development Guide at http://www.almaden.ibm.com/webfountain/resources/sg247029.pdf.

Shannon, Claude, “A mathematical theory of communication,” Bell System Technical Journal, (27), July and October 1948: 379-423 and 623-656.

Tobias, R. and C. Hofmann, “Evaluation of Free Java-libraries for Social-scientific Agent Based Simulation,” Journal of Artificial Societies and Social Simulation, University of Surrey, 7(1), January 2003. Available at http://jasss.soc.surrey.ac.uk/7/1/6.html.

Notes from Joint Publication 3-13 Information Operations

  1. Scope

PREFACE

This publication provides joint doctrine for the planning, preparation, execution, and assessment of information operations across the range of military operations.

Overview

The ability to share information in near real time, anonymously and/or securely, is a capability that is both an asset and a potential vulnerability to us, our allies, and our adversaries.

The nation’s state and non-state adversaries are equally aware of the significance of this new technology, and will use information-related capabilities (IRCs) to gain advantages in the information environment, just as they would use more traditional military technologies to gain advantages in other operational environments. As the strategic environment continues to change, so does information operations (IO). Based on these changes, the Secretary of Defense now characterizes IO as the integrated employment, during military operations, of IRCs in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision making of adversaries and potential adversaries while protecting our own.

 The Information Environment

The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive. The physical dimension is composed of command and control systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. The informational dimension specifies where and how information is collected, processed, stored, disseminated, and protected. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information.

Information Operations

Information Operations and the Information-Influence Relational Framework

The relational framework describes the application, integration, and synchronization of IRCs to influence, disrupt, corrupt, or usurp the decision making of TAs to create a desired effect to support achievement of an objective.

Relationships and Integration

IO is not about ownership of individual capabilities but rather the use of those capabilities as force multipliers to create a desired effect. There are many military capabilities that contribute to IO and should be taken into consideration during the planning process. These include: strategic communication, joint interagency coordination group, public affairs, civil-military operations, cyberspace operations (CO), information assurance, space operations, military information support operations (MISO), intelligence, military deception, operations security, special technical operations, joint electromagnetic spectrum operations, and key leader engagement.

Legal Considerations

IO planners deal with legal considerations of an extremely diverse and complex nature. For this reason, joint IO planners should consult their staff judge advocate or legal advisor for expert advice.

Multinational Information Operations

Other Nations and Information Operations

Multinational partners recognize a variety of information concepts and possess sophisticated doctrine, procedures, and capabilities. Given these potentially diverse perspectives regarding IO, it is essential for the multinational force commander (MNFC) to resolve potential conflicts as soon as possible. It is vital to integrate multinational partners into IO planning as early as possible to gain agreement on an integrated and achievable IO strategy.

Information Operations Assessment  

Information Operations assessment is iterative, continuously repeating rounds of analysis within the operations cycle in order to measure the progress of information related capabilities toward achieving objectives.  

The Information Operations Assessment Process

Assessment of IO is a key component of the commander’s decision cycle, helping to determine the results of tactical actions in the context of overall mission objectives and providing potential recommendations for refinement of future plans. Assessments also provide opportunities to identify IRC shortfalls, changes in parameters and/or conditions in the information environment, which may cause unintended effects in the employment of IRCs, and resource issues that may be impeding joint IO effectiveness.

A solution to these assessment requirements is the eight-step assessment process.

  • Focused characterization of the information environment
  • Integrate information operations assessment into plans and develop the assessment plan
  • Develop information operations assessment information requirements and collection plans
  • Build/modify information operations assessment baseline
  • Coordinate and execute information operations and collection activities
  • Monitor and collect focused information environment data for information operations assessment
  • Analyze information operations assessment data
  • Report information operations assessment results and recommendations

Measures and Indicators

Measures of performance (MOPs) and measures of effectiveness (MOEs) help accomplish the assessment process by qualifying or quantifying the intangible attributes of the information environment. The MOP for any one action should be whether or not the TA was exposed to the IO action or activity. MOEs should be observable, to aid with collection; quantifiable, to increase objectivity; precise, to ensure accuracy; and correlated with the progress of the operation, to attain timeliness. Indicators are crucial because they aid the joint IO planner in informing MOEs and should be identifiable across the center of gravity critical factors.

CHAPTER I

OVERVIEW

“The most hateful human misfortune is for a wise man to have no influence.”

Greek Historian Herodotus, 484-425 BC

INTRODUCTION

  1. The growth of communication networks has decreased the number of isolated populations in the world. The emergence of advanced wired and wireless information technology facilitates global communication by corporations, violent extremist organizations, and individuals. The ability to share information in near real time, anonymously and/or securely, is a capability that is both an asset and a potential vulnerability to us, our allies, and our adversaries. Information is a powerful tool to influence, disrupt, corrupt, or usurp an adversary’s ability to make and share decisions.
  2. The instruments of national power (diplomatic, informational, military, and economic) provide leaders in the United States with the means and ways of dealing with crises around the world. Employing these means in the information environment requires the ability to securely transmit, receive, store, and process information in near real time. The nation’s state and non-state adversaries are equally aware of the significance of this new technology, and will use information-related capabilities (IRCs) to gain advantages in the information environment, just as they would use more traditional military technologies to gain advantages in other operational environments. These realities have transformed the information environment into a battlefield, which poses both a threat to the Department of Defense (DOD), combatant commands (CCMDs), and Service components and serves as a force multiplier when leveraged effectively.
  3. As the strategic environment continues to change, so does IO. Based on these changes, the Secretary of Defense now characterizes IO as the integrated employment, during military operations, of IRCs in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision making of adversaries and potential adversaries while protecting our own.

This revised characterization has led to a reassessment of how essential the information environment can be and how IRCs can be effectively integrated into joint operations to create effects and operationally exploitable conditions necessary for achieving the joint force commander’s (JFC’s) objectives.

  1. The Information Environment

The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions which continuously interact with individuals, organizations, and systems. These dimensions are the physical, informational, and cognitive (see Figure I-1).

The Physical Dimension. The physical dimension is composed of command and control (C2) systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. It is the dimension where physical platforms and the communications networks that connect them reside. The physical dimension includes, but is not limited to, human beings, C2 facilities, newspapers, books, microwave towers, computer processing units, laptops, smart phones, tablet computers, or any other objects that are subject to empirical measurement. The physical dimension is not confined solely to military or even nation-based systems and processes; it is a diffused network connected across national, economic, and geographical boundaries.

The Informational Dimension. The informational dimension encompasses where and how information is collected, processed, stored, disseminated, and protected. It is the dimension where the C2 of military forces is exercised and where the commander’s intent is conveyed. Actions in this dimension affect the content and flow of information.

The Cognitive Dimension. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information. It refers to individuals’ or groups’ information processing, perception, judgment, and decision making. These elements are influenced by many factors, to include individual and cultural beliefs, norms, vulnerabilities, motivations, emotions, experiences, morals, education, mental health, identities, and ideologies. Defining these influencing factors in a given environment is critical for understanding how to best influence the mind of the decision maker and create the desired effects. As such, this dimension constitutes the most important component of the information environment.

The Information and Influence Relational Framework and the Application of Information-Related Capabilities

IRCs are the tools, techniques, or activities that affect any of the three dimensions of the information environment. They affect the ability of the target audience (TA) to collect, process, or disseminate information before and after decisions are made. The TA is the individual or group selected for influence.

The change in the TA conditions, capabilities, situational awareness, and in some cases, the inability to make and share timely and informed decisions, contributes to the desired end state. Actions or inactions in the physical dimension can be assessed for future operations. The employment of IRCs is complemented by a set of capabilities such as operations security (OPSEC), information assurance (IA), counter-deception, physical security, electronic warfare (EW) support, and electronic protection. These capabilities are critical to enabling and protecting the JFC’s C2 of forces. Key components in this process are:

(1) Information. Data in context to inform or provide meaning for action.

(2) Data. Interpreted signals that can reduce uncertainty or equivocality.

(3) Knowledge. Information in context to enable direct action. Knowledge can be further broken down into the following:

(a) Explicit Knowledge. Knowledge that has been articulated through words, diagrams, formulas, computer programs, and like means.

(b) Tacit Knowledge. Knowledge that cannot be or has not been articulated through words, diagrams, formulas, computer programs, and like means.

(4) Influence. The act or power to produce a desired outcome or end on a TA.

(5) Means. The resources available to a national government, non-nation actor, or adversary in pursuit of its end(s). These resources include, but are not limited to, public- and private-sector enterprise assets or entities.

(6) Ways. How means can be applied, in order to achieve a desired end(s). They can be characterized as persuasive or coercive.

(7) Information-Related Capabilities. Tools, techniques, or activities using data, information, or knowledge to create effects and operationally desirable conditions within the physical, informational, and cognitive dimensions of the information environment.

(8) Target Audience. An individual or group selected for influence.

(9) Ends. A consequence of the way of applying IRCs.

(10) Using the framework, the physical, informational, and cognitive dimensions of the information environment provide access points for influencing TAs (see Figure I-2).

  1. The purpose of integrating the employment of IRCs is to influence a TA. While the behavior of individuals and groups, as human social entities, are principally governed by rules, norms, and beliefs, the behaviors of systems principally reside within the physical and informational dimensions and are governed only by rules. Under this construct, rules, norms, and beliefs are:

(1) Rules. Explicit regulative processes such as policies, laws, inspection routines, or incentives. Rules function as a coercive regulator of behavior and are dependent upon the imposing entity’s ability to enforce them.

(2) Norms. Regulative mechanisms accepted by the social collective. Norms are enforced by normative mechanisms within the organization and are not strictly dependent upon law or regulation.

(3) Beliefs. The collective perception of fundamental truths governing behavior. The adherence to accepted and shared beliefs by members of a social system will likely persist and be difficult to change over time. Strong beliefs about determinant factors (i.e., security, survival, or honor) are likely to cause a social entity or group to accept rules and norms.

  1. The first step in achieving an end(s) through use of the information-influence relational framework is to identify the TA. Once the TA has been identified, it will be necessary to develop an understanding of how that TA perceives its environment, to include analysis of TA rules, norms, and beliefs. Once this analysis is complete, the application of means available to achieve the desired end(s) must be evaluated (see Figure I-3). Such means may include (but are not limited to) diplomatic, informational, military, or economic actions, as well as academic, commercial, religious, or ethnic pronouncements. When the specific means or combinations of means are determined, the next step is to identify the specific ways to create a desired effect.
  2. Influencing the behavior of TAs requires producing effects in ways that modify rules, norms, or beliefs. Effects can be created by means (e.g., governmental, academic, cultural, and private enterprise) using specific ways (i.e., IRCs) to affect how the TAs collect, process, perceive, disseminate, and act (or do not act) on information (see Figure I-4).
  3. Upon deciding to persuade or coerce a TA, the commander must then determine what IRCs it can apply to individuals, organizations, or systems in order to produce a desired effect(s) (see Figure I-5). As stated, IRCs can be capabilities, techniques, or activities, but they do not necessarily have to be technology-based. Additionally, it is important to focus on the fact that IRCs may come from a wide variety of sources. Therefore, in IO, it is not the ownership of the capabilities and techniques that is important, but rather their integrated application in order to achieve a JFC’s end state.

CHAPTER II

INFORMATION OPERATIONS

“There is a war out there, old friend, a World War. And it’s not about who’s got the most bullets; it’s about who controls the information.”

Cosmo, in the 1992 film “Sneakers”

  1. Introduction

This chapter addresses how the integrating and coordinating functions of IO help achieve a JFC’s objectives.

  1. Terminology
  2. Because IO takes place in all phases of military operations, in concert with other lines of operation and lines of effort, some clarification of the terms and their relationship to IO is in order.

(1) Military Operations. The US military participates in a wide range of military operations, as illustrated in Figure II-1. Phase 0 (Shape) and phase I (Deter) may include defense support of civil authorities, peace operations, noncombatant evacuation, foreign humanitarian assistance, and nation-building assistance, which fall outside the realm of major combat operations represented by phases II through V.

(2) Lines of Operation and Lines of Effort. IO should support multiple lines of operation and at times may be the supported line of operation. IO may also support numerous lines of effort when positional references to an enemy or adversary have little relevance, such as in counterinsurgency or stability operations.

  1. IO integrates IRCs (ways) with other lines of operation and lines of effort (means) to create a desired effect on an adversary or potential adversary to achieve an objective (ends).
  2. Information Operations and the Information-Influence Relational Framework

Influence is at the heart of diplomacy and military operations, with integration of IRCs providing a powerful means for influence. The relational framework describes the application, integration, and synchronization of IRCs to influence, disrupt, corrupt, or usurp the decision making of TAs to create a desired effect to support achievement of an objective.

  1. The Information Operations Staff and Information Operations Cell

Within the joint community, the integration of IRCs to achieve the commander’s objectives is managed through an IO staff or IO cell. JFCs may establish an IO staff to provide command-level oversight and collaborate with all staff directorates and supporting organizations on all aspects of IO.

APPLICATION OF INFORMATION-RELATED CAPABILITIES TO THE INFORMATION AND INFLUENCE RELATIONAL FRAMEWORK

This example provides insight as to how information-related capabilities (IRCs) can be used to create lethal and nonlethal effects to support achievement of the objectives to reach the desired end state. The integration and synchronization of these IRCs require participation from not just information operations planners, but also organizations across multiple lines of operation and lines of effort. They may also include input from or coordination with national ministries, provincial governments, local authorities, and cultural and religious leaders to create the desired effect.

Situation: An adversary is attempting to overthrow the government of Country X using both lethal and nonlethal means to demonstrate to the citizens that the government is not fit to support and protect its people.

Joint Force Commander’s Objective: Protect government of Country X from being overthrown.

Desired Effects:

  1. Citizens have confidence in ability of government to support and protect its people.
  2. Adversary is unable to overthrow government of Country X.

Potential Target Audience(s):

  1. Adversary leadership (adversary).
  2. Country X indigenous population (friendly, neutral, and potential adversary).

Potential Means available to achieve the commander’s objective:

  • Diplomatic action (e.g., demarche, public diplomacy)
  •  Informational assets (e.g., strategic communication, media)
  •  Military forces (e.g., security force assistance, combat operations, military information support operations, public affairs, military deception)
  •  Economic resources (e.g., sanctions against the adversary, infusion of capital to Country X for nation building)
  •  Commercial, cultural, or other private enterprise assets

Potential Ways (persuasive communications or coercive force):

  •  Targeted radio and television broadcasts
  •  Blockaded adversary ports
  •  Government/commercially operated Web sites
  •  Key leadership engagement

Regardless of the means and ways employed by the players within the information environment, the reality is that the strategic advantage rests with whoever applies their means and ways most efficiently.

  1. IO Staff

(1) In order to provide planning support, the IO staff includes IO planners and a complement of IRC specialists to facilitate seamless integration of IRCs to support the JFC’s concept of operations (CONOPS).

(2) IRC specialists can include, but are not limited to, personnel from the EW, cyberspace operations (CO), military information support operations (MISO), civil-military operations (CMO), military deception (MILDEC), intelligence, and public affairs (PA) communities. They provide valuable linkage between the planners within an IO staff and those communities that provide IRCs to facilitate seamless integration with the JFC’s objectives.

  1. IO Cell

(1) The IO cell integrates and synchronizes IRCs to achieve national or combatant commander (CCDR) level objectives.

  1. Relationships and Integration
  2. IO is not about ownership of individual capabilities but rather the use of those capabilities as force multipliers to create a desired effect.

(1) Strategic Communication (SC)

(a) The SC process consists of focused United States Government (USG) efforts to create, strengthen, or preserve conditions favorable for the advancement of national interests, policies, and objectives by understanding and engaging key audiences through the use of coordinated programs, plans, themes, messages, and products synchronized with the actions of all instruments of national power.

(b) The elements and organizations that implement strategic guidance, both internal and external to the joint force, must not only understand and be aware of the joint force’s IO objectives; they must also work closely with members of the interagency community, in order to ensure full coordination and synchronization of USG efforts.

(2) Joint Interagency Coordination Group. Interagency coordination occurs between DOD and other USG departments and agencies, as well as with private-sector entities, nongovernmental organizations, and critical infrastructure activities, for the purpose of accomplishing national objectives. Many of these objectives require the combined and coordinated use of the diplomatic, informational, military, and economic instruments of national power.

(3) Public Affairs

(a) PA comprises public information, command information, and public engagement activities directed toward both the internal and external publics with interest in DOD. External publics include allies, neutrals, adversaries, and potential adversaries. When addressing external publics, opportunities for overlap exist between PA and IO.

(b) By maintaining situational awareness between IO and PA the potential for information conflict can be minimized. The IO cell provides an excellent place to coordinate IO and PA activities that may affect the adversary or potential adversary. Because there will be situations, such as counterpropaganda, in which the TA for both IO and PA converge, close cooperation and deconfliction are extremely important. …final coordination should occur within the joint planning group (JPG).

(4) Civil-Military Operations

(a) CMO is another area that can directly affect and be affected by IO. CMO activities establish, maintain, influence, or exploit relations between military forces, governmental and nongovernmental civilian organizations and authorities, and the civilian populace in a friendly, neutral, or hostile operational area in order to achieve US objectives. These activities may occur prior to, during, or subsequent to other military operations.

(b) Although CMO and IO have much in common, they are distinct disciplines.

The TA for much of IO is the adversary; however, the effects of IRCs often reach supporting friendly and neutral populations as well. In a similar vein, CMO seeks to affect friendly and neutral populations, although adversary and potential adversary audiences may also be affected. This being the case, effective integration of CMO with other IRCs is important, and a CMO representative on the IO staff is critical. The regular presence of a CMO representative in the IO cell will greatly promote this level of coordination.

(5) Cyberspace Operations

(a) Cyberspace is a global domain within the information environment consisting of the interdependent network of information technology infrastructures and resident data, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.

(b) As a process that integrates the employment of IRCs across multiple lines of effort and lines of operation to affect an adversary or potential adversary decision maker, IO can target either the medium (a component within the physical dimension such as a microwave tower) or the message itself (e.g., an encrypted message in the informational dimension). CO is one of several IRCs available to the commander.

(6) Information Assurance. IA is necessary to gain and maintain information superiority. The JFC relies on IA to protect infrastructure to ensure its availability, to position information for influence, and for delivery of information to the adversary.

(7) Space Operations. Space capabilities are a significant force multiplier when integrated with joint operations. Space operations support IO through the space force enhancement functions of intelligence, surveillance, and reconnaissance; missile warning; environmental monitoring; satellite communications; and spacebased positioning, navigation, and timing.

(8) Military Information Support Operations. MISO are planned operations to convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals. MISO focuses on the cognitive dimension of the information environment where its TA includes not just potential and actual adversaries, but also friendly and neutral populations.

MISO are applicable to a wide range of military operations such as stability operations, security cooperation, maritime interdiction, noncombatant evacuation, foreign humanitarian operations, counterdrug, force protection, and counter-trafficking.

(9) Intelligence

(a) Intelligence is a vital military capability that supports IO. The utilization of information operations intelligence integration (IOII) greatly facilitates understanding the interrelationship between the physical, informational, and cognitive dimensions of the information environment.

(b) By providing population-centric socio-cultural intelligence and physical network lay downs, including the information transmitted via those networks, intelligence can greatly assist IRC planners and IO integrators in determining the proper effect to elicit the specific response desired. Intelligence is an integrated process, fusing collection, analysis, and dissemination to provide products that will expose a TA’s potential capabilities or vulnerabilities. Intelligence uses a variety of technical and nontechnical tools to assess the information environment, thereby providing insight into a TA.

(c) A joint intelligence support element (JISE) may establish an IO support office (see Figure II-5) to provide IOII. This is due to the long lead time needed to establish information baseline characterizations, to provide timely intelligence during IO planning and execution efforts, and to properly assess effects in the information environment.

(10) Military Deception

(a) One of the oldest IRCs used to influence an adversary’s perceptions is MILDEC. MILDEC can be characterized as actions executed to deliberately mislead adversary decision makers, creating conditions that will contribute to the accomplishment of the friendly mission. While MILDEC requires a thorough knowledge of an adversary or potential adversary’s decision-making processes, it is important to remember that it is focused on desired behavior. It is not enough to simply mislead the adversary or potential adversary; MILDEC is designed to cause them to behave in a manner advantageous to the friendly mission, such as misallocating resources, attacking at a time and place advantageous to friendly forces, or avoiding action altogether.

(b) When integrated with other IRCs, MILDEC can be a particularly powerful way to affect the decision-making processes of an adversary or potential adversary.

(11) Operations Security

(a) OPSEC is a standardized process designed to meet operational needs by mitigating risks associated with specific vulnerabilities in order to deny adversaries critical information and observable indicators. OPSEC identifies critical information and actions attendant to friendly military operations to deny observables to adversary intelligence systems.

(b) The effective application, coordination, and synchronization of other IRCs are critical components in the execution of OPSEC. Because a specified IO task is “to protect our own” decision makers, OPSEC planners require complete situational awareness regarding friendly activities to facilitate the safeguarding of critical information. This kind of situational awareness exists within the IO cell, where a wide range of planners work in concert to integrate and synchronize their actions to achieve a common IO objective.

(12) Special Technical Operations (STO). IO need to be deconflicted and synchronized with STO. Detailed information related to STO and its contribution to IO can be obtained from the STO planners at CCMD or Service component headquarters. IO and STO are separate, but have potential crossover, and for this reason an STO planner is a valuable member of the IO cell.

(14) Key Leader Engagement (KLE)

(a) KLEs are deliberate, planned engagements between US military leaders and the leaders of foreign audiences that have defined objectives, such as a change in policy or supporting the JFC’s objectives. These engagements can be used to shape and influence foreign leaders at the strategic, operational, and tactical levels, and may also be directed toward specific groups such as religious leaders, academic leaders, and tribal leaders; e.g., to solidify trust and confidence in US forces.

(b) KLEs may be applicable to a wide range of operations such as stability operations, counterinsurgency operations, noncombatant evacuation operations, security cooperation activities, and humanitarian operations. When fully integrated with other IRCs into operations, KLEs can effectively shape and influence the leaders of foreign audiences.

  1. The capabilities discussed above do not constitute a comprehensive list of all possible capabilities that can contribute to IO. This means that individual capability ownership will be highly diversified. The ability to access these capabilities will be directly related to how well commanders understand and appreciate the importance of IO.

CHAPTER III

AUTHORITIES, RESPONSIBILITIES, AND LEGAL CONSIDERATIONS

Introduction

This chapter describes the JFC’s authority for the conduct of IO; delineates various roles and responsibilities established in DODD 3600.01, Information Operations; and addresses legal considerations in the planning and execution of IO.

Authorities

The authority to employ IRCs is rooted foremost in Title 10, United States Code (USC). While Title 10, USC, does not specify IO separately, it does provide the legal basis for the roles, missions, and organization of DOD and the Services.

Responsibilities

Under Secretary of Defense for Policy (USD[P]). The USD(P) oversees and manages DOD-level IO programs and activities. In this capacity, USD(P) manages guidance publications (e.g., DODD 3600.01) and all IO policy on behalf of the Secretary of Defense. The office of the USD(P) coordinates IO for all DOD components in the interagency process.

Under Secretary of Defense for Intelligence (USD[I]). USD(I) develops, coordinates, and oversees the implementation of DOD intelligence policy, programs, and guidance for intelligence activities supporting IO.

Joint Staff. In accordance with the Secretary of Defense memorandum on Strategic Communication and Information Operations in the DOD, dated 25 January 2011, the Joint Staff is assigned the responsibility for joint IO proponency. CJCS responsibilities for IO are both general (such as establishing doctrine and providing advice and recommendations to the President and Secretary of Defense) and specific (e.g., joint IO policy).

Joint Information Operations Warfare Center (JIOWC). The JIOWC is a CJCS-controlled activity reporting to the operations directorate of a joint staff (J-3) via J-39 DDGO.

JIOWC’s specific organizational responsibilities include:

(1) Provide IO subject matter experts and advice to the Joint Staff and the CCMDs.
(2) Develop and maintain a joint IO assessment framework.
(3) Assist the Joint IO Proponent in advocating for and integrating CCMD IO requirements.
(4) Upon the direction of the Joint IO Proponent, provide support in coordination and integration of DOD IRCs for JFCs, Service component commanders, and DOD agencies.

Combatant Commands. The Unified Command Plan provides guidance to CCDRs, assigning them missions and force structure, as well as geographic or functional areas of responsibility. In addition to these responsibilities, the Commander, United States Special Operations Command, is also responsible for integrating and coordinating MISO.

Functional Component Commands. Like Service component commands, functional component commands have authority over forces or in the case of IO, IRCs, as delegated by the establishing authority (normally a CCDR or JFC). Functional component commands may be tasked to plan and execute IO as an integrated part of joint operations.

Legal Considerations

Introduction. US military activities in the information environment, as with all military operations, are conducted as a matter of law and policy. Joint IO will always involve legal and policy questions, requiring not just local review, but often national-level coordination and approval. The US Constitution, laws, regulations, policy, and international law set boundaries for all military activity, to include IO.

Legal Considerations. IO planners deal with legal considerations of an extremely diverse and complex nature. Legal interpretations can occasionally differ, given the complexity of technologies involved, the significance of legal interests potentially affected, and the challenges inherent for law and policy to keep pace with the technological changes and implementation of IRCs.

Implications Beyond the JFC. Bilateral agreements to which the US is a signatory may have provisions concerning the conduct of IO as well as IRCs when they are used in support of IO. IO planners at all levels should consider the following broad areas within each planning iteration in consultation with the appropriate legal advisor:

(1) Could the execution of a particular IRC be considered a hostile act by an adversary or potential adversary?

(2) Do any non-US laws concerning national security, privacy, or information exchange, criminal and/or civil issues apply?

(3) What are the international treaties, agreements, or customary laws recognized by an adversary or potential adversary that apply to IRCs?

(4) How is the joint force interacting with or being supported by US intelligence organizations and other interagency entities?

CHAPTER IV

INTEGRATING INFORMATION-RELATED CAPABILITIES INTO THE JOINT OPERATION PLANNING PROCESS

“Support planning is conducted in parallel with other planning and encompasses such essential factors as IO [information operations], SC [strategic communication]…”

Joint Publication 5-0, Joint Operation Planning, 11 August 2011

Introduction

The IO cell chief is responsible to the JFC for integrating IRCs into the joint operation planning process (JOPP). Thus, the IO staff is responsible for coordinating and synchronizing IRCs to accomplish the JFC’s objectives. Coordinated IO are essential in employing the elements of operational design. Conversely, uncoordinated IO efforts can compromise, complicate, negate, or pose risks to the successful accomplishment of JFC and USG objectives. Additionally, when uncoordinated, other USG and/or multinational information activities may complicate, defeat, or render DOD IO ineffective. For this reason, the JFC’s objectives require early, detailed IO staff planning, coordination, and deconfliction between the USG and partner nations’ efforts within the AOR in order to effectively synchronize and integrate IRCs.

Information Operations Planning

The IO cell and the JPG. The IO cell chief ensures joint IO planners adequately represent the IO cell within the JPG and other JFC planning processes. Doing so will help ensure that IRCs are integrated with all planning efforts. Joint IO planners should be integrated with the joint force planning, directing, monitoring, and assessing process.

IO Planning Considerations

(1) IO planners seek to create an operational advantage that results in coordinated effects that directly support the JFC’s objectives. IRCs can be executed throughout the operational environment, but often directly impact the content and flow of information.

(2) IO planning begins at the earliest stage of JOPP and must be an integral part of, not an addition to, the overall planning effort. IRCs can be used in all phases of a campaign or operation, but their effective employment during the shape and deter phases can have a significant impact on remaining phases.

(3) The use of IO to achieve the JFC’s objectives requires the ability to integrate IRCs and interagency support into a comprehensive and coherent strategy that supports the JFC’s overall mission objectives. The GCC’s theater security cooperation guidance contained in the theater campaign plan (TCP) serves as an excellent platform to embed specific long-term information objectives during phase 0 operations.

(4) Many IRCs require long lead time for development of the joint intelligence preparation of the operational environment (JIPOE) and release authority. The intelligence directorate of a joint staff (J-2) identifies intelligence and information gaps, shortfalls, and priorities as part of the JIPOE process in the early stages of the JOPP. Concurrently, the IO cell must identify similar intelligence gaps in its understanding of the information environment to determine if it has sufficient information to successfully plan IO. Where identified shortfalls exist, the IO cell may need to submit requests for information (RFIs) to the J-2 to fill gaps that cannot be filled internally.

(5) There may be times where the JFC may lack sufficient detailed intelligence data and intelligence staff personnel to provide IOII. Similarly, a JFC’s staff may lack dedicated resources to provide support. For this reason, it is imperative the IO cell take a proactive approach to intelligence support. The IO cell must also review and provide input to the commander’s critical information requirements (CCIRs), especially priority intelligence requirements (PIRs) and information requirements.

The joint intelligence staff, using PIRs as a basis, develops the information requirements that are most critical. These are also known as essential elements of information (EEIs). In the course of mission analysis, the intelligence analyst identifies the intelligence required to satisfy the CCIRs. Intelligence staffs develop more specific questions known as information requirements. EEIs pertinent to the IO staff may include target information specifics, such as messages and counter-messages, adversary propaganda, and responses of individuals, groups, and organizations to adversary propaganda.

IO and the Joint Operation Planning Process

Throughout JOPP, IRCs are integrated with the JFC’s overall CONOPS.

(1) Planning Initiation. Integration of IRCs into joint operations should begin at step 1, planning initiation. Key IO staff actions during this step include the following:

(a) Review key strategic documents.

(b) Monitor the situation, receive initial planning guidance, and review staff estimates from applicable operation plans (OPLANs) and concept plans (CONPLANs).

(c) Alert subordinate and supporting commanders of potential tasking with regard to IO planning support.

(d) Gauge initial scope of IO required for the operation.

(e) Identify location, standard operating procedures, and battle rhythm of other staff organizations that require integration and divide coordination responsibilities among the IO staff.

(f) Identify and request appropriate authorities.

(g) Begin identifying information required for mission analysis and course of action (COA) development.

(h) Identify IO planning support requirements (including staff augmentation, support products, and services) and issue requests for support according to procedures established locally and by various supporting organizations.

(i) Validate, initiate, and revise PIRs and RFIs, keeping in mind the long lead times associated with satisfying IO requirements.

(j) Provide IO input and recommendations to COAs, and provide resolutions to conflicts that exist with other plans or lines of operation.

(k) In coordination with the targeting cell, submit potential candidate targets to the JFC or component joint targeting coordination board (JTCB). For vetting, validation, and deconfliction, follow local targeting cell procedures, because these three separate processes do not always occur at the JTCB.

(l) Ensure IO staff and IO cell members participate in all JFC or component planning and targeting sessions and JTCBs.

(2) Mission Analysis. The purpose of step 2, mission analysis, is to understand the problem and purpose of an operation and issue the appropriate guidance to drive the remaining steps of the planning process. The end state of mission analysis is a clearly defined mission and thorough staff assessment of the joint operation. Mission analysis orients the JFC and staff on the problem and develops a common understanding, before moving forward in the planning process.

As IO impacts each element of the operational environment, it is important for the IO staff and IO cell during mission analysis to remain focused on the information environment. Key IO staff actions during mission analysis are:

(a) Assist the J-3 and J-2 in the identification of friendly and adversary center(s) of gravity and critical factors (e.g., critical capabilities, critical requirements, and critical vulnerabilities).
(b) Identify relevant aspects of the physical, informational, and cognitive dimensions (whether friendly, neutral, adversary, or potential adversary) of the information environment.
(c) Identify specified, implied, and essential tasks.
(d) Identify facts, assumptions, constraints, and restraints affecting IO planning.
(e) Analyze IRCs available to support IO and authorities required for their employment.
(f) Develop and refine proposed PIRs, RFIs, and CCIRs.
(g) Conduct initial IO-related risk assessment.
(h) Develop IO mission statement.
(i) Begin developing the initial IO staff estimate. This estimate forms the basis for the IO cell chief’s recommendation to the JFC, regarding which COA it can best support.
(j) Conduct initial force allocation review.
(k) Identify and develop potential targets and coordinate with the targeting cell no later than the end of target development. Compile and maintain target folders in the Modernized Integrated Database. Coordinate with the J-2 and targeting cell for participation and representation in vetting, validation, and targeting boards.
(l) Develop mission success criteria.

(3) COA Development. Output from mission analysis, such as initial staff estimates, mission and tasks, and JFC planning guidance are used in step 3, COA development. Key IO staff actions during this step include the following:

(a) Identify desired and undesired effects that support or degrade JFC’s information objectives.

(b) Develop measures of effectiveness (MOEs) and measures of effectiveness indicators (MOEIs).

(c) Develop tasks for recommendation to the J-3.

(d) Recommend IRCs that may be used to accomplish supporting information tasks for each COA.

(e) Analyze required supplemental rules of engagement (ROE).

(f) Identify additional operational risks and controls/mitigation.

(g) Develop the IO CONOPS narrative/sketch.
(h) Synchronize IRCs in time, space, and purpose.

(i) Continue update/development of the IO staff estimate.

(j) Prepare inputs to the COA brief.

(k) Provide inputs to the target folder.

(4) COA Analysis and War Gaming. Based upon time available, the JFC staff should war game each tentative COA against adversary COAs identified through the JIPOE process. Key IO staff and IO cell actions during this step include the following:

(a) Analyze each COA from an IO functional perspective.

(b) Reveal key decision points.

(c) Recommend task adjustments to IRCs as appropriate.

(d) Provide IO-focused data for use in a synchronization matrix or other decision-making tool.

(e) Identify IO portions of branches and sequels.
(f) Identify possible high-value targets related to IO.

(g) Submit PIRs and recommend CCIRs for IO.

(h) Revise staff estimate.

(i) Assess risk.

(5) COA Comparison. Step 5, COA comparison, starts with all staff elements analyzing and evaluating the advantages and disadvantages of each COA from their respective viewpoints. Key IO staff and IO cell actions during this step include the following:

(a) Compare each COA based on mission and tasks.

(b) Compare each COA in relation to IO requirements versus available IRCs.

(c) Prioritize COAs from an IO perspective.

(d) Revise the IO staff estimate. During execution, the IO cell should maintain an estimate and update as required.

(6) COA Approval. Just like other elements of the JFC’s staff, during step 6, COA approval, the IO staff provides the JFC with a clear recommendation of how IO can best contribute to mission accomplishment in the COA(s) being briefed. It is vital that this recommendation be presented in a clear, concise manner that can be quickly grasped by the JFC and easily understood by peer, subordinate, and higher-headquarters command and staff elements. Failure to foster such an understanding of the IO contribution to the approved COA can lead to poor execution and/or coordination of IRCs in subsequent operations.

(7) Plan or Order Development. Once a COA is selected and approved, the IO staff develops appendix 3 (Information Operations) to annex C (Operations) of the operation order (OPORD) or OPLAN. Because IRC integration is documented elsewhere in the OPORD or OPLAN, it is imperative that the IO staff conduct effective staff coordination within the JPG during step 7, plan or order development. Key staff actions during this step include the following:

(a) Refine tasks from the approved COA.

(b) Identify shortfalls of IRCs and recommend solutions.

(c) Facilitate development of supporting plans by keeping the responsible organizations informed of relevant details (as access restrictions allow) throughout the planning process.

(d) Advise the supported commander on IO issues and concerns during the supporting plan review and approval process.

(e) Participate in time-phased force and deployment data refinement to ensure IO supports the OPLAN or CONPLAN.

(f) Assist in the development of OPLAN or CONPLAN appendix 6 (IO Intelligence Integration) to annex B (Intelligence).

  1. Plan Refinement. The information environment is continuously changing and it is critical for IO planners to remain in constant interaction with the JPG to provide updates to OPLANs or CONPLANs.
  2. Assessment of IO. Assessment is integrated into all phases of the planning and execution cycle, and consists of assessment activities associated with tasks, events, or programs in support of joint military operations. Assessment seeks to analyze and inform on the performance and effectiveness of activities. The intent is to provide relevant feedback to decision makers in order to modify activities that achieve desired results. Assessment can also provide the programmatic community with relevant information that informs on return on investment and operational effectiveness of DOD IRCs. It is important to note that integration of assessment into planning is the first step of the assessment process. Planning for assessment is part of broader operational planning, rather than an afterthought. Iterative in nature, assessment supports the Adaptive Planning and Execution process, and provides feedback to operations and ultimately, IO enterprise programmatics.
  3. Relationship Between Measures of Performance (MOPs) and MOEs. Effectiveness assessment is one of the greatest challenges facing a staff. Despite the continuing evolution of joint and Service doctrine and the refinement of supporting tactics, techniques, and procedures, assessing the effectiveness of IRCs continues to be challenging.

(1) MOPs are criteria used to assess friendly accomplishment of tasks and mission execution.

Examples of Measures of Performance Feedback

  • Numbers of populace listening to military information support operations (MISO) broadcasts
  • Percentage of adversary command and control facilities attacked
  • Number of civil-military operations projects initiated/number of projects completed
  • Human intelligence reports
  • Number of MISO broadcasts during Commando Solo missions
  • Intelligence assessments (human intelligence, etc.)
  • Open source intelligence
  • Internet (newsgroups, etc.)
  • Military information support operations, and civil-military operations teams (face to face activities)
  • Contact with the public
  • Press inquiries and comments
  • Department of State polls, reports, and surveys
  • Open Source Center
  • Nongovernmental organizations, intergovernmental organizations, international organizations, and host nation organizations
  • Foreign policy advisor meetings
  • Commercial polls
  • Operational analysis cells

(2) In contrast to MOPs, MOEs are criteria used to assess changes in system behavior, capability, or operational environment that are tied to measuring the attainment of an end state, achievement of an objective, or creation of an effect. Ultimately, MOEs determine whether actions being executed are creating desired effects, thereby accomplishing the JFC’s information objectives and end state.

(3) MOEs and MOPs are both crafted and refined throughout JOPP. In developing MOEs and/or MOPs, the following general criteria should be considered:

(a) Ends Related. MOEs and/or MOPs should directly relate to the objectives and desired tasks required to accomplish effects and/or performance.

(b) Measurable. MOEs should be specific, measurable, and observable. Effectiveness or performance is measured either quantitatively (e.g., counting the number of attacks) or qualitatively (e.g., subjectively evaluating the level of confidence in the security forces). In the case of MOEs, a baseline measurement must be established prior to the execution, against which to measure system changes.

(c) Timely. A time for required feedback should be clearly stated for each MOE and/or MOP and a plan made to report within that specified time period.

(d) Properly Resourced. The collection, analysis, and reporting of MOE or MOP data require personnel, financial, and materiel resources. The IO staff or IO cell should ensure that these resource requirements are built into IO planning during COA development and closely coordinated with the J-2 collection manager to ensure the means to assess these measures are in place.

(4) Measure of Effectiveness Indicators. An MOEI is a unit, location, or event observed or measured that can be used to assess an MOE. MOEIs are often used to add quantitative data points to qualitative MOEs and can assist an IO staff or IO cell in answering a question related to a qualitative MOE. MOEIs aid the IO staff or IO cell in determining an MOE and can be identified from across the information environment. MOEIs can be independently weighted for their contribution to an MOE and should be based on separate criteria. Hundreds of MOEIs may be needed for a large-scale contingency. Examples of how effects can be translated into MOEIs include the following:

(a) Effect: Increase in the city populace’s participation in civil governance.

MOE: (Qualitative) Metropolitan citizens display increased support for the democratic leadership elected on 1 July. (What activity trends show progress toward or away from the desired behavior?)

MOEI:

  1. A decrease in the number of anti-government rallies/demonstrations in a city since 1 July (this indicator might be weighted heavily at 60 percent of this MOE’s total assessment based on rallies/demonstrations observed.)
  2. An increase in the percentage of positive new government media stories since 1 July (this indicator might be weighted less heavily at 20 percent of this MOE’s total assessment based on media monitoring.)
  3. An increase in the number of citizens participating in democratic functions since 1 July (this indicator might be weighted at 20 percent of this MOE’s total assessment based on government data/criteria like voter registration, city council meeting attendance, and business license registration.)
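The weighting arithmetic in the example above can be illustrated with a short script. This is a minimal sketch rather than doctrine: the indicator names, the 60/20/20 weights taken from the example, and the convention of normalizing each MOEI to a 0.0-1.0 attainment value are assumptions made only for illustration.

```python
# Illustrative sketch: rolling weighted MOEI observations into a single MOE score.
# Indicator names, weights, and the 0.0-1.0 normalization are assumptions drawn
# from the example above, not a prescribed assessment method.

def moe_score(indicators: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized MOEI values (0.0-1.0, where 1.0 is full attainment)
    into a single weighted MOE score."""
    if abs(sum(weights.values()) - 1.0) > 1e-6:
        raise ValueError("MOEI weights should sum to 1.0")
    return sum(indicators[name] * weight for name, weight in weights.items())

# Hypothetical observations for the 'participation in civil governance' MOE.
weights = {
    "anti_government_rallies": 0.60,   # weighted heavily, per the example
    "positive_media_stories": 0.20,
    "democratic_participation": 0.20,
}
observations = {
    "anti_government_rallies": 0.75,   # 75 percent of the desired decrease achieved
    "positive_media_stories": 0.50,
    "democratic_participation": 0.40,
}
print(f"MOE score: {moe_score(observations, weights):.2f}")  # prints 0.63
```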

(b) Effect: Insurgent leadership does not orchestrate terrorist acts in the western region.

MOE: (Qualitative) Decrease in popular support toward extremists and insurgents.

MOEI:

  1. An increase in the number of insurgents turned in/identified since 1 October.
  2. The percentage of blogs supportive of the local officials.

Information Operations Phasing and Synchronization

Through its contributions to the GCC’s TCP, it is clear that joint IO is expected to play a major role in all phases of joint operations. This means that the GCC’s IO staff and IO cell must account for logical transitions from phase to phase, as joint IO moves from the main effort to a supporting effort. Regardless of what operational phase may be underway, it is always important for the IO staff and IO cell to determine what legal authorities the JFC requires to execute IRCs during the subsequent operations phase.

  1. Phase 0–Shape. Joint IO planning should focus on supporting the TCP to deter adversaries and potential adversaries from posing significant threats to US objectives. Joint IO planners should access the JIACG through the IO cell or staff. Joint IO planning during this phase will need to prioritize and integrate efforts and resources to support activities throughout the interagency. Due to competing resources and the potential lack of available IRCs, executing joint IO during phase 0 can be challenging. For this reason, the IO staff and IO cell will need to consider how their IO activities fit in as part of a whole-of-government approach to effectively shape the information environment to achieve the CCDR’s information objectives.
  2. Phase I–Deter. During this phase, joint IO is often the main effort for the CCMD. Planning will likely emphasize the JFC’s flexible deterrent options (FDOs), complementing US public diplomacy efforts, in order to influence a potential foreign adversary decision maker to make decisions favorable to US goals and objectives. Joint IO planning for this phase is especially complicated because the FDO typically must have a chance to work, while still allowing for a smooth transition to phase II and more intense levels of conflict, if it does not. Because the transition from phase I to phase II may not allow enough time for application of IRCs to create the desired effects on an adversary or potential adversary, the phase change may be abrupt.
  3. Phase II-Seize Initiative. In phase II, joint IO is supporting multiple lines of operation. Joint IO planning during phase II should focus on maximizing synchronized IRC effects to support the JFC’s objectives and the component missions while preparing the transition to the next phase.
  4. Phase III–Dominate. Joint IO can be a supporting and/or a supported line of operation during phase III. Joint IO planning during phase III will involve developing an information advantage across multiple lines of operation to execute the mission.
  5. Phase IV–Stabilize. CMO, or even IO, is likely the supported line of operation during phase IV. Joint IO planning during this phase will need to be flexible enough to simultaneously support CMO and combat operations. As the US military and interagency information activity capacity matures and eventually slows, the JFC should assist the host-nation security forces and government information capacity to resume and expand, as necessary. As host-nation information capacity improves, the JFC should be able to refocus joint IO efforts to other mission areas. Expanding host-nation capacity through military and interagency efforts will help foster success in the next phase.
  6. Phase V-Enable Civil Authority. During phase V, joint IO planning focuses on supporting the redeployment of US forces, as well as providing continued support to stability operations. IO planning during phase V should account for interagency and country team efforts to resume the lead mission for information within the host nation territory. The IO staff and cell can anticipate the possibility of long-term US commercial and government support to the former adversary’s economic and political interests to continue through the completion of this phase.

CHAPTER V

MULTINATIONAL INFORMATION OPERATIONS

Introduction

Joint doctrine for multinational operations, including command and operations in a multinational environment, is described in JP 3-16, Multinational Operations. The purpose of this chapter is to highlight specific doctrinal components of IO in a multinational environment (see Figure V-1). In doing so, this chapter will build upon those aspects of IO addressed in JP 3-16.

Other Nations and Information Operations

Multinational partners recognize a variety of information concepts and possess sophisticated doctrine, procedures, and capabilities. Given these potentially diverse perspectives regarding IO, it is essential for the multinational force commander (MNFC) to resolve potential conflicts as soon as possible. It is vital to integrate multinational partners into IO planning as early as possible to gain agreement on an integrated and achievable IO strategy.

Initial requirements for coordinating, synchronizing, and when required integrating other nations into the US IO plan include:

(1) Clarifying all multinational partner information objectives.
(2) Understanding all multinational partner employment of IRCs.
(3) Establishing IO deconfliction procedures to avoid conflicting messages.
(4) Identifying multinational force (MNF) vulnerabilities as soon as possible.
(5) Developing a strategy to mitigate MNF IO vulnerabilities.
(6) Identifying MNF IRCs.

Regardless of the maturity of each partner’s IO strategy, doctrine, capabilities, tactics, techniques, or procedures, every multinational partner can contribute to MNF IO by providing regional expertise to assist in planning and conducting IO. Multinational partners have developed unique approaches to IO that are tailored for specific targets in ways that may not be employed by the US. Such contributions complement US IO expertise and IRCs, potentially enhancing the quality of both the planning and execution of multinational IO.

Multinational Information Operations Considerations

Military operation planning processes, particularly for IO, whether JOPP-based or based on established or agreed-to multinational planning processes, include an understanding of multinational partner(s):

(1) Cultural values and institutions.
(2) Interests and concerns.
(3) Moral and ethical values.
(4) ROE and legal constraints.

(5) Challenges in multilingual planning for the employment of IRCs.

(6) IO doctrine, techniques, and procedures.

Sharing of information with multinational partners.

(1) Each nation has various IRCs to provide, in support of multinational objectives. These nations are obliged to protect information that they cannot share across the MNF. However, to plan thoroughly, all nations must be willing to share appropriate information to accomplish the assigned mission.

(2) Information sharing arrangements in formal alliances, to include US participation in United Nations missions, are worked out as part of alliance protocols. Information sharing arrangements in ad hoc multinational operations where coalitions are working together on a short-notice mission must be created during the establishment of the coalition.

(3) Using National Disclosure Policy-1 (NDP-1), National Policy and Procedures for the Disclosure of Classified Military Information to Foreign Governments and International Organizations, and Department of Defense Instruction (DODI) O-3600.02, Information Operations (IO) Security Classification Guidance (U), as guidance, the senior US commander in a multinational operation must provide guidelines to the US-designated disclosure representative on information sharing and the release of classified information or capabilities to the MNF.

(4) Information concerning US persons may only be collected, retained, or disseminated in accordance with law and regulation. Applicable provisions include: the Privacy Act, Title 5, USC, Section 552a; DODD 5200.27, Acquisition of Information Concerning Persons and Organizations not Affiliated with the Department of Defense; Executive Order 12333, United States Intelligence Activities; and DOD 5240.1-R, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons.

Planning, Integration, and Command and Control of Information Operations in Multinational Operations

The role of IO in multinational operations is the prerogative of the MNFC. The mission of the MNF determines the role of IO in each specific operation.

Representation of key multinational partners in the MNF IO cell allows their expertise and capabilities to be utilized, and the IO portion of the plan to be better coordinated and more timely.

While some multinational partners may not have developed an IO concept or fielded IRCs, it is important that they fully appreciate the importance of information in achieving the MNFC’s objectives. For this reason, every effort should be made to provide basic-level IO training to multinational partners serving on the MNF IO staff.

MNF headquarters staff could be organized differently; however, as a general rule, an information operations coordination board (IOCB) or similar organization may exist (see Figure V-2).

A wide range of MNF headquarters staff organizations should participate in IOCB deliberations to ensure their input and subject matter expertise can be applied to satisfy a requirement in order to achieve MNFC’s objectives.

Besides the coordination activities highlighted above, the IOCB should also participate in appropriate joint operations planning groups (JOPGs) and should take part in early discussions, including mission analysis. An IO presence on the JOPG is essential, as it is the IOCB which provides input to the overall estimate process in close coordination with other members of the MNF headquarters staff.

Multinational Organization for Information Operations Planning

When the JFC is also the MNFC, the joint force staff should be augmented by planners and subject matter experts from the MNF. MNF IO planners and IRC specialists should be trained on US and MNF doctrine, requirements, resources, and how the MNF is structured to integrate IRCs.

Multinational Policy Coordination

The development of capabilities, tactics, techniques, procedures, plans, intelligence, and communications support applicable to IO requires coordination with the responsible DOD components and multinational partners. Coordination with partner nations above the JFC/MNFC level is normally effected within existing defense arrangements, including bilateral arrangements.

CHAPTER VI

INFORMATION OPERATIONS ASSESSMENT

 

“Not everything that can be counted, counts, and not everything that counts can be counted.”

Dr. William Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking, 1963

Introduction

This chapter provides a framework to organize, develop, and execute assessment of IO, as conducted within the information environment. The term “assessment” has been used to describe everything from analysis (e.g., assessment of the enemy) to an estimate of the situation (pre-engagement assessment of blue and red forces).

Assessment considerations should be thoroughly integrated into IO planning.

Assessment of IO is a key component of the commander’s decision cycle, helping to determine the results of tactical actions in the context of overall mission objectives and providing potential recommendations for refinement of future plans. The decision to adapt plans or shift resources is based upon the integration of intelligence in the operational environment and other staff estimates, as well as input from other mission partners, in pursuit of the desired end state.

Assessments also provide opportunities to identify IRC shortfalls, changes in parameters and/or conditions in the information environment which may cause unintended effects in the employment of IRCs, and resource issues that may be impeding joint IO effectiveness.

Understanding Information Operations Assessment

Assessment consists of activities associated with tasks, events, or programs in support of the commander’s desired end state. IO assessment is iterative, continuously repeating rounds of analysis within the operations cycle in order to measure the progress of IRCs toward achieving objectives. The assessment process begins with the earliest stages of the planning process and continues throughout the operation or campaign and may extend beyond the end of the operation to capture long-term effects of the IO effort.

Analysis of the information environment should begin before operations start, in order to establish baselines from which to measure change. During operations, data is continuously collected, recharacterizing our understanding of the information environment and providing the ability to measure changes and determine whether desired effects are being created.

Purpose of Assessment in Information Operations

Assessments help commanders better understand current conditions. The commander uses assessments to determine how the operation is progressing and whether the operation is creating the desired effects. Assessing the effectiveness of IO activities challenges both the staff and commander. There are numerous venues for informing and receiving information from the commander; they provide opportunities to identify IRC shortfalls and resource issues that may be impeding joint IO effectiveness.

Impact of the Information Environment on Assessment

Operation assessments in IO differ from assessments of other operations because the success of the operation mainly relies on nonlethal capabilities, often including reliance on measuring the cognitive dimension, or on nonmilitary factors outside the direct control of the JFC. This situation requires an assessment with a focused, organized approach that is developed in conjunction with the initial planning effort. It also requires a clear vision of the end state, an understanding of the commander’s objectives, and an articulated statement of the ways in which the planned activities achieve objectives.

The information environment is a complex entity, an “open system” affected by variables that are not constrained by geography. The mingling of people, information, capabilities, organizations, religions, and cultures that exist inside and outside a commander’s operational area are examples of these variables. These variables can give commanders and their staffs the appreciation that the information environment is turbulent―constantly in motion and changing―which may make analysis seem like a daunting task, and make identifying an IRC (or IRCs) most likely to create a desired effect feel nearly impossible. In a complex environment, seemingly minor events can produce enormous outcomes, far greater in effect than the initiating event, including secondary and tertiary effects that are difficult to anticipate and understand. This complexity is why assessment is required and why there may be specific capabilities required to conduct assessment and subsequent analysis.

A detailed study and analysis of the information environment affords the planner the ability to identify which forces impact the information environment and find order in the apparent chaos. Often the complexity of the information environment relative to a specific operational area requires assets and capabilities that exceed the organic capability of the command, making the required exhaustive study an impossible task. The gaps in capability and information are identified by planners and are transformed into information requirements and requests, requests for forces and/or augmentation, and requests for support from external agencies.

Examples of capabilities, forces, augmentation, and external support include specialized software, behavioral scientists, polling, social-science studies, operational research specialists, statisticians, demographic data held by commercial industry, reachback support to other mission partners, military information support personnel, access to external DOD databases, and support from academia.

But the presence of sensitive variables can be a catalyst for exponential changes in outcomes, as in the aforementioned secondary and tertiary effects. Joint IO planners should be cautious about making direct causal statements, since many nonlinear feedback loops can render direct causal statements inaccurate. Incorrect assumptions about causality in a complex system can have disastrous effects on the planning of future operations and open the assessment to potential discredit, because counterexamples may exist.

The Information Operations Assessment Process

Integrating the employment of IRCs with other lines of operation is a unique requirement for joint staffs and is a discipline that is comparatively new.

The broad range of information-related activities occurring across the three dimensions of the information environment (physical, informational, and cognitive) demands a specific, validated, and formal assessment process to determine whether these actions are contributing toward the fulfillment of an objective.

With the additional factor that some actions result in immediate effect and others may take years or generations to fully create, the assessment process must be able to report incremental effects in each dimension. In particular, when assessing the effect of an action or series of actions on behavior, the effects may need to be measured in terms such as cognitive, affective, and action or behavioral. Put another way, we may need to assess how a group thinks, feels, and acts, and whether those behaviors are a result of our deliberate actions intended to produce that effect, an unintended consequence of our actions, a result of another’s action or activity, or a combination of all of these.

Step 1—Analyze the Information Environment

(1) As the entire staff conducts analysis of the operational environment, the IO staff focuses on the information environment. This analysis occurs when planning for an operation begins or, in some cases, prior to planning for an operation, e.g., during routine analysis in support of theater security cooperation plan activities.

It is a required step for viable planning and provides necessary data for, among other things, the development of MOEs, the determination of potential target audiences and targets, and baseline data from which change can be measured. Analysis is conducted by interdisciplinary teams and staff sections. The primary product of this step is a description of the information environment. This description should include categorization or delineation of the physical, informational, and cognitive dimensions.

(2) Analysis of the information environment identifies key functions and systems within the operational environment. The analysis provides the initial information to identify decision makers (cognitive), factors that guide the decision-making process (informational), and infrastructure that supports and communicates decisions and decision making (physical).

(3) Gaps in the ability to analyze the information environment and gaps in required information are identified and transformed into information requirements and requests, requests for forces and/or augmentation, and requests for support from external agencies. The information environment is fluid. Technological, cultural, and infrastructure changes, regardless of their source or cause, can all impact each dimension of the information environment. Once the initial analysis is complete, periodic analyses must be conducted to capture changes and update the analysis for the commander, staff, other units, and unified action partners.

Much like a running estimate, the analysis of the information environment becomes a living document, continuously updated to provide a current, accurate picture.

Step 2—Integrate Information Operations Assessment into Plans and Develop the Assessment Plan

(1) Early integration of assessments into plans is paramount, especially in the information environment. One of the first things that must happen during planning is to ensure that the objectives to be assessed are clear, understandable, and measurable. Equally important is to consider, as part of the assessment baseline, a control set of conditions within the information environment from which to assess the performance of the tasks assigned to any given IRC, in order to determine their potential impact on IO.

Planners should also be aware that while each staff section participates in the planning process, quite often portions of individual staff sections are simultaneously working on the steps of the planning process in greater depth and detail, not quite keeping pace with the entire staff effort as they work on subordinate and supporting staff tasks.

(2) In order to achieve the objectives, specific effects need to be identified. It is during COA development, Step 3 of JOPP, that specific tasks are determined that will create the desired effects, based on the commander’s objectives. Effects should be clearly distinguishable from the objective they support as a condition for success or progress and not be misidentified as another objective. These effects ultimately support tasks to influence, disrupt, corrupt, or usurp the decision making of our adversaries, or to protect our own. Effects should provide a clear and common description of the desired change in the information environment.

UNDERSTANDING TASK AND OBJECTIVE, CAUSE AND EFFECT INTERRELATIONSHIPS

Understanding the interrelationships of the tasks and objectives, and the desired cause and effect, can be challenging for the planner. Mapping the expected change (a theory of change) provides the clear, logical connections between activities and desired outcomes by defining intermediate steps between current situation and desired outcome and establishing points of measurement. It should include clearly stated assumptions that can be challenged for correctness as activities are executed. The ability to challenge assumptions in light of executed activities allows the joint information operations planner to identify flawed connections between activity and outcome, incorrect assumptions, or the presence of spoilers. For example:

Training and arming local security guards increases their ability and willingness to resist insurgents, which will increase security in the locale. Increased security will lead to increased perceptions of security, which will promote participation in local government, which will lead to better governance. Improved security and better governance will lead to increased stability.

  • Logical connection between activities and outcomes
    − Activity: training and arming local security guards
    − Outcome: increased ability to resist insurgents
  • Clearly stated assumptions
    − Increased ability and willingness to resist increases security in the locale
    − Increased security leads to increased perceptions of security
  • Intermediate steps and points of measurement
    − Measures of performance regarding training activities
    − Measures of effectiveness (MOEs) regarding willingness to resist
    − MOEs regarding increased local security

 

(3) This expected change shows a logical connection between activities (training and arming locals) and desired outcomes (increased stability). It makes some assumptions, but those assumptions are clearly stated, so they can be challenged if they are believed to be incorrect.

Further, those activities and assumptions suggest obvious things to measure, such as performance of the activities (the training and arming) and the outcome (change in stability). They also suggest measurement of more subtle elements of all the intermediate logical nodes such as capability and willingness of local security forces, change in security, change in perception of security, change in participation in local government, change in governance, and so on. Better still, if one of those measurements does not yield the desired result, the joint IO planner will be able to ascertain where in the chain the logic is breaking down (which hypotheses are not substantiated). They can then modify the expected change and the activities supporting it, reconnecting the logical pathway and continuing to push toward the objectives.

(4) Such an expected change might have begun as something quite simple: training and arming local security guards will lead to increased stability. While this gets at the kernel of the idea, it is not particularly helpful for building assessments. Stopping there would suggest only the need to measure the activity and the outcome. However, it leaves a huge assumptive gap. If training and arming security guards goes well, but stability does not increase, there will be no apparent reason why. To begin to expand on a simple expected change, the joint IO planner should ask the question, “Why? How might A lead to B?” (In this case, how would training and arming security guards lead to stability?) A thoughtful answer to this question usually leads to recognition of another node to the expected change. If needed, the question can be asked again relative to this new node, until the expected change is sufficiently articulated.

(5) Circumstances on the ground might also require the assumptions in an expected change to be more explicitly defined. For example, using the expected change articulated in the above example, the joint IO planner might observe that in successfully training and arming local security guards, they are better able to resist insurgents, leading to an increased perception of security, as reported in local polls. However, participation in local government, as measured through voting in local elections and attendance at local council meetings, has not increased. The existing expected change and associated measurements illustrate where the chain of logic is breaking down (somewhere between perceptions of security and participation in local governance), but it does not (yet) tell why that break is occurring. Adjusting the expected change by identifying the incorrect assumption or spoiling factor preventing the successful connection between security and local governance will also help improve achievement of the objective.
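The security-guard example lends itself to a simple data-structure sketch: model the expected change as an ordered chain of measured nodes and report the first node whose measurement fails to substantiate the logic. The node names, the normalized measurement values, and the 0.5 "substantiated" threshold below are illustrative assumptions, not doctrinal values.

```python
# Minimal sketch of an expected-change (theory of change) chain, using the
# security-guard example. Values and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Node:
    name: str
    measurement: float   # normalized 0.0-1.0 assessment of this node
    threshold: float = 0.5

    def substantiated(self) -> bool:
        return self.measurement >= self.threshold

expected_change = [
    Node("Training and arming local guards (MOP)", 0.9),
    Node("Ability/willingness to resist insurgents (MOE)", 0.8),
    Node("Local security (MOE)", 0.7),
    Node("Perception of security (MOE)", 0.7),
    Node("Participation in local government (MOE)", 0.2),
    Node("Quality of governance (MOE)", 0.1),
    Node("Stability (end state)", 0.1),
]

def first_break(chain: list[Node]) -> Node | None:
    """Return the first node whose measurement does not substantiate the logic,
    i.e., where the chain of assumptions appears to be breaking down."""
    for node in chain:
        if not node.substantiated():
            return node
    return None

broken = first_break(expected_change)
if broken:
    print(f"Logic breaks at: {broken.name} ({broken.measurement:.1f})")
```

In this hypothetical run the break surfaces between perception of security and participation in local government, mirroring the situation described in paragraph (5); the planner would then look for the incorrect assumption or spoiler at that node.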

Step 3—Develop Information Operations Assessment Information Requirements and Collection Plans

(1) Critical to this step is ensuring that attributes are chosen that are relevant and applicable during the planning processes, as these will drive the determination of measures that display behavioral characteristics, attitudes, perceptions, and motivations that can be examined externally. Measures are categorized as follows:

(a) Qualitative—a categorical measurement expressed by means of a natural language description rather than in terms of numbers. Methodologies consist of focus groups, in-depth interviews, ethnography, media content analysis, after-action reports, and anecdotes (individual responses sampled consistently over time).

(b) Quantitative—a numerical measurement expressed in terms of numbers rather than by means of a natural language description. Methodologies consist of surveys, polls, observational data (intelligence, surveillance, and reconnaissance), media analytics, and official statistics.

(2) An integrated collection management plan ensures that assessment data gathered at the tactical level is incorporated into operational planning. This collection management plan needs to satisfy information requirements with the assigned tactical, theater, and national intelligence sources and other collection resources. Just as crucial is realizing that not every information requirement will be answered by the intelligence community and therefore planners must consider collaborating with other sources of information. Planners must discuss collection from other sources of information with the collection manager and unit legal personnel to ensure that the information is included in the overall assessment and the process is in accordance with intelligence oversight regulations and policy.

(3) Including considerations for assessment collection in the plan will facilitate the return of data needed to accomplish the assessment. Incorporating the assessment plan with the directions to conduct an activity will help ensure that resource requirements for assessment are acknowledged when the plan is approved. The assessment plan should, at a minimum, include timing and frequency of data collection, identify the party to conduct the collection, and provide reporting instructions.

(4) A well-designed assessment plan will:

(a) Develop the commander’s assessment questions.

(b) Document the expected change.

(c) Document the development of information requirements needed specifically for IO.

(d) Define key terms embedded within the end state with regard to the actors or TAs, operational activities, effects, acceptable conditions, rates of change, thresholds of success/failure, and technical/tactical triggers.

(e) Verify that tactical objectives support operational objectives.

(f) Identify strategic and operational considerations—in addition to tactical considerations, linking assessments to lines of operation and the associated desired conditions.

(g) Identify key nodes and connections in the expected change to be measured.

(h) Document collection and analysis methods.

(i) Establish a method to evaluate triggers to the commander’s decision points.

(j) Establish methods to determine progress towards the desired end state.

(k) Establish methods to estimate risk to the mission.

(l) Develop recommendations for plan adjustments.

(m) Establish the format for reporting assessment results.
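
The checklist above lends itself to a structured record so that nothing is dropped when the plan is handed between planners and the assessment cell. The following is a minimal, notional Python sketch; the field names and example values are illustrative assumptions, not a doctrinal schema.

```python
# Notional sketch: an IO assessment plan captured as a structured record.
# Field names mirror items (a)-(m) above and are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class CollectionTasking:
    information_requirement: str   # what must be answered (item (c))
    collector: str                 # who conducts the collection (item (h))
    frequency: str                 # timing/frequency of data collection
    reporting_instructions: str    # where and how results are reported

@dataclass
class IOAssessmentPlan:
    commanders_questions: List[str]           # item (a)
    expected_change: str                      # item (b)
    key_terms: dict                           # item (d): actors/TAs, thresholds, triggers
    collection_plan: List[CollectionTasking]  # items (c) and (h)
    decision_point_triggers: List[str]        # item (i)
    reporting_format: str                     # item (m)

plan = IOAssessmentPlan(
    commanders_questions=["Is perceived security improving in District X?"],
    expected_change="Improved security leads to increased participation in local governance",
    key_terms={"threshold_of_success": "20% increase in council attendance"},
    collection_plan=[CollectionTasking(
        information_requirement="Local perception of security",
        collector="MISO detachment / contracted polling",
        frequency="Monthly",
        reporting_instructions="Report to the assessment cell by the 5th of each month")],
    decision_point_triggers=["Council attendance below baseline for two consecutive months"],
    reporting_format="Quarterly assessment brief",
)
print(len(plan.collection_plan), "collection tasking(s) documented")
```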

  1. Step 4—Build/Modify Information Operations Assessment Baseline. A subset of JIPOE, the baseline is part of the overall characterization of the information environment that was accomplished in Step 1. It serves as a reference point for comparison, enabling an assessment of the way in which activities create desired effects. The baseline allows the commander and staff to set goals for desired rates of change within the information environment and establish thresholds for success and failure.
  2. Step 5—Coordinate and Execute Information Operations and Coordinate Intelligence Collection Activities

(1) With information gained in steps 1 and 4, the joint IO planner should be able to build an understanding of the TA. This awareness will yield a collection plan that enables the joint IO planner to determine whether or not the TA is “seeing” the activities/actions presented. The collection method must be able to detect the TA’s reaction. IO planners, assessors, and intelligence planners need to be able to communicate effectively to accurately capture the required intelligence needed to perform IO assessments.

(2) Information requirements and subsequent indicator collection must be tightly managed during employment of IRCs in order to validate execution and to monitor TA response. In the information environment, coordination and timing are crucial because some IRCs are time sensitive and require immediate indicator monitoring to develop valid assessment data.

  1. Step 6—Monitor and Collect Information Environment Data for Information Operations Assessment

(1) Monitoring is the continuous process of observing conditions relevant to current operations. Assessment data are collected, aggregated, consolidated and validated. Gaps in the assessment data are identified and highlighted in order to determine actions needed to alleviate shortfalls or make adjustments to the plan. As information and intelligence are collected during execution, assessments are used to validate or negate assumptions that define cause (action) and effect (conclusion) relationships between operational activities, objectives, and end states.
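
As a concrete illustration of identifying gaps in assessment data, the sketch below checks which information requirements have no reporting in the current period so the shortfall can be highlighted to the collection manager; the requirement names and report records are invented for illustration.

```python
# Minimal sketch: flag information requirements with no collected data in the
# current reporting period. All names and records are illustrative only.
information_requirements = [
    "TA exposure to broadcast product",
    "Local perception of security",
    "Attendance at council meetings",
]

collected_reports = [
    {"requirement": "Local perception of security", "source": "monthly poll"},
    {"requirement": "Local perception of security", "source": "patrol debrief"},
]

answered = {report["requirement"] for report in collected_reports}
gaps = [ir for ir in information_requirements if ir not in answered]

for ir in gaps:
    print(f"Collection gap: no data received for '{ir}' this period")
```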

(2) If anticipated progress toward an end state does not occur, then the staff may conclude that the intended action does not have the intended effect. The uncertainty in the information environment makes the use of critical assumptions particularly important, as operation planning may need to be adjusted for elements that may not have initially been well understood when the plan was developed.

  1. Step 7—Analyze Information Operations Assessment Data

(1) If available, personnel trained or qualified in analysis techniques should conduct data analysis. Analysis can be done outside the operational area by leveraging reachback capabilities. One of the more important factors for analysis is that it is conducted in an unbiased manner. This is more easily accomplished if the personnel conducting analysis are not the same personnel who developed the execution plan. Assessment data are analyzed and the results are compared to the baseline measurements and updated continuously as the staff continues its analysis of the information environment.

(2) Deficiency analysis must also occur in this step. If no changes were observed in the information environment, then a breakdown may have occurred somewhere. The plan might be flawed, execution might not have been successful, collection may not have been accomplished as prescribed, or more time may be needed to observe any changes.

  1. Step 8—Report Assessment Results and Make Recommendations

As expressed earlier in this chapter, assessment results enable staffs to ensure that tasks stay linked to objectives and objectives remain relevant and linked to desired end states. They provide opportunities to identify IRC shortfalls and resource issues that may be impeding joint IO effectiveness. These results may also provide information to agencies outside of the command or chain of command.

The primary purpose of reporting the results is to inform the command and staff concerning the progress of objective achievement and the effects on the information environment, and to enable decision making. The published assessment plan, staff standard operating procedures, battle rhythm, and orders are documents in which commanders can dictate how often assessment results are provided and the format in which they are reported.

  1. Barriers to Information Operations Assessment
  2. The preceding IO assessment methodology can support all operations, and most barriers to assessment can be overcome simply by considering assessment requirements as the plan is developed. But whatever the phase or type of operation, the biggest barriers to assessment are generally self-generated.
  3. Some of the self-generated barriers to assessment include the failure to establish objectives that are actually measurable, the failure to collect baseline data against which “post-test” data can be compared, and the failure to plan adequately for the collection of assessment data, including the use of intelligence assets.
  4. There are other factors that complicate IO assessment. Foremost, it may be difficult or impossible to directly relate behavior change to an individual act or group of actions. Also, the logistics of data capture are not simple. Contingencies and operations in uncertain or hostile environments present unique challenges in terms of operational tempo and access to conduct assessments.
  5. Organizing for Operation Assessments
  6. Integrating assessment into the planning effort is normally the responsibility of the lead planner, with assistance across the staff. The lead planner understands the complexity of the plan and decision points established as the plan develops. The lead planner also understands potential indicators of success or failure.
  7. As a plan becomes operationalized, the overall assessment responsibility typically transitions from the lead planner to the J-3.
  8. When appropriate, the commander can establish an assessments cell or team to manage assessments activities. When utilized, this cell or team must have appropriate access to operational information, appropriate access to the planning process, and the representation of other staff elements, to include IRCs.
  9. Measures and Indicators
  10. As emphasized in Chapter IV, “Integrating Information-Related Capabilities into the Joint Operation Planning Process,” paragraph 2.f., “Relationship Between Measures of Performance (MOPs) and Measures of Effectiveness (MOEs),” MOPs and MOEs help accomplish the assessment process by qualifying or quantifying the intangible attributes of the information environment. This is done to assess the effectiveness of activities conducted in the information environment and to establish a direct causal link between the activity and the desired effect.

  1. MOPs should be developed during the operation planning process, should be tied directly to operation planning, and at a minimum, assess completion of the various phases of an activity or program.

Further, MOPs should assess any action, activity, or operation at which IO actions or activities interact with the TA. For certain tasks, that interaction occurs through touch points with the TA (voice, text, video, or face-to-face). For instance, during a leaflet drop, the point of dissemination of the leaflets would be an action or activity. The MOP for any one action should be whether or not the TA was exposed to the IO action or activity.

(1) For each activity phase, task, or touch point, a set of MOPs based on the operational plan outlined in the program description should be developed. Task MOPs are measured via internal reporting within units and commands. Touch-point MOPs can be measured in one of several ways. Whether or not a TA is aware of, interested in, or responding to an IRC product or activity can be directly ascertained by conducting a survey or interview. This information can also be gathered by direct observational methods such as field reconnaissance, surveillance, or intelligence collection. Information can also be gathered via indirect observations such as media reports, online activity, or atmospherics.
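
A touch-point MOP of the kind described above can be reduced to a simple exposure rate once survey or interview data are in hand. The sketch below is a notional Python illustration; the survey responses are invented.

```python
# Notional sketch: a touch-point MOP computed as the share of surveyed TA
# members who recall exposure to the IO activity (e.g., a leaflet drop).
survey_responses = [  # True = respondent recalls seeing the leaflet
    True, False, True, True, False, True, False, True, True, False,
]

exposed = sum(survey_responses)
exposure_rate = exposed / len(survey_responses)

print(f"Touch-point MOP: {exposed}/{len(survey_responses)} respondents "
      f"({exposure_rate:.0%}) report exposure to the activity")
```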

(2) The end state of operation planning is a multi-phased plan or order, from which planners can directly derive a list of MOPs, assuming a higher echelon has not already designated the MOPs.

  1. MOEs need to be specific, clear, and observable to provide the commander effective feedback. In addition, there needs to be a direct link between the objectives, effects, and the TA. Most of the IRCs have their own doctrine and discuss MOEs with slightly different language, but with ultimately the same functions and roles.

(1) In line with JP 5-0, Joint Operation Planning, development of MOEs and their associated impact indicators (derived from measurable supporting objectives) must be done during the planning process.

(2) In developing IO MOEs, the following general guidelines should be considered. First, they should be related to the end state; that is, they should directly relate to the desired effects. They should also be measurable, quantitatively or qualitatively. In order to measure effectiveness, a baseline measurement must exist or be established prior to execution, against which system changes can be measured. They should sit within a defined periodic or conditional assessment framework (i.e., the required feedback time, cyclical period, or conditions should be clearly stated for each MOE, with a reporting deadline set within a specified assessment period that clearly delineates the beginning, progression, and termination of the cycle in which the effectiveness of the operations is to be assessed). Finally, they need to be properly resourced. The collection, collation, analysis, and reporting of MOE data require personnel, budgetary, and materiel resources. IO staffs, along with their counterparts at the component level, should ensure that these resource requirements are built into the plan during its development.
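
To make the baseline and threshold guidance concrete, the sketch below compares a single quantitative MOE measurement against its pre-execution baseline and the thresholds of success and failure defined for the assessment period; all values are invented for illustration.

```python
# Minimal sketch: compare a quantitative MOE measurement against its baseline
# and against thresholds of success/failure set for the assessment period.
baseline = 0.34            # e.g., share of TA expressing support before execution
current = 0.41             # same measure at the end of the assessment cycle
success_threshold = 0.45   # condition the commander defined as success
failure_threshold = 0.30   # condition defined as failure

change = current - baseline
if current >= success_threshold:
    status = "success threshold met"
elif current <= failure_threshold:
    status = "failure threshold crossed"
else:
    status = "progressing; neither threshold met"

print(f"MOE change vs. baseline: {change:+.2f} ({status})")
```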

(3) The more specific the MOE, the more readily the intelligence collection manager can determine how best to collect against the requirements and provide valuable feedback pertaining to them. The ability to establish MOEs and conduct combat assessment for IO requires observation and collection of information from diverse, nebulous and often untimely sources. These sources may include: human intelligence; signals intelligence; air and ground-based intelligence; surveillance and reconnaissance; open-source intelligence, including the Internet; contact with the public; press inquiries and comments; Department of State polls; reports and surveys; nongovernmental organizations; international organizations; and commercial polls.

(4) One of the biggest challenges with MOE development is the difficulty of defining variables and establishing causality. Therefore, it is more advisable to approach this from a correlational rather than a causal perspective, where unrealistic “zero-defect” predictability gives way to more attainable correlational analysis, which provides insights into the likelihood of particular events and effects given certain criteria in terms of conditions and actors in the information environment.

Evidence suggests that correlation of indicators and events has proven more accurate than the evidence supporting cause-and-effect relationships, particularly when it comes to behavior and the intangible parameters of the cognitive elements of the information environment. IRCs, however, are directed at TAs and decision makers, and the systems that support them, making it much more difficult to establish concrete causal relationships, especially when assessing foreign public opinion or human behavior. Unforeseen factors can lead to erroneous interpretations; for example, a traffic accident in a foreign country involving a US service member, or a local civilian’s bias against US policies, can cause a decline in public support, irrespective of otherwise successful IO.
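
A correlational look at assessment data can be as simple as computing the correlation coefficient between an indicator time series and an outcome series, as sketched below with invented monthly values; a high coefficient indicates association, not proof of cause and effect.

```python
# Minimal sketch of a correlational (not causal) look at assessment data:
# Pearson correlation between an indicator series and an outcome series.
from math import sqrt

indicator = [12, 15, 14, 18, 21, 24]   # e.g., monthly broadcasts reaching the TA
outcome   = [30, 33, 31, 36, 40, 44]   # e.g., monthly favorable poll responses

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson(indicator, outcome)
print(f"Correlation between indicator and outcome: r = {r:.2f} "
      "(association, not proof of cause and effect)")
```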

(5) If IO effects and supporting IO tasks are not linked to the commander’s objectives, or are not clearly written, measuring their effectiveness is difficult. Clearly written IO tasks must be linked to the commander’s objectives to justify resources to measure their contributing effects. If MOEs are difficult to write for a specific IO effect, the effect should be reevaluated and a rewrite considered. When attempting to describe desired effects, it is important to keep the intended impact of the effect in mind, as a guide to what must be observed, collected, and measured. In order to effectively identify the assessment methodology and to be able to recreate the process as part of the scientific method, MOEs must be written with a documented pathway for effect creation.

MOEs should be observable, to aid with collection; quantifiable, to increase objectivity; precise, to ensure accuracy; and correlated with the progress of the operation, to attain timeliness.

  1. Indicators are crucial because they aid the joint IO planner in informing MOEs and should be identifiable across the center of gravity critical factors. They can be independently weighted for their contribution to an MOE and should be based on separate criteria (a notional weighting sketch appears after the considerations below). A single indicator can inform multiple MOEs. Dozens of indicators will be required for a large-scale operation.
  2. Considerations
  3. In the information environment, it is unlikely that universal measures and indicators will exist because of varying perspectives. In addition, any data collected is likely to be incomplete. Assessments need to be periodically adjusted to the changing situation in order to avoid becoming obsolete.

In addition, assessments will usually need to be supplemented by subjective constructs that reflect the joint IO planner’s scope and perspective (e.g., intuition, anecdotal evidence, or a limited set of evidence).
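
The weighting of indicators mentioned above can be illustrated with a simple roll-up of several normalized indicators into a single MOE score. The sketch below is notional; the indicator names, weights, and values are invented, and any real weighting scheme would be documented in the assessment plan.

```python
# Notional sketch: several indicators, each independently weighted, rolled up
# into a single MOE score. Names, weights, and values are invented.
indicators = {
    # name: (weight, normalized value in [0, 1])
    "Poll: perception of security":      (0.5, 0.62),
    "Council meeting attendance":        (0.3, 0.40),
    "Voluntary tips to security forces": (0.2, 0.55),
}

total_weight = sum(weight for weight, _ in indicators.values())
moe_score = sum(weight * value for weight, value in indicators.values()) / total_weight

print(f"Weighted MOE score: {moe_score:.2f} (0 = no progress, 1 = objective met)")
```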

  1. Assessment teams may not have direct access to a TA for a variety of reasons. The goal of measurement is not to achieve perfect accuracy or precision—given the ever-present biases of theory and the limitations of the tools that exist—but rather to reduce uncertainty about the value being measured. Measurements of IO effects on a TA can be accomplished in two ways: direct observation and indirect observation. Direct observation measures the attitudes or behaviors of the TA either by questioning the TA or by observing behavior firsthand. Indirect observation measures otherwise inaccessible attitudes and behaviors by the effects that they have on more easily measurable phenomena. Direct observations are preferable for establishing baselines and measuring effectiveness, while indirect observations reduce uncertainty in measurements to a lesser degree.
  2. Categories of Assessment
  3. Operation assessment of IO is an evaluation of the effectiveness of operational activities conducted in the information environment. Operation assessments primarily document mission success or failure for the commander and staff. However, operation assessments inform other types of assessment, such as programmatic and budgetary assessment. Programmatic assessment evaluates readiness and training, while budgetary assessment evaluates return on investment.
  4. When categorized by the levels of warfare, there exists tactical, operational and strategic-level assessment. Tactical-level assessment evaluates the effectiveness of a specific, localized activity. Operational-level assessment evaluates progress towards accomplishment of a plan or campaign. Strategic level assessment evaluates progress towards accomplishment of a theater or national objective. The skilled IO planner will link tactical actions to operational and strategic objectives.

APPENDIX A

REFERENCES

 

The development of JP 3-13 is based on the following primary references.

 General

National Security Strategy.

Unified Command Plan.

Executive Order 12333, United States Intelligence Activities.

The Fourth Amendment to the US Constitution.

The Privacy Act, Title 5, USC, Section 552a.

The Wiretap Act and the Pen/Trap Statute, Title 18, USC, Sections 2510-2522 and 3121-3127.

The Stored Communications Act, Title 18, USC, Sections 2701-2712.

The Foreign Intelligence Surveillance Act, Title 50, USC.

 Department of State Publications

Department of State Publication 9434, Treaties In Force.

Department of Defense Publications

Secretary of Defense Memorandum dated 25 January 2011, Strategic Communication and Information Operations in the DOD.

National Military Strategy.

DODD S-3321.1, Overt Psychological Operations Conducted by the Military Services in Peacetime and in Contingencies Short of Declared War.

DODD 3600.01, Information Operations (IO).

DODD 5200.27, Acquisition of Information Concerning Persons and Organizations not Affiliated with the Department of Defense.

DOD 5240.1-R, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons.

DODI O-3600.02, Information Operation (IO) Security Classification Guidance.

 Chairman of the Joint Chiefs of Staff Publications

CJCSI 1800.01D, Officer Professional Military Education Policy (OPMEP).
CJCSI 3141.01E, Management and Review of Joint Strategic Capabilities Plan (JSCP)-Tasked Plans.

CJCSI 3150.25E, Joint Lessons Learned Program.

CJCSI 3210.01B, Joint Information Operations Policy.

Chairman of the Joint Chiefs of Staff Manual (CJCSM) 3122.01A, Joint Operation Planning and Execution System (JOPES) Volume I, Planning Policies and Procedures.

CJCSM 3122.02D, Joint Operation Planning and Execution System (JOPES) Volume III, Time-Phased Force and Deployment Data Development and Deployment Execution.

CJCSM 3122.03C, Joint Operation Planning and Execution System (JOPES) Volume II, Planning Formats.

CJCSM 3500.03C, Joint Training Manual for the Armed Forces of the United States.

CJCSM 3500.04F, Universal Joint Task Manual.

JP 1, Doctrine for the Armed Forces of the United States.

JP 1-02, Department of Defense Dictionary of Military and Associated Terms.

JP 1-04, Legal Support to Military Operations.

JP 2-0, Joint Intelligence.

JP 2-01, Joint and National Intelligence Support to Military Operations.

JP 2-01.3, Joint Intelligence Preparation of the Operational Environment.

JP 2-03, Geospatial Intelligence Support to Joint Operations.
JP 3-0, Joint Operations.
JP 3-08, Interorganizational Coordination During Joint Operations.
JP 3-10, Joint Security Operations in Theater.
JP 3-12, Cyberspace Operations.
JP 3-13.1, Electronic Warfare.
JP 3-13.2, Military Information Support Operations.

JP 3-13.3, Operations Security.
JP 3-13.4, Military Deception.
JP 3-14, Space Operations.
JP 3-16, Multinational Operations.

JP 3-57, Civil-Military Operations.

JP 3-60, Joint Targeting.

JP 3-61, Public Affairs.
JP 5-0, Joint Operation Planning.
JP 6-01, Joint Electromagnetic Spectrum Management Operations. 

Multinational Publication

AJP 3-10, Allied Joint Doctrine for Information Operations.

Notes on Countering Threat Networks

Accession Number: AD1025082

Title: Countering Threat Networks

Descriptive Note: Technical Report

Corporate Author: JOINT STAFF WASHINGTON DC

Abstract: 

This publication has been prepared under the direction of the Chairman of the Joint Chiefs of Staff (CJCS). It sets forth joint doctrine to govern the activities and performance of the Armed Forces of the United States in joint operations, and it provides considerations for military interaction with governmental and nongovernmental agencies, multinational forces, and other interorganizational partners. It provides military guidance for the exercise of authority by combatant commanders and other joint force commanders (JFCs), and prescribes joint doctrine for operations and training. It provides military guidance for use by the Armed Forces in preparing and executing their plans and orders. It is not the intent of this publication to restrict the authority of the JFC from organizing the force and executing the mission in a manner the JFC deems most appropriate to ensure unity of effort in the accomplishment of objectives. The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network's motivation and objectives is required to effectively counter its efforts.

 

Descriptors: Threats, military organizations, intelligence collection

 

Distribution Statement: APPROVED FOR PUBLIC RELEASE

 

Link to Article: https://apps.dtic.mil/sti/citations/AD1025082

 

Notes

Scope

This publication provides joint doctrine for joint force commanders and their staffs to plan, execute, and assess operations to identify, neutralize, disrupt, or destroy threat networks.

Introduction

The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts.

Policy and Strategy

CTN planning and operations require extensive coordination as well as innovative, cross-cutting approaches that utilize all instruments of national power. The national military strategy describes the need for the joint force to operate in this complex environment.

Challenges of the Strategic Security Environment

CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply.

The Strategic Approach

The groundwork for successful CTN activities starts with information and intelligence to develop an understanding of the operational environment and the threat network.

Military engagement, security cooperation, and deterrence are just some of the activities that may be necessary to successfully counter threat networks without deployment of a joint task force.

Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants.

Threat Network Fundamentals

Threat Network Construct

A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or operational environment (OE) that can be targeted for action. A link is a behavioral, physical, or functional relationship between nodes. Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying centers of gravity (COGs), networks, and cells the joint force commander (JFC) may wish to influence or change during an operation.

Network Analysis

Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes. The political, military, economic, social, information, and infrastructure systems perspective is a useful starting point for analysis of threat networks. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

Determining and Analyzing Node-Link Relationships

Social network analysis provides a method that helps the JFC and staff understand the relevance of nodes and links. The strength or intensity of a single link can be relevant to determining the importance of the functional relationship between nodes and the overall significance to the larger system. The number and strength of nodal links within a set of nodes can be indicators of key nodes and a potential COG.
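
The idea that the number and strength of links can flag key nodes lends itself to a simple weighted-degree calculation, as sketched below; the node names and link strengths are invented for illustration, and real social network analysis would apply fuller measures.

```python
# Minimal sketch: rank nodes by the number and strength of their links, one
# simple indicator of potential key nodes. Names and weights are invented.
from collections import defaultdict

links = [  # (node_a, node_b, strength of relationship)
    ("Financier", "Cell leader A", 3),
    ("Financier", "Cell leader B", 2),
    ("Cell leader A", "Courier", 1),
    ("Cell leader B", "Courier", 1),
    ("Courier", "Supplier", 2),
]

weighted_degree = defaultdict(int)
for a, b, weight in links:
    weighted_degree[a] += weight
    weighted_degree[b] += weight

for node, score in sorted(weighted_degree.items(), key=lambda kv: -kv[1]):
    print(f"{node}: weighted degree {score}")
```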

Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals. Examples of cells include: operational, logistical, training, communications, financial, and WMD proliferation cells.

Networked Threats and Their Impact on the Operational Environment

Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

Threat Network Characteristics

Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks take many forms and serve different purposes, but all are composed of people, processes, places, material, or combinations of these.

Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Networks possess many characteristics important to their success and survival, such as flexible command and control structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt.

Network Engagement

Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks through the building of mutual trust and cooperation through network engagement. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.

Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups. These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. To successfully accomplish mission goals, the JFC should equally consider the impact of actions on multinational and friendly forces, the local population, and criminal enterprises, as well as on the adversary.

Identify a Threat Network

Threat networks often attempt to remain hidden. By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify individual networks within. A thorough joint intelligence preparation of the operational environment (JIPOE) product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

Planning to Counter Threat Networks

Joint Intelligence Preparation of the Operational Environment and Threat Networks

JIPOE is the first step in identifying the essential elements that constitute the OE and is used to plan and conduct operations against threat networks. The focus of the JIPOE analysis for threat networks is to help characterize aspects of the networks.

Understanding the Threat’s Network

To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organization goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. Building a network function template is a method to organize known information about the network associated with structure and functions of the network. By developing a network function template, the information can be initially understood and then used to facilitate critical factors analysis (CFA). CFA is an analytical framework to assist planners in analyzing and identifying a COG and to aid operational planning.

Targeting Evaluation Criteria

A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis. The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors: network affiliations, criticality, accessibility, recuperability, vulnerability, effect, and recognizability.
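
A CARVER-style evaluation reduces to rating each candidate target on each factor and comparing the totals. The sketch below is a notional Python illustration using the seven factors listed above; the candidate nodes and ratings are invented.

```python
# Notional sketch of CARVER-style numeric scoring: each candidate node is
# rated 1-5 on each factor and the ratings are summed to rank targets.
factors = ["network_affiliation", "criticality", "accessibility",
           "recuperability", "vulnerability", "effect", "recognizability"]

candidates = {
    "Finance facilitator": dict(zip(factors, [4, 5, 3, 4, 3, 5, 4])),
    "Weapons cache":       dict(zip(factors, [3, 3, 4, 2, 5, 3, 5])),
    "Propaganda cell":     dict(zip(factors, [4, 3, 2, 3, 3, 4, 3])),
}

scores = {name: sum(ratings.values()) for name, ratings in candidates.items()}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: CARVER score {score}")
```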

Countering Threat Networks Through the Planning of Phases

JFCs may plan and conduct CTN activities throughout all phases of a given operation. Upon gaining an understanding of the various threat networks in the OE through the joint planning process (JPP), JFCs and their staffs develop a series of prudent (feasible, suitable, and acceptable) CTN actions to be executed in conjunction with other phased activities.

Activities to Counter Threat Networks

Targeting Threat Networks

JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, critical capabilities (network’s functions), and critical requirements for the network. Joint force targeting efforts should employ a comprehensive approach, leveraging military force and civil agency capabilities that keep continuous pressure on multiple nodes and links of the network’s structure.

Desired Effects on Networks

When commanders decide to generate an effect on a network through engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The selection of effects desired on a network is conducted as part of target selection, which includes consideration of the capabilities to employ that were identified during capability analysis in the joint targeting cycle.

Targeting

CTN targets can be characterized as targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will make related to the JFC’s intent, key nodes such as high-value individuals, or longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time to exploit them. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

Lines of Effort by Phase

During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks. However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE.

Theater Concerns in Countering Threat Networks

Many threat networks are transnational, recruiting, financing, and operating on a global basis. Theater commanders need to be aware of the relationships among these networks and identify the basis for their particular connection to a geographic combatant commander’s area of responsibility.

Operational Approaches to Countering Threat Networks

There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus of the operation. In others, a balanced approach through multiple lines of operation and lines of effort may be necessary, ensuring that civilian concerns are met while protecting the population from the threat networks’ operators.

Assessments

Assessment of Operations to Counter Threat Networks

CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for human intelligence.

Operation Assessment

CTN activities may require assessing multiple measures of effectiveness (MOEs) and measures of performance (MOPs), depending on threat network activity. The assessment process provides a feedback mechanism to the JFC to provide guidance and direction for future operations and targeting efforts against threat networks.

Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate. In conducting each of these activities, assessors must be linked to JPP, understand the operation plan, and inform the intelligence process as to what information is required to support indicators, MOEs, and MOPs. In assessing CTN operations, quantitative data and analysis will inform assessors.

CHAPTER I

OVERVIEW

“The emergence of amorphous, adaptable, and networked threats has far-reaching implications for the US national security community. These threats affect DOD [Department of Defense] priorities and war fighting strategies, driving greater integration with other departments and agencies performing national security missions, and create the need for new organizational concepts and decision- making paradigms. The impacts are likely to influence defense planning for years to come.”

Department of Defense Counternarcotics and Global Threats Strategy, April 2011

Threat networks are those whose size, scope, or capabilities threaten US interests. These networks may include the underlying informational, economic, logistical, and political components that enable them to function. These threats create a high level of uncertainty and ambiguity in terms of intent, organization, linkages, size, scope, and capabilities. These threat networks jeopardize the stability and sovereignty of nation-states, including the US.

They tend to operate among civilian populations and in the seams of society and may have components that are recognized locally as legitimate parts of society. Collecting information and intelligence on these networks, their nodes, links, and affiliations is challenging, and analysis of their strengths, weaknesses, and centers of gravity (COGs) differs greatly from traditional nation- state adversaries.

  1. Threat networks are part of the operational environment (OE). These networks utilize existing networks and may create new networks that seek to move money, people, information, and goods for the benefit of the network.

Not all of these interactions create instability and not all networks are a threat to the joint force and its mission. While some societies may accept a certain degree of corruption and criminal behavior as normal, it is never acceptable for these elements to develop networks that begin to pose a threat to national and regional stability. When a network begins to pose a threat, action should be considered to counter the threat.

This doctrine will focus on those networks that do present a threat with an understanding that friendly, neutral, and threat networks overlap and share nodes and links. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Some politically or ideologically based networks may avoid open confrontation with US forces; nevertheless, these networks may threaten mission success. Their activities may include spreading ideology, moving money, moving supplies (including weapons and fighters), human trafficking, drug smuggling, information relay, or acts of terrorism toward the population or local governments. Threat networks may be local, regional, or international and a threat to deployed joint forces and the US homeland.

  1. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts. The issues that drive a network and its ideology should be clearly understood. For example, they may be driven by grievances, utopian ideals, power, revenge over perceived past wrongs, greed, or a combination of these.
  2. CTN is one of three pillars of network engagement that includes partnering with friendly networks and engaging with neutral networks in order to attain the commander’s desired military end state within a complex OE. It consists of activities to pressure threat networks or mitigate their adverse effects. These activities normally occur continuously and simultaneously at multiple levels (tactical, operational, and strategic) and may employ lethal and/or nonlethal capabilities in a direct or indirect manner. The most effective operations pressure and influence elements of these networks at multiple fronts and target multiple nodes and links.

The networks found in the OE may be simple or complex and must be identified and thoroughly analyzed. Neither all threats nor all elements of their supporting networks can be defeated, particularly if they have a regional or global presence. Certain elements of the network can be deterred, other parts neutralized, and some portions defeated. Engaging these threats through their supporting networks is not an adjunct or ad hoc set of operations and may be the primary mission of the joint force. It is not a stand-alone operation planned and conducted separately from other military operations. CTN should be fully integrated into the joint operational design, joint intelligence preparation of the operational environment (JIPOE), joint planning process (JPP), operational execution, joint targeting process, and joint assessments.

  1. Threat networks are often the most complex adversaries that exist within the OEs and frequently employ asymmetric methods to achieve their objectives. Disrupting their global reach and ability to influence events far outside of a specific operational area requires unity of effort across combatant commands (CCMDs) and all instruments of national power.

Joint staffs must realize that effectively targeting threat networks must be done in a comprehensive manner. This is accomplished by leveraging the full spectrum of capabilities available within the joint force commander’s (JFC’s) organization, from intergovernmental agencies, and/or from partner nations (PNs).

  1. Policy and Strategy
  2. DOD strategic guidance recognizes the increasing interconnectedness of the international order and the corresponding complexity of the strategic security environment.

Threat networks and their linkages transcend geographic and functional CCMD boundaries.

  1. CCDRs must be able to employ a joint force to work with interagency and interorganizational security partners in the operational area to shape, deter, and disrupt threat networks. They may employ a joint force with PNs to neutralize and defeat threat networks.
  2. CCDRs develop their strategies by analyzing all aspects of the OE and developing options to set conditions to attain strategic end states. They translate these options into an integrated set of CCMD campaign activities described in CCMD campaign and associated subordinate and supporting plans. CCDRs must understand the OE, recognize nation-state use of proxies and surrogates, and be vigilant to the dangers posed by super-empowered threat networks. Super-empowered threat networks are networks that develop or obtain nation-state capabilities in terms of weapons, influence, funding, or lethal aid.

In combination with US diplomatic, economic, and informational efforts, the joint force must leverage partners and regional allies to foster cooperation in addressing transnational challenges.

  1. Challenges of the Strategic Security Environment
  2. The strategic security environment is characterized by uncertainty, complexity, rapid change, and persistent conflict. Advances in technology and information have enabled individual non-state actors and networks to move money, people, and resources and to spread violent ideology around the world. Non-state actors are able to conduct activities globally, and nation-states leverage proxies to launch and maintain sustained campaigns in remote areas of the world.

Alliances, partnerships, cooperative arrangements, and inter-network conflict may morph and shift week-to-week or even day-to-day. Threat networks or select components often operate clandestinely. The organizational construct, geographical location, linkages, and presence among neutral or friendly populations are difficult to detect during JIPOE, and once a rudimentary baseline is established, ongoing changes are difficult to track. This makes traditional intelligence collection and analysis, as well as operations and assessments, much more challenging than against traditional military threats.

  1. Deterring threat networks is a complex and difficult challenge that is significantly different from classical notions of deterrence. Deterrence is most classically thought of as the threat to impose such high costs on an adversary that restraint is the only rational conclusion. When dealing with violent extremist organizations and other threat networks, deterrence is likely to be ineffective due to radical ideology, diffuse organization, and lack of ownership of territory.

Due to the complexity of deterring violent extremist organizations, flexible approaches must be developed according to a network’s ideology, organization, sponsorship, goals, and other key factors to clearly communicate that the targeted action will not achieve the network’s objectives.

  1. CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply. These networked threats transcend operational areas, areas of influence, areas of interest, and the information environment (to include cyberspace [network links and nodes essential to a particular friendly or adversary capability]). The US military is one of the instruments of US national power that may be employed in concert with interagency, international, and regional security partners to counter threat networks.
  2. Threat networks have the ability to remotely plan, finance, and coordinate attacks through global communications (to include social media), transportation, and financial networks. These interlinked areas allow for the high-speed, high-volume exchange of ideas, people, goods, money, and weapons.

“Terrorists and insurgents increasingly are turning to TOC [transnational organized crime] to generate funding and acquire logistical support to carry out their violent acts. While the crime-terror[ist] nexus is still mostly opportunistic, this nexus is critical nonetheless, especially if it were to involve the successful criminal transfer of WMD [weapons of mass destruction] material to terrorists or their penetration of human smuggling networks as a means for terrorists to enter the United States.”

Strategy to Combat Transnational Organized Crime, July 2011

Using the global communications network, threat networks have demonstrated their ability to recruit like-minded individuals from outside of their operational area and have been successful in recruiting even inside the US and PNs. Many threat networks have mastered social media and tapped into the proliferation of traditional and nontraditional news media outlets to create powerful narratives, which generate support and sympathy in other countries. Cyberspace is equally as important to the threat network as physical terrain. Future operations will require the ability to monitor and engage threat networks within cyberspace, since this provides them an opportunity to coordinate sophisticated operations that advance their interests.

  1. Threat Networks and Levels of Warfare
  2. The purpose of CTN activities is to shape the security environment, deter aggression, provide freedom of maneuver within the operational area and its approaches, and, when necessary, defeat threat networks.

Supporting activities may include training, use of military equipment, subject matter expertise, cyberspace operations, information operations (IO) (use of information-related capabilities [IRCs]), military information support operations (MISO), counter threat finance (CTF), interdiction operations, raids, or civil-military operations.

In nearly all cases, diplomatic efforts, sanctions, financial pressure, criminal investigations, and intelligence community activities will complement military operations.

  1. Threat networks and their supporting network capabilities (finance, logistics, smuggling, command and control [C2], etc.) will present challenges to the joint force at the tactical, operational, and strategic levels due to their ability to adapt to conditions in the OE. Figure I-1 depicts some of the threat networks that may be operating in the OE and their possible impact on the levels of warfare.

Complex alliances between threat, neutral, and friendly networks may vary at each level, by agency, and in different geographic areas in terms of their membership, composition, goals, resources, strengths, and weaknesses. Strategically they may be part of a larger ideological movement at odds with several regional governments, have regional aspirations for power, or oppose the policies of nations attempting to achieve military stability in a geographic region.

Tactically, there may be local alliances with criminal networks, tribes, or clans that may not be ideologically aligned with one another, but could find common cause in opposing joint force operations in their area or harboring grievances against the host nation (HN) government. Analysis will be required for each level of warfare and for each network throughout the operational area. This analysis should be aligned with analysis from intelligence community agencies and international partners that often inject critical information that may impact joint planning and operations.

  1. The Strategic Approach
  2. The groundwork for successful CTN activities starts with information and intelligence to develop an understanding of the OE and the threat network.
  3. Current operational art and operational design as described within JPP is applicable to CTN. Threat networks tend to be difficult to collect intelligence on, analyze, and understand. Therefore, several steps within the operational approach methodology outlined in JP 5-0, Joint Planning, such as understanding the OE and defining the problem may require more resources and time.

JP 2-01.3, Joint Intelligence Preparation of the Operational Environment, provides the template for this process used to analyze all relevant aspects of the OE. Within operational design, determining the strategic, operational, and tactical COGs and decisive points of multiple threat networks will be more challenging than analyzing a traditional military force…

  1. Strategic and operational approaches require greater interagency coordination. This is critical for achieving unity of effort against threat network critical vulnerabilities (CVs) (see Chapter II, “Threat Network Fundamentals”). When analyzing networks, there will never be a single COG. The identification of the factors that comprise the COG(s) for a network will still require greater analysis, since each individual in the network may be motivated by different factors. For example, some members may join a network for ideological reasons, while others are motivated by monetary gain. These differences must be understood when analyzing human networks.
  2. Threat networks will adapt rapidly and sometimes “out of view” of intelligence collection efforts.

Intelligence sharing… must be complemented by integrated planning and execution to achieve the optimal operational tempo to defeat threats. Traditionally defined geographic operational areas, roles, responsibilities, and authorities often require greater cross-area coordination and adaptation to counter threat networks. Unity of effort seeks to synchronize understanding of and actions against a group’s or groups’ political, military, economic, social, information, and infrastructure (PMESII) systems as well as the links and nodes that are part of the group’s supporting networks.

  1. Joint Force and Interagency Coordination
  2. The USG and its partners face a wide range of local, national, and transnational irregular challenges to the stability of the international system. Successful deterrence of non- state actors is more complicated and less predictable than in the past, and non-state actors may derive significant capabilities from state sponsorship.
  3. Adapting to an increasingly complex world requires unity of effort to counter violent extremism and strengthen regional security.

To improve understanding, USG departments and agencies should strive to develop strong relationships while learning to speak each other’s language, or better yet, use a common lexicon.

  1. At each echelon of command, the actions taken to achieve stability vary only in the amount of detail required to create an actionable picture of the enemy and the OE. Each echelon of command has unique functions that must be synchronized with the other echelons, as part of the overall operation to defeat the enemy. Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants. This is best achieved through an integrated approach. A common interagency assessment of the OE establishes a deep and shared understanding of the cultural, ideological, religious, demographic, and geographical factors that affect the conditions in the OE.
  2. Establishing a whole-of-government approach to achieve unity of effort should begin during planning. Achieving unity of effort is problematic due to challenges in information sharing, competing priorities, differences in lexicon, and uncoordinated activities.
  3. Responsibilities
  4. Operations against threat networks require unity of effort across the USG and multiple authorities outside DOD. Multiple instruments of national power will be operating in close proximity and often conducting complementary activities across the strategic, operational, and tactical levels. In order to integrate, deconflict, and synchronize the activities of these multiple entities, the commander should form a joint interagency coordination group, with representatives from all participants operating in or around the operational area.
  5. The military provides general support to a number of USG departments and agencies for their CTN activities, ranging from counterterrorism (CT) to counterdrug (CD) operations. A number of USG departments and agencies have highly specialized interests in threat networks, and their activities directly impact the military’s own CTN activities. For example, the Department of the Treasury’s CTF activities help to deny the threat network the funding needed to conduct operations.

CHAPTER II

THREAT NETWORK FUNDAMENTALS

1. Threat Network Construct

  1. Network Basic Components. All networks, regardless of size, share basic components and characteristics. Understanding common components and characteristics will help to develop and establish common joint terminology and standardize outcomes for network analysis, CTN planning, activities, and assessments across the joint force and CCMDs.
  2. Network Terminology. A threat network consists of interconnected nodes and links and may be organized using subordinate and associated networks and cells. Understanding the individual roles and connections of each element is as important to conducting operations as understanding the overall network structure, known as the network topology.

Network boundaries must also be determined, especially when dealing with overlapping networks and global networks. Operations will rarely be possible against an entire threat or its supporting networks. Understanding the network topology allows planners to develop an operational approach and associated tactics necessary to create the desired effects against the network.

(1) Network. A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. Sometimes the terms network and system are synonymous. This publication uses the term network to distinguish threat networks from the multitude of other systems, such as an air defense system, communications system, transportation system, etc.

(2) Cell. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization.

(3) Node. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or OE that can be targeted for action. Nodes may fall into one or more PMESII categories.

(4) Link. A link is a behavioral, physical, or functional relationship between nodes.

Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying COGs, networks, and cells the JFC may wish to influence or change during an operation.

  1. Network Analysis
  2. Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes.

The PMESII systems perspective is a useful starting point for analysis of threat networks.

Network analysis facilitates identification of significant information about networks that might otherwise go unnoticed. For example, network analysis can uncover positions of power within a network, show the cells that account for its structure and organization, find individuals or cells whose removal would greatly alter the network, and facilitate measuring change over time.
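
One way to illustrate finding individuals or cells whose removal would greatly alter the network is to count how many node pairs remain mutually reachable after a node is removed, as in the sketch below; the nodes and links are invented, and real analysis would use fuller SNA measures.

```python
# Minimal sketch: estimate how much a node's removal fragments the network by
# counting how many node pairs can still reach each other. Data are invented.
from collections import defaultdict
from itertools import combinations

links = [("A", "B"), ("B", "C"), ("B", "D"), ("D", "E"), ("C", "E"), ("E", "F")]

def reachable_pairs(edges, removed=None):
    adj = defaultdict(set)
    nodes = set()
    for a, b in edges:
        if removed in (a, b):
            continue  # drop links touching the removed node
        adj[a].add(b); adj[b].add(a)
        nodes.update((a, b))
    count = 0
    for source, target in combinations(sorted(nodes), 2):
        stack, seen = [source], {source}
        while stack:                      # depth-first search for target
            node = stack.pop()
            if node == target:
                count += 1
                break
            for neighbor in adj[node] - seen:
                seen.add(neighbor); stack.append(neighbor)
    return count

before = reachable_pairs(links)
for node in sorted({n for edge in links for n in edge}):
    after = reachable_pairs(links, removed=node)
    print(f"Removing {node}: connected pairs {before} -> {after}")
```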

  1. All networks are influenced by and in turn influence the OEs in which they exist. Analysts must understand the underlying conditions; the frictions between individuals and groups; familial, business, and governmental relationships; and drivers of instability that are constantly subject to change and pressures. All of these factors evolve as the networks change shape, increase or decrease capacity, and strive to influence and control things within the OE, and they contribute to or hinder the networks’ successes. Environmental framing is the process of selecting, organizing, interpreting, and making sense of a complex reality; it serves as a guide for analyzing, understanding, and acting.
  2. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

(1) Catalyst. A catalyst is a condition or variable within the OE that could motivate or bind a group of individuals together to take some type of action to meet their collective needs. These catalysts may be identified as critical variables as units conduct their evaluation of the OE and may consist of a person, idea, need, event, or some combination thereof. The potential exists for the catalyst to change based on the conditions of the OE.

(2) Receptive Audience. A receptive audience is a group of individuals that feel they have more to gain by engaging in the activities of the network than by not participating. Additionally, in order for a network to form, the members of the network must have the motivation and means to conduct actions that address the catalyst that generated the network. Depending on the type of network and how it is organized, leadership may or may not be necessary for the network to form, survive, or sustain collective action. The receptive audience originates from the human dimension of the OE.

(3) Accommodating Environment. An accommodating environment is the conditions within the OE that facilitate the organization and actions of a network. Proper conditions must exist within the OE for a network to form to fill a real or perceived need. Networks can exist for a time without an accommodating environment, but without it the network will ultimately fail.

  1. Networks utilize the PMESII system structure within the OE to form, survive, and function. Like the joint force, threat networks will also have desired end states and objectives. As analysis of the OE is conducted, the joint staff should identify the critical variables within the OE for the network. A critical variable is a key resource or condition present within the OE that has a direct impact on the commander’s objectives and may affect the formation and sustainment of networks.
  2. Determining and Analyzing Node-Link Relationships

Links are derived from data or extrapolations based on data. A benefit of graphically portraying node-link relationships is that the potential impact of actions against certain nodes can become more evident. Social network analysis (SNA) provides a method that helps the JFC and staff understand the relevance of nodes and links. Network mapping is essential to conducting SNA.

  1. Link Analysis. Link analysis identifies and analyzes relationships between nodes in a network. Network mapping provides a visualization of the links between nodes, but does not provide the qualitative data necessary to fully define the links.

During link analysis, the analyst examines the conditions of the relationship (strong or weak, informal or formal) and whether it is formed by familial, social, cultural, political, virtual, professional, or any other ties.
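Link analysis of this kind is often easier to keep consistent when the qualitative conditions are recorded directly on the links. The sketch below, again using networkx with hypothetical names and attribute values, shows one way such link conditions might be captured and filtered.

```python
import networkx as nx

# Hypothetical nodes; link attributes record the qualitative conditions above.
G = nx.Graph()
G.add_edge("facilitator", "cell_leader", strength="strong", basis="familial", formal=False)
G.add_edge("facilitator", "smuggler", strength="weak", basis="professional", formal=True)
G.add_edge("cell_leader", "recruiter", strength="strong", basis="social", formal=False)

# Pull out only the strong, informal ties for closer analysis.
strong_informal = [
    (u, v, d) for u, v, d in G.edges(data=True)
    if d["strength"] == "strong" and not d["formal"]
]
print(strong_informal)
```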

  1. Nodal Analysis. Individuals are associated with numerous networks due to their individual identities. A node’s location within a network and in relation to other nodes carries identity, power, or belief and influences behavior.

Examples of these types of identities include locations of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. These individual attributes are often collected during identity activities and fused with attributes from unrelated collection activities to form identity intelligence (I2) products. Some aspects used to help understand and define an individual are directly related to the conditions that supported the development of relationships to other nodes.

  1. Network Analysis. Throughout the JIPOE process, at every echelon and production category, one of the most important, but least understood, aspects of analysis is sociocultural analysis (SCA). SCA is the study, evaluation, and interpretation of information about adversaries and relevant actors through the lens of group-level decision making to discern catalysts of behavior and the context that shapes behavior. SCA considers relationships and activities of the population, SNA (looking at the interpersonal, professional, and social networks tied to an individual), as well as small and large group dynamics.

SNA not only examines individuals and groups of individuals within a social structure such as a terrorist, criminal, or insurgent organization, but also examines how they interact. Interactions are often repetitive, enduring, and serve a greater purpose, and these interaction patterns affect behavior. If enough node and link information can be collected, behavior patterns can be observed and, to some extent, predicted.

SNA differs from link analysis in that it analyzes only similar objects (e.g., people or organizations) rather than the relationships between dissimilar objects. SNA provides objective analysis of the current and predicted structure and interactions of networks that have an impact on the OE.

  1. Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals.

Networks do not exist in a vacuum. They normally share nodes and links with other networks. Each network may require a unique operational approach as it adapts to its OE or pursues new objectives. A network may form a greater number of cells if those cells are capable of independent operations consistent with the threat network’s overall operational goals.

A network may move to a more hierarchical system due to a lack of leadership, questions regarding the loyalty of subordinates, or inexperienced lower-level personnel. Understanding these dimensions allows a commander to craft a more effective operational approach. The cell types described below are examples only; the list is neither exclusive nor all-inclusive. Each network and cell will change, adapt, and morph over time.

  1. Operational Cells. Operational cells carry out the day-to-day operations of the network and are typically people-based (e.g., terrorists, guerrilla fighters, drug dealers). It is extremely difficult to gather intelligence on and depict every single node and link within an operational network. However, understanding key nodes, links, and cells that are particularly effective allows for precision targeting and greater effectiveness.
  2. Logistical Cells. Logistical cells provide threat networks the necessary supplies, weapons, ammunition, fuel, and military equipment to operate. Logistical cells are easier to observe and target than operational or communications cells since they move large amounts of material, which makes them more visible. These cells may include individuals who are not as ideologically motivated or committed as those in operational networks.

Threat logistical cells often utilize legitimate logistics nodes and links to hide their activities “in the noise” of legitimate supplies destined for a local or regional economy.

  1. Training Cells. Most network leaders desire to grow the organization for power, prestige, and advancement of their goals. Logistical cells may be used to move material, trainers, and trainees into a training area, or that portion of logistics may be a distinct part of the training cells.

Training requires the aggregation of new personnel and often includes physical structures to support activities, which may also be visible and provide additional information to better understand the network.

  1. Communications Cells. Most threat networks have, at a minimum, rudimentary communications cells for operational, logistical, and financial purposes, and another cell to communicate their strategic narrative to a target or neutral population.

The use of Internet-based social media platforms by threat networks increases the likelihood of gathering information, including geospatial information.

  1. Financial Cells. Threat networks require funding for every aspect of their activities, to maintain and expand membership, and to spread their message. Their financial cell moves money from legitimate and illegitimate business operations, foreign donors, and taxes collected or coerced from the population to the operational area.
  2. WMD Proliferation Cells. Many of these cells are not organized specifically for the proliferation of WMD. In fact, many existing cells may be utilized out of convenience. Examples of existing cells include human trafficking, counterfeiting, and drug trafficking.

The JFC should use a systems perspective to better understand the complexity of the OE and associated networks. This perspective looks across the PMESII systems to identify the nodes, links, COGs, and potential vulnerabilities within the network.

  1. Analyze the Network

Key nodes exist in every major network and are critical to their function. Nodes may be people, places, or things. For example, a town that is the primary conduit for movement of illegal narcotics would be the key node in a drug trafficking network. Some may become decisive points for military operations since, when acted upon, they could allow the JFC to gain a marked advantage over the adversary or otherwise to contribute materially to achieving success. Weakening or eliminating a key node should cause its related group of nodes and links to function less effectively or not at all, while strengthening the key node could enhance the performance of the network as a whole. Key nodes often are linked to, resident in, or influence multiple networks.

Node centrality can highlight possible positions of importance, influence, or prominence and patterns of connections. A node’s relative centrality is determined by analyzing measurable characteristics: degree, closeness, betweenness, and eigenvector.
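For readers who want to see how these four measures are computed in practice, the following minimal sketch calculates them with networkx on a small hypothetical graph; the graph and the interpretation comments are illustrative assumptions only.

```python
import networkx as nx

# Small hypothetical network for demonstration.
G = nx.Graph([
    ("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F"), ("D", "F"),
])

centrality = {
    "degree": nx.degree_centrality(G),           # how many direct links a node has
    "closeness": nx.closeness_centrality(G),     # how near a node is to all others
    "betweenness": nx.betweenness_centrality(G), # how often a node sits on shortest paths
    "eigenvector": nx.eigenvector_centrality(G), # connection to other well-connected nodes
}

for measure, scores in centrality.items():
    top = max(scores, key=scores.get)
    print(f"{measure:12s} highest: {top} ({scores[top]:.2f})")
```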

CHAPTER III

NETWORKS IN THE OPERATIONAL ENVIRONMENT

“How many times have we killed the number three in al-Qaida? In a network, everyone is number three.”

Dr. John Arquilla, Naval Postgraduate School

  1. Networked Threats and Their Impact on the Operational Environment
  2. In a world increasingly characterized by volatility, uncertainty, complexity, and ambiguity, a wide range of local, national, and transnational irregular challenges to the stability of the international system have emerged. Traditional threats like insurgencies and criminal gangs have been exploiting weak or corrupt governments for years, but the rise of transnational extremists and their active cooperation with traditional threats has changed the global dynamic.
  3. All networks are vulnerable, and a JFC and staff armed with a comprehensive understanding of a threat network’s structure, purpose, motivations, functions, interrelationships, and operations can determine the most effective means, methods, and timing to exploit that vulnerability.

Network analysis and exploitation are not simple tasks. Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. This de facto governance of an area by a threat network makes it more difficult for the joint force to simultaneously attack a threat and meet the needs of the population.

  1. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

The JTF requires active support and participation by USG, HN, nongovernmental agencies, and partners, particularly when it comes to addressing cross-border sanctuary, arms flows, and the root causes of instability. This “team of teams” approach facilitates unified action, which is essential for organizing for operations against an adaptive threat.

  1. Threat Network Characteristics

Threat networks do not differ much from non-threat networks in their functional organization and requirements. Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks involving people have been described as insurgent, criminal, terrorist, social, political, familial, tribal, religious, academic, ethnic, or demographic. Some non-human networks include communications, financial, business, electrical/power, water, natural resources, transportation, or informational. Networks take many forms and serve different purposes, but are all comprised of people, processes, places, material, or combinations. Individual network components are identifiable, targetable, and exploitable. Almost universally, humans are members of more than one network, and most networks rely on other networks for sustainment or survival.

Organized threats leverage multiple networks within the OE based on mission requirements or to achieve objectives not unilaterally achievable. The following example shows some typical networks that a threat will use and/or exploit. This “network of networks” is always present and presents challenges to the JFC when planning operations to counter threats that nest within various friendly, neutral, and hostile networks.

  1. Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Survival and success are directly connected to adaptability and the ability to access financial, logistical, and human resources. Networks possess many characteristics important to their success and survival, such as flexible C2 structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt. They must also have a steady stream of resources and may require a sanctuary (safe haven) from which to regroup and plan.

  1. C2 Structure. There are many potential designs for the threat network’s internal organization. Some are hierarchical, some flat, and others may be a combination. The key is that to survive, networks adapt continuously to changes in the OE, especially in response to friendly actions. Commanders must be able to recognize changes in the threat’s C2 structures brought about by friendly actions and maintain pressure to prevent a successful threat reconstitution.
  2. Shared Identity. Shared identity among the membership is normally based on kinship, ideology, religion, and personal relationships that bind the network and facilitate recruitment. These identity attributes can be an important part of current and future identity activities efforts, and analysis can be initiated before hostilities are imminent.
  3. Knowledge, Skills, and Abilities of Group Leaders and Members. All threat networks have varying degrees of proficiency. In initial stages of development, a threat organization and its members may have limited capabilities. An organization’s survival rests on the knowledge, skills, and abilities of its leadership and membership. By seeking out subject matter expertise, financial backing, or proxy support from third parties, an organization can increase their knowledge, skills, and abilities, making them more adaptable and increasing their chance for survival.
  4. Resources. Resources in the form of arms, money, technology, social connectivity, and public recognition are used by threat networks. Identification and systematic strangulation of threat resources is the fundamental principle for CTN. For example, money is one of the critical resources of adversary networks. Denying the adversary its finances makes it harder, and perhaps impossible, to pay, train, arm, feed, and clothe forces or to gather information and produce propaganda.
  5. Adaptability. This includes the ability to learn and adjust behaviors; modify tactics, techniques, and procedures (TTP); improve communications security and operations security; successfully employ IRCs; and create solutions for safeguarding critical nodes and reconstituting expertise, equipment, funding, and logistics lines that are lost to friendly disruption efforts. Analysts conduct trend analysis and examine key indicators within the OE that might suggest how and why networks will change and adapt. Disruption efforts often provoke a network to change its methods or practices, but external influences, local relationships and internal friction, geographic and climate challenges, and global economic factors may also motivate a threat network to change or adapt in order to survive.
  6. Sanctuary (Safe Havens). Safe havens allow the threat networks to conduct planning, training, and logistic reconstitution. Threat networks require certain critical capabilities (CCs) to maintain their existence, not the least of which are safe havens from which to regenerate combat power and/or areas from which to launch attacks.
  7. Network Engagement
  8. Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks through the building of mutual trust and cooperation through network engagement.
  9. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.
  10. Individuals may be associated with numerous networks due to their unique identities. Examples of these types of identities include location of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. Therefore, it is not uncommon for an individual to be associated with more than one type of network (friendly, neutral, or threat). Individual identities provide the basis that allows for the interrelationship between friendly, neutral, and threat networks to exist. It is this interrelationship that makes categorizing networks a challenge. Classifying a network as friendly or neutral when in fact it is a threat may provide the network with too much freedom or access. Mislabeling a friendly or neutral network as a threat may cause actions to be taken against that network that can have unforeseen consequences.
  11. Networks are comprised of individuals who are involved in a multitude of activities, including social, political, monetary, religious, and personal. These human networks exist in every OE, and therefore network engagement activities will be conducted throughout all phases of the conflict continuum and across the range of operations.
  12. Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups (see Figure III-3). These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Many threat networks rely on family and tribal bonds when recruiting for the network’s inner core. These members have been vetted for years and are almost impossible to turn. For analysts, identifying family and tribal affiliations assists in developing a targetable profile on key network personnel. Even criminal networks will tend to be densely populated by a small number of interrelated identity groups.

  1. Family Network. Some members or associates have familial bonds. These bonds may be cross-generational.
  2. Cultural Network. Network links can share affinities due to culture, which include language, religion, ideology, country of origin, and/or sense of identity. Networks may evolve over time from being culturally based to proximity based.
  3. Proximity Network. The network shares links due to geographical ties of its members (e.g., past bonding in correctional or other institutions or living within specific regions or neighborhoods). Members may also form a network with proximity to an area strategic to their criminal interests (e.g., a neighborhood or key border entry point). There may be a dominant ethnicity within the group, but they are primarily together for reasons other than family, culture, or ethnicity.
  4. Virtual Network. A network that may not physically meet but work together through the Internet or other means of communication, for legitimate or criminal purposes (e.g., online fraud, theft, or money laundering).
  5. Specialized Networks. Individuals in this network come together to undertake specific activities based on the skills, expertise, or particular capabilities they offer. This may include criminal activities.
  6. Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. A network may also be in a state of transition and therefore difficult to classify.

  1. Threat networks

Threat networks may be formally intertwined or come together when mutually beneficial. This convergence (or nexus) between threat networks has greatly exacerbated regional instability and allowed threats and their alliances to increase their operational reach and power to global proportions.

  1. Identify a Threat Network

Threat networks often attempt to remain hidden. How can commanders determine not only which networks are within an operational area, but also which pose the greatest threat?

By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify the individual networks within it. For example, all networks require communications, resources, and people. By understanding the functions of a network, commanders can make educated assumptions as to their makeup and determine not only where they are, but also when and how to engage them. As previously stated, there are many neutral networks that are used by both friendly and threat forces; the difficult part is determining what networks are a threat and what networks are not. The “find” aspect of the find, fix, finish, exploit, analyze, and disseminate (F3EAD) targeting methodology is initially used to discover and identify networks within the OE. The F3EAD methodology is not only used for identifying specific actionable targets; it is also used to uncover the nature, functions, structures, and numbers of networks within the OE. A thorough JIPOE product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

CHAPTER IV

PLANNING TO COUNTER THREAT NETWORKS

  1. Joint Intelligence Preparation of the Operational Environment and Threat Networks
  2. A comprehensive, multidimensional assessment of the OE will assist commanders and staffs in uncovering threat network characteristics and activities, developing focused operations to attack vulnerabilities, better anticipating both the intended and unintended consequences of threat network activities and friendly countermeasures, and determining appropriate means to assess progress toward stated objectives.
  3. Joint force, component, and supporting commands and staffs use JIPOE products to prepare estimates used during mission analysis and selection of friendly courses of action (COAs). Commanders tailor the JIPOE analysis based on the mission. As previously discussed, the best COA may not be to destroy a threat’s entire network or cells; friendly or neutral populations may use the same network or cells, and to destroy it would have a negative effect.
  4. Understanding the Threat’s Network
  5. The threat has its own version of the OE that it seeks to shape to maintain support and attain its goals. In many instances, the challenge facing friendly forces is complicated by the simple fact that significant portions of a population might consider the threat as the “home team.” To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organization goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. The first step in understanding a network is to develop a network profile through analysis of a network’s critical factors.
  6. COG and Critical Factors Analysis (CFA). One of the most important tasks confronting the JFC and staff during planning is to identify and analyze the threat’s network, and in most cases the network’s critical factors (see Figure IV-1) and COGs.
  7. Network Function Template. Building a network function template is a method to organize known information about the network associated with structure and functions of the network. By developing a network function template, the information can be initially understood and then used to facilitate CFA. Building a network function template is not a requirement for conducting CFA, but helps the staff to visualize the interactions between functions and supporting structure within a network.
  8. Critical Factors Analysis
  9. CFA is an analytical framework to assist planners in analyzing and identifying a COG and to aid operational planning. The critical factors are the CCs, critical requirements (CRs), and CVs.

Key terminology for CFA includes:

(1) COG for network analysis is a conglomeration of tangible items and/or intangible factors that not only motivates individuals to join a network, but also promotes their will to act to achieve the network’s objectives and attain the desired end state. A COG for networks will often be difficult to target directly due to complexity and accessibility.

(2) CCs are the primary abilities essential to accomplishing the objective of the network within a given context. Analysis to identify CCs for a network is only possible with understanding the structure and functions of a network, which is supported by other network analysis methods.

(3) CRs are the essential conditions, resources, and means the network requires to perform the CC. These things are used or consumed to carry out action, enabling a CC to wholly function. Networks require resources to take action and function. These resources include personnel, equipment, money, and any other commodity that support the network’s CCs.

(4) CVs are CRs or components thereof that are deficient or vulnerable to neutralization, interdiction, or attack in a manner that achieves decisive results. A network’s CVs will change as networks adapt to conditions within the OE. Identification of CVs for a network should be considered during the targeting process, but may not necessarily be a focal point of operations without further analysis.
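One way to keep these critical factors linked during analysis is a simple data model that ties each CC to its CRs and flags candidate CVs. The sketch below is a notional Python structure, not a prescribed format; the network, COG, and requirement names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CriticalRequirement:
    name: str
    vulnerable: bool = False  # set True if later analysis flags this CR as a CV

@dataclass
class CriticalCapability:
    name: str
    requirements: List[CriticalRequirement] = field(default_factory=list)

@dataclass
class NetworkCFA:
    network: str
    center_of_gravity: str
    capabilities: List[CriticalCapability] = field(default_factory=list)

    def critical_vulnerabilities(self) -> List[str]:
        # CVs are the CRs (or components) judged vulnerable to engagement.
        return [r.name for c in self.capabilities for r in c.requirements if r.vulnerable]

# Hypothetical example.
cfa = NetworkCFA(
    network="Network X",
    center_of_gravity="popular support in region Y",
    capabilities=[CriticalCapability("conduct attacks", [
        CriticalRequirement("arms and ammunition", vulnerable=True),
        CriticalRequirement("local fighters"),
    ])],
)
print(cfa.critical_vulnerabilities())  # ['arms and ammunition']
```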

  1. Building a network function template involves several steps:

(1) Step 1: Identify the network’s desired end state. The network’s desired end state is associated with the catalyst that supported the formation of the network. The primary question the staff needs to answer is: What are the network’s goals? The following are examples of desired end states for various organizations:

(a) Replacing the government of country X with an Islamic caliphate.

(b) Liberating country X.
(c) Controlling the oil fields in region Y.
(d) Establishing regional hegemony.

(e) Imposing Sharia on village Z.

(f) Driving multinational forces out of the region.

 

(2) Step 2: Identify possible ways or actions (COAs) that can attain the desired end state. This step refers to the ways a network can take action to attain its desired end state through its COAs. Similar to how staffs analyze a conventional force to determine the likely COA that force will take, this must also be done for the networks selected for engagement. It is important to note that each network will have a variety of options available to it, and its likely COA will be associated with the intent of the network’s members. Examples of ways for some networks may include:

(a) Conducting an insurgency operation or campaign.
(b) Building PN capacity.
(c) Attacking with conventional military forces.
(d) Conducting acts of terrorism.

(e) Seizing the oil fields in Y.
(f) Destroying enemy forces.
(g) Defending village Z.
(h) Intimidating local leaders.
(i) Controlling smuggling routes.

(j) Bribing officials.

 

(3) Step 3: Identify the functions that the network possesses to take actions. Using the network function template from previous analysis, the staff must refine this analysis to identify the functions within the network that could be used to support the potential ways or COAs for the network. The functions identified result in a list of CCs. Examples of items associated with the functions of a network that would support the example list of ways identified in the previous step are:

(a) Conducting an insurgency operation or campaign: insurgents are armed and can conduct attacks.

(b) Building PN capacity: forces and training capability available.

(c) Attacking with conventional military forces: military forces are at an operational level with C2 in place.

(d) Conducting acts of terrorism: network members possess the knowledge and assets to take action.

(e) Seizing the oil fields in Y: network possesses the capability to conduct coordinated attack.

(f) Destroying enemy forces: network has the assets to identify, locate, and destroy enemy personnel.

(g) Defending village Z: network possesses the capabilities and presence to conduct defense.

(h) Intimidating local leaders: network has freedom of maneuver and access to local leaders.

(i) Controlling smuggling routes: network’s sphere of influence and capabilities allow for control.

(j) Bribing officials: network has access to officials and resources to facilitate bribes.

(4) Step 4: List the means or resources available or needed for the network to execute CCs. The purpose of this step is to determine the CRs for the network. Again, this is supported by the initial analysis conducted for the network: network mapping, link analysis, SNA, and the network function template. Based upon the CCs identified for the network, the staff must answer the question: What resources must the network possess to employ the identified CCs? The list of CRs can be extensive, depending on the capability being analyzed. The following are examples of CRs that may be identified for a network:

(a) A group of foreign fighters.
(b) A large conventional military.
(c) A large conventional military formation (e.g., an armored corps).
(d) IEDs.
(e) Local fighters.
(f) Arms and ammunition.
(g) Funds.
(h) Leadership.
(i) A local support network.

(5) Step 5: Correlate CCs and CRs to OE evaluation to identify critical variables.

(a) An understanding of the CCs and CRs for various networks can be used alone in planning and targeting, but the potential to miss opportunities or to accept additional risk is not understood until the staff relates these items to the analysis of the OE.

(b) A critical variable may be a CC, CR, or CV for multiple networks. Gaining an understanding of this will occur in the next step of CFA. The following are examples of critical variables that may be identified for networks:

  1. A group of foreign fighters is exposed for potential engagement.
  2. A large conventional military formation (e.g., an armored corps) is located and likely COA is identified.
  3. IED maker and resources are identified and can be neutralized.
  4. Local fighters’ routes of travel and recruitment are identifiable.
  5. Arms and ammunition sources of supply are identifiable.
  6. Funds are located and potential exists for seizure.
  7. Leadership is identified and accessible for engagement.
  8. A local support network is identified and understood through analysis.

(6) Step 6: Compare and contrast the CRs for each network analyzed. This step of CFA can only be accomplished after full network analysis has been completed for all selected networks within the OE. To compare and contrast, the information from the analysis of each network must be available. Correlating the critical variables for each network allows for understanding:

(a) Potential desired first- and second-order effects of engagement.

(b) Potential undesired first- and second-order effects of engagement.

(c) Direct engagement opportunities.
(d) Indirect engagement opportunities.

(7) Step 7: Identify CVs for the network. Identifying CVs of a network is completed by analyzing each CR for the network with respect to criticality, accessibility, recuperability, and adaptability (a notional scoring sketch follows the question lists below). This analysis is conducted from the perspective of the network, with consideration of threats within the OE that may impact the network being analyzed. Conducting the analysis from this perspective allows staffs to identify CVs for any type of network (friendly, neutral, or threat).

(a) Criticality. A CR that, when engaged by a threat, results in degradation of the network’s structure or function, or of its ability to sustain itself. Criticality considers the importance of the CR to the network, and the following questions should be considered when conducting this analysis:

  1. What impact will removing the CR have on the structure of the network?
  2. What impact will removing the CR have on the functions of the network?
  3. What function is affected by engaging the CR?
  4. What effect does the CR have on other networks?
  5. Is the CR a CR for other networks? If so, which ones?
  6. How is the CR related to conditions of sustainment?

 

(b) Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Accessibility of the CR in some cases is a limiting factor for the true vulnerability of a CR.

The following questions should be considered by the staff when analyzing a CR for accessibility:

  1. Where is the CR?
  2. Is the CR protected?
  3. Is the CR static or mobile?
  4. Who interacts with the CR? How often?
  5. Is the CR in the operational area of the threat to the network?
  6. Can the CR be engaged with threat capabilities?
  7. If the CR is inaccessible, are there alternative CRs that if engaged by a threat result in a similar effect on the network?

(c) Recuperability. The amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Analyzing the CR in regard to recuperability is associated with the network’s ability to regenerate when components of its structure have been removed or damaged. This plays a role in the adaptive nature of a network, but must not be confused with the last aspect of the analysis for CVs. The following questions should be considered by the staff when analyzing a CR for recuperability:

  1. If CR is removed:
    a. Can the CR be replaced?
    b. How long will it take to replace?
    c. Does the replacement fulfill the network’s structural and functional levels?
    d. Will the network need to make adjustments to implement the replacement for the CR?
  2. If CR is damaged:
    a. Can the CR be repaired?
    b. How long will it take to repair?
    c. Will the repaired CR return the network to its previous structural and functional levels?

(d) Adaptability. The ability of a network (with which the CR is associated) to change in response to conditions in the OE brought about by the actions of a threat taken against it, while maintaining its structure and function.

Adaptability considers the network’s ability to change or modify its functions, modify its catalyst, shift focus to potential receptive audience(s), or make any other changes to adapt to the conditions in the OE. The following questions should be considered by the staff when analyzing a CR for adaptability:

  1. Can the CR change its structure while maintaining its function?
  2. Is the CR tied to a CC that could cause it to adapt as a normal response to a change in a CC (whether due to hostile engagement or a natural change brought about by a friendly network’s adjustment to that CC)?
  3. Can the CR be changed to fulfill an emerging CC or function for the network?
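The notional scoring sketch referenced in Step 7 follows. It simply totals assumed 1-to-5 analyst ratings against the four question sets above; the scale, equal weighting, and example CRs are illustrative assumptions, not doctrine.

```python
# Assumed 1-5 analyst ratings per criterion; higher totals suggest stronger CV candidates.
CRITERIA = ("criticality", "accessibility", "recuperability", "adaptability")

def cv_score(ratings: dict) -> int:
    """Sum the analyst's ratings across the four criteria."""
    return sum(ratings[c] for c in CRITERIA)

# Hypothetical candidate CRs and ratings.
candidate_crs = {
    "arms and ammunition supply": {"criticality": 5, "accessibility": 3,
                                   "recuperability": 4, "adaptability": 2},
    "foreign fighter pipeline":   {"criticality": 4, "accessibility": 2,
                                   "recuperability": 3, "adaptability": 3},
}

for cr, ratings in sorted(candidate_crs.items(), key=lambda kv: -cv_score(kv[1])):
    print(f"{cr}: {cv_score(ratings)}")
```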

 

  1. Visualizing Threat Networks
  2. Mapping the Network. Mapping threat networks starts by detailing the primary threats (e.g., terrorist group, drug cartel, money-laundering group). Mapping routinely starts with people and places and then adds functions, resources, and activities.

Mapping starts out as a simple link between two nodes and progresses to depict the organizational structure (see Figure IV-4). Individual network members themselves may not be aware of the organizational structure. It will be rare that enough intelligence and information is collected to portray an entire threat network and all its cells.

This will be a continuous process as the networks themselves transform and adapt to their environment and the joint force operations. To develop and employ theater-strategic options, the commander must understand the series of complex, interconnected relationships at work within the OE.

(1) Chain Network. The chain or line network is characterized by people, goods, or information moving along a line of separated contacts with end-to-end communication traveling through intermediate nodes.

(2) Star or Hub Network. The hub, star, or wheel network, as in a franchise or a cartel, is characterized by a set of actors tied to a central (but not hierarchical) node or actor that must communicate and coordinate with network members through the central node.

(3) All-Channel Network. The all-channel, or full-matrix network, is characterized by a collaborative network of groups where everybody connects to everyone else.
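These three topologies correspond to standard graph shapes. The short sketch below builds each with networkx generators purely for illustration; node counts are arbitrary.

```python
import networkx as nx

chain = nx.path_graph(5)            # chain/line: end-to-end through intermediaries
star = nx.star_graph(5)             # hub/star/wheel: members tied to a central node
all_channel = nx.complete_graph(5)  # all-channel/full-matrix: everyone linked to everyone

for name, g in (("chain", chain), ("star", star), ("all-channel", all_channel)):
    print(f"{name:12s} nodes={g.number_of_nodes():2d} links={g.number_of_edges():2d}")
```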

  1. Mapping Multiple Networks. Each network may be different in structure and purpose. Normally the network structure is fully mapped, and cells are shown as they relate to the larger network. Mapping each network is time- and labor-intensive, so staffs should carefully weigh how much time and effort to allocate toward mapping the supporting networks and where to focus their efforts, so that they both provide a timely response and accurately identify the relationships and critical nodes significant for disruption efforts.
  2. Identify the Influencing Factors of the Network. Influencing factors of the network (or various networks) within an OE can be identified largely by the conditions created by the activities of the network. These conditions are what influence the behaviors, attitudes, and vulnerabilities of specific populations. Factors such as threat information activities (propaganda) may be among the major influencers, but so are activities such as kidnapping, demanding protection payments, building places of worship, destroying historical sites, building schools, providing basic services, denying freedom of movement, harassment, illegal drug activities, prostitution, etc. To identify influencing factors, a proven method is to first look at the conditions of a specific population or group, determine how those conditions create or force behavior, and then determine the causes of the conditions. Once influencing factors are identified, the next step is to determine whether the conditions can be changed and, if they cannot, whether an alternative, viable behavior is available to the population or group.
  3. Producing a holistic view of threat, neutral, and friendly networks within a larger OE requires analysis that describes how these networks interrelate. Most important to this analysis is describing the relationships within and between the various networks that directly or indirectly affect the mission.
  4. Collaboration. Within most efforts to produce a comprehensive view of the networks, certain types of data or information may not be available to correctly explain or articulate with great detail the nature of relationships, capabilities, motives, vulnerabilities, or communications and movements. It is incumbent upon intelligence organizations to collaborate and share information, data, and analysis, and to work closely with interagency partners to respond to these intelligence gaps.
  5. Targeting Evaluation Criteria

Once the network is mapped, the JFC and staff identify network nodes and determine their suitability for targeting. A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis (see Figure IV-5). CARVER is a subjective and comparative system that weighs six target characteristic factors and ranks them for targeting and planning decisions. CARVER analysis can be used at all three levels of warfare: tactical, operational, and strategic. Once target evaluation criteria are established, target analysts use a numerical rating system (1 to 5) to rank the CARVER factors for each potential target. In a one to five numbering system, a score of five would indicate a very desirable rating while a score of one would reflect an undesirable rating.
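A minimal sketch of such a CARVER tally is shown below. It assumes equal weighting of the six factors on the 1-to-5 scale described above; the targets and scores are notional.

```python
# Notional CARVER tally: six factors rated 1-5 per the text, equal weights assumed.
CARVER = ("criticality", "accessibility", "recuperability",
          "vulnerability", "effect", "recognizability")

# Hypothetical targets and analyst ratings.
targets = {
    "communications cell": dict(zip(CARVER, (4, 4, 5, 4, 5, 3))),
    "logistics node":      dict(zip(CARVER, (3, 5, 2, 4, 3, 4))),
    "financier":           dict(zip(CARVER, (5, 3, 4, 3, 5, 2))),
}

# Rank targets by total score; higher totals indicate more desirable targets.
ranked = sorted(targets.items(), key=lambda kv: -sum(kv[1].values()))
for name, scores in ranked:
    print(f"{name:20s} total={sum(scores.values())}")
```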

A notional network-related CARVER analysis is provided in paragraph 6, “Notional Network Evaluation.” The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors:

  1. Network Affiliations. Network affiliations identify each network of interest associated with the CR being evaluated. The importance of understanding the network affiliations for a potential target stems from the interrelationships between networks. Evaluating a potential target from the perspective of each affiliated network will provide the joint staff with potential second- and third-order effects on both the targeted threat networks and other interrelated networks within the OE.
  2. Criticality. Criticality is a CR that when engaged by a threat results in a degradation of the network’s structure, function, or impact on its ability to sustain itself. Evaluating the criticality of a potential target must be accomplished from the perspective of the target’s direct association or need for a specific network. Depending on the functions and structure of the network, a potential target’s criticality may differ between networks. Therefore, criticality must be evaluated and assigned a score for each network affiliation. If the analyst has completed CFA for the networks of interest, criticality should have been analyzed during the identification of CVs.
  3. Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Inaccessible CRs may require alternate target(s) to produce desired effects. The accessibility of a potential target will remain the same, regardless of network affiliation. This element of CARVER does not require a separate evaluation of the potential target for each network. Much like criticality, accessibility will have been evaluated if the analyst has conducted CFA for the network as part of the analysis for the network.
  4. Recuperability. Recuperability is the amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Recuperability is analyzed during CFA to determine the vulnerability of a CR for the network. Since CARVER (network) is applied to evaluate the potential targets with each affiliated network, the evaluation for recuperability will differ for each network. What affects recuperability is the network’s function of regenerating members or replacing necessary assets with suitable substitutes.
  5. Vulnerability. A target is vulnerable if the operational element has the means and expertise to successfully attack the target. When determining the vulnerability of a target, the scale of the critical component needs to be compared with the capability of the attacking element to destroy or damage it. The evaluation of a potential target’s vulnerability is supported by the analysis conducted during CFA and can be used to complete this part of the CARVER (network) matrix. Vulnerability of a potential target will consist of only one value. Regardless of the network of affiliation, vulnerability is focused on evaluating available capabilities to effectively conduct actions on the target.
  6. Effect. This evaluates the potential effect on the structure, function, and sustainment of a network of engaging the CR as it relates to each affiliated network. The level of effect should consider both the first-order effect on the target itself, as well as the second-order effect on the structure and function of the network.
  7. Recognizability. Recognizability is the degree to which a CR can be recognized by an operational element and/or intelligence collection under varying conditions. The recognizability of a potential target will remain the same, regardless of network of affiliation.
  8. Notional Network Evaluation
  9. The purpose of conventional target analysis (and the use of CARVER) is to determine enemy critical systems or subsystems to attack to progressively destroy or degrade the adversary’s warfighting capacity and will to fight.
  10. Using network analysis, a commander identifies the critical threat nodes operating within the OE. A CARVER analysis determines the feasibility of attacking each node (ideally simultaneously). While each CARVER value is subjective, detailed analysis allows planners to assign a realistic value.

The commander and the staff then look at other aspects of the network and, for example, determine whether they can disrupt the material needed for training, prevent the movement of trainees or trainers to the training location, or influence other groups to deny access to the area.

  1. The JFC and staff methodically analyze each identified network node and assign a numerical rating to each. In this notional example (see Figure IV-7), it is determined that the communications cells and those who finance threat operations provide the best targets to attack.
  2. Planning operations against threat networks does not differ from standard military planning. These operations still support the JFC’s broader mission and rarely stand alone. Identifying threat networks requires detailed analysis and consideration of second- and third-order effects. It is important to remember that the threat organization itself is the ultimate target and its networks are merely a means to that end. Neutralizing a given network may prove more beneficial to the JFC’s mission accomplishment than destroying a single multiuser network node. The most effective plans call for simultaneous operations against networks focused on multiple nodes and network functions.
  3. Countering Threat Networks Through the Planning of Phases

As previously discussed, commanders execute CTN activities across all levels of warfare.

Threat networks can be countered using a variety of approaches and means. Early in the operation or campaign, the concept of operations will be based on a synchronized and integrated international effort (USG, PNs, and HN) to ensure that conditions in the OE do not empower a threat network and to deny the network the resources it requires to expand its operations and influence. As the threat increases and conditions deteriorate, the plan will adjust to include a broader range of actions, and an increase in the level and focus of targeting of identified critical network nodes: people and activities. Constant pressure must be maintained on the network’s critical functions to deny them the initiative and disrupt their operating tempo.

Figure IV-8 depicts the notional operation plan phase construct for joint operations. Some phases may not be used during CTN activities.

  1. Shape (Phase 0)

(1) Unified action is the key to shaping the OE. The goal is to deny the threat network the resources needed to expand its operations and to reduce the network to a point where it no longer poses a direct threat to regional/local stability, while influencing it to reduce or redirect its threatening objectives. Shaping operations against threat networks consist of efforts to influence their objectives and to dissuade growth, state sponsorship, sanctuary, or access to resources through the unified efforts of interagency, regional, and international partners as well as HN civil authorities. Actions are taken to identify key elements in the OE that can be used to leverage support for the government or other friendly networks that must be controlled to deny the threat an operational advantage. The OE must be analyzed to identify support for the threat network, as well as that for the relevant friendly and neutral networks. Interagency/international partners help to identify the network’s key components, deny access to resources (usually external to the country), and persuade other actors (legitimate and illicit) to discontinue support for the threat. Signals intelligence (SIGINT), open-source intelligence (OSINT), and human intelligence (HUMINT) are the primary intelligence sources of actionable information. The legitimacy of the government must be reinforced in the operational area. Efforts to reinforce the government seek to identify the sources of friction within the society that can be reduced through government intervention.

Many phase I shaping activities need to be coordinated during phase 0 due to extensive legal and interagency requirements.

Due to competing resources and the potential lack of available IRCs, executing IO during phase 0 can be challenging. For this reason, consideration must be given on how IRCs can be integrated as part of the whole-of-government approach to effectively shape the information environment and to achieve the commander’s information objectives.

Shaping operations may also include security cooperation activities designed to strengthen PN or regional capabilities and capacity that contribute to greater stability. Shaping operations should focus on changing the conditions that foster the development of adversaries and threats.

(2) During phase 0 (shaping), the J-2’s threat network analysis initially provides a broad description of the structure of the underlying threat organization; identifies the critical functions, nodes, and the relationships between the threat’s activities and the greater society; and paints a picture of the “on-average” relationships.

Some of the CTN actions require long-term and sustained efforts, such as addressing recruitment in targeted communities through development programming. It is essential that the threat is decoupled from support within the affected societies. Critical elements in the threat’s operational networks must be identified and disrupted to affect their operating tempo. Even when forces are committed, the commander continues to shape the OE using various means to eliminate the threat and undertake actions, in cooperation with interagency and multinational partners, to reinforce the legitimate government in the eyes of the population.

(3) The J-2 seeks to identify and leverage information sources that can provide details on the threat network and its relationship to the regional/local political, economic, and social structures that can support and sustain it.

(4) Sharing information and intelligence with partners is paramount since collection, exploitation, and analysis against threat networks requires much greater time than against traditional military adversaries. Information sharing with partners must be balanced with operations security and cannot be done in every instance. Intelligence sharing between CCDRs across regional and functional seams provides a global picture of threat networks not bound by geography. Intelligence efforts within the shaping phase show threat network linkages in terms of leadership, organization, size, scope, logistics, financing, alliances with other networks, and membership.

  1. Deter (Phase I). The intent of this phase is to deter threat network action, formation, or growth by demonstrating partner, allied, multinational, and joint force capabilities and resolve. Many actions in the deter phase include security cooperation activities and IRCs and/or build on security cooperation activities from phase 0. Increased cooperation with partners and allies, multinational forces, interagency and interorganizational partners, international organizations, and NGOs assist in increasing information sharing and provide greater understanding of the nature, capabilities, and linkages of threat networks.

The joint force enhances deterrence through unified action by collaborating with all friendly elements and by creating a friendly network of organizations and people with far-reaching capabilities and the ability to respond with pressure at multiple points against the threat network.

Phase I begins with coordination activities to influence threat networks on multiple fronts.

Deterrent activities executed in phase I also prepare for phase II by conducting actions throughout the OE to isolate threat networks from sanctuary, resources, and information networks and increase their vulnerability to later joint force operations.

  1. Seize Initiative (Phase II). JFCs seek to seize the initiative through the application of joint force capabilities across multiple LOOs.

Destruction of a single node or cell might do little to impact network operations when assessed against the cost of operations and/or the potential for collateral damage.

As in traditional offensive operations against a traditional adversary, various operations create conditions for exploitation, pursuit, and ultimate destruction of those forces and their will to fight.

  1. Dominate (Phase III). The dominate phase against threat networks focuses on creating and maintaining overwhelming pressure against network leadership, finances, resources, narrative, supplies, and motivation. This multi-front pressure should include diplomatic and economic pressure at the strategic level and informational pressure at all levels.

These pressures are then synchronized with military operations conducted throughout the OE and at all levels of warfare to achieve the same result as traditional operations: to shatter enemy cohesion and will. Operations against threat networks are characterized by dominating and controlling the OE through a combination of traditional warfare, irregular warfare, sustained employment of interagency capabilities, and IRCs.

  1. Stabilize (Phase IV). The stabilize phase is required when there is no fully functioning, legitimate civilian governing authority present or the threat networks have gained political control within a country or region. In cases where the threat network is government aligned, its defeat in phase III may leave that government intact, and stabilization or enablement of civil authority may not be required. After neutralizing or defeating the threat networks (which may have been functioning as a shadow government), the joint force may be required to unify the efforts of other supporting/contributing multinational, international organization, NGO, or USG department and agency participants into stability activities to provide local governance, until legitimate local entities are functioning.
  2. Enable Civil Authority (Phase V). This phase is predominantly characterized by joint force support to legitimate civil governance in the HN. Depending upon the level of HN capacity, joint force activities during phase V may be at the behest or direction of that authority. The goal is for the joint force to enable the viability of the civil authority and its provision of essential services to the largest number of people in the region. This includes coordinating joint force actions with supporting or supported multinational and HN agencies and continuing integrated finance operations and security cooperation activities to favorably influence the target population’s attitude regarding local civil authority’s objectives.

CHAPTER V

ACTIVITIES TO COUNTER THREAT NETWORKS

“Regional players almost always understand their neighborhood’s security challenges better than we do. To make capacity building more effective, we must leverage these countries’ unique skills and knowledge to our collect[ive] advantage.”

General Martin Dempsey, Chairman of the Joint Chiefs of Staff

“The Bend of Power,” Foreign Policy, 25 July 2014

 

  1. The Challenge

A threat network can be operating for years in the background and suddenly explode on the scene. Identifying and countering potential and actual threat networks is a complex challenge.

  1. Threat networks can take many forms and have many distinct participants, from terrorists to criminal organizations to insurgents, locally or transnationally based…

Threat networks may leverage technologies, social media, global transportation and financial systems, and failing political systems to build a strong and highly redundant support system. Operating across a region provides the threat with a much broader array of resources, safe havens, and the flexibility to react to attack and prosecute its own attacks.

To counter a transnational threat, the US and its partners must pursue multinational cooperation and joint operations, working with HNs within the region to fully identify, describe, and mitigate through multilateral operations the transnational networks that threaten the entire region rather than individual HNs alone.

  1. Successful operations are based on the ability of friendly forces to develop and apply a detailed understanding of the structure and interactions of the OE to the planning and execution of a wide array of capabilities to reinforce the HN’s legitimacy and neutralize the threat’s ability to threaten that society.
  2. Targeting Threat Networks
  3. As the first step in targeting any threat network, the commander and staff must understand the desired condition of the threat network as it relates to the commander’s objectives and desired end state.
  4. The desired military end state is directly related to conditions of the OE. Interrelated human networks comprise the human aspect of the OE, which includes the threat networks that are to be countered. The actual targeting of threat networks begins early in the planning process, since all actions taken must support achieving the commander’s objectives and attaining the end state. To feed the second phase of the targeting cycle, the threat network must be analyzed using network mapping, link analysis, SNA, CFA, and nodal analysis (a notional sketch of this analysis follows this list).
  5. The second phase of the joint targeting cycle is intended to begin the development of target lists for potential engagement. JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, CCs (network’s functions), and CRs for the network.
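
To make the nodal analysis step above more concrete, the minimal sketch below ranks nodes in a small, entirely invented link diagram by degree and betweenness centrality, two common SNA measures used to nominate candidate critical nodes for further CFA. It assumes the open-source Python networkx library; it is an illustration only, not a representation of any fielded analytic tool.

```python
# Notional sketch: ranking candidate critical nodes in a threat network
# using basic SNA centrality measures. Assumes the networkx library;
# node names and links are invented for illustration only.
import networkx as nx

# Build an undirected link diagram (nodes = people/resources, edges = known relationships).
G = nx.Graph()
G.add_edges_from([
    ("leader", "financier"), ("leader", "cell_a_lead"), ("leader", "cell_b_lead"),
    ("financier", "hawala_broker"), ("hawala_broker", "foreign_donor"),
    ("cell_a_lead", "bomb_maker"), ("cell_a_lead", "recruiter"),
    ("cell_b_lead", "bomb_maker"), ("recruiter", "foot_soldier_1"),
    ("recruiter", "foot_soldier_2"),
])

# Degree centrality: how connected a node is; betweenness centrality: how often
# a node bridges otherwise separate parts of the network (a proxy for criticality).
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank nodes by betweenness to nominate candidates for further CFA/nodal analysis.
for node in sorted(G.nodes, key=betweenness.get, reverse=True)[:5]:
    print(f"{node:16s} degree={degree[node]:.2f} betweenness={betweenness[node]:.2f}")
```

High-betweenness nodes are often the brokers whose removal or isolation most affects the network’s ability to pass resources and information, which is why such measures feed, but do not replace, the fuller analysis described above.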

Similar to developing an assessment plan for operations as part of the planning process, the metrics for assessing networks must be developed early in the targeting cycle.

  1. Networks operate as integrated entities—the whole is greater than the sum of its parts. Identifying and targeting the network and its functional components requires patience. A network will go to great lengths to protect its critical components. However, the interrelated nature of network functions means that an attack on one node may have a ripple effect as the network reconstitutes.

Whenever a network reorganizes or adapts, it can expose a larger portion of its members (nodes), relationships (links), and activities. Intelligence collection should be positioned to exploit any effects from the targeting effort, which in turn must be continuous and multi-nodal.

  1. The analytical products for threat networks support decisions on which targets to add to or remove from the target list and the specifics for employing capabilities against a target. The staff should consider the following questions when selecting targets to engage within a threat network:

(1) Who or what to target? Network analysis provides the commander and staff with the information to prioritize potential targets. Depending on the effect desired for a network, the selected node for targeting may be a person, key resource, or other physical object that is critical in producing a specific effect on the network.

(2) What are the effects desired on the target and network? Understanding the conditions in the OE and the future conditions desired to achieve objectives supports a decision on what type of effects are desired on the target and the threat network as a whole. The desired effects on the threat network should be aligned with the commander’s intent and should support the objectives or conditions required to meet the desired end state.

(3) How will those desired effects be produced? A range of lethal and nonlethal capabilities may be employed, directly or indirectly, once the decision is made to engage a target. In addition to the ability to employ conventional weapons systems, staffs must consider the nonlethal capabilities that are available.

  1. Desired Effects on Networks
  2. Damage effects on an enemy or adversary from lethal fires are classified as light, moderate, or severe. Network engagement takes into consideration the effects of both lethal and nonlethal capabilities.
  3. When commanders decide to generate an effect on a network through engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The intended result of shaping these conditions is to support achieving the commander’s objectives. The desired effects selected reflect the commander’s vision of the future conditions for the threat networks and the OE needed to achieve objectives.

Terms that are used to describe the desired effects of CTN include:

(1) Neutralize. Neutralize is a tactical mission task that results in rendering enemy personnel or materiel incapable of interfering with a particular operation. The threat network’s structure exists to facilitate its ability to perform functions that support achieving its objectives. Neutralization of an entire network may not be feasible, but through analysis, the staff has the ability to identify key parts of the threat network’s structure to target that will result in the neutralization of specific functions that may interfere with a particular operation.

(2) Degrade. To degrade is to reduce the effectiveness or efficiency of a threat. The effectiveness of a threat network is associated with its ability to function as desired to achieve the threat’s objectives. Countering the effectiveness of a network may be accomplished by eliminating CRs that the network requires to facilitate an identified CC, identified through the application of CFA for the network.

(3) Disrupt. Disrupt is a tactical mission task in which a commander integrates direct and indirect fires, terrain, and obstacles to upset an enemy’s formation or tempo, interrupt the enemy’s timetable, or cause enemy forces to commit prematurely or attack in a piecemeal fashion. From the perspective of disrupting a threat network, the staff should consider the type of operation being conducted, the specific functions of the threat network, the conditions within the OE that can be leveraged, and the potential application of both lethal and nonlethal capabilities. Additionally, the staff should consider how long the disruption will last and what opportunities it will present for friendly forces to exploit. Should the disruption result in the elimination of key nodes from the network, the staff must also consider the network’s means and the time necessary to reconstitute.

(4) Destroy. Destroy is a tactical mission task that physically renders an enemy force combat ineffective until it is reconstituted. Alternatively, to destroy a combat system is to damage it so badly that it cannot perform any function or be restored to a usable condition without being entirely rebuilt. Destroying a threat network that is adaptive and transnationally established is an extreme challenge that requires the full collaboration of DOD and intergovernmental agencies, as well as coordination with partnered nations. Isolated destruction of cells may be more plausible and could be accomplished with the comprehensive application of lethal and nonlethal capabilities. Detailed analysis of the cell is necessary to establish a baseline (pre-operation conditions) in order to assess if operations have resulted in the destruction of the selected portion of a network.

(5) Defeat. Defeat is a tactical mission task that occurs when a threat network or enemy force has temporarily or permanently lost the physical means or the will to fight. The defeated force’s commander or leader is unwilling or unable to pursue that individual’s adopted COA, thereby yielding to the friendly commander’s will, and can no longer interfere to a significant degree with the actions of friendly forces. Defeat can result from the use of force or the threat of its use. Defeat manifests itself in some sort of physical action, such as mass surrenders, abandonment of positions, equipment and supplies, or retrograde operations. A commander or leader can create different effects against an enemy to defeat that force.

(6) Deny. Deny is an action to hinder or deny the enemy the use of territory, personnel, or facilities through destruction, removal, contamination, or erection of obstructions. An example is destroying the threat’s communications equipment to deny its use of the electromagnetic spectrum. However, the duration of denial will depend on the enemy’s ability to reconstitute.

(7) Divert. To divert is to turn aside from a path or COA. A diversion is the act of drawing the attention and forces of a threat from the point of the principal operation; an attack, alarm, or feint diverts attention. Diversion causes threat networks or enemy forces to consume resources or capabilities critical to threat operations in a way that is advantageous to friendly operations. Diversions draw the attention of threat networks or enemy forces away from critical friendly operations and prevent threat forces and their support resources from being employed for their intended purpose.

  1. Engagement Strategies
  2. Counter Resource. A counter-resource approach can progressively weaken the threat’s ability to conduct operations in the OE and require the network to seek a suitable substitute to replace eliminated or constrained resources. Like a military organization, a threat’s network or a threat’s organization is more than its C2 structure. It must have an assured supply of recruits, food, weapons, and transportation to maintain its position and grow. While the leadership provides guidance to the network, it is the financial and logistical infrastructure that sustains the network. Most threat networks are transnational in nature, drawing financial support, material support, and recruits from a worldwide audience.
  3. Decapitation. Decapitation is the removal of key nodes within the network that are functioning as leaders. Targeting leadership is designed to impact the C2 of the network. Detailed analysis of the network may provide the staff with an indication of how long the network will require to replace leadership once they are removed from the network. From a historical perspective, the removal of a single leader from an adaptive human network has resulted in short-term effects on the network.

When targeting the nodes, links, and activities of threat networks, the JFC should consider the second- and third-order effects on friendly and neutral groups that share network and cell functions. Additionally, the ripple effects throughout the network and its cells should be considered.
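
As a purely notional illustration of the ripple and fragmentation effects described above, the sketch below (again assuming networkx, with invented nodes and links) removes candidate nodes one at a time and compares how many connected components the network splits into.

```python
# Notional sketch: estimating the structural effect of removing one node.
# Assumes networkx; the graph and node names are invented for illustration.
import networkx as nx

G = nx.Graph([
    ("leader", "financier"), ("financier", "smuggler"), ("smuggler", "cell_a"),
    ("leader", "cell_a"), ("leader", "cell_b"), ("cell_b", "recruiter"),
])

def fragmentation_effect(graph, node):
    """Compare the number of connected components before and after removal."""
    before = nx.number_connected_components(graph)
    reduced = graph.copy()
    reduced.remove_node(node)
    after = nx.number_connected_components(reduced)
    return before, after

for candidate in ("financier", "leader", "recruiter"):
    before, after = fragmentation_effect(G, candidate)
    print(f"removing {candidate:10s}: components {before} -> {after}")
```

A larger jump in component count suggests the node was acting as a bridge; a real network may reconstitute those links, which is why the text stresses positioning collection to exploit the reorganization.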

  1. Fragmentation. A fragmentation strategy is the surgical removal of key nodes to fragment the network and disrupt its ability to function. Although fragmenting the network will result in immediate effects, the staff must consider when this type of strategy is appropriate. Elimination of nodes within the network may have impacts on collection efforts, depending on the node being targeted.
  2. Counter-Messaging. Threat networks form around some type of catalyst that motivates individuals from a receptive audience to join a network. The challenging aspect of a catalyst is that individuals will interpret and relate to it in their own manner. While some members of the network may relate to the catalyst in a similar manner, this perspective is not shared by all members. Threat networks have embraced the ability to project their own messages using a number of social media sites. These messages support their objectives and are used as a recruiting tool for new members. Countering the threat network’s messages is one aspect of countering a threat network.
  3. Targeting
  4. At the tactical level, the focus is on executing operations targeting nodes and links. Accurate, timely, and relevant intelligence supports this effort. Tactical units use this intelligence along with their procedures to conduct further analysis, template, and target networks.
  5. Targeting of threat network CVs is driven by the situation, the accuracy of intelligence, and the ability of the joint force to quickly execute various targeting options to create the desired effects. In COIN operations, high-priority targets may be individuals who perform tasks that are vulnerable to detection/exploitation and impact more than one CR.

Timing is everything when attacking a network, as opportunities for attacking identified CVs may be limited.

  1. CTN targets can be characterized as targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will make related to the JFC’s intent, key nodes such as high-value individuals, or longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time to exploit them. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

(1) Dynamic Targeting. A time-sensitive targeting cell consisting of operations and intelligence personnel with direct access to engagement means and the authority to act on pre-approved targets is an essential part of any network targeting effort. Dynamic targeting facilitates the engagement of targets that have been identified too late or not selected in time to be included in deliberate targeting and that meet criteria specific to achieving the stated objectives.

(2) Deliberate Targeting. The joint fires cell is tasked to look at an extended timeline for threats and the overall working of threat networks. With this type of deliberate investigation into threat networks, the cell can identify catalysts to the threat network’s operations and sustainment that had not traditionally been targeted on a large scale.

  1. The joint targeting cycle supports the development and prosecution of threat networks. Land and maritime force commanders normally use an interrelated process to enhance joint fire support planning and interface with the joint targeting cycle known as the decide, detect, deliver, and assess (D3A) methodology. D3A incorporates the same fundamental functions of the joint targeting cycle as the find, fix, track, target, engage, and assess (F2T2EA) process and functions within phase 5 of the joint targeting cycle. The D3A methodology facilitates synchronizing maneuver, intelligence, and fire support. The F2T2EA and find, fix, finish, exploit, analyze, and disseminate (F3EAD) methodologies support dynamic targeting. While the F3EAD model was developed for personality-based targeting, it can only be applied once the JFC has approved the joint integrated prioritized target list. Depending on the situation, multiple methodologies may be required to create the desired effect.
  2. F3EAD. F3EAD facilitates the targeting not only of individuals when timing is crucial, but, more importantly, the generation of follow-on targets through timely exploitation and analysis. F3EAD facilitates synergy between operations and intelligence as it refines the targeting process. It is a continuous cycle in which intelligence and operations feed and support each other (a notional sketch of this feedback loop follows the list below). It helps to:

(1) Analyze the threat network’s ideology, methodology, and capabilities, and template its inner workings: personnel, organization, and activities.

(2) Identify the links between enemy CCs and CRs and observable indicators of enemy action.

(3) Focus and prioritize dedicated intelligence collection assets.

(4) Provide the resulting intelligence and products to elements capable of rapidly conducting multiple, near-simultaneous attacks against the CVs.

(5) Provide an ability to visualize the OE and array and synchronize forces and capabilities.
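
The following minimal sketch illustrates only the feedback character of the F3EAD cycle named above: exploitation and analysis of one engagement generate leads that re-enter the find step. The functions, data, and naming are invented placeholders, not doctrine or a depiction of any real system.

```python
# Notional sketch of the F3EAD cycle (find, fix, finish, exploit, analyze,
# disseminate) as a loop in which exploitation feeds follow-on finds.
# All functions and data here are invented placeholders for illustration.
from collections import deque

def run_f3ead(initial_leads, max_iterations=3):
    leads = deque(initial_leads)
    for _ in range(max_iterations):
        if not leads:
            break
        target = leads.popleft()                       # find: nominate a target from current leads
        fixed = f"location_of_{target}"                # fix: confirm identity and location
        finished = f"engaged_{target}"                 # finish: engage with lethal or nonlethal means
        captured = [f"media_from_{target}"]            # exploit: captured materials and personnel
        follow_on = [m.replace("media_from_", "associate_of_") for m in captured]  # analyze
        leads.extend(follow_on)                        # disseminate: feed follow-on targets back into find
        print(target, fixed, finished, "-> new leads:", follow_on)

run_f3ead(["tier_2_facilitator"])
```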

  1. The F3EAD process is optimized to facilitate targeting of key nodes and links: tier I (enemy top-level leadership, for example) and tier II (enemy intermediaries who interact with the leaders and establish links with facilitators within the population). Tier III individuals (the low-skilled foot soldiers who are part of the threat) may be easy to reach and provide an immediate result but are a distraction to success because they are easy to replace and their elimination is only a temporary inconvenience to the enemy. F3EAD can be used for any network function that is a time-sensitive target.
  2. The F3EAD process relies on close coordination among operational planners, intelligence collection, and tactical execution. Tactical forces should be augmented by a wide array of specialists to facilitate on-site exploitation and possible follow-on operations. Exploitation of captured materials and personnel will normally involve functional specialists from higher and even national resources. The goal is to quickly conduct exploitation and facilitate follow-on targeting of the network’s critical nodes.
  3. Targeting Considerations
  4. There is no hard-and-fast rule for allocating network targets by echelon. The primary consideration is how to create the desired effect against the network as a whole.

Generally network targets fall into one of three categories: individual targets, group targets, and organizational targets.

  1. An objective of network targeting may be to deny the threat its freedom of action and maneuver by maintaining constant pressure through unpredictable actions against the network’s leadership and critical functional nodes. It is based on selecting the right means, or combination thereof, to neutralize the target while minimizing collateral effects.
  2. While material targets can be disabled, denied, destroyed, or captured, humans and their interrelationships or links are open to a broader range of engagement options by friendly forces. For example, when the objective is to neutralize the influence of a specific group, it may require a combination of tasks to create the desired effect.
  3. Lines of Effort by Phase
  4. Targeting is a continuous and evolving process. As the threat adjusts to joint force activities, joint force intelligence collection and targeting must also adjust. Employing a counter-resource (logistical, financial, and recruiting) approach should increase the amount of time it will take for the organization to regroup. It may also force the threat to employ its hidden resources to fill the gaps, thus increasing the risk of detection and exploitation. During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks (see Figure V-6). However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE. The simplified model in Figure V-6 is illustrative rather than a list of specific planning steps.
  5. During phase 0, analysis provides a broad description of the structure of the underlying threat organization, identifies the critical functions and nodes, and identifies the relationships between the threat’s activities and the greater society.

These forces provide a foundation of information about the region, including very specific information that falls into the PMESII categories. Actions against the network may include targeting of the threat’s transnational resources (money, supply, safe havens, recruiting); identifying key leadership; providing resources to facilitate PNs and regional efforts; shaping international and national populations’ opinions of friendly, neutral, and threat groups; and isolating the threat from transnational allies.

  1. During phase I, CTN activities seek to provide a more complete picture of the conditions in the OE. Forces already employed in theater may be leveraged as sources of information to help build a more detailed picture. New objectives may emerge as part of phase I, and forces deployed to help achieve those objectives contribute to the developing common operational picture. A network analysis is conducted to identify a target array that will keep the threat network off balance through multi-nodal attack operations.
  2. During phase II, CTN activities concentrate on developing previously identified targets, positioning intelligence collection to exploit effects, and continuing to refine the description of the threat and its supporting network.
  3. During phase III, CTN activities are characterized by increased physical contact and a sizable ramp-up in a variety of intelligence and information collection assets. The focus is on identifying, exploiting, and targeting the clandestine core of the network. Intelligence collection assets and specialized analytical capabilities provide around-the-clock support to committed forces. Actions against the network continue and feature a ramp-up in resource denial; key leaders and activities are targeted for elimination; and constant multi-nodal pressure is maintained. Activities continue to convince neutral networks of the benefits of supporting the government and dissuade threat sympathizers from providing continued support to threat networks. Ultimately, the network is isolated from support and its ability to conduct operations is severely diminished.
  4. During phase IV, CTN activities focus on identifying, exploiting, and targeting the clandestine core of the network for elimination. Intelligence collection assets and specialized analytical capabilities continue to provide support to committed forces; the goal is to prevent the threat from recovering and regrouping.
  5. During phase V, CTN activities continue to identify, exploit, and target the clandestine core of the network for elimination and to identify the threat network’s attempts to regroup and reestablish control.
  6. Theater Concerns in Countering Threat Networks
  7. Many threat networks are transnational, recruiting, financing, and operating on a global basis, and they cooperate with one another when necessary to further their respective goals.
  8. In developing their CCMD campaign plans, CCDRs need to be aware of the complex relationships that characterize networks and leverage whole-of-government resources to identify and analyze networks, including their relationships with, or membership in, known friendly, neutral, or threat networks. Militaries are interested in the activities of criminal organizations because these organizations provide material support to insurgent and terrorist organizations that also conduct criminal activities (e.g., kidnapping, smuggling, extortion). By tracking criminal organizations, the military may identify linkages (material and financial) to the threat network, which in turn might become a target.
  9. Countering Threat Networks Through Military Operations and Activities

Some threat networks may prefer to avoid direct confrontation with law enforcement and military forces. Activities associated with military operations at any level of conflict can have a direct or indirect impact on threats and their supporting networks.

  1. Operational Approaches to Countering Threat Networks
  2. There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus. In others, a balanced approach through multiple LOOs and LOEs may be necessary, ensuring that civilian concerns are met while protecting civilians from the threat networks’ operators.

In all CTN activities, lethal actions directed against the network should also be combined with nonlethal actions to support the legitimate government and persuade neutrals to reject the adversary.

 

  1. Effective CTN requires a deep understanding of the interrelationships among all the networks within an operational area, determination of the desired effect(s) against each network and its nodes, and the gathering and leveraging of all available resources and capabilities to execute operations.

A CHANGING ENVIRONMENT—THE CONVERGENCE OF THREAT NETWORKS

Transnational organized crime penetration of states is deepening, leading to co-option of government officers in some nations and weakening of governance in many others. Transnational organized crime networks insinuate themselves into the political process through bribery and in some cases have become alternate providers of governance, security, and livelihoods to win popular support.

In fiscal year 2010, 29 of the 63 top drug trafficking organizations identified by the Department of Justice had links to terrorist organizations. While many terrorist links to transnational organized crime are opportunistic, this nexus is dangerous, especially if it leads a transnational organized crime network to facilitate the transfer of weapons of mass destruction or the transportation of nefarious actors or materials into the US.

CHAPTER VI

ASSESSMENTS

Commanders and their staffs will conduct assessments to determine the impact CTN activities may have on the targeted networks. Other networks, including friendly and neutral networks, within the OE must also be considered during planning, operations, and assessments.

Threat networks will adapt visibly and invisibly even as collection, analysis, and assessments are being conducted, which is why assessments over time that show trends are much more valuable in CTN activities than a single snapshot over a short time frame.

  1. Complex Operational Environments

Complex geopolitical environments, difficult causal associations, and the challenge of both quantitative and qualitative analysis to support decision making all complicate the assessment process. When only partially visible threat networks are spread over large geographic areas, among the people, and are woven into friendly and neutral networks, assessing the effects of joint force operations requires as much operational art as the planning process.

  1. Assessment of Operations to Counter Threat Networks
  2. CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed.

Dynamic interaction among friendly, threat, and neutral networks makes assessing many aspects of CTN activities difficult. As planners assess complex human behaviors, they draw on multiple sources across the OE, including analytical and subjective measures, which support an informed assessment.

  1. Real-time network change detection is extremely challenging, and conclusions with high levels of confidence are rare. Since threat networks are rapidly adaptable, technological systems used to support collection often struggle to monitor change. Additionally, the large amounts of information collected require resources (people) and time for analysis. It is difficult to determine how networks change, and even more challenging to determine whether network changes are the result of joint force actions and, if so, which actions or combined actions are effective. A helpful indicator used in assessment comes when threat networks leverage social networks to coordinate and conduct operations, as it provides an opportunity to gain a greater understanding of the motivation and ideology of these networks. If intelligence analysts can tap into near real-time information from threat network entities, then that information can often be geospatially fused to create a better assessment. This is dependent on having access to accurate network data, the ability to analyze the data quickly, and the ability to detect deception.
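
A minimal, notional sketch of network change detection follows: two invented snapshots of the same network, taken at different times, are compared to list the nodes and links that appeared or disappeared. It assumes networkx and is not a depiction of any operational collection or analysis system; in practice the harder problems are data accuracy, timeliness, and deception, as noted above.

```python
# Notional sketch: comparing two time-separated snapshots of a threat network
# to surface structural change. Assumes networkx; all data are invented.
import networkx as nx

snapshot_t1 = nx.Graph([("leader", "financier"), ("financier", "courier"), ("leader", "cell_a")])
snapshot_t2 = nx.Graph([("leader", "cell_a"), ("leader", "new_financier"), ("new_financier", "courier")])

def diff_networks(old, new):
    """Report nodes and links added or removed between two snapshots."""
    return {
        "nodes_added": set(new.nodes) - set(old.nodes),
        "nodes_removed": set(old.nodes) - set(new.nodes),
        "links_added": {frozenset(e) for e in new.edges} - {frozenset(e) for e in old.edges},
        "links_removed": {frozenset(e) for e in old.edges} - {frozenset(e) for e in new.edges},
    }

for change, items in diff_networks(snapshot_t1, snapshot_t2).items():
    readable = sorted(tuple(sorted(i)) if isinstance(i, frozenset) else i for i in items)
    print(f"{change}: {readable}")
```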

  1. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for HUMINT. Collection of HUMINT is time-consuming and the reliability of sources can be problematic, but if employed properly and cross-cued with other disciplines, it is extremely valuable in irregular warfare. Tactical unit reporting, such as patrol debriefs and unit after action reports, when assimilated across an OE, may provide the most valuable information for assessing the impact of operations.

OSINT will often be more valuable in assessing operations against threat networks and be the single greatest source of intelligence.

  1. Operation Assessment
  2. The assessment process is a continuous cycle that seeks to observe and evaluate the ever-changing OE and inform decisions about the future, making operations more effective. Baselining is critical in phase 0 and the initial JIPOE process for assessments to be effective.

Assessments feed back into the JIPOE process to maintain tempo in the commander’s decision cycle. This is a continuous process, and the baseline resets for each cycle. Change is constant within the complex OE and when operating against multiple, adaptive, interconnected threat networks.

  1. Commanders establish priorities for assessment through their planning guidance, commander’s critical information requirements (CCIRs), and decision points. Priority intelligence requirements, a component of CCIR, detail exactly what data the intelligence collection plan should be seeking to inform the commander regarding threat networks.

CTN activities may require assessing multiple MOEs and measures of performance (MOPs), depending on threat network activity. As an example, JFCs may choose to neutralize or disrupt one type of network while conducting direct operations against another network to destroy it.

  1. Assessment precedes and guides every operation process activity and concludes each operation or phase of an operation. Like any cycle, assessment is continuous. The assessment process is not an end unto itself; it exists to inform the commander and improve the operation’s progress.
  2. Integrated successfully, assessment in CTN activities will:

(1) Depict progress toward achieving the commander’s objectives and attaining the commander’s end state.

(2) Help in understanding how the OE is changing due to the impact of CTN activities on threat network structures and functions.

(3) Inform the commander’s decision making for operational design and planning, prioritization, resource allocation, and execution.

(4) Produce actionable recommendations that inform the commander where to devote resources along the most effective LOOs and LOEs.

  1. Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate.

Multi-Service Tactics, Techniques, and Procedures for Operation Assessment

  1. Organize the Data

(1) Based on the OE and the operation plan or campaign plan, the commander and staff develop objectives and assessment criteria to determine progress. The organize activity includes ensuring that indicators are included within the collection plan, that information collected and analyzed by the intelligence section is organized using an information management plan, and that information is readily available to the staff to conduct the assessment. Multiple threat networks within an OE may require multiple MOPs, MOEs, metrics, and branches to the plan. Threat networks operating collaboratively or against each other complicate the assessment process. If threat networks conduct operations or draw resources from outside the operational area, there will be a greater reliance on other CCDRs or interagency partners for data and information.

Within the context of countering threat networks, an example objective, measure of effectiveness (MOE), and supporting indicators could be:

Objective: Threat network resupply operations in “specific geographic area” are disrupted.

MOE: Suppliers to threat networks cease providing support.

Indicator 1: Fewer trucks leaving supply depots.

Indicator 2: Guerrillas/terrorists change the number of engagements or length of engagement times to conserve resources.

Indicator 3: Increased threat network raids on sites containing resources they require (grocery stores, lumber yards, etc.).
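
A purely notional sketch of how an objective, its MOE, and the supporting indicators above might be organized so observations can be tracked over successive assessment cycles is shown below. The structure, field names, and values are invented for illustration and do not reflect any standard reporting format.

```python
# Notional sketch: organizing an objective, its MOE, and indicator observations
# so trends can be tracked over successive assessment cycles. All names and
# values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    description: str
    observations: dict = field(default_factory=dict)   # assessment period -> observed value

@dataclass
class MOE:
    statement: str
    indicators: list

objective = "Threat network resupply operations in the specified area are disrupted"
moe = MOE(
    statement="Suppliers to threat networks cease providing support",
    indicators=[
        Indicator("Trucks leaving supply depots", {"week_1": 42, "week_2": 31}),
        Indicator("Engagements initiated by the network", {"week_1": 9, "week_2": 6}),
        Indicator("Raids on resource sites (stores, lumber yards)", {"week_1": 2, "week_2": 5}),
    ],
)

print(objective)
for ind in moe.indicators:
    periods = sorted(ind.observations)
    change = ind.observations[periods[-1]] - ind.observations[periods[0]]
    print(f"  {ind.description}: change across periods = {change:+d}")
```

Keeping raw observations by period, rather than only a current value, supports the trend-over-time emphasis noted earlier in this chapter.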

(2) Metrics must be collectable, relevant, measurable, timely, and complementary. The process uses assessment criteria to evaluate task performance at all levels of warfare to determine progress of operations toward achieving objectives. Both qualitative and quantitative analyses are required. With threat networks, measuring direct impacts alone may not be enough; indirect impacts must also be assessed for a holistic picture. Operations against a network’s financial resources, for example, may be best judged by analyzing the quality of equipment the network is able to deploy in the OE.

  1. Analyze the Data

(1) Analyzing data is the heart of the assessment process for CTN activities. Baselining is critical to support analysis. Baselining should not only be rooted in the initial JIPOE, but should go back to GCC theater intelligence collection and shaping operations. Understanding how threat networks formed and adapted prior to joint force operations provides assessors a significantly better baseline and assists in developing indicators.

(2) Data analysis seeks to answer essential questions:

(a) What happened to the threat network(s) as a result of joint force operations? Specific examples may include the following: How have links changed? How have nodes been affected? How have relationships changed? What was the impact on structure and functions? Specifically, what was the impact on operations, logistics, recruiting, financing, and propaganda?

(b) What operations caused this effect directly or indirectly? (Why did it happen?) It is likely that multiple instruments of national power efforts across several LOOs and LOEs impacted the threat network(s), and it is equally unlikely that a direct cause and effect is discernible.

Analysts must be aware of the danger of searching for a trend that may not be evident. Events may sometimes have dramatic effects on threat networks, but not be visible to outside/foreign/US observers.

(c) What are the likely future opportunities to counter the threat network, and what are the risks to neutral and friendly networks? CTN activities should target CVs. Interdiction operations, for example, may create future opportunities to disrupt finances. Cyberspace operations may target Internet propaganda and create opportunities to reduce the appeal of threat networks to neutral populations.

(d) What needs to be done to apply pressure at multiple points across the instruments of national power (diplomatic, informational, military, and economic) to the targeted threat networks to attain the JFC’s desired military end state?

(3) Military units find stability tasks to be the most challenging to analyze since they are conducted among a civilian population. Adding a social dynamic complicates use of mathematical and deterministic formulas when human nature and social interactions play a major part in the OE. Overlaps between threat networks and neutral networks, such as the civilian population, complicate assessments and the second- and third-order effects analysis.

(4) The proximate cause of effects in complex situations can be difficult to determine. Even direct effects in these situations can be more difficult to create, predict, and measure, particularly when they relate to moral and cognitive issues (such as religion and the “mind of the adversary,” respectively). Indirect effects in these situations often are difficult to foresee. Indirect effects often can be unintended and undesired since there will always be gaps in our understanding of the OE. Unpredictable third-party actions, unintended consequences of friendly operations, subordinate initiative and creativity, and the fog and friction of conflict will contribute to an uncertain OE. Simply determining undesired effects on threat networks requires a greater degree of critical thinking and qualitative analysis than traditional operations. Undesired effects on neutral and friendly networks cannot be ignored.

(5) Statistical analysis is necessary and allows large volumes of data to be analyzed, but critical thinking must precede its use and qualitative analysis must accompany any conclusions. SNA is a form of statistical analysis of human networks that has proven to be a particularly valuable tool in understanding network dynamics and in showing network changes over time, but it must be complemented by other types of analysis and traditional intelligence analysis. It can support the JIPOE process as well as the planning, targeting, and assessment processes. SNA requires significant data collection, and since threat networks are difficult to collect on and may adapt unseen, it must be used in conjunction with other tools.
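
As a minimal illustration of pairing a simple quantitative technique with qualitative judgment, the sketch below computes a moving average over an invented weekly indicator series; the smoothed trend is a starting point for analysis, not a conclusion about cause and effect.

```python
# Notional sketch: a simple moving average over an invented weekly indicator
# series; quantitative smoothing like this supports, but cannot replace,
# qualitative analysis of why a trend is moving.
weekly_incidents = [14, 12, 15, 9, 8, 10, 6, 7]  # e.g., attacks attributed to the network

def moving_average(series, window=3):
    """Average each value with the preceding (window - 1) observations."""
    return [
        round(sum(series[i - window + 1:i + 1]) / window, 1)
        for i in range(window - 1, len(series))
    ]

print(moving_average(weekly_incidents))
```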

  1. Communicate the Assessment

(1) The assessment of CTN activities is only valuable to the commander and other participants if it is effectively communicated in a format that allows for rapid changes to LOOs/LOEs and operational and tactical actions for CTN activities.

(2) Communicating the CTN assessment clearly and concisely with sufficient information to support the staff’s recommendations, but not too much trivial detail, is challenging.

(3) Well-designed CTN assessment products show changes in indicators describing the OE and the performance of organizations as they relate to CTN activities.

 

APPENDIX A

DEPARTMENT OF DEFENSE COUNTER THREAT FINANCE

  1. Introduction

  1. JFCs face adaptive networked threats that rapidly adjust their operations to offset friendly force advantages and pose a wide array of challenges across the range of military operations.

CTN activities are a focused approach to understanding and operating against adaptive networked threats such as terrorism, insurgency, and organized crime. CTF refers to the activities and actions taken by the JFC to deny, disrupt, destroy, or defeat the generation, storage, movement, and use of assets to fund activities that support a threat network’s ability to negatively affect the JFC’s ability to attain the desired end state. Disrupting threat network finances decreases the threat network’s ability to achieve its objectives. Those objectives can range from sophisticated communications systems that support international propaganda programs, to structures that facilitate obtaining funding from foreign-based sources, to foreign-based cell support, to more local needs to pay, train, arm, feed, and equip fighters. Disrupting threat network finances decreases the network’s ability to conduct operations that threaten US personnel, interests, and national security.

  1. CTF activities against threat networks should be conducted with an understanding of the OE, in support of the JFC’s objectives, and nested with other counter threat network operations, actions, and activities. CTF activities cause the threat network to adjust its financial operations by disrupting or degrading its methods, routes, movement, and source of revenue. Understanding that financial elements are present at all levels of a threat network, CTF activities should be considered when developing MOEs during planning with the intent of forecasting potential secondary and tertiary effects.
  2. Effective CTF operations depend on developing an understanding of the functional organization of the threat network, the threat network’s financial capabilities, methods of operation, methods of communication, and operational areas, and upon detecting how revenue is raised, moved, stored, and used.
  3. Key Elements of Threat Finance
  4. Threat finance is the manner in which adversarial groups raise, move, store, and use funds to support their activities. Following the money and analyzing threat finance networks is important to:

(1) Identify facilitators and gatekeepers.
(2) Estimate threat networks’ scope of funding.
(3) Identify modus operandi.
(4) Understand the links between financial networks.
(5) Determine geographic movement and location of financial networks.

(6) Capture and prosecute members of threat networks.

  1. Raising Money. Fund-raising through licit and illicit channels is the first step in being able to carry out or support operations. This includes raising funds to pay for such mundane items as food, lodging, transportation, training, and propaganda. Raising money can involve network activity across local and international levels. It is useful to look at each source of funding as a separate node that fits into a much larger financial network. That network will have licit and illicit components.

(1) Funds can be raised through illicit means, such as drug and human trafficking, arms trading, smuggling, kidnapping, robbery, and arson.

(2) Alternatively, funds can be raised through ostensibly legal channels. Threat networks can receive funds from legitimate humanitarian and business organizations and individual donations.

(3) Legitimate funds are commingled with illicit funds destined for threat networks, making it extremely difficult for governments to track threat finances in the formal financial system. Such transactions are perfectly legal until they can be linked to a criminal or terrorist act. Therefore, these transactions are extremely hard to detect in the absence of other indicators or through the identification of the persons involved.

  1. Moving Money. Moving money is one of the most vulnerable aspects of the threat finance process. To make illicit money usable to threat networks, it must be laundered. This can be done through the use of front companies, legitimate businesses, cash couriers, or third parties that may be willing to take on the risks in exchange for a cut of the profits. These steps are called “placement” and “layering” (a notional sketch of tracing such flows follows the two descriptions below).

(1) During the placement stage, the acquired funds or assets are placed into a local, national, or international financial system for future use. This is necessary if the generated funds or assets are not in a form useable by their recipient, e.g., converting cash to wire transfers or checks.

(2) During the layering stage, numerous transactions are conducted with the assets or proceeds to create distance between the origination of the funds or assets and their eventual destination. Distance is created by moving money through several accounts, businesses or people, or by repeatedly converting the money or asset into a different form.
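
The notional sketch below represents placement and layering as a directed graph of value transfers (assuming networkx, with invented entities) and enumerates the chains between an origination point and a suspected end user, the kind of path that following the money seeks to reconstruct.

```python
# Notional sketch: placement and layering represented as a directed graph of
# value transfers. Assumes networkx; entities and transfers are invented.
import networkx as nx

transfers = nx.DiGraph()
transfers.add_edges_from([
    ("illicit_cash", "front_company"),    # placement into the financial system
    ("front_company", "bank_account_1"),  # layering: successive movements
    ("bank_account_1", "exchange_house"),
    ("exchange_house", "bank_account_2"),
    ("bank_account_2", "threat_cell"),    # integration: funds reach the network
])

# Enumerate possible layering chains between origination and suspected destination.
for path in nx.all_simple_paths(transfers, source="illicit_cash", target="threat_cell"):
    print(" -> ".join(path))
```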

  1. Storing Money. Money or goods that have been successfully moved to a location accessible to the threat network may need to be stored until ready to be spent.
  2. Using Money. Once a threat network has raised, moved, and stored its money, it is able to spend it. This is called “integration.” Roughly half of the money that was initially raised will go to operational expenses and the cost of laundering the money to convert it to useable funds. During integration, the funds or assets are placed at the disposal of the threat network for utilization or re-investment into other licit and illicit operations.
  3. Planning Considerations
  4. CTF requires the integration of the efforts of disparate organizations in a whole-of-government approach in a complex environment. Joint operation/campaign plans and operation orders should be crafted to ensure that the core competencies of various agencies and military activities are coordinated, and their resources integrated, when and where appropriate, with those of others to achieve operational objectives.
  5. The JFC and staff need to understand the impact that changes to the OE will have on CTF activities. The adaptive nature of threat networks will force changes to the network’s business practices and operations based on the actions of friendly networks within the OE. This understanding can lead to the creation of a more comprehensive, feasible, and achievable plan.
  6. CTF planning will identify the organizations and entities that will be required to conduct CTF action and activities.
  7. Intelligence Support Requirements
  8. CTF activities require detailed, timely, and accurate intelligence of threat networks’ financial activities to inform planning and decision making. Analysts can present the JFC with a reasonably accurate scope of the threat network’s financial capabilities and impact probabilities if they have a thorough understanding of the threat network’s financial requirements and what the threat network is doing to meet those requirements.
  9. JFCs should identify intelligence requirements for threat finance-related activities to establish collection priorities prior to the onset of operations.
  10. Intelligence support can focus on following the money by tracking the generation, storage, movement, and use of funds, which may provide additional insight into threat network leadership activities and other critical components of the threat network’s financial business practices. Trusted individuals or facilitators within the network often handle the management of financial resources. These individuals and their activities may lead to the identification of CVs within the network and decisive points for the JFC to target the network.
  11. Operation
  12. DOD may not always be the lead agency for CTF. Frequently the efforts and products of CTF analysis will be used to support criminal investigations or regulatory sanction activities, either by the USG or one of its partners. This can prove advantageous as contributions from other components can expand and enhance an understanding of threat financial networks. Threat finance activities can have global reach and are generally not geographically constrained. At times much of the threat finance network, including potentially key nodes, may extend beyond the JFC’s operational area.
  13. Military support to CTF is not a distinct type of military operation; rather, it represents military activities against a specific network capability: the business and financial processes used by an adversary network.

(1) Major Operations. CTF can reduce or eliminate the adversary’s operational capability by reducing or eliminating its ability to pay troops and procure weapons, supplies, intelligence, recruitment, and propaganda capabilities.

(2) Arms Control and Disarmament. CTF can be used to disrupt the financing of trafficking in small arms, IED or WMD proliferation and procurement, research to develop more lethal or destructive weapons, hiring technical expertise, or providing physical and operational security.

(6) DOD Support to CD Operations. The US military may conduct training of PN/HN security and law enforcement forces, assist in the gathering of intelligence, and participate in the targeting and interception of drug shipments. Disrupting the flow of drug profits via C

(7) Enforcement of Sanctions. CTF encompasses all forms of value transfer to the adversary, not just currency. DOD organizations can provide assistance to organizations that are interdicting the movement of goods and/or any associated value remittance as a means to enforce sanctions.

(8) COIN. CTF can be used to counter, disrupt, or interdict the flow of value to an insurgency. Additionally, CTF can be used against corruption, as well as drug and other criminal revenue-generating activities that fund or fuel insurgencies and undermine the legitimacy of the HN government. In such cases, CTF is aimed at insurgent organizations as well as other malevolent actors in the environment.

(9) Peace Operations. In peace operations, CTF can be used to stem the flow of external sources of support to conflicts to contain and reduce the conflict.

  1. Military support tasks to CTF can fall into four broad categories:

(1) Support civil agency and HN activities (including law enforcement):

(a) Provide Protection. US military forces may provide overwatch for law enforcement or PN/HN military CTF activities.

(b) Provide Logistics. US military forces may provide transportation, especially tactical movement-to-objective support, to law enforcement or PN/HN military CTF activities.

(c) Provide Command, Control, and Communications Support. US military forces may provide information technology and communications support to civilian agencies or PN/HN CTF personnel. This support may include provision of hardware and software, encryption, bandwidth, configuration support, networking, and account administration and cybersecurity.

(2) Direct military actions:

(a) Capture/Kill. US military forces may, with the support of mission partners as necessary, conduct operations to capture or kill key members of the threat finance network.

(b) Interdiction of Value Transfers. US military forces may, with the support of mission partners, conduct operations to interdict value transfers to the threat network as necessary. This may be a raid to seize cash from an adversary safe house, foreign exchange house, hawala, or other type of informal remittance system; seizure of electronic media, including mobile banking systems commonly known as “red SIMs” and computer systems that contain payment and communication data in the form of cryptocurrency or exchanges in the virtual environment; interdiction to stop the smuggling of goods used in trade-based money laundering; or command and control flights to provide aerial surveillance of drug-smuggling aircraft in support of law enforcement interdiction.

(c) Training HN/PN Forces. US military forces may provide training to PN/HN CTF personnel under specific authorities.

(3) Intelligence Collection. US military forces may conduct all-source intelligence operations, which will deal primarily with the collection, exploitation, analysis, and reporting of CTF information. These operations may involve deploying intelligence personnel to collect HUMINT and the operations of ships at sea and forces ashore to collect SIGINT, OSINT, and GEOINT.

(4) Operations to Generate Information and Intelligence. Occasionally, US military forces may conduct operations either with SOF or conventional forces designed to provoke a response by the adversary’s threat finance network for the purpose of collecting information or intelligence on that network. These operations are pre-planned and carefully coordinated with the intelligence community to ensure the synchronization and posture of the collection assets as well as the operational forces.

  1. Threat Finance Cells

(1) Threat finance cells can be established at any level based on available personnel resources. Expertise on adversary financial activities can be provided through the creation of threat finance cells at brigade headquarters and higher. The threat finance cell would include a mix of analysts and subject matter experts on law enforcement, regulatory matters, and financial institutions drawn from DOD and civil USG agency resources. The threat finance cell’s responsibilities vary by echelon. At division and brigade, the threat finance cell:

(a) Provides threat finance expertise and advice to the commander and staff.

(b) Assists the intelligence staff in the development of intelligence collection priorities focused on adversary financial and support systems that terminate in the unit’s operational area.

(c) Consolidates information on persons providing direct or indirect financial, material, and logistics support to adversary organizations in the unit’s operational area.

(d) Provides information concerning adversary exploitation of US resources such as transportation, logistical, and construction contractors working in support of US facilities; exploitation of NGO resources; and exploitation of supporting HN personnel.

(e) Identifies adversary organizations coordinating or cooperating with local criminals, organized crime, or drug trafficking organizations.

(f) Provides assessments of the adversary’s financial viability (ability to fund, maintain, and grow operations) and the implications for friendly operations.

(g) Develops targeting package recommendations for adversary financial and logistics support persons for engagement by lethal and nonlethal means.

(h) Notifies commanders when there are changes in the financial or support operations of the adversary organization, which could indicate changes in adversary operating tempo or support capability.

(i) Coordinates and shares information with other threat finance cells to build a comprehensive picture of the adversary’s financial activities.

(2) At the operational level, the joint force J-2 develops and maintains an understanding of the OE, which includes economic and financial aspects. If established, the threat finance cell supports the J-2 in developing and maintaining an understanding of the economic and financial environment of the HN and surrounding countries to assist in the detection and tracking of illicit financial activities: understanding where financial support is coming from, how it is being moved into the operational area, and how it is being used. The threat finance cell:

(a) Works with the J-2 to develop threat finance-related priority intelligence requirements and establish threat finance all-source intelligence collection priorities. The threat finance cell assists the J-2 in the detection, identification, tracking, analysis, and targeting of adversary personnel and networks associated with financial support across the operational area.

(b) The threat finance cell coordinates with tactical and theater threat finance cells and shares information with those entities as well as multinational forces, HN, and as appropriate and in coordination with the joint force J-2, the intelligence community.

(c) The threat finance cell, in coordination with the J-2, establishes a financial network picture for all known adversary organizations in the operational area; establishes individual portfolios or target packages for persons identified as providing financial or material support to the adversary’s organizations in the operational area; identifies adversary financial TTP for fund-raising, transfer mechanisms, distribution, management and control, and disbursements; and identifies and distributes information on fund-raising methods that are being used by specific groups in the area of operations. The threat finance cell can also:

  1. Identify specific financial institutions that are involved with or that are providing financial support to the adversary and how those institutions are being exploited by the adversary.
  2. Provide CTF expertise on smuggling and cross border financial and logistics activities.
  3. Establish and maintain information on adversary operating budgets in the area of operation to include revenue streams, operating costs, and potential additions, or depletions, to strategic or operational reserves.
  4. Targets identified by the operational-level threat finance cell are shared with the tactical threat finance cells. This allows the tactical threat finance cells to support and coordinate tactical units to act as an action arm for targets identified by the operational-level CTF organization, and coordinate tactical intelligence assets and sources against adversary organizations identified by the operational-level CTF organization.
  5. Multi-echelon information sharing is critical to unraveling the complexities of an adversary’s financial infrastructure. Operational-level CTF organizations require the detailed financial intelligence that is typically obtained by resources controlled by the tactical organizations.
  6. The operational-level threat finance cell facilitates the provision of support by USG and multinational organizations at the tactical level. This is especially true for USG department and agencies that have representation at the American Embassy.

(3) Tactical-level threat finance cells will require support from the operational level to obtain HN political support to deal with negative influencers that can only be influenced or removed by national-level political leaders, including governors, deputy governors, district leads, agency leadership, chiefs of police, shura leaders, elected officials and other persons serving in official positions; HN security forces; civilian institutions; and even NGOs/charities that may be providing the adversary with financial and logistical support.

(4) The threat finance cell should be integrated into the battle rhythm. Battle rhythm events should be defined by the following criteria (a notional event record is sketched after this list):

(a) Name of board or cell: Descriptive and unique.

(b) Lead staff section: Who receives, compiles, and delivers information.

(c) When/where does it meet in battle rhythm: Allocation of resources (time and facilities), and any collaborative tool requirements.

(d) Purpose: Brief description of the requirement.

(e) Inputs required from: Staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams required to provide products (once approved, these become specified tasks).

(f) When? Suspense for inputs.

(g) Output/process/product: Products and links to other staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams.

(h) Time of delivery: When outputs will be available.

(i) Membership: Who has to attend (task to staff to provide participants and representatives).
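The criteria in (a) through (i) above amount to a structured record for each battle rhythm event. The sketch below captures one such event as a Python dataclass; the field values shown (meeting name, schedule, membership) are hypothetical illustrations, not prescribed entries.

# Minimal sketch of one battle rhythm event as a structured record,
# following criteria (a)-(i) above; all field values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BattleRhythmEvent:
    name: str                 # (a) descriptive and unique
    lead_staff_section: str   # (b) receives, compiles, delivers information
    when_where: str           # (c) time/place and collaborative tool requirements
    purpose: str              # (d) brief description of the requirement
    inputs_required_from: list = field(default_factory=list)  # (e) tasked providers
    input_suspense: str = ""                                    # (f) when inputs are due
    outputs: list = field(default_factory=list)                 # (g) products and links
    time_of_delivery: str = ""                                   # (h) when outputs are available
    membership: list = field(default_factory=list)              # (i) required attendees

threat_finance_wg = BattleRhythmEvent(
    name="Threat Finance Working Group",
    lead_staff_section="Threat finance cell (under the J-2)",
    when_where="Weekly, joint operations center, collaborative portal",
    purpose="Synchronize CTF analysis, targeting nominations, and information sharing",
    inputs_required_from=["J-2", "J-3", "interagency representatives"],
    input_suspense="24 hours prior",
    outputs=["Updated financial network picture", "Target nominations"],
    time_of_delivery="Within 12 hours of the meeting",
    membership=["J-2", "J-3", "J-5", "SJA", "interagency liaisons"],
)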

 

  1. Assessment
  2. JFCs should know the importance and use of CTF capabilities within the context of measurable results for countering adversaries and should embed this knowledge within their staff. By assessing common elements found in adversaries’ financial operations, such as composition, disposition, strength, personnel, tactics, and logistics, JFCs can gain an understanding of what they might encounter while executing an operation and identify vulnerabilities of the adversary. Preparing a consolidated, whole-of-government set of metrics for threat finance will be extremely challenging.
  3. Metrics on threat finance may appear to be of little value because it is very difficult to obtain fast results or intelligence that is immediately actionable. Actions against financial networks may take months to prepare, organize, and implement, due to the difficulty of collecting relevant detailed information and the time lags associated with processing, analysis, and reporting of findings on threat financial networks.
  4. The JFC’s staff should assess the adversary’s behaviors based on the JFC’s desired end state and determine whether the adversary’s behavior is moving closer to that end state.
  5. The JFC and staff should consult with participating agencies and nations to establish a set of metrics which are appropriate to the mission or LOOs assigned to the CTF organization.

APPENDIX B

THE CONVERGENCE OF ILLICIT NETWORKS

  1. The convergence of illicit networks (e.g., criminals, terrorists, and insurgents) incorporates the state or degree to which two or more organizations, elements, or individuals approach or interrelate. Conflict in Iraq and Afghanistan has seen a substantial increase in the cooperative arrangements of illicit networks to further their respective interests. From the Taliban renting their forces out to provide security for drug operations to al-Qaida using criminal organizations to smuggle resources, temporary cooperative arrangements are now a routine aspect of CTN operations.
  2. The US intelligence community has concluded that transnational organized crime has grown significantly in size, scope, and influence in recent years. A public summary of the assessment identified a convergence of terrorist, criminal, and insurgent networks as one of five key threats to US national security. Terrorists and insurgents increasingly have and will continue to turn to crime to generate funding and will acquire logistical support from criminals, in part because of successes by USG departments and agencies and PNs in attacking other sources of their funding, such as from state sponsors. In some instances, terrorists and insurgents prefer to conduct criminal activities themselves; when they cannot do so, they turn to outside individuals and facilitators. Some criminal organizations have adopted terrorist organizations’ practice of extreme and widespread violence in an overt effort to intimidate governments and populations at various levels.
  3. To counter threat networks, it is imperative to understand the converging nature of the relationship among terrorist groups, insurgencies, and transnational criminal organizations. The proliferation of these illicit networks and their activities globally threaten US national security interests. Together, these groups not only destabilize environments through violence, but also become dominant actors in shadow economies, distorting market forces. Indications are that although the operations and objectives of criminal groups, insurgents, and terrorists differ, these groups interact on a regular basis for mutually beneficial reasons. They each pose threats to state sovereignty. They share the common goals of ensuring that poorly governed and post-conflict countries have ineffective laws and law enforcement, porous borders, a culture of corruption, and lucrative criminal opportunities.

Organized crime has been traditionally treated as a law enforcement rather than national security concern. The convergence of organized criminal networks with the other non-state actors requires a more sophisticated, interactive, and comprehensive response that takes into account the dynamics of the relationships and adapts to the shifting tactics employed by the various threat networks.

  1. Mounting evidence suggests that the modus operandi of these entities often converges and the interactions among them are on the rise. This spectrum of convergence (Figure B-1) has received increasing attention in law enforcement and national security policy-making circles. Until recently, the prevalent view was that terrorists and insurgents were clearly distinguishable from organized criminal groups by their motivations and the methods used to achieve their objectives. Terrorist and insurgent groups use or threaten to use extreme violence to attain political ends, while organized criminal groups are primarily motivated by profit. Today, these distinctions are no longer useful for developing effective diplomatic, law enforcement, and military strategies, simply because the lines between them have become blurred, and the security issues have become intertwined.

The convergence of organized criminal networks and other illicit non-state actors, whether for short-term tactical partnerships or broader strategic imperatives, requires a much more sophisticated response or unified approach, one that takes into account the evolving nature of the relationships as well as the environmental conditions that draw them together.

  1. The convergence of illicit networks has provided law enforcement agencies with a broader mandate to combat terrorism. Labeling terrorists as criminals undermines the reputation of terrorists as freedom fighters with principles and a clear political ideology, thereby hindering their ability to recruit members or raise funds.

Just as redefining terrorists as criminals damages their reputation, it might ironically prove useful at other times to redefine criminals as terrorists, such as in the case of the Haqqani network in Afghanistan. For instance, this change in terminology might make additional resources, such as those of the military or the intelligence services, available to law enforcement agencies, thereby making law enforcement more effective.

  1. However, there are some limitations associated with the latter approach. The adage that one side’s terrorist is another side’s freedom fighter holds true. This difference of opinion therefore makes it difficult for states to cooperate in joint CT operations.
  2. The paradigm of fighting terrorism, insurgency, and transnational crime separately, utilizing distinct sets of authorities, tools, and methods, is not adequate to meet the challenges posed by the convergence of these networks into a criminal-terrorist-insurgency conglomeration. While the US has maintained substantial long-standing efforts to combat terrorism and transnational crime separately, the government has been challenged to evaluate whether the existing array of authorities, responsibilities, programs, and resources sufficiently responds to the combined criminal-terrorism threat. Common foreign policy options have centered on diplomacy, foreign assistance, financial actions, intelligence, military action, and investigations. At issue is how to conceptualize this complex illicit networks phenomenon and oversee the implementation of cross-cutting activities that span geographic regions, functional disciplines, and a multitude of policy tools that are largely dependent on effective interagency coordination and international cooperation.
  3. Terrorist Organizations
  4. Terrorism is the unlawful use of violence or threat of violence, often motivated by religious, political, or other ideological beliefs, to instill fear and coerce governments or societies in pursuit of goals that are usually political.
  5. In addition to increasing law enforcement capabilities for CT, the US, like many nations, has developed specialized, but limited, military CT capabilities. CT actions are activities and operations taken to neutralize terrorists and their organizations and networks to render them incapable of using violence to instill fear and coerce governments or societies to achieve their goals.
  6. Insurgencies
  7. Insurgency is the organized use of subversion and violence to seize, nullify, or challenge political control of a region. Insurgency uses a mixture of subversion, sabotage, political, economic, psychological actions, and armed conflict to achieve its political aims. It is a protracted politico-military struggle designed to weaken the control and legitimacy of an established government, a military occupation government, an interim civil administration, or a peace process while increasing insurgent control and legitimacy.
  8. COIN is a comprehensive civilian and military effort designed to simultaneously defeat and contain insurgency and address its root causes. COIN is primarily a political struggle and incorporates a wide range of activities by the HN government, of which security is only one element, albeit an important one. Unified action is required to successfully conduct COIN operations and should include all HN, US, and multinational partners.
  9. Of the groups designated as FTOs by DOS, the vast majority possess the characteristics of an insurgency: an element of the larger group is conducting insurgent-type operations, or the group is providing assistance in the form of funding, training, or fighters to another insurgency. Colombia’s government and the Revolutionary Armed Forces of Colombia reached an agreement to enter into peace negotiations in 2012, taking another big step toward ending the 50-year-old insurgency.
  10. The convergence of illicit networks contributes to the undermining of the fabric of society. Since the proper response to this kind of challenge is effective civil institutions, including uncorrupted and effective police, the US must be capable of deliberately applying unified action across all instruments of national power in assisting allies and PNs when asked.
  11. Transnational Criminal Organizations
  12. From the National Security Strategy, combating transnational criminal and trafficking networks requires a multidimensional strategy that safeguards citizens, breaks the financial strength of criminal and terrorist networks, disrupts illicit trafficking networks, defeats transnational criminal organizations, fights government corruption, strengthens the rule of law, bolsters judicial systems, and improves transparency.
  13. Transnational criminal organizations are self-perpetuating associations of individuals that operate to obtain power, influence, monetary and/or commercial gains, wholly or in part by illegal means. These organizations protect their activities through a pattern of corruption and/or violence or protect their illegal activities through a transnational organizational structure and the exploitation of transnational commerce or communication mechanisms.

Transnational criminal networks are not only expanding operations, but they are also diversifying activities, creating a convergence of threats that has become more complex, volatile, and destabilizing. These networks also threaten US interests by forging alliances with corrupt elements of national governments and using the power and influence of those elements to further their criminal activities. In some cases, national governments exploit these relationships to further their interests to the detriment of the US.

  1. The convergence of illicit networks continues to grow as global sanctions affect the ability of terrorist organizations and insurgencies to raise funds to conduct their operations.
  2. Although drug trafficking still represents the most lucrative illicit activity in the world, other criminal activities, particularly human and arms trafficking, have also expanded. As a consequence, international criminal organizations have gone global; drug trafficking organizations linked to the Revolutionary Armed Forces of Colombia, for example, have agents in West Africa.
  3. As the power and influence of these organizations has grown, their ability to undermine, corrode, and destabilize governments has increased. The links forged between these criminal groups, terrorist movements, and insurgencies have resulted in a new type of threat: ever-evolving networks that exploit permissive OEs and the seams and gaps in policy and application of unified action to conduct their criminal, violent, and politically motivated activities. Threat networks adapt their structures and activities faster than countries can combat their illicit activities. In some instances, illicit networks are now running criminalized states.

 

Drawing the necessary distinctions and differentiations [between coexistence, cooperation, and convergence] allows the necessary planning to begin in order to deal with the matter, not only in the Sahel, but across the globe:

By knowing your enemies, you can find out what it is they want. Once you know what they want, you can decide whether to deny it to them and thereby demonstrate the futility of their tactics, give it to them, or negotiate and give them a part of it in order to cause them to end their campaign. By knowing your enemies, you can make an assessment not just of their motives but also their capabilities and of the caliber of their leaders and their organizations.

It is often said that knowledge is power. However, in isolation knowledge does not enable us to understand the problem or situation. Situational awareness and analysis is required for comprehension, while comprehension and judgment is required for understanding. It is this understanding that equips decision makers with the insight and foresight required to make effective decisions.

Extract from Alda, E., and Sala, J. L., Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region. Stability: International Journal of Security and Development, 10 September 2014.

 

APPENDIX C

COUNTERING THREAT NETWORKS IN THE MARITIME DOMAIN

  1. Overview

The maritime domain connects a myriad of geographically dispersed nodes of friendly, neutral, and threat networks, and serves as the primary conduit for nearly all global commerce. The immense size, dynamic environments, and legal complexities of this domain create significant challenges to establishing effective maritime governance in many regions of the world.

APPENDIX D

IDENTITY ACTIVITIES SUPPORT TO COUNTERING THREAT NETWORK OPERATIONS

  1. Identity activities are a collection of functions and actions that recognize and differentiate one person from another to support decision making. Identity activities include the collection of identity attributes and physical materials and their processing and exploitation.
  2. Identity attributes are the biometric, biographical, behavioral, and reputational data collected during encounters with an individual and across all intelligence disciplines that can be used alone or with other data to identify an individual. The processing and analysis of these identity attributes results in the identification of individuals, groups, networks, or populations of interest and facilitates the development of I2 products (a notional identity record is sketched after the following list) that allow an operational commander to:

(1) Identify previously unknown threat identities.

(2) Positively link identity information, with a high degree of certainty, to a specific human actor.

(3) Reveal the actor’s pattern of life and connect the actor to other persons, places, materials, or events.

(4) Characterize the actor’s associates’ potential level of threat to US interests.
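As a rough illustration of how the four attribute types named above might be carried and fused, the following sketch defines a simple identity record in Python; the field names, the matching rule, and all values are hypothetical and greatly simplified relative to operational I2 holdings.

# Minimal sketch of an identity record holding the four attribute types
# (biometric, biographical, behavioral, reputational); values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class IdentityRecord:
    record_id: str
    biometric: dict = field(default_factory=dict)     # e.g., fingerprint/iris template references
    biographical: dict = field(default_factory=dict)  # e.g., name variants, date of birth
    behavioral: dict = field(default_factory=dict)    # e.g., pattern-of-life observations
    reputational: dict = field(default_factory=dict)  # e.g., reported associations
    encounters: list = field(default_factory=list)    # where/when the attributes were collected

def attributes_match(a: IdentityRecord, b: IdentityRecord) -> bool:
    # Naive illustration of fusing two encounters: treat a shared biometric
    # reference as a positive link to the same human actor.
    shared = set(a.biometric.values()) & set(b.biometric.values())
    return bool(shared)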

  1. I2 fuses identity attributes and other information and intelligence associated with those attributes collected across all disciplines. I2 and DOD law enforcement criminal intelligence products are crucial to commanders’, staffs’, and components’ ability to identify and select specific threat individuals as targets, associate them with the means to create desired effects, and support the JFC’s operational objectives.
  2. Identity Activities Considerations
  3. Identity activities leverage enabling intelligence activities to help identify threat actors by connecting individuals to other persons, places, events, or materials, analyzing patterns of life, and characterizing capability and intent to harm US interests.
  4. The joint force J-2 is normally responsible for production of I2 within the CCMD.

(1) I2 products are normally developed through the JIPOE process and provide detailed information about threat activity identities in the OE. All-source analysis, coupled with identity information, significantly enhances understanding of the location of threat actors and provides detailed information about threat activity and potential high-threat areas within the OE. I2 products enable improved force protection, targeted operations, enhanced intelligence collection, and coordinated planning.

  1. Development of I2 requires coordination throughout the USG and PNs, and may necessitate an intelligence federation agreement. During crises, joint forces may also garner support from the intelligence community through intelligence federation.
  2. Identity Activities at the Strategic, Operational, and Tactical Levels
  3. At the strategic level, identity activities are dependent on interagency and PN information and intelligence sharing, collaboration, and decentralized approaches to gain identity information and intelligence, provide analyses, and support vetting of the status (friendly, adversary, neutral, or unknown) of individuals outside the JFC’s area of operations who could have an impact on the JFC’s missions and objectives.
  4. At the operational level, identity activities employ collaborative and decentralized approaches blending technical capabilities and analytic abilities to provide identification and vetting of individuals within the AOR.
  5. At the tactical level, identity information obtained via identity activities continues to support the unveiling of anonymity. Collection and analysis of identity-related data help tactical commanders further understand the OE and decide on the appropriate COAs with regard to individuals operating within it; as an example, identity information often forms the basis for targeting packages. In major combat operations, I2 products help provide the identities of individuals moving about the operational area who are conducting direct attacks on combat forces, providing intelligence for the enemy, and/or disrupting logistic operations.
  6. US Special Operations Command and partners currently deploy land-based exploitation analysis centers to rapidly process and exploit biometric data, documents, electronic media, and other material to support I2 operations and gain greater situational awareness of threats.
  7. Policy and Legal Considerations for Identity Activities Support to Countering Threat Networks
  8. The authorities to collect, store, share, and use identity data will vary depending upon the AOR and the PNs involved in the CTN activities. Different countries have strict legal restrictions on the collection and use of personally identifiable information, and the JFC may need separate bilateral and/or multinational agreements to alleviate partners’ privacy concerns.
  9. Socio-cultural considerations also may vary depending upon the AOR. In some cultures, for example, a female subject’s biometric data may need to be collected by a female. In other cultures, facial photography may be the preferred biometric collection methodology so as not to cross sociocultural boundaries.
  10. Evidence-based operations and support to the rule of law should be considered when providing identity data to HN law enforcement and judicial systems.

The prosecution of individuals, networks, and criminals relies on identity data. However, prior to providing identity data to HN law enforcement and judicial systems, commanders should consult their staff judge advocate or legal advisor.

APPENDIX E

EXPLOITATION IN SUPPORT OF COUNTERING THREAT NETWORKS

1. Exploitation and the Joint Force

  1. One of the major challenges confronting the joint force is the accurate identification of the threat network’s key personnel, critical functions, and sources of supply. Threat networks often go to extraordinary lengths to protect critical information about the identity of their members and the physical signatures of their operations. These networks leave behind an extraordinary amount of potentially useful information in the form of equipment, documents, and even materials recovered from captured personnel. This information can lead to a deeper understanding of the threat network’s nodes, links, and functions and assists in continuous analysis and mapping of the network. If the friendly force has the ability to collect and analyze the materials found in the OE, then they can gain the insights needed to cause significant damage to the threat network’s operations. Exploitation provides a means to match individuals to events, places, devices, weapons, related paraphernalia, or contraband as part of a network attack.
  2. Conflicts in Iraq and Afghanistan have witnessed a paradigm shift in how the US military’s intelligence community supports the immediate intelligence needs of the deployed force and the type of information that can be derived from analysis of equipment, materials, documents, and personnel encountered on the battlefield. To meet the challenges posed by threat networks in an irregular warfare environment, the US military formed a deployable, multidisciplinary exploitation capability designed to provide immediate feedback on the tactical and operational relevance of threat equipment, materials, documents, and personnel encountered by the force. This expeditionary capability is modular, scalable, and includes collection, technical, and forensic exploitation and analytical capabilities linked to the national labs and the intelligence enterprise.
  3. Exploitation is accomplished through a combination of forward deployed and reachback resources to support the commander’s operational requirements.
  4. Exploitation employs a wide array of enabling capabilities and interagency resources, from forward deployed experts to small cells or teams providing scientific or technical support, or interagency or partner laboratories, and centers of excellence providing real-time support via reachback. Exploitation activities require detailed planning, flexible execution, and continuous assessment. Exploitation is designed to provide:

(1) Support to targeting, which occurs as a result of technical and forensic exploitation of recovered materials used to identify participants in the activity and provide organizational insights that are targetable.

(2) Support to component and material sourcing, tracking, and supply chain interdiction uses exploitation techniques to determine the origin, design, construction methods, components, and precursors of threat weapons in order to identify where the materials originated, the activities of the threat’s logistical networks, and the local supply sources.

(3) Support to prosecution is accomplished when the results of the exploitation link individuals to illicit activities. When supporting law enforcement activities, recovered materials are handled with a chain of custody that tracks materials through the progressive stages of exploitation (a notional custody record is sketched after this list). The materials can be used to support the detainment and prosecution of captured insurgents or to associate suspected perpetrators with a hostile act at a later time.

(4) Support to force protection, including identifying threat TTP and weapons capabilities that defeat friendly countermeasures, such as jamming devices and armor.

(5) Identification of signature characteristics derived from threat weapon fabrication and employment methods that can aid in cuing collection assets.
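Support to prosecution in (3) above depends on a chain of custody that follows each recovered item through the stages of exploitation. The sketch below is a minimal Python illustration of such a record; the item, stages, and handlers are hypothetical placeholders.

# Minimal sketch of a chain-of-custody record tracking a recovered item
# through progressive stages of exploitation; names and stages are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class CustodyTransfer:
    timestamp: datetime
    released_by: str
    received_by: str
    stage: str  # e.g., "point of collection", "expeditionary lab", "reachback lab"

@dataclass
class RecoveredItem:
    item_id: str
    description: str
    point_of_collection: str
    transfers: list = field(default_factory=list)

    def transfer(self, released_by: str, received_by: str, stage: str) -> None:
        # Each handoff is appended so the full custody history is preserved.
        self.transfers.append(CustodyTransfer(datetime.now(), released_by, received_by, stage))

device = RecoveredItem("ITEM-0001", "cell phone", "objective X (placeholder location)")
device.transfer("site exploitation team", "WIT examiner", "tactical exploitation")
device.transfer("WIT examiner", "expeditionary lab", "operational-level exploitation")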

  1. Tactical exploitation delivers preliminary assessments and information about the weapons employed and the people who employed them.

Operational-level exploitation can be conducted by deployed labs and provides detailed forensic and technical analysis of captured materials. When combined with all-source intelligence reporting, it supports detailed analysis of threat networks to inform subsequent targeting activities. In an irregular warfare environment, where the mission and time permit, commanders should routinely employ forensics-trained collection capabilities (explosive ordnance disposal [EOD] unit, weapons intelligence team [WIT], etc.) in their overall ground operations to take advantage of battlefield opportunities.

(1) Tactical exploitation begins at the point of collection. The point of collection includes turnover of material from HN government or civilian personnel, material and information discovered during a maritime interception operation, cache discovery, raid, IED incident, post-blast site, etc.

(2) Operational-level exploitation employs technical and forensic examination techniques of collected data and material and is conducted by highly trained examiners in expeditionary or reachback exploitation facilities.

  1. Strategic exploitation is designed to inform theater- and national-level decision makers. A commander’s strategic exploitation assets may include forward deployed or reachback joint captured materiel exploitation centers and labs capable of conducting formally accredited and/or highly sophisticated exploitation techniques. These assets can respond to theater strategic intelligence requirements and, when very specialized capabilities are leveraged, provide support to national requirements.

Strategic exploitation is designed to support national strategy and policy development. Strategic requirements usually involve targeting of high-value or high-priority actors, force protection design improvement programs, and source interdiction programs designed to deny the adversary externally furnished resources.

  1. Exploitation activities are designed to provide a progressively detailed multidisciplinary analysis of materials recovered from the OE. From the initial tactical evaluation at the point of collection, to the operational forward deployed technical/forensic field laboratory and subsequent evaluation, the enterprise is designed to provide a timely, multidisciplinary analysis to support decision making at all echelons. Exploitation capabilities vary in scope and complexity, span peacetime to wartime activities, and can be applied during all military operations.
  2. Collection and Exploitation
  3. An integrated and synchronized effort to detect, collect, process, and analyze information, materials, or people and disseminate the resulting facts provides the JFC with information or actionable intelligence.

Collection also includes the documentation of contextual information and material observed at the incident site or objective. All the activities vital to collection and exploitation are relevant to identity activities as many of the operations and efforts are capable of providing identity attributes used for developing I2 products.

(1) Site Exploitation. The JFC may employ hasty or deliberate site exploitation during operations to recognize, collect, process, preserve, and analyze information, personnel, and/or material found during the conduct of operations. Based on the type of operation, commanders and staffs assess the probability that forces will encounter a site capable of yielding information or intelligence and plan for the integration of various capabilities to conduct site exploitation.

(2) Expeditionary Exploitation Capabilities. Operational-level expeditionary labs are the focal point for the theater’s exploitation and analysis activities that provide the commander with the time-sensitive information needed to shape the OE.

(a) Technical Exploitation. Technical exploitation includes electronic and mechanical examination and analysis of collected material. This process provides information regarding weapon design, material, and suitability of mechanical and electronic components of explosive devices, improvised weapons, and associated components.

  1. Electronic Exploitation. Electronic exploitation at the operational level is limited and may require strategic-level exploitation available at reachback labs or forward deployed labs.
  2. Mechanical Exploitation. Mechanical exploitation of material (mechanical components of conventional and improvised weapons and their associated platforms) focuses on devices incorporating manual mechanisms: combinations of physical parts that transmit forces, motion, or energy.

(b) Forensic Exploitation. Forensic exploitation applies scientific techniques to link people with locations, events, and material that aid the development of targeting, interrogation, and HN/PN prosecution support.

(c) DOMEX. DOMEX consists of three exploitation techniques: document exploitation, cellular exploitation, and media exploitation. Documents, cell phones, and media recovered during collection activities, when properly processed and exploited, provide valuable information, such as adversary plans and intentions, force locations, equipment capabilities, and logistical status. Exploitable materials include paper documents (such as maps, sketches, and letters), cell phones and smart phones, and digitally recorded media such as hard drives and thumb drives.

  1. Supporting the Intelligence Process
  2. Within their operational areas, commanders are concerned with identifying the members of and systematically targeting the threat network, addressing threats to force protection, denying the threat network access to resources, and supporting the rule of law. Information derived from exploitation can provide specific information and actionable intelligence to address these concerns. Exploitation reporting provides specific information to help answer the CCIRs. Exploitation analysis is also used to inform the intelligence process by identifying specific individuals, locations, and activities that are of interest to the commander
  3. Exploitation products may inform follow-on intelligence collection and analysis activities. Exploitation products can facilitate a more refined analysis of the threat network’s likely activities and, when conducted during shape and deter phases, typically enabled by HN, interagency and/or international partners, can help identify threats and likely countermeasures in advance of any combat operations.
  4. Exploitation Organization and Planning
  5. A wide variety of Service and national exploitation resources and capabilities are available to support forward deployed forces. These deployable resources are generally scalable and can make extensive use of reachback to provide analytical support. The JIPOE product will serve as a basis for determining the size and mix of capabilities that will be required to support initial operations.
  6. J-2E. During the planning process, the JFC should consider the need for exploitation support to help fulfill the requirements for information about the OE, identify potential threats to US forces, and understand the capabilities and capacity of the adversary network.

The J-2E (when organized) establishes policies and procedures for the coordination and synchronization of the exploitation of captured threat materials. The J-2E will:

(1) Evaluate and establish the commander’s collection and exploitation requirements for deployed laboratory systems or material evacuation procedures based on the mission, its objective and duration, the threat faced, military geographic factors, and the authorities granted to collect and process captured material.

(2) Ensure broad discoverability, accessibility, and usability of exploitation information at all levels to support force protection, targeting, material sourcing, signature characterization of enemy activities, and the provision of materials collected, transported, and accounted for with the fidelity necessary to support prosecution of captured insurgents or terrorists.

(3) Prepare collection plans for a subordinate exploitation task force responsible for finding and recovering battlefield materials.

(4) Provide direction to forces to ensure that the initial site collection and exploitation activities are conducted to meet the commanders’ requirements and address critical information and intelligence gaps.

(5) Ensure that exploitation enablers are integrated and synchronized at all levels and their activities support collection on behalf of the commander’s priority intelligence requirements. Planning includes actions to:

(a) Identify units and responsibilities.

(b) Ensure exploitation requirements are included in the collection plan.

(c) Define priorities and standard operating procedures for materiel recovery and exploitation.

(d) Coordinate transportation for materiel.

(e) Establish technical intelligence points of contact at all levels to expedite dissemination.

(f) Identify required augmentation skill sets and additional enablers.

  1. Exploitation Task Force

(1) As an alternative to using the JFC’s staff to manage exploitation activities, the JFC can establish an exploitation task force, integrating tactical-level and operational-level organizations and streamlining communications under a single headquarters whose total focus is on the exploitation effort. The task force construct is useful when a large number of exploitation assets have been deployed to support large-scale, long-duration operations. The organization and employment of the task force will depend on the mission, the threat, and the available enabling forces.

The combination of collection assets with specialized exploitation enablers allows the task force to conduct focused threat network analysis and targeting, provide direct support packages of exploitation enablers to higher headquarters, and organize and conduct unit-level training programs.

(a) Site Exploitation Teams. These units are task-organized teams specifically detailed and trained at the tactical level. The mission of site exploitation teams is to conduct systematic discovery activities and search operations, and properly identify, document, and preserve the point of collection and its material.

(b) EOD Teams. EOD personnel have special training and equipment to render safe explosive ordnance and IEDs, make intelligence reports on such items or components, and supervise the safe removal thereof.

(c) WITs. WITs are task-organized teams, often with organic EOD support, that exploit a site of intelligence value by collecting IED-related material; performing tactical questioning; collecting forensic materials, including latent fingerprints; preserving and documenting DOMEX, including cell phones and other electronic media; providing in-depth documentation of the site, including sketches and photographs; evaluating the effects of threat weapons systems; and preparing material for evacuation.

(d) CBRN Response Teams. When WMD or hazardous CBRN precursors may be present, CBRN response teams can be detailed to supervise the site exploitation. CBRN response team personnel are trained to properly recognize, preserve, neutralize, and collect hazardous CBRN or explosive materials.

(f) DOMEX. DOMEX support is scalable and ranges from a single liaison officer, utilizing reachback for full analysis, to a fully staffed joint document exploitation center for primary document exploitation.

APPENDIX F

THE CLANDESTINE CHARACTERISTICS OF THREAT NETWORKS

1. Introduction

  1. Maintaining regional stability continues to pose a major challenge for the US and its PNs. The threat takes many forms, from locally based groups to mutually supporting and regionally focused transnational criminal organizations, terrorist groups, and insurgencies that leverage global transportation and information networks to communicate and to obtain and transfer resources (money, material, and personnel). In the long term, for the threat to win it must survive, and to survive it must be organized and operate so that no single strike will cripple the organization. Today’s threat networks are characterized by flexible organizational structures, adaptable and dynamic operational capabilities, a highly nuanced understanding of the OE, and a clear vision of their long-term goals.
  2. While much has been made of the revolution brought about by technology and its impact on a threat network’s organization and operational methods, the impacts have been evolutionary rather than revolutionary. The threat network is well aware that information technology, while increasing the rate and volume of information exchange, has also increased the risk to clandestine operations due to the increase in electromagnetic and cyberspace signatures, which puts these types of communications at risk of detection by governments, like the USG, that can apply technological advantage to identify, monitor, track, and exploit these signatures.
  3. When it comes to designing a resilient and adaptable organizational structure, every successful threat network over time adopted the traditional clandestine cellular network architecture. This type of network architecture provides a means of survival in form through a cellular or compartmentalized structure; and in function through the use of clandestine arts or tradecraft to minimize the signature of the organization—all based on the logic that the primary concern is that the movement needs to survive to attain its political goals.
  4. When faced with a major threat or the loss of a key leader, clandestine cellular networks contain the damage and simply morph and adapt to new leaders, just as they morph and adapt to new terrain and OEs. In some cases the networks are degraded, in others they are strengthened, but in both cases, they continue to fight on, winning by not losing. It is this “logic” of clandestine cellular networks—winning by not losing—that ensures their survival.
  5. CTN activities that focus on high-value or highly connected individuals (organizational facilitators) may achieve short-term gains but the cellular nature of most threat networks allows them to quickly replace individual losses and contain the damage. Operations should isolate the threat network from the friendly or neutral populations, regularly deny them the resources required to operate, and eliminate leadership at all levels so friendly forces can deny them the freedom of movement and freedom of action the threat needs to survive.
  6. Principles of Clandestine Cellular Networks

The survival of clandestine portions of a threat network organization rests on six principles: compartmentalization, resilience, low signature, purposeful growth, operational risk, and organizational learning. These six principles can help friendly forces to analyze current network theories, doctrine, and clandestine adversaries to identify strengths and weaknesses.

  1. Compartmentalization comes both from form and function and protects the organization by reducing the number of individuals with direct knowledge of other members, plans, and operations. Compartmentalization provides the proverbial wall to counter friendly exploitation and intelligence-driven operations.
  2. Resilience comes from organizational form and functional compartmentalization and not only minimizes damage due to counter network strikes on the network, but also provides a functional method for reconnecting the network around individuals (nodes) that have been killed or captured.
  3. Low signature is a functional component based on the application of clandestine art or tradecraft that minimizes the signature of communications, movement, inter-network interaction, and operations of the network.
  4. Purposeful growth highlights the fact that these types of networks do not grow in accordance with modern information network theories, but grow with purpose or aim: to gain access to a target, sanctuary, population, intelligence, or resources. Purposeful growth primarily relies on clandestine means of recruiting new members based on the overall purpose of the network, branch, or cell.
  5. Operational risk balances the acceptable risk for conducting operations to gain or maintain influence, relevance, or reach to attain the political goals and long-term survival of the movement. Operations increase the observable signature of the organization, threatening its survival. Clandestine cellular networks of the underground develop overt fighting forces (rural and urban) to interact with the population, the government, the international community, and third-party countries conducting FID in support of the government forces. This interaction invariably leads to increased observable signature and counter-network operations against the network’s overt elements. However, as long as the clandestine core is protected, these overt elements are considered expendable and quickly replaced.
  6. Organizational learning is the fundamental need to learn and adapt the clandestine cellular network to the current situation, the threat environment, overall organizational goals, relationships with external support mechanisms, the changing TTP of the counter network forces, new technologies, and the physical dimension, human factors, and cyberspace.
  7. Organization of Clandestine Cellular Networks
  8. Clandestine elements of an insurgency use form—organization and structure—for compartmentalization, relying on the basic network building block, the compartmented cell, from which the term cellular is derived. The cell size can differ significantly from one to any number of members, as can the type of interaction within the cell, depending on the cell’s function. There are generally three basic functions—operations, intelligence, and support. The cell members may not know each other, such as in an intelligence cell, with the cell leader being the only connection between the other members. In more active operational cells, such as a direct-action cell, all the members are connected, know each other, perhaps are friends or are related, and conduct military-style operations that require large amounts of communications. Two or more cells linked to a common leader are referred to as branches of a larger network. For example, operational cells may be supported by an intelligence cell or logistics cell. Building upon the branch is the network, which is made up of multiple compartmentalized branches, generally following a pattern of intelligence (and counterintelligence) branches, operational branches (direct action or urban guerrilla cells), support branches (logistics and other operational enablers like propaganda support), and overt political branches or shadow governments.
  9. The key concept for organizational form is compartmentalization of the clandestine cellular network (i.e., each element is isolated or separated from the others). Structural compartmentalization takes two forms: the cut-out, a method of ensuring that opponents are unable to directly link two individuals together; and lack of knowledge, in which no personal information is known about other cell members, so the capture of one does not put the others at risk. In any cell where the members must interact directly, such as in an operational or support cell, the entire cell may be detained, but if the structural compartmentalization is sound, then the counter-network forces will not be able to exploit the cell to target other cells, the leaders of the branch, or the overall network. (A notional sketch of this compartmentalized structure appears after this list.)
  10. The basic model for a cellular clandestine network consists of the underground, the auxiliary, and the fighters. The underground and auxiliary are the primary components that utilize clandestine cellular networks; the fighters are the more visible overt action arm of the insurgency (Figure F-2). The underground and auxiliary cannot be easily replaced, while the fighters can suffer devastating defeats (Fallujah in 2004) without threatening the existence of the organization.
  11. The underground is responsible for the overall command, control, communications, information, subversion, intelligence, and covert direct action operations, such as terrorism, sabotage, and intimidation. The original members and core of the threat network generally operate as members of the underground. The underground cadres develop the organization, ideally building it from the start as a clandestine cellular network to ensure its secrecy, low- signature, and survivability. The underground members operate as the overarching leaders, leaders of the organization cells, training cadres, and/or subject matter experts for specialized skills, such as propaganda, bomb making, or communications.
  12. The auxiliary is the clandestine support personnel, directed by the underground, who provide logistics, operational support, and intelligence collection for the underground and the fighters. The auxiliary members use their normal daily routines as cover for their activities in support of the threat, to include freedom of movement to transport materials and personnel, specialized skills (electricians, doctors, engineers, etc.), or specialized capabilities for operations. These individuals may hold jobs, for example as local security force members, doctors and nurses, shipping and transportation specialists, or businesspeople, that provide them with a reason for security forces to allow them freedom of movement even in a crisis.
  13. The fighters are the most visible and the most easily replaced members of the threat network. While their size and armament will vary, they use a more traditional hierarchical organizational structure. The fighters are normally used for the high-risk missions where casualties are expected and can be recovered from in short order.
  14. The Elements of a Clandestine Cellular Network
  15. A growing insurgency/terrorist/criminal movement is a complex undertaking that must be carefully managed if its critical functions are to be performed successfully. Using the clandestine cellular model, the organization’s leader and staff will manage a number of subordinate functional networks
  16. These functional networks will be organized into small cells, usually arranged so that only the cell leader knows the next connection in the organization. As the organization grows, the number of required interactions will increase, but the number of actively participating members in those multicellular interactions will remain limited. Unfortunately, the individual’s increased activity also increases the risk of detection.
  17. Clandestine cellular networks are largely decentralized for execution at the tactical level, but maintain a traditional or decentralized hierarchical form above the tactical level. The core leadership may be an individual, with numerous deputies, which can limit the success of decapitation strikes. Alternatively, the core leadership could be in the form of a centralized group of core individuals, which may act as a centralized committee. The core could also be a coordinating committee of like-minded threat leaders who coordinate their efforts, actions, and effects for an overall goal, while still maintaining their own agendas.
  18. To maintain a low signature necessary for survival, network leaders give maximum latitude for tactical decision making to cell leaders. This allows them to maintain tactical agility and freedom of action based on local conditions. The key consideration of the underground leader, with regard to risk versus maintaining influence, is to expose only the periphery tactical elements to direct contact with the counter-network forces.
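The compartmentalized cell-and-branch structure described earlier in this list (items 8 and 9) can be illustrated with a small graph. The sketch below, assuming the open-source Python networkx library, uses hypothetical node names to show how cut-outs through cell leaders keep the capture of one cell from exposing the rest of the network.

# Minimal sketch, using networkx, of a compartmentalized cellular network:
# cell members connect only inside their cell, and each cell reaches the
# branch leader only through its cell leader (a cut-out). Names are hypothetical.
import networkx as nx

net = nx.Graph()

# Intelligence cell: members report only to the cell leader, not to each other.
net.add_edges_from([("intel_leader", "intel_member_1"), ("intel_leader", "intel_member_2")])

# Direct-action cell: members know each other and their leader.
net.add_edges_from([
    ("action_leader", "action_member_1"),
    ("action_leader", "action_member_2"),
    ("action_member_1", "action_member_2"),
])

# Cut-outs: only the cell leaders link upward to the branch leader.
net.add_edges_from([("branch_leader", "intel_leader"), ("branch_leader", "action_leader")])

# Capturing the entire direct-action cell removes it without exposing other cells.
compromised = ["action_leader", "action_member_1", "action_member_2"]
remaining = net.copy()
remaining.remove_nodes_from(compromised)
print(sorted(remaining.nodes()))   # the intelligence branch survives intact
print(nx.is_connected(remaining))  # True: the rest of the network still functions

Removing the entire direct-action cell leaves the remaining network connected through the branch leader, which is the resilience, or "winning by not losing," that this appendix describes.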

LASTING SUCCESS

For the counter-network operator, the goal is to conduct activities that are designed to break the compartmentalization and facilitate the need for direct communication with members of other cells in the same branch or members of other networks. By maintaining pressure and leveraging the effects of a multi-nodal attack, friendly forces could potentially cause a catastrophic “cascading failure” and the disruption, neutralization, or destruction of multiple cells, branches, or even the entire network. Defeat of a network’s overt force is only a setback. Lasting success can only come with securing the relevant population, isolating the network from external support, and identifying and neutralizing the hard-core members of the network.

Various Sources

  1. Even with rigorous compartmentalization and internal discipline, there are structural weaknesses that can be detected and exploited. These structural points of weakness include the interaction between the underground and the auxiliary, the interaction between the auxiliary and the fighters, and the interaction with external networks (transnational criminal, terrorist, other insurgents) that may not have the same level of compartmentalization.
  2. Network Descriptors
  3. Networks and cells can be described as open or closed. Understanding whether a network or cell is open or closed helps the intelligence analysts and planners to determine the scale, vulnerability, and purpose behind the network or cell. An open network is one that is growing purposefully, recruiting members to gain strength, access to targeted areas or support populations, or to replace losses. Given proper compartmentalization, open networks provide an extra security buffer for the core movement leaders by adding layers to the organization between the core and the periphery cells. Since the periphery cells on the outer edge of the network have higher signatures than the core, they draw the friendly force’s attention and are more readily identified by the friendly force, protecting the core.
  4. Closed cells or networks have limited or no growth, having been hand selected or directed to limit growth in order to minimize signature, chances of compromise, and to focus on a specific mission. While open networks are focused on purposeful growth, the opposite is true of the closed networks that are purposefully compartmentalized to a certain size based on their operational purpose. This is especially pertinent for use as terrorist cells, made up of generally closed, non-growing networks of specially selected or close-knit individuals. Closed networks have an advantage in operational security since the membership is fixed and consists of trusted individuals. Compartmentalizing a closed network protects the network from infiltration, but once penetrated, it can be defeated in detail.

APPENDIX G

SOCIAL NETWORK ANALYSIS

  1. In military operations, maps have always played an important role as an invaluable tool for better understanding the OE. Understanding the physical terrain, however, is often secondary to understanding the people. Identifying and understanding the human factors is critical. The ability to map, visualize, and measure threat, friendly, and neutral networks to identify key nodes enables commanders at the strategic, operational, and tactical levels to better optimize solutions and develop the plan.
  2. Planners should understand the environment made up of human relationships and connections established by cultural, tribal, religious, and familial demographics and affiliations.
  3. By using advanced analytical methodologies such as SNA, analysts can map out, visualize, and understand the human factors.
  4. Social Network Analysis
  5. Overview

(1) SNA is a method for identifying key nodes in a network based on four types of centrality (i.e., degree, closeness, betweenness, and eigenvector) using network diagrams. SNA focuses on the relationships (links or ties) between people, groups, or organizations (called nodes or actors). SNA does this by providing tools and quantitative measures that help to map out, visualize, and understand networks, the relationships between people (the human factors), and how those networks and relationships may be influenced.

Network diagrams used within SNA, referred to as sociograms, are graphical depictions of the social community structure as a network with ties between nodes (see Figure G-1). Like physical terrain maps of the earth, sociograms can have differing levels of detail.

(2) SNA provides a deeper understanding of the visualization of people within social networks and assists in ranking potential ability to influence or be influenced by those social networks. SNA provides an understanding of the organizational dynamics of a social network, which can be used for detailed analysis of a network to determine options on how to best influence, coerce, support, attack, or exploit them. In particular, it allows planners to identify and portray the details of a network structure, illuminate key players, highlight cohesive cells or subgroups within the network and identify individuals or groups that can or cannot be influenced, supported, manipulated, or coerced.

(3) SNA helps organize the informality of elusive and evolving networks. SNA techniques highlight the structure of a previously unobserved association by focusing on the preexisting relationships and ties that bind groups together. By focusing on roles, organizational positions, and prominent or influential actors, planners can analyze the structure of an organization, how the group functions, how members are influenced, how power is exerted, and how resources are exchanged. These factors allow the joint force to plan and execute operations that will result in desired effects on the targeted network.

(4) The physical, cultural, and social aspects of human factors involve complicated dynamics among people and organizations. These dynamics cannot be fully understood using traditional link analysis alone. SNA is distinguished from traditional, variable-based analysis that typically focuses on a person’s attributes such as gender, race, age, height, income, and religious affiliation.

While personal attributes remain fairly constant, social groups, affiliations or relationships constantly evolve. For example, a person can be a storeowner (business social network), a father (kinship social network), a member of the local government (political social network), a member of a church (religious social network), and be part of the insurgent underground (resistance social network). A person’s position in each social network matters more than their unchanging personal attributes. Their behavior in each respective network changes according to their role, influence, and authority in the network.
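The multiple, overlapping affiliations described above can be represented as a multi-relational graph in which each tie is labeled with the social network (layer) it belongs to. The following is a minimal sketch assuming the open-source Python networkx library; the people, groups, and relationship labels are hypothetical.

# Minimal sketch, using networkx, of one person occupying positions in
# several overlapping social networks; names and labels are hypothetical.
import networkx as nx

social = nx.MultiGraph()  # MultiGraph allows several differently labeled ties between the same pair

person = "storeowner_A"
social.add_edge(person, "wholesaler_B", relation="business")
social.add_edge(person, "son_C", relation="kinship")
social.add_edge(person, "district_council", relation="political")
social.add_edge(person, "mosque_committee", relation="religious")
social.add_edge(person, "underground_cell_leader", relation="resistance")

# Group the person's ties by the network (layer) each tie belongs to.
layers = {}
for _, neighbor, data in social.edges(person, data=True):
    layers.setdefault(data["relation"], []).append(neighbor)
print(layers)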

(1) Metrics. Analysts draw on a number of metrics and methods to better understand human networks. Common SNA metrics are broadly categorized into three metric families: network topology, actor centrality, and brokers and bridges.

(a) Network Topology. Network topology is used to measure the overall network structure, such as its size, shape, density, cohesion, and levels of centralization and hierarchy (see Figure G-2). These types of measures can provide an understanding of a network’s ability to remain resilient and perform tasks efficiently. Network topology provides the planner with an understanding of how the network is organized and structured.

(b) Centrality. Indicators of centrality identify the key nodes within a network diagram, which may include identifying influential person(s) in a social network. Identifying centrality helps locate key nodes in the network, illuminate potential leaders, and lead analysts to potential brokers within the network (see Figure G-3). Centrality also measures and ranks people and organizations within a network based on how central they are to that network.

  1. Degree Centrality. The degree centrality of a node is based purely on the number of nodes it is linked to and the strength of those nodes. It is measured by a simple count of the number of direct links one node has to other nodes within the network. While this number is meaningless on its own, higher levels of degree centrality compared to other nodes may indicate an individual with a higher degree of power or influence within the network.

Nodes with a low degree of centrality (few direct links) are sometimes described as peripheral nodes (e.g., nodes I and J in Figure G-3). Although they have relatively low centrality scores, peripheral nodes can nevertheless play significant roles as resource gatherers or sources of fresh information from outside the main network.

  2. Closeness Centrality. Closeness centrality reflects the length of a node’s shortest paths to all other nodes in the network. It is measured by summing the number of links or steps from a node to every other node in the network, with the lowest totals indicating the highest levels of closeness centrality. Nodes with a high level of closeness centrality have the closest association with every other node in the network, which affords them the best ability to directly or indirectly reach the largest number of nodes over the shortest paths.

Closeness is calculated by adding the number of hops between a node and all other nodes in the network.

  3. Betweenness Centrality. Betweenness centrality is present when a node serves as the only connection between small clusters (e.g., cliques, cells) or individual nodes and the larger network. It is not measured by counting as degree and closeness centrality are; it is either present or not present (i.e., yes or no). Betweenness centrality allows a node to monitor and control the exchanges between the smaller and larger networks it connects, essentially acting as a broker for information between sections of the network.
  4. Eigenvector Centrality. Eigenvector centrality measures the degree to which a node is linked to well-connected nodes and is often used as a measure of a node’s influence in a network. It assumes that a greater number of, or stronger, ties to more central or influential nodes increases a node’s importance. It essentially determines the “prestige” of a node based on how many other important nodes it is linked to. A node with a high eigenvector centrality is more closely linked to critical hubs.
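To make these four measures concrete, the following is a minimal sketch using the open-source NetworkX library on a notional network; the node letters and links are illustrative and do not correspond to Figure G-3. Note that NetworkX reports betweenness as a continuous score rather than the present/absent characterization used above.

```python
# Minimal sketch: the four centrality measures discussed above, computed with
# the open-source NetworkX library on a notional (illustrative) network.
import networkx as nx

# Notional undirected network; letters are placeholder actors.
G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"),
    ("D", "E"),              # D and E bridge the A-B-C cluster to F and G
    ("E", "F"), ("E", "G"),
])

degree      = nx.degree_centrality(G)       # share of direct links per node
closeness   = nx.closeness_centrality(G)    # based on shortest-path distances to all others
betweenness = nx.betweenness_centrality(G)  # how often a node sits on shortest paths (broker role)
eigenvector = nx.eigenvector_centrality(G)  # weight of a node's ties to other well-connected nodes

for node in sorted(G.nodes()):
    print(f"{node}: degree={degree[node]:.2f}, closeness={closeness[node]:.2f}, "
          f"betweenness={betweenness[node]:.2f}, eigenvector={eigenvector[node]:.2f}")
```

In this toy network, nodes D and E tie for the highest betweenness score because every path between the two clusters runs through both of them, which illustrates the broker role discussed under Brokers and Bridges below.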

(c) Brokers and Bridges. Brokerage metrics use a combination of methods to identify either nodes (brokers) that occupy strategic positions within the network or the relationships (bridges) connecting disparate parts of the network (see Figure G-4). Brokers have the potential to function as intermediaries or liaisons in a network and can control the flow of information or resources. Nodes that lie on the periphery of a network (displaying low centrality scores) are often connected to other networks that have not been mapped. This helps the planner identify gaps in their analysis and areas that still need mapping to gain a full understanding of the OE. These outer nodes provide an opportunity to gather fresh information not currently available.

  1. Density

Network density examines how well connected a network is by comparing the number of links present to the total number of links possible, which provides an understanding of how sparse or connected the network is. A dense network may be able to exert more influence than a sparse one. Members of a highly interconnected network face fewer individual constraints, may be less likely to rely on others as information brokers, may be better positioned to participate in activities or be closer to leadership, and may therefore be able to exert more influence.

  2. Centralization. Centralization helps provide insight into whether the network is centralized around a few key personnel or organizations or decentralized among many cells or subgroups. A network centralized around one key person may allow planners to focus on that person to influence the entire network.
  3. Density and centralization can inform whether an adversary force has a centralized hierarchy or command structure, whether it is operating as a core C2 network with multiple, relatively autonomous hubs, or whether it is a group of ad hoc, decentralized resistance elements with very little interconnectedness or cohesive C2. Centralization metrics can also identify the most central people or organizations within the resistance.
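As a companion sketch, the density and centralization calculations described above can be expressed as follows. The network is again notional; NetworkX provides the density function, while the degree-centralization formula used here is the common Freeman-style normalization (1.0 for a pure hub-and-spoke network, 0.0 for a perfectly even one), which is an assumption rather than a formula prescribed by this appendix.

```python
# Minimal sketch: network density and a simple (Freeman-style) degree
# centralization score for a notional network. Illustrative only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("B", "C")])

# Density: links present divided by links possible.
density = nx.density(G)

# Degree centralization: how strongly the network is organized around its
# most-connected node (1.0 for a pure star, 0.0 for a fully even network).
degrees = dict(G.degree())
n = G.number_of_nodes()
max_deg = max(degrees.values())
centralization = sum(max_deg - d for d in degrees.values()) / ((n - 1) * (n - 2))

print(f"density={density:.2f}, degree centralization={centralization:.2f}")
```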

Although hierarchical charts are helpful, they do not convey the underlying power brokers and key players who are influential within a social network, and they can miss the brokers who control the flow of information or resources throughout the network.

  1. Interrelationship of Networks

The JFC should identify the key stakeholders, key players, and power brokers in a potential operational area.

  1. People generally identify themselves as members of one or more cohesive networks. Networks may form due to common associations between individuals that may include tribes, sub-tribes, clans, family, religious affiliations, clubs, political organizations, and professional or hobby associations. SNA helps examine the individual networks that exist within the population that are critical to understanding the human dynamics in the OE based upon known relationships.
  2. Various networks within the OE are interrelated due to individuals’ association with multiple networks. SNA provides the staff with an understanding of nodes within a single network, but it can be expanded to analyze interrelated networks. This may give the joint staff an indication of the potential association, level of connectivity, and potential influence of a single node on one or more interrelated networks. This aspect is essential for CTN, since a threat network’s relationships with other networks must be considered by the joint staff during planning and targeting.
  3. Other Considerations
  4. Collection. Two types of data need to be collected to conduct SNA: relational data (such as family/kinship ties, business ties, trust ties, financial ties, communication ties, grievance ties, and political ties) and attribute data that captures important individual characteristics (tribal affiliation, job title, address, leadership position, etc.). Collecting, updating, and verifying this information should be coordinated across the whole of the USG.

(1) Ties (or links) are the relationships between actors (nodes) (see Figure G-5). By focusing on the preexisting relationships and ties that bind a group together, SNA helps provide an understanding of the structure of the network and helps identify previously unobserved associations among the actors within that network. To draw an accurate picture of a network, planners need to identify ties among its members. These ties are characterized by strong bonds formed over time through connections such as family, friendship, or organizational association.

(2) Capturing the relational data of social ties between people and organizations requires collection, recording, and visualization. The joint force must collect specific types of data in a structured format with standardized data definitions across the force in order to visualize the human factors in systematic sociograms.
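The paragraph above calls for structured, standardized relational and attribute data. The following is a minimal sketch of what such records might look like and how they could be loaded into a graph for sociogram visualization; the field names, tie types, and actor labels are illustrative assumptions, not a prescribed joint data standard.

```python
# Minimal sketch: capturing relational (tie) and attribute data in a
# standardized record format and loading it into a graph for visualization.
# Field names, tie-type labels, and actors are illustrative, not a standard.
import networkx as nx

# Relational data: (actor_1, actor_2, tie_type, source_report)
ties = [
    ("Actor_A", "Actor_B", "kinship",       "RPT-001"),
    ("Actor_A", "Actor_C", "financial",     "RPT-002"),
    ("Actor_B", "Actor_D", "communication", "RPT-003"),
]

# Attribute data: actor -> individual characteristics
attributes = {
    "Actor_A": {"tribe": "X", "role": "facilitator"},
    "Actor_B": {"tribe": "X", "role": "courier"},
    "Actor_C": {"tribe": "Y", "role": "financier"},
    "Actor_D": {"tribe": "Y", "role": "unknown"},
}

G = nx.Graph()
for a, b, tie_type, source in ties:
    G.add_edge(a, b, tie_type=tie_type, source=source)
nx.set_node_attributes(G, attributes)

# The resulting graph can be drawn as a sociogram or exported for analysis.
print(G.edges(data=True))
print(G.nodes(data=True))
```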

  1. Analysis

(1) Sociograms identify influential people and organizations as well as information gaps in order to prioritize collection efforts. The social structure depicted in a sociogram implies an inherent flow of information and resources through a network. Roles and positions identify prominent or influential individuals, structures of organizations, and how the networks function. Sociograms can model the human dynamics between participants in a network, highlight how to influence the network, identify who exhibits power within the network, and illustrate resource exchanges within the network. Sociograms can also provide a description and picture of the regime networks, or neutral entities, and uncover how the population is segmented.

(2) Sociograms are representations of the actual network and may not provide a complete or true depiction of it. This could result from incomplete information or from including inappropriate, or omitting relevant, ties or actors. In addition, networks are constantly changing, and a sociogram is only as good as the last time it was updated.

  1. Challenges. Collecting human factors data to support SNA requires a concerted effort over an extended period. Data can derive from traditional intelligence collection capabilities, historical data, open-source information, exploitation of social media, known relationships, and direct observation. This human factors data should be codified through a standardized data coding process defined by a common reference. Entering the data is a process of identifying, extracting, and categorizing raw data to facilitate analysis. To ensure that analysts analyze the sociocultural relational data they collect in a standardized way, the JFC can produce a reference that provides standardized definitions of relational terms. Standardization ensures that when analysts or planners exchange analytic products or data, their analysis has the same meaning to all parties involved, avoiding confusion or misrepresentation in the data analysis. Standardized data definitions ensure consistency at all levels; facilitate the transfer of data and analytic products among differing organizations; and allow multiple organizations to produce interoperable products concurrently.

APPENDIX H

REFERENCES

The development of JP 3-25 is based on the following primary references:

  1. General
    a. Title 10, United States Code.
    b. Strategy to Combat Transnational Organized Crime.
    c. Executive Order 12333, United States Intelligence Activities.
  2. Department of Defense Publications
    a. Department of Defense Counternarcotics and Global Threats Strategy.
    b. Department of Defense Directive (DODD) 2000.19E, Joint Improvised Explosive Device Defeat Organization.
    c. DODD 3300.03, DOD Document and Media Exploitation (DOMEX).
    d. DODD 5205.14, DOD Counter Threat Finance (CTF) Policy.
    e. DODD 5205.15E, DOD Forensic Enterprise (DFE).
    f. DODD 5240.01, DOD Intelligence Activities.
    g. DODD 8521.01E, Department of Defense Biometrics.
    h. Department of Defense Instruction (DODI) O-3300.04, Defense Biometric Enabled Intelligence (BEI) and Forensic Enabled Intelligence (FEI).
    i. DODI 5200.08, Security of DOD Installations and Resources and the DOD Physical Security Review Board (PSRB).
  3. Chairman of the Joint Chiefs of Staff Publications
    a. JP 2-01.3, Joint Intelligence Preparation of the Operational Environment.
    b. JP 3-05, Special Operations.
    c. JP 3-07.2, Antiterrorism.
    d. JP 3-07.3, Peace Operations.
    e. JP 3-07.4, Counterdrug Operations.
    f. JP 3-08, Interorganizational Cooperation.
    g. JP 3-13, Information Operations.
    h. JP 3-13.2, Military Information Support Operations.
    i. JP 3-15.1, Counter-Improvised Explosive Device Operations.
    j. JP 3-16, Multinational Operations.
    k. JP 3-20, Security Cooperation.
    l. JP 3-22, Foreign Internal Defense.
    m. JP 3-24, Counterinsurgency.
    n. JP 3-26, Counterterrorism.
    o. JP 3-40, Countering Weapons of Mass Destruction.
    p. JP 3-57, Civil-Military Operations.
    q. JP 3-60, Joint Targeting.
    r. JP 5-0, Joint Planning.
    s. Joint Doctrine Note 1-16, Identity Activities.
  4. Multi-Service Publication

ATP 5-0.3/MCRP 5-1C/NTTP 5-01.3/AFTTP 3-2.87, Multi-Service Tactics, Techniques, and Procedures for Operation Assessment.

  5. Other Publications
    a. The Haqqani Network: Pursuing Feuds Under the Guise of Jihad? Major Lars W. Lilleby, Norwegian Army, CTX Journal, Vol. 3, No. 4, November 2013.
    b. Foreign Disaster Response, Military Review, November-December 2011.
    c. US Military Response to the 2010 Haiti Earthquake, RAND Arroyo Center, 2013.
    d. DOD Support to Foreign Disaster Relief, July 13, 2011.
    e. United Nations Stabilization Mission in Haiti website.
    f. Kirk Meyer, Former Director of the Afghan Threat Finance Cell, CTX Journal, Vol. 4, No. 3, August 2014.
    g. Networks and Netwars: The Future of Terror, Crime, and Militancy, edited by John Arquilla and David Ronfeldt.
    h. General Martin Dempsey, Chairman of the Joint Chiefs of Staff, “The Bend of Power,” Foreign Policy, 25 July 2014.
    i. Alda, E., and Sala, J. L., “Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region,” Stability: International Journal of Security and Development, Vol. 3, No. 1, Article 27, pp. 1-9.
    j. International Maritime Bureau Piracy Reporting Center.

Notes on Structured Analytic Techniques for Intelligence Analysis

Selections from Structured Analytic Techniques for Intelligence Analysis by Richards J. Heuer Jr. and Randolph H. Pherson.

In contrast to the bipolar dynamics of the Cold War, this new world is strewn with failing states, proliferation dangers, regional crises, rising powers, and dangerous nonstate actors—all at play against a backdrop of exponential change in fields as diverse as population and technology.

To be sure, there are still precious secrets that intelligence collection must uncover—things that are knowable and discoverable. But this world is equally rich in mysteries having to do more with the future direction of events and the intentions of key actors. Such things are rarely illuminated by a single piece of secret intelligence data; they are necessarily subjects for analysis.

Intelligence analysis differs from similar fields of intellectual endeavor. Intelligence analysts must traverse a minefield of potential errors.

First, they typically must begin addressing their subjects where others have left off; in most cases the questions they get are about what happens next, not about what is known.

Second, they cannot be deterred by lack of evidence. As Heuer pointed out in his earlier work, the essence of the analysts’ challenge is having to deal with ambiguous situations in which information is never complete and arrives only incrementally—but with constant pressure to arrive at conclusions.

Third, analysts must frequently deal with an adversary that actively seeks to deny them the information they need and is often working hard to deceive them.

Finally, analysts, for all of these reasons, live with a high degree of risk—essentially the risk of being wrong and thereby contributing to ill-informed policy decisions.

The risks inherent in intelligence analysis can never be eliminated, but one way to minimize them is through more structured and disciplined thinking about thinking.

The key point is that all analysts should do something to test the conclusions they advance. To be sure, expert judgment and intuition have their place—and are often the foundational elements of sound analysis— but analysts are likely to minimize error to the degree they can make their underlying logic explicit in the ways these techniques demand.

Just as intelligence analysis has seldom been more important, the stakes in the policy process it informs have rarely been higher. Intelligence analysts these days therefore have a special calling, and they owe it to themselves and to those they serve to do everything possible to challenge their own thinking and to rigorously test their conclusions.

Preface: Origin and Purpose

 

Structured analysis involves a step-by-step process that externalizes an individual analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be shared, built on, and critiqued by others. When combined with the intuitive judgment of subject matter experts, such a structured and transparent process can significantly reduce the risk of analytic error.

Each step in a technique prompts relevant discussion and, typically, this generates more divergent information and more new ideas than any unstructured group process. The step-by-step process of structured analytic techniques structures the interaction among analysts in a small analytic group or team in a way that helps to avoid the multiple pitfalls and pathologies that often degrade group or team performance.

By defining the domain of structured analytic techniques, providing a manual for using and testing these techniques, and outlining procedures for evaluating and validating these techniques, this book lays the groundwork for continuing improvement of how analysis is done, both within the Intelligence Community and beyond.

Audience for This Book

 

This book is for practitioners, managers, teachers, and students of intelligence analysis and foreign affairs in both the public and private sectors. Managers, commanders, action officers, planners, and policymakers who depend upon input from analysts to help them achieve their goals should also find it useful. Academics who specialize in qualitative methods for dealing with unstructured data will be interested in this pathbreaking book as well.

 

Techniques such as Analysis of Competing Hypotheses, Key Assumptions Check, and Quadrant Crunching developed specifically for intelligence analysis are now being adapted for use in other fields. New techniques that the authors developed to fill gaps in what is currently available for intelligence analysis are being published for the first time in this book and have broad applicability.

Introduction and Overview

 

Analysis in the U.S. Intelligence Community is currently in a transitional stage, evolving from a mental activity done predominantly by a sole analyst to a collaborative team or group activity.

The driving forces behind this transition include the following:

  • The growing complexity of international issues and the consequent requirement for multidisciplinary input to most analytic products.

  • The need to share more information more quickly across organizational boundaries.
  • The dispersion of expertise, especially as the boundaries between analysts, collectors, and operators become blurred.
  • The need to identify and evaluate the validity of alternative mental models.

This transition is being enabled by advances in technology, such as the Intelligence Community’s Intellipedia and new A-Space collaborative network, “communities of interest,” the mushrooming growth of social networking practices among the upcoming generation of analysts, and the increasing use of structured analytic techniques that guide the interaction among analysts.

 

OUR VISION

 

Structured analysis is a mechanism by which internal thought processes are externalized in a systematic and transparent manner so that they can be shared, built on, and easily critiqued by others. Each technique leaves a trail that other analysts and managers can follow to see the basis for an analytic judgment.

This transparency also helps ensure that differences of opinion among analysts are heard and seriously considered early in the analytic process. Analysts have told us that this is one of the most valuable benefits of any structured technique.

Structured analysis helps analysts ensure that their analytic framework—the foundation upon which they form their analytic judgments—is as solid as possible. By helping break down a specific analytic problem into its component parts and specifying a step-by-step process for handling these parts, structured analytic techniques help to organize the amorphous mass of data with which most analysts must contend. This is the basis for the terms structured analysis and structured analytic techniques. Such techniques make an analyst’s thinking more open and available for review and critique than the traditional approach to analysis. It is this transparency that enables the effective communication at the working level that is essential for interoffice and interagency collaboration.

Structured analytic techniques in general, however, do form a methodology—a set of principles and procedures for qualitative analysis of the kinds of uncertainties that intelligence analysts must deal with on a daily basis.

There is, of course, no formula for always getting it right, but the use of structured techniques can reduce the frequency and severity of error. These techniques can help analysts mitigate the proven cognitive limitations, side-step some of the known analytic pitfalls, and explicitly confront the problems associated with unquestioned mental models (also known as mindsets). They help analysts think more rigorously about an analytic problem and ensure that preconceptions and assumptions are not taken for granted but are explicitly examined and tested.

Intelligence analysts, like humans in general, do not start with an empty mind. Whenever people try to make sense of events, they begin with some body of experience or knowledge that gives them a certain perspective or viewpoint which we are calling a mental model. Intelligence specialists who are expert in their field have well developed mental models.

If an analyst’s mindset is seen as the problem, one tends to blame the analyst for being inflexible or outdated in his or her thinking.

1.2 THE VALUE OF TEAM ANALYSIS

 

Our vision for the future of intelligence analysis dovetails with that of the Director of National Intelligence’s Vision 2015, in which intelligence analysis increasingly becomes a collaborative enterprise, with the focus of collaboration shifting “away from coordination of draft products toward regular discussion of data and hypotheses early in the research phase.”

 

Analysts have also found that use of a structured process helps to depersonalize arguments when there are differences of opinion. Fortunately, today’s technology and social networking programs make structured collaboration much easier than it has ever been in the past.

1.3 THE ANALYST’S TASK

 

We developed a taxonomy for a core group of fifty techniques that appear to be the most useful for the Intelligence Community, but also useful for those engaged in related analytic pursuits in academia, business, law enforcement, finance, and medicine. This list, however, is not static.

 

It is expected to increase or decrease as new techniques are identified and others are tested and found wanting. Some training programs may have a need to boil down their list of techniques to the essentials required for one particular type of analysis.

 

Willingness to share in a collaborative environment is also conditioned by the sensitivity of the information that one is working with.

 

1.4 HISTORY OF STRUCTURED ANALYTIC TECHNIQUES

 

The first use of the term “Structured Analytic Techniques” in the Intelligence Community was in 2005. However, the origin of the concept goes back to the 1980s, when the eminent teacher of intelligence analysis, Jack Davis, first began teaching and writing about what he called “alternative analysis.” The term referred to the evaluation of alternative explanations or hypotheses, better understanding of other cultures, and analyzing events from the other country’s point of view rather than by mirror imaging.

 

The techniques were organized into three categories: diagnostic techniques, contrarian techniques, and imagination techniques.

It proposes that most analysis be done in two phases: a divergent analysis or creative phase with broad participation by a social network using a wiki, followed by a convergent analysis phase and final report done by a small analytic team.

1.6 AGENDA FOR THE FUTURE

A principal theme of this book is that structured analytic techniques facilitate effective collaboration among analysts. These techniques guide the dialogue among analysts with common interests as they share evidence and alternative perspectives on the meaning and significance of this evidence. Just as these techniques provide structure to our individual thought processes, they also structure the interaction of analysts within a small team or group. Because structured techniques are designed to generate and evaluate divergent information and new ideas, they can help avoid the common pitfalls and pathologies that commonly beset other small group processes. In other words, structured analytic techniques are enablers of collaboration.

2 Building a Taxonomy

A taxonomy is a classification of all elements of the domain of information or knowledge. It defines a domain by identifying, naming, and categorizing all the various objects in this space. The objects are organized into related groups based on some factor common to each object in the group.

The word taxonomy comes from the Greek taxis meaning arrangement, division, or order and nomos meaning law.

 

Development of a taxonomy is an important step in organizing knowledge and furthering the development of any particular discipline.

 

“a taxonomy differentiates domains by specifying the scope of inquiry, codifying naming conventions, identifying areas of interest, helping to set research priorities, and often leading to new theories. Taxonomies are signposts, indicating what is known and what has yet to be discovered.”

 

To the best of our knowledge, a taxonomy of analytic methods for intelligence analysis has not previously been developed, although taxonomies have been developed to classify research methods used in forecasting, operations research, information systems, visualization tools, electronic commerce, knowledge elicitation, and cognitive task analysis.

 

After examining taxonomies of methods used in other fields, we found that there is no single right way to organize a taxonomy—only different ways that are more or less useful in achieving a specified goal. In this case, our goal is to gain a better understanding of the domain of structured analytic techniques, investigate how these techniques contribute to providing a better analytic product, and consider how they relate to the needs of analysts. The objective has been to identify various techniques that are currently available, identify or develop additional potentially useful techniques, and help analysts compare and select the best technique for solving any specific analytic problem. Standardization of terminology for structured analytic techniques will facilitate collaboration across agency boundaries during the use of these techniques.

 

 

2.1 FOUR CATEGORIES OF ANALYTIC METHODS

 

The taxonomy described here posits four functionally distinct methodological approaches to intelligence analysis. These approaches are distinguished by the nature of the analytic methods used, the type of quantification if any, the type of data that are available, and the type of training that is expected or required. Although each method is distinct, the borders between them can be blurry.

 

* Expert judgment: This is the traditional way most intelligence analysis has been done. When done well, expert judgment combines subject matter expertise with critical thinking. Evidentiary reasoning, historical method, case study method, and reasoning by analogy are included in the expert judgment category. The key characteristic that distinguishes expert judgment from structured analysis is that it is usually an individual effort in which the reasoning remains largely in the mind of the individual analyst until it is written down in a draft report. Training in this type of analysis is generally provided through postgraduate education, especially in the social sciences and liberal arts, and often along with some country or language expertise.

 

* Structured analysis: Each structured analytic technique involves a step-by-step process that externalizes the analyst’s thinking in a manner that makes it readily apparent to others, thereby enabling it to be reviewed, discussed, and critiqued piece by piece, or step by step. For this reason, structured analysis often becomes a collaborative effort in which the transparency of the analytic process exposes participating analysts to divergent or conflicting perspectives. This type of analysis is believed to mitigate the adverse impact on analysis of known cognitive limitations and pitfalls. Frequently used techniques include Structured Brainstorming, Scenarios, Indicators, Analysis of Competing Hypotheses, and Key Assumptions Check. Structured techniques can be used by analysts who have not been trained in statistics, advanced mathematics, or the hard sciences. For most analysts, training in structured analytic techniques is obtained only within the Intelligence Community.

 

* Quantitative methods using expert-generated data: Analysts often lack the empirical data needed to analyze an intelligence problem. In the absence of empirical data, many methods are designed to use quantitative data generated by expert opinion, especially subjective probability judgments. Special procedures are used to elicit these judgments. This category includes methods such as Bayesian inference, dynamic modeling, and simulation. Training in the use of these methods is provided through graduate education in fields such as mathematics, information science, operations research, business, or the sciences.

 

* Quantitative methods using empirical data: Quantifiable empirical data are so different from expert- generated data that the methods and types of problems the data are used to analyze are also quite different. Econometric modeling is one common example of this method. Empirical data are collected by various types of sensors and are used, for example, in analysis of weapons systems. Training is generally obtained through graduate education in statistics, economics, or the hard sciences.

 

 

2.2 TAXONOMY OF STRUCTURED ANALYTIC TECHNIQUES

Structured techniques have been used by Intelligence Community methodology specialists and some analysts in selected specialties for many years, but the broad and general use of these techniques by the average analyst is a relatively new approach to intelligence analysis. The driving forces behind the development and use of these techniques are:

(1) an increased appreciation of cognitive limitations and pitfalls that make intelligence analysis so difficult

(2) prominent intelligence failures that have prompted reexamination of how intelligence analysis is generated

(3) policy support and technical support for interagency collaboration from the Office of the Director of National Intelligence

(4) a desire by policymakers who receive analysis for greater transparency about how its conclusions were reached.

 

There are eight categories of structured analytic techniques, which are listed below:

Decomposition and Visualization (chapter 4)
Idea Generation (chapter 5)
Scenarios and Indicators (chapter 6)
Hypothesis Generation and Testing (chapter 7)
Assessment of Cause and Effect (chapter 8)
Challenge Analysis (chapter 9)
Conflict Management (chapter 10)
Decision Support (chapter 11)

 

3 Criteria for Selecting Structured Techniques

 

Techniques that require a major project of the type usually outsourced to an outside expert or company are not included. Several interesting techniques that were recommended to us were not included for this reason. A number of techniques that tend to be used exclusively for a single type of analysis, such as tactical military, law enforcement, or business consulting, have not been included.

In this collection of techniques we build on work previously done in the Intelligence Community.

3.2 TECHNIQUES EVERY ANALYST SHOULD MASTER

 

The average intelligence analyst is not expected to know how to use every technique in this book. All analysts should, however, understand the functions performed by various types of techniques and recognize the analytic circumstances in which it is advisable to use them.

 

Structured Brainstorming: Perhaps the most commonly used technique, Structured Brainstorming is a simple exercise often employed at the beginning of an analytic project to elicit relevant information or insight from a small group of knowledgeable analysts. The group’s goal might be to identify a list of such things as relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, potential responses by an adversary or competitor to some action or situation, or, for law enforcement, potential suspects or avenues of investigation.

 

Cross-Impact Matrix: If the brainstorming identifies a list of relevant variables, driving forces, or key players, the next step should be to create a Cross-Impact Matrix and use it as an aid to help the group visualize and then discuss the relationship between each pair of variables, driving forces, or players. This is a learning exercise that enables a team or group to develop a common base of knowledge about, for example, each variable and how it relates to each other variable.

 

Key Assumptions Check: Requires analysts to explicitly list and question the most important working assumptions underlying their analysis. Any explanation of current events or estimate of future developments requires the interpretation of incomplete, ambiguous, or potentially deceptive evidence. To fill in the gaps, analysts typically make assumptions about such things as another country’s intentions or capabilities, the way governmental processes usually work in that country, the relative strength of political forces, the trustworthiness or accuracy of key sources, the validity of previous analyses on the same subject, or the presence or absence of relevant changes in the context in which the activity is occurring.

 

Indicators: Indicators are observable or potentially observable actions or events that are monitored to detect or evaluate change over time. For example, indicators might be used to measure changes toward an undesirable condition such as political instability, a humanitarian crisis, or an impending attack. They can also point toward a desirable condition such as economic or democratic reform. The special value of indicators is that they create an awareness that prepares an analyst’s mind to recognize the earliest signs of significant change that might otherwise be overlooked. Developing an effective set of indicators is more difficult than it might seem. The Indicator Validator helps analysts assess the diagnosticity of their indicators.

 

Analysis of Competing Hypotheses: This technique requires analysts to start with a full set of plausible hypotheses rather than with a single most likely hypothesis. Analysts then take each item of evidence, one at a time, and judge its consistency or inconsistency with each hypothesis. The idea is to refute hypotheses rather than confirm them. The most likely hypothesis is the one with the least evidence against it, not the most evidence for it. This process applies a key element of scientific method to intelligence analysis.
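As a minimal sketch of the bookkeeping behind an ACH matrix (not the authors' own software or worksheet), the following scores each item of evidence against each hypothesis and then ranks hypotheses by how much evidence is inconsistent with them. The hypotheses, evidence labels, and ratings are purely illustrative; the technique as taught relies on analyst judgment, not automated scoring.

```python
# Minimal sketch of an Analysis of Competing Hypotheses (ACH) matrix.
# Scores: "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
# Hypotheses, evidence items, and ratings are purely illustrative.
hypotheses = ["H1", "H2", "H3"]
matrix = {
    "E1: reporting on troop movements": {"H1": "C", "H2": "I", "H3": "N"},
    "E2: intercepted communications":   {"H1": "N", "H2": "I", "H3": "C"},
    "E3: open-source statements":       {"H1": "I", "H2": "I", "H3": "C"},
}

# The key diagnostic is the count of inconsistencies: the most likely
# hypothesis is the one with the LEAST evidence against it.
inconsistency = {h: sum(1 for ratings in matrix.values() if ratings[h] == "I")
                 for h in hypotheses}

for h, count in sorted(inconsistency.items(), key=lambda kv: kv[1]):
    print(f"{h}: {count} item(s) of inconsistent evidence")
```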

 

Premortem Analysis and Structured Self-Critique:  These two easy-to-use techniques enable a small team of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions. Premortem Analysis uses a form of reframing, in which restating the question or problem from another perspective enables one to see it in a different way and come up with different answers.

 

With Structured Self-Critique, analysts respond to a list of questions about a variety of factors, including sources of uncertainty, analytic processes that were used, critical assumptions, diagnosticity of evidence, information gaps, and the potential for deception. Rigorous use of both of these techniques can help prevent a future need for a postmortem.

 

What If? Analysis: one imagines that an unexpected event has happened and then, with the benefit of “hindsight,” analyzes how it could have happened and considers the potential consequences. This type of exercise creates an awareness that prepares the analyst’s mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency.

 

3.3 COMMON ERRORS IN SELECTING TECHNIQUES

 

The value and accuracy of an analytic product depends in part upon selection of the most appropriate technique or combination of techniques for doing the analysis… Lacking effective guidance, analysts are vulnerable to various influences:

 

  • College or graduate-school recipe: Analysts are inclined to use the tools they learned in college or graduate school whether or not those tools are the best application for the different context of intelligence analysis.
  • Tool rut: Analysts are inclined to use whatever tool they already know or have readily available. Psychologist Abraham Maslow observed that “if the only tool you have is a hammer, it is tempting to treat everything as if it were a nail.”
  • Convenience shopping: The analyst, guided by the evidence that happens to be available, uses a method appropriate for that evidence, rather than seeking out the evidence that is really needed to address the intelligence issue. In other words, the evidence may sometimes drive the technique selection instead of the analytic need driving the evidence collection.
  • Time constraints: Analysts can easily be overwhelmed by their in-boxes and the myriad tasks they have to perform in addition to their analytic workload. The temptation is to avoid techniques that would “take too much time.”

 

3.4 ONE PROJECT, MULTIPLE TECHNIQUES

 

Multiple techniques can also be used to check the accuracy and increase the confidence in an analytic conclusion. Research shows that forecasting accuracy is increased by combining “forecasts derived from methods that differ substantially and draw from different sources of information.”

 

3.5 STRUCTURED TECHNIQUE SELECTION GUIDE

Analysts must be able, with minimal effort, to identify and learn how to use those techniques that best meet their needs and fit their styles.

 

4 Decomposition and Visualization

 

Two common approaches for coping with this limitation of our working memory are decomposition —that is, breaking down the problem or issue into its component parts so that each part can be considered separately—and visualization—placing all the parts on paper or on a computer screen in some organized manner designed to facilitate understanding how the various parts interrelate.

 

Any technique that gets a complex thought process out of the analyst’s head and onto paper or the computer screen can be helpful. The use of even a simple technique such as a checklist can be extremely productive.

 

Analysis is breaking information down into its component parts. Anything that has parts also has a structure that relates these parts to each other. One of the first steps in doing analysis is to determine an appropriate structure for the analytic problem, so that one can then identify the various parts and begin assembling information on them. Because there are many different kinds of analytic problems, there are also many different ways to structure analysis.

—Richards J. Heuer Jr., The Psychology of Intelligence Analysis (1999).

 

Overview of Techniques

 

Getting Started Checklist, Customer Checklist, and Issue Redefinition are three techniques that can be combined to help analysts launch a new project. If an analyst can start off in the right direction and avoid having to change course later, a lot of time can be saved.

 

Chronologies and Timelines are used to organize data on events or actions. They are used whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps.

 

Sorting is a basic technique for organizing data in a manner that often yields new insights. Sorting is effective when information elements can be broken out into categories or subcategories for comparison by using a computer program, such as a spreadsheet.

 

Ranking, Scoring, and Prioritizing provide how-to guidance on three different ranking techniques—Ranked Voting, Paired Comparison, and Weighted Ranking. Combining an idea-generation technique such as Structured Brainstorming with a ranking technique is an effective way for an analyst to start a new project or to provide a foundation for interoffice or interagency collaboration. The idea-generation technique is used to develop lists of driving forces, variables to be considered, indicators, possible scenarios, important players, historical precedents, sources of information, questions to be answered, and so forth. Such lists are even more useful once they are ranked, scored, or prioritized to determine which items are most important, most useful, most likely, or should be at the top of the priority list.

 

Matrices are generic analytic tools for sorting and organizing data in a manner that facilitates comparison and analysis. They are used to analyze the relationships among any two sets of variables or the interrelationships among a single set of variables. A Matrix consists of a grid with as many cells as needed for whatever problem is being analyzed. Some analytic topics or problems that use a matrix occur so frequently that they are described in this book as separate techniques.

 

Network Analysis is used extensively by counterterrorism, counternarcotics, counterproliferation, law enforcement, and military analysts to identify and monitor individuals who may be involved in illegal activity. Social Network Analysis is used to map and analyze relationships among people, groups, organizations, computers, Web sites, and any other information processing entities.

 

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest.

 

Process Maps and Gantt Charts were developed for use in industry and the military, but they are also useful to intelligence analysts. Process Mapping is a technique for identifying and diagramming each step in a complex process; this includes event flow charts, activity flow charts, and commodity flow charts.

 

4.1 GETTING STARTED CHECKLIST

 

The Method

Analysts should answer several questions at the beginning of a new project. The following is our list of suggested starter questions, but there is no single best way to begin. Other lists can be equally effective.

 

  • What has prompted the need for the analysis? For example, was it a news report, a new intelligence report, a new development, a perception of change, or a customer request?
  • What is the key intelligence question that needs to be answered?
  • Why is this issue important, and how can analysis make a meaningful contribution?
  • Has your organization or any other organization ever answered this question or a similar question before, and, if so, what was said? To whom was this analysis delivered, and what has changed since that time?
  • Who are the principal customers? Are these customers’ needs well understood? If not, try to gain a better understanding of their needs and the style of reporting they like.
  • Are there other stakeholders who would have an interest in the answer to this question? Who might see the issue from a different perspective and prefer that a different question be answered? Consider meeting with others who see the question from a different perspective.
  • From your first impressions, what are all the possible answers to this question? For example, what alternative explanations or outcomes should be considered before making an analytic judgment on the issue?
  • Depending on responses to the previous questions, consider rewording the key intelligence question. Consider adding subordinate or supplemental questions.
  • Generate a list of potential sources or streams of reporting to be explored.
  • Reach out and tap the experience and expertise of analysts in other offices or organizations—both within and outside the government—who are knowledgeable on this topic. For example, call a meeting or conduct a virtual meeting to brainstorm relevant evidence and to develop a list of alternative hypotheses, driving forces, key indicators, or important players.

 

4.2 CUSTOMER CHECKLIST

 

The Customer Checklist helps an analyst tailor the product to the needs of the principal customer for the analysis. When used appropriately, it ensures that the product is of maximum possible value to this customer.

 

The Method

Before preparing an outline or drafting a paper, ask the following questions:
  • Who is the key person for whom the product is being developed?
  • Will this product answer the question the customer asked or the question the customer should be asking? If necessary, clarify this before proceeding.
  • What is the most important message to give this customer?
  • How is the customer expected to use this information?
  • How much time does the customer have to digest this product?
  • What format would convey the information most effectively?
  • Is it possible to capture the essence in one or a few key graphics?
  • What classification is most appropriate for this product? Is it necessary to consider publishing the paper at more than one classification level?
  • What is the customer’s level of tolerance for technical language? How much detail would the customer expect? Can the details be provided in appendices or backup papers, graphics, notes, or pages?
  • Will any structured analytic technique be used? If so, should it be flagged in the product?
  • Would the customer expect you to reach out to other experts within or outside the Intelligence Community to tap their expertise in drafting this paper? If this has been done, how has the contribution of other experts been flagged in the product? In a footnote? In a source list?
  • To whom or to what source might the customer turn for other views on this topic? What data or analysis might others provide that could influence how the customer reacts to what is being prepared in this product?

 

 

4.3 ISSUE REDEFINITION

 

 

Many analytic projects start with an issue statement. What is the issue, why is it an issue, and how will it be addressed? Issue Redefinition is a technique for experimenting with different ways to define an issue. This is important, because seemingly small differences in how an issue is defined can have significant effects on the direction of the research.

 

When to Use It

Using Issue Redefinition at the beginning of a project can get you started off on the right foot. It may also be used at any point during the analytic process when a new hypothesis or critical new evidence is introduced. Issue Redefinition is particularly helpful in preventing “mission creep,” which results when analysts unwittingly take the direction of analysis away from the core intelligence question or issue at hand, often as a result of the complexity of the problem or a perceived lack of information.

 

Value Added

Proper issue identification can save a great deal of time and effort by forestalling unnecessary research and analysis on a poorly stated issue. Issues are often poorly presented when they are:

 

  • Solution driven (Where are the weapons of mass destruction in Iraq?)
  • Assumption driven (When China launches rockets into Taiwan, will the Taiwanese government collapse?)
  • Too broad or ambiguous (What is the status of Russia’s air defense system?)
  • Too narrow or misdirected (Who is voting for President Hugo Chávez in the election?)

 

The Method

 

* Rephrase: Redefine the issue without losing the original meaning. Review the results to see if they provide a better foundation upon which to conduct the research and assessment to gain the best answer. Example: Rephrase the original question, “How much of a role does Aung San Suu Kyi play in the ongoing unrest in Burma?” as, “How active is the National League for Democracy, headed by Aung San Suu Kyi, in the antigovernment riots in Burma?”

 

* Ask why? Ask a series of “why” or “how” questions about the issue definition. After receiving the first response, ask “why” to do that or “how” to do it. Keep asking such questions until you are satisfied that the real problem has emerged. This process is especially effective in generating possible alternative answers.

 

* Broaden the focus: Instead of focusing on only one piece of the puzzle, step back and look at several pieces together. What is the issue connected to? Example: The original question, “How corrupt is the Pakistani president?” leads to the question, “How corrupt is the Pakistani government as a whole?”

 

* Narrow the focus: Can you break down the issue further? Take the question and ask about the components that make up the problem. Example: The original question, “Will the European Union ratify a new constitution?” can be broken down to, “How do individual member states view the new European Union constitution?”

 

* Redirect the focus: What outside forces impinge on this issue? Is deception involved? Example: The original question, “What are the terrorist threats against the U.S. homeland?” is revised to, “What opportunities are there to interdict terrorist plans?”

 

* Turn 180 degrees: Turn the issue on its head. Is the issue the one asked or the opposite of it? Example: The original question, “How much of the ground capability of China’s People’s Liberation Army would be involved in an initial assault on Taiwan?” is rephrased as, “How much of the ground capability of China’s People’s Liberation Army would not be involved in the initial Taiwan assault?”

 

Relationship to Other Techniques

 

Issue Redefinition is often used simultaneously with the Getting Started Checklist and the Customer Checklist. The technique is also known as Issue Development, Problem Restatement, and Reframing the Question.

 

4.4 CHRONOLOGIES AND TIMELINES

 

When to Use It

Chronologies and timelines aid in organizing events or actions. Whenever it is important to understand the timing and sequence of relevant events or to identify key events and gaps, these techniques can be useful. The events may or may not have a cause-and-effect relationship.

 

Value Added

Chronologies and timelines aid in the identification of patterns and correlations among events. These techniques also allow you to relate seemingly disconnected events to the big picture to highlight or identify significant changes or to assist in the discovery of trends, developing issues, or anomalies. They can serve as a catch-all for raw data when the meaning of the data has not yet been identified. Multiple-level timelines allow analysts to track concurrent events that may have an effect on each other. Although timelines may be developed at the onset of an analytic task to ascertain the context of the activity to be analyzed, timelines and chronologies also may be used in postmortem intelligence studies to break down the intelligence reporting, find the causes for intelligence failures, and highlight significant events after an intelligence surprise.

 

When researching the problem, ensure that the relevant information is listed with the date or order in which it occurred. Make sure the data are properly referenced.
Review the chronology or timeline by asking the following questions.

  • What are the temporal distances between key events? If “lengthy,” what caused the delay? Are there missing pieces of data that may fill those gaps that should be collected?
  • Did the analyst overlook piece(s) of intelligence information that may have had an impact on or be related to the events?
  • Conversely, if events seem to have happened more rapidly than expected, or if not all events appear to be related, is it possible that the analyst has information related to multiple event timelines?
  • Does the timeline have all the critical events that are necessary for the outcome to occur?
  • When did the information become known to the analyst or a key player?
  • What are the intelligence gaps?
  • Are there any points along the timeline when the target is particularly vulnerable to U.S. intelligence collection activities or countermeasures?
  • What events outside this timeline could have influenced the activities?
  • If preparing a timeline, synopsize the data along a line, usually horizontal or vertical. Use the space on both sides of the line to highlight important analytic points. For example, place facts above the line and points of analysis or commentary below the line.
  • Alternatively, contrast the activities of different groups, organizations, or streams of information by placement above or below the line. If multiple actors are involved, you can use multiple lines, showing how and where they converge.
  • Look for relationships and patterns in the data connecting persons, places, organizations, and other activities. Identify gaps or unexplained time periods, and consider the implications of the absence of evidence. Prepare a summary chart detailing key events and key analytic points in an annotated timeline.
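As a minimal sketch of the mechanics behind a chronology, the following sorts dated events and flags unusually long gaps that might warrant further collection. The events, dates, and 60-day gap threshold are illustrative assumptions, not data from any case discussed here.

```python
# Minimal sketch: building a chronology by sorting dated events and flagging
# unusually long gaps that may indicate missing reporting. Dates, events, and
# the gap threshold are illustrative.
from datetime import date, timedelta

events = [
    (date(2016, 3, 14), "Meeting between key players reported"),
    (date(2016, 1, 2),  "Financial transfer observed"),
    (date(2016, 6, 30), "Shipment arrives at border crossing"),
]

chronology = sorted(events)            # order events by date
gap_threshold = timedelta(days=60)     # what counts as a "lengthy" gap

for (d1, e1), (d2, e2) in zip(chronology, chronology[1:]):
    print(f"{d1}  {e1}")
    if d2 - d1 > gap_threshold:
        print(f"    -> {d2 - d1} gap before next event: possible collection gap")
print(f"{chronology[-1][0]}  {chronology[-1][1]}")
```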

 

 

4.5 SORTING

 

When to Use It

Sorting is effective when information elements can be broken out into categories or subcategories for comparison with each other, most often by using a computer program, such as a spreadsheet. This technique is particularly effective during the initial data gathering and hypothesis generation phases of analysis, but you may also find sorting useful at other times.

Value Added

Sorting large amounts of data into relevant categories that are compared with each other can provide analysts with insights into trends, similarities, differences, or abnormalities of intelligence interest that otherwise would go unnoticed. When you are dealing with transactions data in particular (for example, communications intercepts or transfers of goods or money), it is very helpful to sort the data first.

 

The Method

Follow these steps:

* Review the categories of information to determine which category or combination of categories might show trends or an abnormality that would provide insight into the problem you are studying. Place the data into a spreadsheet or a database using as many fields (columns) as necessary to differentiate among the data types (dates, times, locations, people, activities, amounts, etc.). List each of the facts, pieces of information, or hypotheses involved in the problem that are relevant to your sorting schema. (Use paper, whiteboard, movable sticky notes, or other means for this.)

* Review the listed facts, information, or hypotheses in the database or spreadsheet to identify key fields that may allow you to uncover possible patterns or groupings. Those patterns or groupings then illustrate the schema categories and can be listed as header categories. For example, if an examination of terrorist activity shows that most attacks occur in hotels and restaurants but that the times of the attacks vary, “Location” is the main category, while “Date” and “Time” are secondary categories.

  • Group those items according to the sorting schema in the categories that were defined in step 1.
  • Choose a category and sort the data within that category. Look for any insights, trends, or oddities.

Good analysts notice trends; great analysts notice anomalies.

* Review (or ask others to review) the sorted facts, information, or hypotheses to see if there are alternative ways to sort them. List any alternative sorting schema for your problem. One of the most useful applications for this technique is to sort according to multiple schemas and examine results for correlations between data and categories. But remember that correlation is not the same as causation.
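A minimal sketch of this kind of sorting, assuming the data are held in a pandas DataFrame; the column names and records are illustrative, not a prescribed schema.

```python
# Minimal sketch: sorting and grouping transaction-style data to surface
# patterns, using pandas. Column names and records are illustrative.
import pandas as pd

records = pd.DataFrame([
    {"date": "2016-05-01", "time": "21:30", "location": "hotel",      "type": "attack"},
    {"date": "2016-05-09", "time": "13:15", "location": "restaurant", "type": "attack"},
    {"date": "2016-05-20", "time": "22:05", "location": "hotel",      "type": "attack"},
    {"date": "2016-06-02", "time": "09:45", "location": "checkpoint", "type": "attack"},
])

# Sort on the category of interest, then look for concentrations.
by_location = records.sort_values(["location", "date"])
counts = records.groupby("location").size().sort_values(ascending=False)

print(by_location)
print(counts)   # e.g., most incidents at hotels -> "location" is a key category
```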

 

Origins of This Technique

Sorting is a long-established procedure for organizing data. The description here is from Defense Intelligence Agency training materials.

 

 

4.6 RANKING, SCORING, PRIORITIZING

 

When to Use It

 

A ranking technique is appropriate whenever there are too many items to rank easily just by looking at the list; the ranking has significant consequences and must be done as accurately as possible; or it is useful to aggregate the opinions of a group of analysts.

 

Value Added

 

Combining an idea-generation technique with a ranking technique is an excellent way for an analyst to start a new project or to provide a foundation for inter-office or interagency collaboration. An idea-generation technique is often used to develop lists of such things as driving forces, variables to be considered, or important players. Such lists are more useful once they are ranked, scored, or prioritized.

 

Ranked Voting

In a Ranked Voting exercise, members of the group individually rank each item in order according to the member’s preference or what the member regards as the item’s importance.

 

Paired Comparison

Paired Comparison compares each item against every other item, and the analyst can assign a score to show how much more important or more preferable or more probable one item is than the others. This technique provides more than a simple ranking, as it shows the degree of importance or preference for each item. The list of items can then be ordered along a dimension, such as importance or preference, using an interval-type scale.

Follow these steps to use the technique:

  • List the items to be compared. Assign a letter to each item.
  • Create a table with the letters across the top and down the left side as in Figure 4.6a. The results of the comparison of each pair of items are marked in the cells of this table. Note the diagonal line of darker-colored cells. These cells are not used, as each item is never compared with itself. The cells below this diagonal line are not used because they would duplicate a comparison in the cells above the diagonal line. If you are working in a group, distribute a blank copy of this table to each participant.
  • Looking at the cells above the diagonal row of gray cells, compare the item in the row with the one in the column. For each cell, decide which of the two items is more important (or more preferable or more probable). Write the letter of the winner of this comparison in the cell, and score the degree of difference on a scale from 0 (no difference) to 3 (major difference) as in Figure 4.6a.
  • Consolidate the results by adding up the total of all the values for each of the items and put this number in the “Score” column. For example, in Figure 4.6a item B has one 3 in the first row plus one 2, and two 1s in the second row, for a score of 7.
  • Finally, it may be desirable to convert these values into a percentage of the total score. To do this, divide the score for each individual item by the total of all the scores (20 in the example). Item B, with a score of 7, is ranked most important or most preferred. Item B received a score of 35 percent (7 divided by 20) as compared with 25 percent for item D and only 5 percent each for items C and E, which received only one vote each. This example shows how Paired Comparison captures the degree of difference between each ranking.
  • To aggregate rankings received from a group of analysts, simply add the individual scores for each analyst.
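The scoring arithmetic above is easy to automate. The sketch below assumes Python; the pairwise judgments are hypothetical and are chosen only so that the totals resemble the worked example (item B scores 7 of a total of 20, or 35 percent). It is not a reproduction of Figure 4.6a.

# A minimal sketch of Paired Comparison scoring with hypothetical judgments.
from itertools import combinations

items = ["A", "B", "C", "D", "E"]

# For each pair above the diagonal: (winner, degree of difference on a 0-3 scale).
judgments = {
    ("A", "B"): ("B", 2),
    ("A", "C"): ("A", 3),
    ("A", "D"): ("A", 3),
    ("A", "E"): ("E", 1),
    ("B", "C"): ("B", 2),
    ("B", "D"): ("B", 2),
    ("B", "E"): ("B", 1),
    ("C", "D"): ("D", 3),
    ("C", "E"): ("C", 1),
    ("D", "E"): ("D", 2),
}

# Consolidate: sum each item's winning degrees of difference.
scores = {item: 0 for item in items}
for pair in combinations(items, 2):
    winner, degree = judgments[pair]
    scores[winner] += degree

# Convert each score to a percentage of the total score.
total = sum(scores.values())
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: score {score}, {100 * score / total:.0f}%")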

 

Weighted Ranking

In Weighted Ranking, a specified set of criteria is used to rank the items. The analyst creates a table with the items to be ranked listed across the top row and the criteria for ranking them listed down the far left column.

* Create a table with one column for each item. At the head of each column, write the name of an item or assign it a letter to save space.

* Add two more blank columns on the left side of this table. Count the number of selection criteria, and then adjust the table so that it has that number of rows plus three more, one at the top to list the items and two at the bottom to show the raw scores and percentages for each item. In the first column on the left side, starting with the second row, write in all the selection criteria down the left side of the table. There is some value in listing the criteria roughly in order of importance, but that is not critical. Leave the bottom two rows blank for the scores and percentages.

* Now work down the far left column, assigning weights to the selection criteria based on their relative importance for judging the ranking of the items. Depending upon how many criteria there are, take either 10 points or 100 points and divide these points among the selection criteria based on what is believed to be their relative importance in ranking the items. In other words, ask what percentage of the decision should be based on each of these criteria. Be sure that the weights for all the selection criteria combined add up to either 10 or 100, whichever is selected. Also be sure that all the criteria are phrased in such a way that a higher weight is more desirable.

* Work across the rows to write the criterion weight in the left side of each cell.

* Next, work across the matrix one row (selection criterion) at a time to evaluate the relative ability of each of the items to satisfy that selection criterion. Use a ten-point rating scale, where 1 = low and 10 = high, to rate each item separately. (Do not spread the ten points proportionately across all the items as was done to assign weights to the criteria.) Write this rating number after the criterion weight in the cell for each item.

 

* Again, work across the matrix one row at a time to multiply the criterion weight by the item rating for that criterion, and enter this number for each cell as shown in Figure 4.6b.

* Now add the columns for all the items. The result will be a ranking of the items from highest to lowest score. To gain a better understanding of the relative ranking of one item as compared with another, convert these raw scores to percentages. To do this, first add together all the scores in the “Totals” row to get a total number. Then divide the score for each item by this total score to get a percentage ranking for each item. All the percentages together must add up to 100 percent. In Figure 4.6b it is apparent that item B has the number one ranking (with 20.3 percent), while item E has the lowest (with 13.2 percent).
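The same bookkeeping can be sketched in a few lines of Python. The criteria, weights, and ratings below are hypothetical; the weights sum to 100 and the ratings use the 1-to-10 scale described above.

# A minimal sketch of Weighted Ranking with hypothetical criteria and ratings.
criteria_weights = {"cost": 40, "feasibility": 35, "timeliness": 25}

# ratings[item][criterion] on a 1-10 scale (10 = best satisfies the criterion).
ratings = {
    "A": {"cost": 6, "feasibility": 7, "timeliness": 5},
    "B": {"cost": 9, "feasibility": 8, "timeliness": 7},
    "C": {"cost": 4, "feasibility": 6, "timeliness": 8},
}

# Multiply each criterion weight by the item's rating, then sum down the column.
raw_scores = {
    item: sum(criteria_weights[c] * r for c, r in item_ratings.items())
    for item, item_ratings in ratings.items()
}

# Convert raw scores to percentages of the grand total.
grand_total = sum(raw_scores.values())
for item, score in sorted(raw_scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: raw {score}, {100 * score / grand_total:.1f}%")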

 

4.7 MATRICES


A matrix is an analytic tool for sorting and organizing data in a manner that facilitates comparison and analysis. It consists of a simple grid with as many cells as needed for whatever problem is being analyzed.

 

When to Use It

Matrices are used to analyze the relationship between any two sets of variables or the interrelationships between a single set of variables. Among other things, they enable analysts to:

  • Compare one type of information with another.
  • Compare pieces of information of the same type.
  • Categorize information by type.
  • Identify patterns in the information.
  • Separate elements of a problem.

A matrix is such an easy and flexible tool to use that it should be one of the first tools analysts think of when dealing with a large body of data. One limiting factor in the use of matrices is that information must be organized along only two dimensions.

 

Value Added

Matrices provide a visual representation of a complex set of data. By presenting information visually, a matrix enables analysts to deal effectively with more data than they could manage by juggling various pieces of information in their head. The analytic problem is broken down to component parts so that each part (that is, each cell in the matrix) can be analyzed separately, while ideally maintaining the context of the problem as a whole.

 

The Method

A matrix is a tool that can be used in many different ways and for many different purposes. What matrices have in common is that each has a grid with sufficient columns and rows for you to enter two sets of data that you want to compare. Organize the category headings for each set of data in some logical sequence before entering the headings for one set of data in the top row and the headings for the other set in the far left column. Then enter the data in the appropriate cells.
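A matrix of this kind can be kept on paper, in a spreadsheet, or, as in this minimal sketch, in a pandas DataFrame; the row and column headings and the cell entries are hypothetical.

# A minimal sketch of building a comparison matrix with pandas.
import pandas as pd

rows = ["Group X", "Group Y", "Group Z"]          # one set of data
cols = ["Financing", "Recruitment", "Logistics"]  # the other set of data

# Start with an empty grid, then fill cells as data is entered.
matrix = pd.DataFrame([[""] * len(cols) for _ in rows], index=rows, columns=cols)
matrix.loc["Group X", "Financing"] = "front company reported"
matrix.loc["Group Y", "Recruitment"] = "active online outreach"
matrix.loc["Group Z", "Logistics"] = "cross-border smuggling"
print(matrix)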

 

4.8 NETWORK ANALYSIS

 

Network Analysis is the review, compilation, and interpretation of data to determine the presence of associations among individuals, groups, businesses, or other entities; the meaning of those associations to the people involved; and the degrees and ways in which those associations can be strengthened or weakened. It is the best method available to help analysts understand and identify opportunities to influence the behavior of a set of actors about whom information is sparse. In the fields of law enforcement and national security, information used in Network Analysis usually comes from informants or from physical or technical surveillance.

 

 

Analysis of networks is broken down into three stages, and analysts can stop at the stage that answers their questions.

* Network Charting is the process of and associated techniques for identifying people, groups, things, places, and events of interest (nodes) and drawing connecting lines (links) between them on the basis of various types of association. The product is often referred to as a Link Chart.

* Network Analysis is the process and techniques that take the chart and strive to make sense of the data represented by the chart by grouping associations (sorting) and identifying patterns in and among those groups.

* Social Network Analysis (SNA) is the mathematical measuring of variables related to the distance between nodes and the types of associations in order to derive even more meaning from the chart, especially about the degree and type of influence one node has on another.

When to Use It

Network Analysis is used extensively in law enforcement, counterterrorism analysis, and analysis of transnational issues such as narcotics and weapons proliferation to identify and monitor individuals who may be involved in illegal activity.

 


 

Value Added

Network Analysis has proved to be highly effective in helping analysts identify and understand patterns of organization, authority, communication, travel, financial transactions, or other interactions between people or groups that are not apparent from isolated pieces of information. It often identifies key leaders, information brokers, or sources of funding.

 

Potential Pitfalls

This method is extremely dependent upon having at least one good source of information. It is hard to know when information may be missing, and the boundaries of the network may be fuzzy and constantly changing, in which case it is difficult to determine whom to include. The constantly changing nature of networks over time can cause information to become outdated.

 

The Method

Analysis of networks attempts to answer the question “Who is related to whom and what is the nature of their relationship and role in the network?” The basic network analysis software identifies key nodes and shows the links between them. SNA software measures the frequency of flow between links and explores the significance of key attributes of the nodes. We know of no software that does the intermediate task of grouping nodes into meaningful clusters, though algorithms do exist and are used by individual analysts. In all cases, however, you must interpret what is represented, looking at the chart to see how it reflects organizational structure, modes of operation, and patterns of behavior.

 

Network charting usually involves the following steps.

  • Identify at least one reliable source or stream of data to serve as a beginning point. Identify, combine, or separate nodes within this reporting.
  • List each node in a database, association matrix, or software program.
  • Identify interactions among individuals or groups.
  • List interactions by type in a database, association matrix, or software program.
  • Identify each node and interaction by some criterion that is meaningful to your analysis. These criteria often include frequency of contact, type of contact, type of activity, and source of information.
  • Draw the connections between nodes (connect the dots) on a chart by hand, using a computer drawing tool, or using Network Analysis software.
  • Work out from the central nodes, adding links and nodes until you run out of information from the good sources.
  • Add nodes and links from other sources, constantly checking them against the information you already have. Follow all leads, whether they are people, groups, things, or events, and regardless of source. Make note of the sources.
  • Stop when you run out of information, when all of the new links are dead ends, when all of the new links begin to turn in on each other like a spider web, or when you run out of time.
  • Update the chart and supporting documents regularly as new information becomes available, or as you have time.
  • Rearrange the nodes and links so that the links cross over each other as little as possible.
  • Cluster the nodes. Do this by looking for “dense” areas of the chart and relatively “empty” areas. Draw shapes around the dense areas. Use a variety of shapes, colors, and line styles to denote different types of clusters, your relative confidence in the cluster, or any other criterion you deem important.
  • Cluster the clusters, if you can, using the same method.
  • Label each cluster according to the common denominator among the nodes it contains. In doing this you will identify groups, events, activities, and/or key locations. If you have in mind a model for groups or activities, you may be able to identify gaps in the chart by what is or is not present that relates to the model.
  • Look for “cliques”—a group of nodes in which every node is connected to every other node, though not to many nodes outside the group. These groupings often look like stars or pentagons. In the intelligence world, they often turn out to be clandestine cells.
  • Look in the empty spaces for nodes or links that connect two clusters. Highlight these nodes with shapes or colors. These nodes are brokers, facilitators, leaders, advisers, media, or some other key connection that bears watching. They are also points where the network is susceptible to disruption.
  • Chart the flow of activities between nodes and clusters. You may want to use arrows and time stamps. Some software applications will allow you to display dynamically how the chart has changed over time. Analyze this flow. Does it always go in one direction or in multiple directions? Are the same or different nodes involved? How many different flows are there? What are the pathways? By asking these questions, you can often identify activities, including indications of preparation for offensive action and lines of authority. You can also use this knowledge to assess the resiliency of the network. If one node or pathway were removed, would there be alternatives already built in?
  • Continually update and revise as nodes or links change.
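Several of these charting and clustering steps can be supported by general-purpose graph software. The sketch below uses the open-source networkx library (not a tool named in the text) with hypothetical nodes and links to find cliques and a likely broker node.

# A minimal sketch of network charting/analysis with the networkx library.
# The nodes and links are hypothetical surveillance-style associations.
import networkx as nx

G = nx.Graph()
links = [
    ("Ali", "Bashir"), ("Ali", "Cem"), ("Bashir", "Cem"),      # a dense cluster
    ("Cem", "Dara"),                                           # Dara bridges two clusters
    ("Dara", "Erol"), ("Dara", "Farid"), ("Erol", "Farid"),    # second cluster
]
G.add_edges_from(links)

# Cliques: fully interconnected groups that may correspond to clandestine cells.
print("cliques:", [c for c in nx.find_cliques(G) if len(c) >= 3])

# Betweenness centrality highlights brokers connecting otherwise separate clusters.
brokers = nx.betweenness_centrality(G)
print("likely broker:", max(brokers, key=brokers.get))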

 

 

4.9 MIND MAPS AND CONCEPT MAPS

Mind Maps and Concept Maps are visual representations of how an individual or a group thinks about a topic of interest. Such a diagram has two basic elements: the ideas that are judged relevant to whatever topic one is thinking about, and the lines that show and briefly describe the connections between these ideas.

Whenever you think about a problem, develop a plan, or consider making even a very simple decision, you are putting a series of thoughts together. That series of thoughts can be represented visually with words or images connected by lines that represent the nature of the relationship between them. Any thinking for any purpose, whether about a personal decision or analysis of an intelligence issue, can be diagrammed in this manner.

Mind Maps and Concept Maps can be used by an individual or a group to help sort out their own thinking and to achieve a shared understanding of key concepts.

After having participated in this group process to define the problem, the group should be better able to identify what further research needs to be done and able to parcel out additional work among the best qualified members of the group. The group should also be better able to prepare a report that represents as fully as possible the collective wisdom of the group as a whole.

The Method

Start a Mind Map or Concept Map with a focal question that defines what is to be included. Then follow these steps:

  • Make a list of concepts that relate in some way to the focal question.
  • Starting with the first dozen or so concepts, sort them into groupings within the diagram space in some logical manner. These groups may be based on things they have in common or on their status as either direct or indirect causes of the matter being analyzed.
  • Begin making links between related concepts, starting with the most general concepts. Use lines with arrows to show the direction of the relationship. The arrows may go in either direction or in both directions.
  • Choose the most appropriate words for describing the nature of each relationship. The lines might be labeled with words such as “causes,” “influences,” “leads to,” “results in,” “is required by,” or “contributes to.” Selecting good linking phrases is often the most difficult step.
  • While building all the links between the concepts and the focal question, look for and enter crosslinks between concepts.
  • Don’t be surprised if, as the map develops, you discover that you are now diagramming on a different focus question from the one you started with. This can be a good thing. The purpose of a focus question is not to lock down the topic but to get the process going.
  • Finally, reposition, refine, and expand the map structure as appropriate.

Mind Mapping has only one main or central idea, and all other ideas branch off from it radially in all directions. The central idea is preferably shown as an image rather than in words, and images are used throughout the map. "Around the central word you draw the 5 or 10 main ideas that relate to that word. You then take each of those child words and again draw the 5 or 10 main ideas that relate to each of those words." A Concept Map has a more flexible form. It can have multiple hubs and clusters. It can also be designed around a central idea, but it does not have to be and often is not designed that way. It does not normally use images. A Concept Map is usually shown as a network, although it too can be shown as a hierarchical structure like a Mind Map when that is appropriate. Concept Maps can be very complex and are often meant to be viewed on a large-format screen.
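One way to keep a Concept Map in machine-readable form is as a labeled directed graph. The sketch below assumes the networkx library; the concepts and linking phrases are hypothetical.

# A minimal sketch of a Concept Map stored as a labeled directed graph.
import networkx as nx

cmap = nx.DiGraph()
links = [
    ("drought", "crop failure", "leads to"),
    ("crop failure", "food prices", "drives up"),
    ("food prices", "urban unrest", "contributes to"),
    ("government subsidies", "food prices", "reduce"),
]
for src, dst, phrase in links:
    cmap.add_edge(src, dst, label=phrase)

# Walk the map: print each relationship with its linking phrase.
for src, dst, data in cmap.edges(data=True):
    print(f"{src} --{data['label']}--> {dst}")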

 

4.10 PROCESS MAPS AND GANTT CHARTS

Process Mapping is an umbrella term that covers a variety of procedures for identifying and depicting visually each step in a complex procedure. It includes flow charts of various types (Activity Flow Charts, Commodity Flow Charts, Causal Flow Charts), Relationship Maps, and Value Stream Maps commonly used to assess and plan improvements for business and industrial processes. A Gantt Chart is a specific type of Process Map that was developed to facilitate the planning, scheduling, and management of complex industrial projects.

When to Use It

Process Maps, including Gantt Charts, are used by intelligence analysts to track, understand, and monitor the progress of activities of intelligence interest being undertaken by a foreign government, a criminal or terrorist group, or any other nonstate actor. For example, a Process Map can be used to monitor progress in developing a new weapons system, preparations for a major military action, or the execution of any other major plan that involves a sequence of observable steps. It is often used to identify and describe the modus operandi of a criminal or terrorist group, including the preparatory steps that such a group typically takes prior to a major action.

Value Added

The process of constructing a Process Map or a Gantt Chart helps analysts think clearly about what someone else needs to do to complete a complex project.

When a complex plan or process is understood well enough to be diagrammed or charted, analysts can then answer questions such as the following: What are they doing? How far along are they? What do they still need to do? What resources will they need to do it? How much time do we have before they have this capability? Is there any vulnerable point in this process where they can be stopped or slowed down?

The Process Map or Gantt Chart is a visual aid for communicating this information to the customer. If sufficient information can be obtained, the analyst’s understanding of the process will lead to a set of indicators that can be used to monitor the status of an ongoing plan or project.

The Method

There is a substantial difference in appearance between a Process Map and a Gantt Chart. In a Process Map, the steps in the process are diagrammed sequentially with various symbols representing starting and end points, decisions, and actions connected with arrows. Diagrams can be created with readily available software such as Microsoft Visio.

Example

The Intelligence Community has considerable experience monitoring terrorist groups. This example describes how an analyst would go about creating a Gantt Chart of a generic terrorist attack-planning process (see Figure 4.10). The analyst starts by making a list of all the tasks that terrorists must complete, estimating the schedule for when each task will be started and finished, and determining what resources are needed for each task. Some tasks need to be completed in a sequence, with each task being more-or-less completed before the next activity can begin. These are called sequential, or linear, activities. Other activities are not dependent upon completion of any other tasks. These may be done at any time before or after a particular stage is reached. These are called nondependent, or parallel, tasks.

Note whether each terrorist task to be performed is sequential or parallel. It is this sequencing of dependent and nondependent activities that is critical in determining how long any particular project or process will take. The more activities that can be worked in parallel, the greater the chances of a project being completed on time. The more tasks that must be done sequentially, the greater the chances of a single bottleneck delaying the entire project.

Gantt Charts that map a generic process can also be used to track data about a more specific process as it is received. Information about a specific group's activities could be layered in by using a different color or line type. Layering in the specific data allows an analyst to compare what is expected with the actual data. The chart can then be used to identify and narrow gaps or anomalies in the data and even to identify and challenge assumptions about what is expected or what is happening.
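The sequencing logic described in this example can be sketched with a simple forward-pass schedule, in which a task starts only when the tasks it depends on have finished. The task names, durations, and dependencies below are hypothetical and are not drawn from Figure 4.10.

# A minimal sketch of Gantt-style bookkeeping: hypothetical tasks with durations
# and dependencies, scheduled with a simple forward pass.
tasks = {
    # name: (duration in weeks, list of prerequisite tasks)
    "select target":      (2, []),
    "surveil target":     (4, ["select target"]),
    "recruit operatives": (6, []),                      # parallel task
    "acquire materials":  (3, ["recruit operatives"]),
    "rehearse":           (2, ["surveil target", "acquire materials"]),
}

finish = {}
for name, (duration, deps) in tasks.items():   # dict order lists prerequisites first
    start = max((finish[d] for d in deps), default=0)
    finish[name] = start + duration
    kind = "sequential" if deps else "parallel"
    print(f"{name:20s} {kind:10s} weeks {start:2d}-{finish[name]:2d}")

print("earliest completion (weeks):", max(finish.values()))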

5 Idea Generation

New ideas, and the combination of old ideas in new ways, are essential elements of effective intelligence analysis. Some structured techniques are specifically intended for the purpose of eliciting or generating ideas at the very early stage of a project, and they are the topic of this chapter.

 

Structured Brainstorming is not a group of colleagues just sitting around talking about a problem. Rather, it is a group process that follows specific rules and procedures. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, in law enforcement, potential suspects or avenues of investigation. It requires little training, and is one of the most frequently used structured techniques in the Intelligence Community.

The wiki format—including the ability to upload documents and even hand-drawn graphics or photos —allows analysts to capture and track brainstorming ideas and return to them at a later date.

Nominal Group Technique, often abbreviated NGT, serves much the same function as Structured Brainstorming, but it uses a quite different approach. It is the preferred technique when there is a concern that a senior member or outspoken member of the group may dominate the meeting, that junior members may be reluctant to speak up, or that the meeting may lead to heated debate. Nominal Group Technique encourages equal participation by requiring participants to present ideas one at a time in round-robin fashion until all participants feel that they have run out of ideas.

Starbursting is a form of brainstorming that focuses on generating questions rather than answers. To help in defining the parameters of a research project, use Starbursting to identify the questions that need to be answered. Questions start with the words Who, What, When, Where, Why, and How.

Cross-Impact Matrix is a technique that can be used after any form of brainstorming session that identifies a list of variables relevant to a particular analytic project. The results of the brainstorming session are put into a matrix, which is used to guide a group discussion that systematically examines how each variable influences all other variables to which it is judged to be related in a particular problem context.

Morphological Analysis is useful for dealing with complex, nonquantifiable problems for which little data are available and the chances for surprise are significant. It is a generic method for systematically identifying and considering all possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. It helps prevent surprises in intelligence analysis by generating a large number of outcomes for any complex situation, thus reducing the chance that events will play out in a way that the analyst has not previously imagined and has not at least considered.

Quadrant Crunching is an application of Morphological Analysis that uses key assumptions and their opposites as a starting point for systematically generating a large number of alternative outcomes. For example, an analyst might use Quadrant Crunching to identify the many different ways that a terrorist might attack a water supply. The technique forces analysts to rethink an issue from a broad range of perspectives and systematically question all the assumptions that underlie their lead hypothesis.

5.1 STRUCTURED BRAINSTORMING

When to Use It

Structured Brainstorming is one of the most widely used analytic techniques. It is often used at the beginning of a project to identify a list of relevant variables, driving forces, a full range of hypotheses, key players or stakeholders, available evidence or sources of information, potential solutions to a problem, potential outcomes or scenarios, or, for law enforcement, potential suspects or avenues of investigation.

 

The Method

There are seven general rules to follow, and then a twelve-step process for Structured Brainstorming. Here are the rules:

  • Be specific about the purpose and the topic of the brainstorming session. Announce the topic beforehand, and ask participants to come to the session with some ideas or to forward them to the facilitator before the session.
  • New ideas are always encouraged. Never criticize an idea during the divergent (creative) phase of the process no matter how weird or unconventional or improbable it might sound. Instead, try to figure out how the idea might be applied to the task at hand.
  • Allow only one conversation at a time, and ensure that everyone has an opportunity to speak.
  • Allocate enough time to do the brainstorming correctly. It often takes one hour to set the rules of the game, get the group comfortable, and exhaust the conventional wisdom on the topic. Only then do truly creative ideas begin to emerge.
  • To avoid groupthink and stimulate divergent thinking, include one or more “outsiders” in the group— that is, astute thinkers who do not share the same body of knowledge or perspective as the other group members but do have some familiarity with the topic.
  • Write it down! Track the discussion by using a whiteboard, an easel, or sticky notes (see Figure 5.1).
  • Summarize the key findings at the end of the session. Ask the participants to write down the most important thing they learned on a 3 x 5 card as they depart the session. Then prepare a short summary and distribute the list to the participants (who may add items to the list) and to others interested in the topic (including supervisors and those who could not attend). Capture these findings and disseminate them to attendees and other interested parties either by e-mail or, preferably, a wiki.
Figure 5.1 Picture of Brainstorming

Then follow these steps:
  • Pass out Post-it or “sticky” notes and Sharpie-type pens or markers to all participants.
  • Pose the problem or topic in terms of a “focal question.” Display this question in one sentence for all to see on a large easel or whiteboard.
  • Ask the group to write down responses to the question with a few key words that will fit on a Post-it.
  • When a response is written down, the participant is asked to read it out loud or to give it to the facilitator who will read it out loud. Sharpie-type pens are used so that people can easily see what is written on the Post-it notes later in the exercise.
  • Stick all the Post-its on a wall in the order in which they are called out. Treat all ideas the same. Encourage participants to build on one another’s ideas.
  • Usually there is an initial spurt of ideas followed by pauses as participants contemplate the question. After five or ten minutes there is often a long pause of a minute or so. This slowing down suggests that the group has “emptied the barrel of the obvious” and is now on the verge of coming up with some fresh insights and ideas. Do not talk during this pause even if the silence is uncomfortable.
  • After two or three long pauses, conclude this divergent thinking phase of the brainstorming session.
  • Ask all participants as a group to go up to the wall and rearrange the Post-its in some organized manner. This arrangement might be by affinity groups (groups that have some common characteristic), scenarios, a predetermined priority scale, or a time sequence. Participants are not allowed to talk during this process. Some Post-its may be moved several times, but they will gradually be clustered into logical groupings. Post-its may be copied if necessary to fit one idea into more than one group.
  • When all Post-its have been arranged, ask the group to select a word or phrase that best describes each grouping.
  • Look for Post-its that do not fit neatly into any of the groups. Consider whether such an outlier is useless noise or the germ of an idea that deserves further attention.
  • Assess what the group has accomplished. Have new ideas or concepts been identified, have key issues emerged, or are there areas that need more work or further brainstorming?
  • To identify the potentially most useful ideas, the facilitator or group leader should establish up to five criteria for judging the value or importance of the ideas. If so desired, then use the Ranking, Scoring, Prioritizing technique, described in chapter 4, to vote on, rank, or prioritize the ideas.

  • Set the analytic priorities accordingly, and decide on a work plan for the next steps in the analysis.

Relationship to Other Techniques

As discussed under “When to Use It,” some form of brainstorming is commonly combined with a wide variety of other techniques.

Structured Brainstorming is also called Divergent/Convergent Thinking.

Origins of This Technique

Brainstorming was a creativity technique used by advertising agencies in the 1940s. It was popularized in a book by advertising manager Alex Osborn, Applied Imagination: Principles and Procedures of Creative Problem Solving. There are many versions of brainstorming. The description here is a combination of information from Randy Pherson, “Structured Brainstorming,” in Handbook of Analytic Tools and Techniques (Reston, Va.: Pherson Associates, LLC, 2008), and training materials from the CIA’s Sherman Kent School for Intelligence Analysis.

5.2 VIRTUAL BRAINSTORMING

Virtual Brainstorming is the same as Structured Brainstorming except that it is done online with participants who are geographically dispersed or unable to meet in person.

The Method

Virtual Brainstorming is usually a two-phase process. It begins with the divergent process of creating as many relevant ideas as possible. The second phase is a process of convergence, when the ideas are sorted into categories, weeded out, prioritized, or combined and molded into a conclusion or plan of action.

5.3 NOMINAL GROUP TECHNIQUE

Nominal Group Technique (NGT) is a process for generating and evaluating ideas. It is a form of brainstorming, but NGT has always had its own identity as a separate technique.

When to Use It

NGT prevents the domination of a discussion by a single person. Use it whenever there is concern that a senior officer or executive or an outspoken member of the group will control the direction of the meeting by speaking before anyone else.

The Method

An NGT session starts with the facilitator asking an open-ended question, such as, “What factors will influence …?” “How can we learn if …?” “In what circumstances might … happen?” “What should be included or not included in this research project?” The facilitator answers any questions about what is expected of participants and then gives participants five to ten minutes to work privately to jot down on note cards their initial ideas in response to the focal question. This part of the process is followed by these steps:

  • The facilitator calls on one person at a time to present one idea. As each idea is presented, the facilitator writes a summary description on a flip chart or whiteboard. This process continues in a round-robin fashion until all ideas have been exhausted.
  • When no new ideas are forthcoming, the facilitator initiates a group discussion to ensure that there is a common understanding of what each idea means. The facilitator asks about each idea, one at a time, in the order presented, but no argument for or against any idea is allowed. It is possible at this time to expand or combine ideas, but no change can be made to any idea without the approval of the original presenter of the idea.
  • Voting to rank or prioritize the ideas as discussed in chapter 4 is optional, depending upon the purpose of the meeting. When voting is done, it is usually by secret ballot, although various voting procedures may be used depending in part on the number of ideas and the number of participants. It usually works best to employ a ratio of one vote for every three ideas presented. For example, if the facilitator lists twelve ideas, each participant is allowed to cast four votes.
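The voting arithmetic and tally can be sketched as follows; the ideas and ballots are hypothetical, and each participant receives one vote for every three ideas listed.

# A minimal sketch of the NGT voting ratio and a secret-ballot tally.
from collections import Counter

ideas = [f"idea {n}" for n in range(1, 13)]   # twelve ideas listed by the facilitator
votes_per_person = len(ideas) // 3            # one vote per three ideas -> 4 votes each

ballots = [
    ["idea 1", "idea 4", "idea 7", "idea 12"],
    ["idea 4", "idea 5", "idea 7", "idea 9"],
    ["idea 1", "idea 4", "idea 9", "idea 11"],
]
assert all(len(b) == votes_per_person for b in ballots)

# Count how many votes each idea received across all ballots.
tally = Counter(vote for ballot in ballots for vote in ballot)
for idea, count in tally.most_common():
    print(idea, count)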

Origins of This Technique

Nominal Group Technique was developed by A. L. Delbecq and A. H. Van de Ven and first described in “A Group Process Model for Problem Identification and Program Planning,” Journal of Applied Behavioral Science

5.4 STARBURSTING

Starbursting is a form of brainstorming that focuses on generating questions rather than eliciting ideas or answers. It uses the six questions commonly asked by journalists: Who? What? When? Where? Why? and How?

When to Use It

Use Starbursting to help define your research project. After deciding on the idea, topic, or issue to be analyzed, brainstorm to identify the questions that need to be answered by the research. Asking the right questions is a common prerequisite to finding the right answer.

Origin of This Technique

Starbursting is one of many techniques developed to stimulate creativity.

5.5 CROSS-IMPACT MATRIX

Cross-Impact Matrix helps analysts deal with complex problems when “everything is related to everything else.” By using this technique, analysts and decision makers can systematically examine how each factor in a particular context influences all other factors to which it appears to be related.

When to Use It

The Cross-Impact Matrix is useful early in a project when a group is still in a learning mode trying to sort out a complex situation.

The Method

Assemble a group of analysts knowledgeable on various aspects of the subject. The group brainstorms a list of variables or events that would likely have some effect on the issue being studied. The project coordinator then creates a matrix and puts the list of variables or events down the left side of the matrix and the same variables or events across the top.

The matrix is then used to consider and record the relationship between each variable or event and every other variable or event.
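A minimal sketch of such a matrix, assuming the pandas library, is shown below; the variables and the impact codes in the cells are hypothetical (+ = enhances, - = inhibits, 0 = no judged relationship).

# A minimal sketch of a Cross-Impact Matrix with hypothetical variables.
import pandas as pd

variables = ["economic decline", "food shortages", "protest activity", "government crackdown"]
m = pd.DataFrame([["0"] * len(variables) for _ in variables], index=variables, columns=variables)

m.loc["economic decline", "food shortages"] = "+"
m.loc["food shortages", "protest activity"] = "+"
m.loc["protest activity", "government crackdown"] = "+"
m.loc["government crackdown", "protest activity"] = "-"

# The diagonal (a variable's impact on itself) is not assessed.
for v in variables:
    m.loc[v, v] = ""

print(m)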

5.6 MORPHOLOGICAL ANALYSIS

Morphological Analysis is a method for systematically structuring and examining all the possible relationships in a multidimensional, highly complex, usually nonquantifiable problem space. The basic idea is to identify a set of variables and then look at all the possible combinations of these variables.

For intelligence analysis, it helps prevent surprise by generating a large number of feasible outcomes for any complex situation. This exercise reduces the chance that events will play out in a way that the analyst has not previously imagined and considered.

When to Use It

Morphological Analysis is most useful for dealing with complex, nonquantifiable problems for which little information is available and the chances for surprise are great. It can be used, for example, to identify possible variations of a threat, possible ways a crisis might occur between two countries, possible ways a set of driving forces might interact, or the full range of potential outcomes in any ambiguous situation.

Although Morphological Analysis is typically used for looking ahead, it can also be used in an investigative context to identify the full set of possible explanations for some event.

Value Added

By generating a comprehensive list of possible outcomes, analysts are in a better position to identify and select those outcomes that seem most credible or that most deserve attention. This list helps analysts and decision makers focus on what actions need to be undertaken today to prepare for events that could occur in the future. They can then take the actions necessary to prevent or mitigate the effect of bad outcomes and help foster better outcomes. The technique can also sensitize analysts to low probability/high impact developments, or “nightmare scenarios,” which could have significant adverse implications for influencing policy or allocation of resources.

The product of Morphological Analysis is often a set of potential noteworthy scenarios, with indicators of each, plus the intelligence collection requirements for each scenario. Another benefit is that morphological analysis leaves a clear audit trail about how the judgments were reached.

The Method

Morphological analysis works through two common principles of creativity techniques: decomposition and forced association. Start by defining a set of key parameters or dimensions of the problem, and then break down each of those dimensions further into relevant forms or states or values that the dimension can assume —as in the example described later in this section. Two dimensions can be visualized as a matrix and three dimensions as a cube. In more complicated cases, multiple linked matrices or cubes may be needed to break the problem down into all its parts.

The principle of forced association then requires that every element be paired with and considered in connection with every other element in the morphological space. How that is done depends upon the complexity of the case. In a simple case, each combination may be viewed as a potential scenario or problem solution and examined from the point of view of its possibility, practicability, effectiveness, or other criteria. In complex cases, there may be thousands of possible combinations and computer assistance is required. With or without computer assistance, it is often possible to quickly eliminate about 90 percent of the combinations as not physically possible, impracticable, or undeserving of attention. This narrowing-down process allows the analyst to concentrate only on those combinations that are within the realm of the possible and most worthy of attention.
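The decomposition and forced-association steps can be sketched with itertools.product, which enumerates every combination of the dimension values before screening out the infeasible ones; the dimensions, values, and screening rule below are hypothetical.

# A minimal sketch of Morphological Analysis: decompose the problem into
# dimensions, enumerate every combination, then prune infeasible combinations.
from itertools import product

dimensions = {
    "actor":  ["lone individual", "small cell", "state proxy"],
    "target": ["water supply", "power grid", "transport hub"],
    "method": ["cyber", "explosives", "contamination"],
}

def is_feasible(combo):
    # Hypothetical screening rule: drop combinations judged not physically meaningful.
    return not (combo["target"] == "power grid" and combo["method"] == "contamination")

all_combos = [dict(zip(dimensions, values)) for values in product(*dimensions.values())]
feasible = [c for c in all_combos if is_feasible(c)]

print(f"{len(all_combos)} combinations, {len(feasible)} kept after screening")
for combo in feasible[:5]:
    print(combo)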

5.7 QUADRANT CRUNCHING

Quadrant Crunching helps analysts avoid surprise by examining multiple possible combinations of selected key variables. It also helps analysts to identify and systematically challenge assumptions, explore the implications of contrary assumptions, and discover “unknown unknowns.” By generating multiple possible outcomes for any situation, Quadrant Crunching reduces the chance that events could play out in a way that has not previously been at least imagined and considered. Training and practice are required before an analyst should use this technique, and an experienced facilitator is recommended.

The technique forces analysts to rethink an issue from many perspectives and systematically question assumptions that underlie their lead hypothesis. As a result, analysts can be more confident that they have considered a broad range of possible permutations for a particularly complex and ambiguous situation. In so doing, analysts are more likely to anticipate most of the ways a situation can develop (or terrorists might launch an attack) and to spot indicators that signal a specific scenario is starting to develop.

The Method

Quadrant Crunching is sometimes described as a Key Assumptions Check on steroids. It is most useful when there is a well-established lead hypothesis that can be articulated clearly.

Quadrant Crunching calls on the analyst to break down the lead hypothesis into its component parts, identifying the key assumptions that underlie the lead hypothesis, or dimensions that focus on Who, What, When, Where, Why, and How. Once the key dimensions of the lead hypothesis are articulated, the analyst generates at least two examples of contrary dimensions.

 

Relationship to Other Techniques

Quadrant Crunching is a specific application of a generic method called Morphological Analysis (described in this chapter). It draws on the results of the Key Assumptions Check and can contribute to Multiple Scenarios Generation. It can also be used to identify Indicators.

Origins of This Technique

The Quadrant Crunching technique was developed by Randy Pherson and Alan Schwartz to meet a specific analytic need. It was first published in Randy Pherson, Handbook of Analytic Tools and Techniques

6 Scenarios and Indicators

In the complex, evolving, uncertain situations that intelligence analysts and decision makers must deal with, the future is not easily predictable. Some events are intrinsically of low predictability. The best the analyst can do is to identify the driving forces that may determine future outcomes and monitor those forces as they interact to become the future. Scenarios are a principal vehicle for doing this. Scenarios are plausible and provocative stories about how the future might unfold.

 

Scenarios Analysis provides a framework for considering multiple plausible futures. As Peter Schwartz, author of The Art of the Long View, has argued, "The future is plural."1 Trying to divine or predict a single outcome often is a disservice to senior intelligence officials, decision makers, and other clients. Generating several scenarios (for example, those that are most likely, least likely, and most dangerous) helps focus attention on the key underlying forces and factors most likely to influence how a situation develops. Analysts can also use scenarios to examine assumptions and deliver useful warning messages when high impact/low probability scenarios are included in the exercise.

 

Identification and monitoring of indicators or signposts can provide early warning of the direction in which the future is heading, but these early signs are not obvious. The human mind tends to see what it expects to see and to overlook the unexpected. These indicators take on meaning only in the context of a specific scenario with which they have been identified. The prior identification of a scenario and associated indicators can create an awareness that prepares the mind to recognize early signs of significant change.

 

Change sometimes happens so gradually that analysts don’t notice it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they typically are slow to change their minds in response to new evidence. By going on the record in advance to specify what actions or events would be significant and might change their minds, analysts can avert this type of rationalization.

 

Another benefit of scenarios is that they provide an efficient mechanism for communicating complex ideas. A scenario is a set of complex ideas that can be described with a short label.

 

Overview of Techniques

 

 

Indicators are a classic technique used to seek early warning of some undesirable event. Indicators are often paired with scenarios to identify which of several possible scenarios is developing. They are also used to measure change toward an undesirable condition, such as political instability, or toward a desirable condition, such as economic reform. Use indicators whenever you need to track a specific situation to monitor, detect, or evaluate change over time.

 

Indicators Validator is a new tool that is useful for assessing the diagnostic power of an indicator. An indicator is most diagnostic when it clearly points to the likelihood of only one scenario or hypothesis and suggests that the others are unlikely. Too frequently indicators are of limited value, because they may be consistent with several different outcomes or hypotheses.

 

6.1 SCENARIOS ANALYSIS

 

Identification and analysis of scenarios helps to reduce uncertainties and manage risk. By postulating different scenarios analysts can identify the multiple ways in which a situation might evolve. This process can help decision makers develop plans to exploit whatever opportunities the future may hold or, conversely, to avoid risks. Monitoring of indicators keyed to various scenarios can provide early warnings of the direction in which the future may be heading.

 

When to Use It

Scenarios Analysis is most useful when a situation is complex or when the outcomes are too uncertain to trust a single prediction. When decision makers and analysts first come to grips with a new situation or challenge, there usually is a degree of uncertainty about how events will unfold.

 

Value Added

When analysts are thinking about scenarios, they are rehearsing the future so that decision makers can be prepared for whatever direction that future takes. Instead of trying to estimate the most likely outcome (and being wrong more often than not), scenarios provide a framework for considering multiple plausible futures.

 

Analysts have learned, from past experience, that involving decision makers in a scenarios exercise is an effective way to communicate the results of this technique and to sensitize them to important uncertainties. Most participants find the process of developing scenarios as useful as any written report or formal briefing. Those involved in the process often benefit in several ways. Analysis of scenarios can:

 

  • Suggest indicators to monitor for signs that a particular future is becoming more or less likely.
  • Help analysts and decision makers anticipate what would otherwise be surprising developments by forcing them to challenge assumptions and consider plausible “wild card” scenarios or discontinuous events.
  • Produce an analytic framework for calculating the costs, risks, and opportunities represented by different outcomes.
  • Provide a means of weighing multiple unknown or unknowable factors and presenting a set of plausible outcomes.
  • Bound a problem by identifying plausible combinations of uncertain factors.

 

When decision makers or analysts from different intelligence disciplines or organizational cultures are included on the team, new insights invariably emerge as new information and perspectives are introduced.

 

6.1.1 The Method: Simple Scenarios

Of the three scenario techniques described here, Simple Scenarios is the easiest one to use. It is the only one of the three that can be implemented by an analyst working alone rather than in a group or a team, and it is the only one for which a coach or a facilitator is not needed.

Here are the steps for using this technique:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Make a list of forces, factors, and events that are likely to influence the future.
  • Organize the forces, factors, and events that are related to each other into five to ten affinity groups that are expected to be the driving forces in how the focal issue will evolve.
  • Label each of these drivers and write a brief description of each. For example, one training exercise for this technique is to forecast the future of the fictional country of Caldonia by identifying and describing six drivers. Generate a matrix, as shown in Figure 6.1.1, with a list of drivers down the left side. The columns of the matrix are used to describe scenarios. Each scenario is assigned a value for each driver. The values are strong or positive (+), weak or negative (–), and blank if neutral or no change.

 

  • Government effectiveness: To what extent does the government exert control over all populated regions of the country and effectively deliver services?
  • Economy: Does the economy sustain a positive growth rate?
  • Civil society: Can nongovernmental and local institutions provide appropriate services and security to the population?
  • Insurgency: Does the insurgency pose a viable threat to the government? Is it able to extend its dominion over greater portions of the country?
  • Drug trade: Is there a robust drug-trafficking economy?
  • Foreign influence: Do foreign governments, international financial organizations, or nongovernmental organizations provide military or economic assistance to the government?
  • Generate at least four different scenarios (a best case, a worst case, a mainline case, and at least one other) by assigning different values (+, –, or blank) to each driver.
  • This is a good time to reconsider both drivers and scenarios. Is there a better way to conceptualize and describe the drivers? Are there important forces that have not been included? Look across the matrix to see the extent to which each driver discriminates among the scenarios. If a driver has the same value across all scenarios, it is not discriminating and should be deleted. To stimulate thinking about other possible scenarios, consider the key assumptions that were made in deciding on the most likely scenario. What if some of these assumptions turn out to be invalid? If they are invalid, how might that affect the outcome, and are such outcomes included within the available set of scenarios?
  • For each scenario, write a one-page story to describe what that future looks like and/or how it might come about. The story should illustrate the interplay of the drivers.
  • For each scenario, describe the implications for the decision maker.
  • Generate a list of indicators, or “observables,” for each scenario that would help you discover that events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.
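The driver-by-scenario matrix described in these steps can be sketched as follows, using the Caldonia training drivers listed above; the scenario names and value assignments are hypothetical ("+" strong/positive, "-" weak/negative, "" neutral or no change).

# A minimal sketch of the Simple Scenarios driver-by-scenario matrix.
import pandas as pd

drivers = [
    "Government effectiveness", "Economy", "Civil society",
    "Insurgency", "Drug trade", "Foreign influence",
]
scenarios = {
    "Best case":  ["+", "+", "+", "-", "-", "+"],
    "Mainline":   ["",  "+", "",  "",  "+", ""],
    "Worst case": ["-", "-", "-", "+", "+", "-"],
    "Wild card":  ["-", "+", "+", "+", "",  "+"],
}

matrix = pd.DataFrame(scenarios, index=drivers)
print(matrix)

# A driver with the same value in every scenario does not discriminate
# and could be dropped (see the review step above).
non_discriminating = [d for d in drivers if matrix.loc[d].nunique() == 1]
print("non-discriminating drivers:", non_discriminating)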

6.1.2 The Method: Alternative Futures Analysis

Alternative Futures Analysis and Multiple Scenarios Generation differ from Simple Scenarios in that they are usually larger projects that rely on a group of experts, often including academics and decision makers. They use a more systematic process, and the assistance of a knowledgeable facilitator is very helpful.

The steps in the Alternative Futures Analysis process are:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • If possible, group these various forces, factors, or events to form two critical drivers that are expected to determine the future outcome. In the example on the future of Cuba (Figure 6.1.2), the two key drivers are Effectiveness of Government and Strength of Civil Society. If there are more than two critical drivers, do not use this technique. Use the Multiple Scenarios Generation technique, which can handle a larger number of scenarios.
  • As in the Cuba example, define the two ends of the spectrum for each driver.
  • Draw a 2 × 2 matrix. Label the two ends of the spectrum for each driver.
  • Note that the square is now divided into four quadrants. Each quadrant represents a scenario generated by a combination of the two drivers. Now give a name to each scenario, and write it in the relevant quadrant.
  • Generate a narrative story of how each hypothetical scenario might come into existence. Include a hypothetical chronology of key dates and events for each of the scenarios.
  • Describe the implications of each scenario should it be what actually develops.
  • Generate a list of indicators, or “observables,” for each scenario that would help determine whether events are starting to play out in a way envisioned by that scenario.
  • Monitor the list of indicators on a regular basis.

Figure 6.1.2 Alternative Futures Analysis: Cuba

6.1.3 The Method: Multiple Scenarios Generation

Multiple Scenarios Generation is similar to Alternative Futures Analysis except that with this technique, you are not limited to two critical drivers generating four scenarios. By using multiple 2 × 2 matrices pairing every possible combination of multiple driving forces, you can create a very large number of possible scenarios. This is sometimes desirable to make sure nothing has been overlooked. Once generated, the scenarios can be screened quickly without detailed analysis of each one.

Once sensitized to these different scenarios, analysts are more likely to pay attention to outlying data that would suggest that events are playing out in a way not previously imagined.

Training and an experienced facilitator are needed to use this technique. Here are the basic steps:

  • Clearly define the focal issue and the specific goals of the futures exercise.
  • Brainstorm to identify the key forces, factors, or events that are most likely to influence how the issue will develop over a specified time period.
  • Define the two ends of the spectrum for each driver.
  • Pair the drivers in a series of 2 × 2 matrices.
  • Develop a story or two for each quadrant of each 2 × 2 matrix.
  • From all the scenarios generated, select those most deserving of attention because they illustrate compelling and challenging futures not yet being considered.
  • Develop indicators for each scenario that could be tracked to determine whether or not the scenario is developing.

 

6.2 INDICATORS

Indicators are observable phenomena that can be periodically reviewed to help track events, spot emerging trends, and warn of unanticipated changes. An indicators list is a pre-established set of observable or potentially observable actions, conditions, facts, or events whose simultaneous occurrence would argue strongly that a phenomenon is present or is very likely to occur. Indicators can be monitored to obtain tactical, operational, or strategic warnings of some future development that, if it were to occur, would have a major impact.

The identification and monitoring of indicators are fundamental tasks of intelligence analysis, as they are the principal means of avoiding surprise. They are often described as forward-looking or predictive indicators. In the law enforcement community indicators are also used to assess whether a target’s activities or behavior is consistent with an established pattern. These are often described as backward-looking or descriptive indicators.

When to Use It

Indicators provide an objective baseline for tracking events, instilling rigor into the analytic process, and enhancing the credibility of the final product. Descriptive indicators are best used to help the analyst assess whether there are sufficient grounds to believe that a specific action is taking place. They provide a systematic way to validate a hypothesis or help substantiate an emerging viewpoint.

In the private sector, indicators are used to track whether a new business strategy is working or whether a low-probability scenario is developing that offers new commercial opportunities.

Value Added

The human mind sometimes sees what it expects to see and can overlook the unexpected. Identification of indicators creates an awareness that prepares the mind to recognize early signs of significant change. Change often happens so gradually that analysts don’t see it, or they rationalize it as not being of fundamental importance until it is too obvious to ignore. Once analysts take a position on an issue, they can be reluctant to change their minds in response to new evidence. By specifying in advance the threshold for what actions or events would be significant and might cause them to change their minds, analysts can seek to avoid this type of rationalization.

Defining explicit criteria for tracking and judging the course of events makes the analytic process more visible and available for scrutiny by others, thus enhancing the credibility of analytic judgments. Including an indicators list in the finished product helps decision makers track future developments and builds a more concrete case for the analytic conclusions.

Preparation of a detailed indicator list by a group of knowledgeable analysts is usually a good learning experience for all participants. It can be a useful medium for an exchange of knowledge between analysts from different organizations or those with different types of expertise—for example, analysts who specialize in a particular country and those who are knowledgeable about a particular field, such as military mobilization, political instability, or economic development.

The indicator list becomes the basis for directing collection efforts and for routing relevant information to all interested parties. It can also serve as the basis for the analyst’s filing system to keep track of these indicators.

When analysts or decision makers are sharply divided over the interpretation of events (for example, how the war in Iraq or Afghanistan is progressing), the guilt or innocence of a “person of interest,” or the culpability of a counterintelligence suspect, indicators can help depersonalize the debate by shifting attention away from personal viewpoints to more objective criteria. Emotions often can be defused and substantive disagreements clarified if all parties agree in advance on a set of criteria that would demonstrate that developments are—or are not—moving in a particular direction or that a person’s behavior suggests that he or she is guilty as suspected or is indeed a spy.

Potential Pitfalls

The quality of indicators is critical, as poor indicators lead to analytic failure. For these reasons, analysts must periodically review the validity and relevance of an indicators list.

The Method

The first step in using this technique is to create a list of indicators. (See Figure 6.2b for a sample indicators list.) The second step is to monitor these indicators regularly to detect signs of change. Developing the indicator list can range from a simple process to a sophisticated team effort.

For example, with minimum effort you could jot down a list of things you would expect to see if a particular situation were to develop as feared or foreseen. Or you could join with others to define multiple variables that would influence a situation and then rank the value of each variable based on incoming information about relevant events, activities, or official statements. In both cases, some form of brainstorming, hypothesis generation, or scenario development is often used to identify the indicators.

A good indicator must meet several criteria, including the following:

  • Observable and collectible. There must be some reasonable expectation that, if present, the indicator will be observed and reported by a reliable source. If an indicator is to monitor change over time, it must be collectible over time.
  • Valid. An indicator must be clearly relevant to the end state the analyst is trying to predict or assess, and it must be inconsistent with all or at least some of the alternative explanations or outcomes. It must accurately measure the concept or phenomenon at issue.
  • Reliable. Data collection must be consistent when comparable methods are used. Those observing and collecting data must observe the same things. Reliability requires precise definition of the indicators.
  • Stable. An indicator must be useful over time to allow comparisons and to track events. Ideally, the indicator should be observable early in the evolution of a development so that analysts and decision makers have time to react accordingly.
  • Unique. An indicator should measure only one thing and, in combination with other indicators, should point only to the phenomenon being studied. Valuable indicators are those that are not only consistent with a specified scenario or hypothesis but are also inconsistent with alternative scenarios or hypotheses. The Indicators Validator tool, described later in this chapter, can be used to check the diagnosticity of indicators.

Maintaining separate indicator lists for alternative scenarios or hypotheses is particularly useful when making a case that a certain event is unlikely to happen, as in What If? Analysis or High Impact/Low Probability Analysis.

After creating the indicator list or lists, you or the analytic team should regularly review incoming reporting and note any changes in the indicators. To the extent possible, you or the team should decide well in advance which critical indicators, if observed, will serve as early-warning decision points. In other words, if a certain indicator or set of indicators is observed, it will trigger a report advising of some modification in the intelligence appraisal of the situation.

Techniques for increasing the sophistication and credibility of an indicator list include the following:

Establishing a scale for rating each indicator
Providing specific definitions of each indicator
Rating the indicators on a scheduled basis (e.g., monthly, quarterly, or annually)
Assigning a level of confidence to each rating
Providing a narrative description for each point on the rating scale, describing what one would expect to observe at that level
Listing the sources of information used in generating the rating
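
A simple record per indicator can capture these refinements. The sketch below is illustrative only; the field names and the example 1-to-5 scale are assumptions, not a format prescribed by the text.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class IndicatorRating:
    """One scheduled rating of a single indicator (illustrative structure only)."""
    indicator: str                # specific definition of the indicator
    rating: int                   # position on an agreed scale, e.g., 1 to 5
    scale_notes: dict[int, str]   # narrative description of selected scale points
    confidence: str               # level of confidence assigned to this rating
    sources: list[str] = field(default_factory=list)    # reporting used for the rating
    rated_on: date = field(default_factory=date.today)  # date of the scheduled review

# Example: a quarterly rating of a hypothetical mobilization indicator.
example = IndicatorRating(
    indicator="Reserve units recalled to active duty",
    rating=2,
    scale_notes={1: "no recall activity observed", 3: "selective recalls", 5: "full mobilization"},
    confidence="medium",
    sources=["open-source press reporting"],
)
print(example)
```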

6.3 INDICATORS VALIDATOR

The Indicators Validator is a simple tool for assessing the diagnostic power of indicators.

When to Use It

The Indicators Validator is an essential tool to use when developing indicators for competing hypotheses or alternative scenarios. Once an analyst has developed a set of alternative scenarios or future worlds, the next step is to generate indicators for each scenario (or world) that would appear if that particular world were beginning to emerge. A critical question that is not often asked is whether a given indicator would appear only in the scenario to which it is assigned or also in one or more alternative scenarios. Indicators that could appear in several scenarios are not considered diagnostic, suggesting that they are not particularly useful in determining whether a specific scenario is emerging. The ideal indicator is highly consistent for the world to which it is assigned and highly inconsistent for all other worlds.

Value Added

Employing the Indicators Validator to identify and dismiss nondiagnostic indicators can significantly increase the credibility of an analysis. By applying the tool, analysts can rank order their indicators from most to least diagnostic and decide how far up the list they want to draw the line in selecting the indicators that will be used in the analysis. In some circumstances, analysts might discover that most or all the indicators for a given scenario have been eliminated because they are also consistent with other scenarios, forcing them to brainstorm a new and better set of indicators. If analysts find it difficult to generate independent lists of diagnostic indicators for two scenarios, it may be that the scenarios are not sufficiently dissimilar, suggesting that they should be combined.

The Method

The first step is to populate a matrix similar to that used for Analysis of Competing Hypotheses. This can be done manually or by using the Indicators Validator software. The matrix should list:

Alternative scenarios or worlds (or competing hypotheses) along the top of the matrix (as is done for hypotheses in Analysis of Competing Hypotheses)
Indicators that have already been generated for all the scenarios down the left side of the matrix (as is done with evidence in Analysis of Competing Hypotheses)

In each cell of the matrix, assess whether the indicator for that particular scenario is:

  • Highly likely to appear
  • Likely to appear
  • Could appear
  • Unlikely to appear
  • Highly unlikely to appear

Once this process is complete, re-sort the indicators so that the most discriminating indicators are displayed at the top of the matrix and the least discriminating indicators at the bottom.

The most discriminating indicator is “Highly Likely” to emerge in one scenario and “Highly Unlikely” to emerge in all other scenarios.
The least discriminating indicator is “Highly Likely” to appear in all scenarios.
Most indicators will fall somewhere in between.

The indicators with the most “Highly Unlikely” and “Unlikely” ratings are the most discriminating and should be retained.
Indicators with few or no “Highly Unlikely” or “Unlikely” ratings should be eliminated.
Once nondiscriminating indicators have been eliminated, regroup the indicators under their assigned scenario. If most indicators for a particular scenario have been eliminated, develop new—and more diagnostic—indicators for that scenario.

Recheck the diagnostic value of any new indicators by applying the Indicators Validator to them as well.
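
A minimal sketch of the re-sorting logic follows, assuming a simple numeric encoding of the five ratings. The weights, scenarios, and indicators are invented, and the real Indicators Validator software may score differently.

```python
# Hypothetical ratings of each indicator against each scenario, mirroring the
# matrix described above; indicator and scenario names are invented.
RATING_SCORE = {
    "highly likely": 0,
    "likely": 1,
    "could appear": 2,
    "unlikely": 3,
    "highly unlikely": 4,
}

matrix = {
    "troop trains move toward the border": {
        "invasion": "highly likely", "coercive bluff": "highly unlikely", "status quo": "unlikely"},
    "hostile rhetoric in state media": {
        "invasion": "highly likely", "coercive bluff": "highly likely", "status quo": "likely"},
}
home_scenario = {"troop trains move toward the border": "invasion",
                 "hostile rhetoric in state media": "invasion"}

def discrimination(ratings: dict[str, str], home: str) -> int:
    """Crude diagnosticity score: how strongly the indicator argues against
    every scenario other than the one it was written for."""
    return sum(RATING_SCORE[r] for scenario, r in ratings.items() if scenario != home)

# Re-sort so the most discriminating indicators appear at the top.
ranked = sorted(matrix.items(),
                key=lambda kv: discrimination(kv[1], home_scenario[kv[0]]),
                reverse=True)
for name, ratings in ranked:
    print(discrimination(ratings, home_scenario[name]), name)
```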

 

7 Hypothesis Generation and Testing

Intelligence analysis will never achieve the accuracy and predictability of a true science, because the information with which analysts must work is typically incomplete, ambiguous, and potentially deceptive. Intelligence analysis can, however, benefit from some of the lessons of science and adapt some of the elements of scientific reasoning.

The scientific process involves observing, categorizing, formulating hypotheses, and then testing those hypotheses. Generating and testing hypotheses is a core function of intelligence analysis. A possible explanation of the past or a judgment about the future is a hypothesis that needs to be tested by collecting and presenting evidence.

The generation and testing of hypotheses is a skill, and its subtleties do not come naturally. It is a form of reasoning that people can learn to use for dealing with high-stakes situations. What does come naturally is drawing on our existing body of knowledge and experience (mental model) to make an intuitive judgment. In most circumstances in our daily lives, this is an efficient approach that works most of the time.

When one is facing a complex choice of options, the reliance on intuitive judgment risks following a practice called “satisficing,” a term coined by Nobel Prize winner Herbert Simon by combining the words satisfy and suffice.1 It means being satisfied with the first answer that seems adequate, as distinct from assessing multiple options to find the optimal or best answer. The “satisficer” who does seek out additional information may look only for information that supports this initial answer rather than looking more broadly at all the possibilities.

 

The truth of a hypothesis can never be proven beyond doubt by citing only evidence that is consistent with the hypothesis, because the same evidence may be and often is consistent with one or more other hypotheses. Science often proceeds by refuting or disconfirming hypotheses. A hypothesis that cannot be refuted should be taken just as seriously as a hypothesis that seems to have a lot of evidence in favor of it. A single item of evidence that is shown to be inconsistent with a hypothesis can be sufficient grounds for rejecting that hypothesis. The most tenable hypothesis is often the one with the least evidence against it.

Analysts often test hypotheses by using a form of reasoning known as abduction, which differs from the two better known forms of reasoning, deduction and induction. Abductive reasoning starts with a set of facts. One then develops hypotheses that, if true, would provide the best explanation for these facts. The most tenable hypothesis is the one that best explains the facts. Because of the uncertainties inherent to intelligence analysis, conclusive proof or refutation of hypotheses is the exception rather than the rule.

The Analysis of Competing Hypotheses (ACH) technique was developed by Richards Heuer specifically for use in intelligence analysis. It is the application to intelligence analysis of Karl Popper’s theory of science.2 Popper was one of the most influential philosophers of science of the twentieth century. He is known for, among other things, his position that scientific reasoning should start with multiple hypotheses and proceed by rejecting or eliminating hypotheses, while tentatively accepting only those hypotheses that cannot be refuted.

This chapter describes techniques intended specifically for generating and testing hypotheses.

 

Overview of Techniques

Hypothesis Generation is a category that includes three specific techniques—Simple Hypotheses, Multiple Hypotheses Generator, and Quadrant Hypothesis Generation. Simple Hypotheses is the easiest of the three, but it is not always the best selection. Use Multiple Hypotheses Generator to identify a large set of all possible hypotheses. Quadrant Hypothesis Generation is used to identify a set of hypotheses when there are just two driving forces that are expected to determine the outcome.

Diagnostic Reasoning applies hypothesis testing to the evaluation of significant new information. Such information is evaluated in the context of all plausible explanations of that information, not just in the context of the analyst’s well-established mental model. The use of Diagnostic Reasoning reduces the risk of surprise, as it ensures that an analyst will have given at least some consideration to alternative conclusions. Diagnostic Reasoning differs from the Analysis of Competing Hypotheses (ACH) technique in that it is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

Analysis of Competing Hypotheses

The requirement to identify and then refute all reasonably possible hypotheses forces an analyst to recognize the full uncertainty inherent in most analytic situations. At the same time, the ACH software helps the analyst sort and manage evidence to identify paths for reducing that uncertainty.

Argument Mapping is a method that can be used to put a single hypothesis to a rigorous logical test. The structured visual representation of the arguments and evidence makes it easier to evaluate any analytic judgment. Argument Mapping is a logical follow-on to an ACH analysis. It is a detailed presentation of the arguments for and against a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The successful application of Argument Mapping to the hypothesis favored by the ACH analysis would increase confidence in the results of both analyses.

Deception Detection is discussed in this chapter because the possibility of deception by a foreign intelligence service or other adversary organization is a distinctive type of hypothesis that analysts must frequently consider. The possibility of deception can be included as a hypothesis in any ACH analysis. Information identified through the Deception Detection technique can then be entered as evidence in the ACH matrix.

7.1 HYPOTHESIS GENERATION

In broad terms, a hypothesis is a potential explanation or conclusion that is to be tested by collecting and presenting evidence. It is a declarative statement that has not been established as true—an “educated guess” based on observation that needs to be supported or refuted by more observation or through experimentation.

A good hypothesis:

  • Is written as a definite statement, not as a question.
  • Is based on observations and knowledge.
  • Is testable and falsifiable.
  • Predicts the anticipated results clearly.
  • Contains a dependent and an independent variable. The dependent variable is the phenomenon being explained. The independent variable does the explaining.

When to Use It

Analysts should use some structured procedure to develop multiple hypotheses at the start of a project when:

  • The importance of the subject matter is such as to require systematic analysis of all alternatives.
  • Many variables are involved in the analysis.
  • There is uncertainty about the outcome.
  • Analysts or decision makers hold competing views.

Value Added

Generating multiple hypotheses at the start of a project can help analysts avoid common analytic pitfalls such as these:

  • Coming to premature closure.
  • Being overly influenced by first impressions.
  • Selecting the first answer that appears “good enough.”
  • Focusing on a narrow range of alternatives representing marginal, not radical, change.
  • Opting for what elicits the most agreement or is desired by the boss.
  • Selecting a hypothesis only because it avoids a previous error or replicates a past success.

7.1.1 The Method: Simple Hypotheses

To use the Simple Hypotheses method, define the problem and determine how the hypotheses are expected to be used at the beginning of the project.

Gather together a diverse group to review the available evidence and explanations for the issue, activity, or behavior that you want to evaluate. In forming this diverse group, consider that you will need different types of expertise for different aspects of the problem, cultural expertise about the geographic area involved, different perspectives from various stakeholders, and different styles of thinking (left brain/right brain, male/female). Then:

  • Ask each member of the group to write down on a 3 × 5 card up to three alternative explanations or hypotheses. Prompt creative thinking by using the following:
    • Situational logic: Take into account all the known facts and an understanding of the underlying forces at work at that particular time and place.
    • Historical analogies: Consider examples of the same type of phenomenon.
    • Theory: Consider theories based on many examples of how a particular type of situation generally plays out.
  • Collect the cards and display the results on a whiteboard. Consolidate the list to avoid any duplication.
  • Employ additional group and individual brainstorming techniques to identify key forces and factors.
  • Aggregate the hypotheses into affinity groups and label each group.
  • Use problem restatement and consideration of the opposite to develop new ideas.
  • Update the list of alternative hypotheses. If the hypotheses will be used in ACH, strive to keep them mutually exclusive—that is, if one hypothesis is true, all others must be false.
  • Have the group clarify each hypothesis by asking the journalist’s classic list of questions: Who, What, When, Where, Why, and How?
  • Select the most promising hypotheses for further exploration.

7.1.2 The Method: Multiple Hypotheses Generator

The Multiple Hypotheses Generator provides a structured mechanism for generating a wide array of hypotheses. Analysts often can brainstorm a useful set of hypotheses without such a tool, but the Hypotheses Generator may give greater confidence than other techniques that a critical alternative or an outlier has not been overlooked. To use this method:

Define the issue, activity, or behavior that is subject to examination. Do so by using the journalist’s classic list of Who, What, When, Where, Why, and How for explaining this issue, activity, or behavior.
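
The notes capture only this first step. One plausible mechanic for such a generator, sketched below as an assumption rather than a description of the actual tool, is to list alternative answers to several of the journalist's questions and enumerate their combinations as candidate hypotheses; every example answer is invented.

```python
from itertools import product

# Invented alternative answers to a few of the journalist's questions about a
# hypothetical incident; every combination becomes a candidate hypothesis.
answers = {
    "who": ["state intelligence service", "criminal group", "disgruntled insider"],
    "why": ["financial gain", "political disruption"],
    "how": ["network intrusion", "recruited source"],
}

candidates = [
    " / ".join(f"{question}: {choice}" for question, choice in zip(answers, combo))
    for combo in product(*answers.values())
]

print(f"{len(candidates)} candidate hypotheses generated")
for candidate in candidates:
    print(" -", candidate)
```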

7.1.3 The Method: Quadrant Hypothesis Generation

Use the quadrant technique to identify a basic set of hypotheses when there are two easily identified key driving forces that will determine the outcome of an issue. The technique identifies four potential scenarios that represent the extreme conditions for each of the two major drivers. It spans the logical possibilities inherent in the relationship and interaction of the two driving forces, thereby generating options that analysts otherwise may overlook.

These are the steps for Quadrant Hypothesis Generation:

  • Identify the two main drivers by using techniques such as Structured Brainstorming or by surveying subject matter experts. A discussion to identify the two main drivers can be a useful exercise in itself.
  • Construct a 2 × 2 matrix using the two drivers.
  • Think of each driver as a continuum from one extreme to the other. Write the extremes of each driver at the ends of the vertical and horizontal axes.
  • Fill in each quadrant with the details of what the end state would be as shaped by the two drivers.
  • Develop signposts that show whether events are moving toward one of the hypotheses.
  • Use the signposts or indicators of change to develop intelligence collection strategies to determine the direction in which events are moving.
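
A short sketch of the quadrant enumeration follows, with two invented drivers; each quadrant of the 2 × 2 matrix becomes one candidate hypothesis about the end state.

```python
# Two invented drivers; each axis runs between two extremes.
drivers = {
    "oil price": ("sustained high prices", "price collapse"),
    "internal cohesion": ("unified leadership", "elite fracturing"),
}

(x_name, x_ends), (y_name, y_ends) = drivers.items()

# Each quadrant of the 2 x 2 matrix becomes one hypothesis about the end state,
# to be fleshed out with details and signposts.
hypotheses = [
    f"End state shaped by {x_name} = {x} and {y_name} = {y}"
    for x in x_ends
    for y in y_ends
]
for number, hypothesis in enumerate(hypotheses, start=1):
    print(f"H{number}: {hypothesis}")
```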

7.2 DIAGNOSTIC REASONING

Diagnostic Reasoning applies hypothesis testing to the evaluation of a new development, the assessment of a new item of intelligence, or the reliability of a source. It is different from the Analysis of Competing Hypotheses (ACH) technique in that Diagnostic Reasoning is used to evaluate a single item of evidence, while ACH deals with an entire issue involving multiple pieces of evidence and a more complex analytic process.

When to Use It

Analysts should use Diagnostic Reasoning instead of making a snap intuitive judgment when assessing the meaning of a new development in their area of interest, or the significance or reliability of a new intelligence report. The use of this technique is especially important when the analyst’s intuitive interpretation of a new piece of evidence is that the new information confirms what the analyst was already thinking.

Value Added

Diagnostic Reasoning helps balance people’s natural tendency to interpret new information as consistent with their existing understanding of what is happening—that is, the analyst’s mental model. It is a common experience to discover that much of the evidence supporting what one believes is the most likely conclusion is really of limited value in confirming one’s existing view, because that same evidence is also consistent with alternative conclusions. One needs to evaluate new information in the context of all possible explanations of that information, not just in the context of a well-established mental model. The use of Diagnostic Reasoning reduces the element of surprise by ensuring that at least some consideration has been given to alternative conclusions.

The Method

Diagnostic Reasoning is a process by which you try to refute alternative judgments rather than confirm what you already believe to be true. Here are the steps to follow:

* When you receive a potentially significant item of information, make a mental note of what it seems to mean (i.e., an explanation of why something happened or what it portends for the future). Make a quick intuitive judgment based on your current mental model.

* Brainstorm, either alone or in a small group, the alternative judgments that another analyst with a different perspective might reasonably deem to have a chance of being accurate. Make a list of these alternatives.

* For each alternative, ask the following question: If this alternative were true or accurate, how likely is it that I would see this new information?

* Make a tentative judgment based on consideration of these alternatives. If the new information is equally likely with each of the alternatives, the information has no diagnostic value and can be ignored. If the information is clearly inconsistent with one or more alternatives, those alternatives might be ruled out. Following this mode of thinking for each of the alternatives, decide which alternatives need further attention and which can be dropped from consideration.

* Proceed further by seeking evidence to refute the remaining alternatives rather than confirm them.
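
The refutation logic of these steps can be sketched as follows; the alternatives and the numeric encoding of "how likely would I see this information" are invented simplifications.

```python
# Invented judgments of how likely it is that we would see the new report if
# each alternative explanation were true (scale 0.0 to 1.0).
new_item = "Report: adversary cancels a long-scheduled military exercise"
likelihood_given = {
    "preparing a surprise attack": 0.7,
    "signaling de-escalation": 0.8,
    "budget-driven cutback": 0.6,
}

spread = max(likelihood_given.values()) - min(likelihood_given.values())
if spread < 0.2:
    # Roughly equally likely under every alternative: no diagnostic value.
    print(f"'{new_item}' is not diagnostic and should not shift the judgment.")
else:
    ruled_out = [alt for alt, p in likelihood_given.items() if p < 0.2]
    remaining = [alt for alt, p in likelihood_given.items() if p >= 0.2]
    print("Alternatives that might be ruled out:", ruled_out or "none")
    print("Alternatives needing further attention:", remaining)
```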

7.3 ANALYSIS OF COMPETING HYPOTHESES

Analysis of Competing Hypotheses (ACH) is a technique that assists analysts in making judgments on issues that require careful weighing of alternative explanations or estimates. ACH involves identifying a set of mutually exclusive alternative explanations or outcomes (presented as hypotheses), assessing the consistency or inconsistency of each item of evidence with each hypothesis, and selecting the hypothesis that best fits the evidence. The idea behind this technique is to refute rather than to confirm each of the hypotheses. The most likely hypothesis is the one with the least evidence against it, not the one with the most evidence for it.

When to Use It

ACH is appropriate for almost any analysis where there are alternative explanations for what has happened, is happening, or is likely to happen. Use it when the judgment or decision is so important that you cannot afford to be wrong. Use it when your gut feelings are not good enough, and when you need a systematic approach to prevent being surprised by an unforeseen outcome. Use it on controversial issues when it is desirable to identify precise areas of disagreement and to leave an audit trail to show what evidence was considered and how different analysts arrived at their judgments.

ACH also is particularly helpful when an analyst must deal with the potential for denial and deception, as it was initially developed for that purpose.

Value Added

There are a number of different ways by which ACH helps analysts produce a better analytic product. These include the following:

* It prompts analysts to start by developing a full set of alternative hypotheses. This process reduces the risk of what is called “satisficing”—going with the first answer that comes to mind that seems to meet the need. It ensures that all reasonable alternatives are considered before the analyst gets locked into a preferred conclusion.

* It requires analysts to try to refute hypotheses rather than support a single hypothesis. The technique helps analysts overcome the tendency to search for or interpret new information in a way that confirms their preconceptions and avoids information and interpretations that contradict prior beliefs. A word of caution, however. ACH works this way only when the analyst approaches an issue with a relatively open mind. An analyst who is already committed to a belief in what the right answer is will often find a way to interpret the evidence as consistent with that belief. In other words, as an antidote to confirmation bias, ACH is similar to a flu shot. Taking the flu shot will usually keep you from getting the flu, but it won’t make you well if you already have the flu.

* It helps analysts to manage and sort evidence in analytically useful ways. It helps maintain a record of relevant evidence and tracks how that evidence relates to each hypothesis. It also enables analysts to sort data by type, date, and diagnosticity of the evidence.

* It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached than would otherwise be possible.

* It provides a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.

* It leaves a clear audit trail as to how the analysis was done.
As a tool for interoffice or interagency collaboration, ACH ensures that all analysts are working from the same database of evidence, arguments, and assumptions and ensures that each member of the team has had an opportunity to express his or her view on how that information relates to the likelihood of each hypothesis. Users of ACH report that:

* The technique helps them gain a better understanding of the differences of opinion with other analysts or between analytic offices.

* Review of the ACH matrix provides a systematic basis for identification and discussion of differences between two or more analysts.

* Reference to the matrix helps depersonalize the argumentation when there are differences of opinion.

The Method

Simultaneous evaluation of multiple, competing hypotheses is difficult to do without some type of analytic aid. To retain three or five or seven hypotheses in working memory and note how each item of information fits into each hypothesis is beyond the capabilities of most people. It takes far greater mental agility than the common practice of seeking evidence to support a single hypothesis that is already believed to be the most likely answer. ACH can be accomplished, however, with the help of the following eight-step process:

* First, identify the hypotheses to be considered. Hypotheses should be mutually exclusive; that is, if one hypothesis is true, all others must be false. The list of hypotheses should include all reasonable possibilities. Include a deception hypothesis if that is appropriate. For each hypothesis, develop a brief scenario or “story” that explains how it might be true.

* Make a list of significant “evidence,” which for ACH means everything that is relevant to evaluating the hypotheses—including evidence, arguments, assumptions, and the absence of things one would expect to see if a hypothesis were true. It is important to include assumptions as well as factual evidence, because the matrix is intended to be an accurate reflection of the analyst’s thinking about the topic. If the analyst’s thinking is driven by assumptions rather than hard facts, this needs to become apparent so that the assumptions can be challenged. A classic example of absence of evidence is the Sherlock Holmes story in which the dog did not bark in the night. The failure of the dog to bark was persuasive evidence that the guilty party was not an outsider but an insider who was known to the dog.

* Analyze the diagnosticity of the evidence, arguments, and assumptions to identify which inputs are most influential in judging the relative likelihood of the hypotheses. Assess each input by working across the matrix. For each hypothesis, ask, “Is this input consistent with the hypothesis, inconsistent with the hypothesis, or is it not relevant?” If it is consistent, place a “C” in the box; if it is inconsistent, place an “I”; if it is not relevant to that hypothesis, leave the box blank. If a specific item of evidence, argument, or assumption is particularly compelling, place a “CC” in the box; if it strongly undercuts the hypothesis, place an “II.” When you are asking if an input is consistent or inconsistent with a specific hypothesis, a common response is, “It all depends on….” That means the rating for the hypothesis will be based on an assumption—whatever assumption the rating “depends on.” You should write down all such assumptions. After completing the matrix, look for any pattern in those assumptions—that is, the same assumption being made when ranking multiple items of evidence. After sorting the evidence for diagnosticity, note how many of the highly diagnostic inconsistency ratings are based on assumptions. Consider how much confidence you should have in those assumptions and then adjust the confidence in the ACH Inconsistency Scores accordingly. See Figure 7.3a for an example.

* Refine the matrix by reconsidering the hypotheses. Does it make sense to combine two hypotheses into one or to add a new hypothesis that was not considered at the start? If a new hypothesis is added, go back and evaluate all the evidence for this hypothesis. Additional evidence can be added at any time.

* Draw tentative conclusions about the relative likelihood of each hypothesis, basing your conclusions on an analysis of the diagnosticity of each item of evidence. The software calculates an inconsistency score based on the number of “I” or “II” ratings or a weighted inconsistency score that also includes consideration of the weight assigned to each item of evidence. The hypothesis with the lowest inconsistency score is tentatively the most likely hypothesis. The one with the most inconsistencies is the least likely.

* Analyze the sensitivity of your tentative conclusion to a change in the interpretation of a few critical items of evidence. Do this by using the ACH software to sort the evidence by diagnosticity. This identifies the most diagnostic evidence that is driving your conclusion. See Figure 7.3b. Consider the consequences for your analysis if one or more of these critical items of evidence were wrong or deceptive or subject to a different interpretation. If a different interpretation would be sufficient to change your conclusion, go back and do everything that is reasonably possible to double check the accuracy of your interpretation.

* Report the conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one. State which items of evidence were the most diagnostic and how compelling a case they make in distinguishing the relative likelihood of the hypotheses.

* Identify indicators or milestones for future observation. Generate two lists: the first focusing on future events or what might be developed through additional research that would help prove the validity of your analytic judgment, and the second, a list of indicators that would suggest that your judgment is less likely to be correct. Monitor both lists on a regular basis, remaining alert to whether new information strengthens or weakens your case.
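
To illustrate the inconsistency scoring described above, here is a minimal sketch. It assumes a simple convention in which an "I" counts once, an "II" counts twice, and an evidence weight multiplies the count for the weighted score; the hypotheses, evidence, weights, and ratings are invented, and the actual ACH software may calculate its scores differently.

```python
# ACH-style matrix: for each item of evidence, an assumed weight and its rating
# against each hypothesis ("C", "CC", "I", "II", or "" for not relevant).
hypotheses = ["H1: coup attempt", "H2: routine exercise", "H3: deception"]
evidence = {
    "Armor units left garrison at night": {"weight": 2, "ratings": ["C", "I", "C"]},
    "State TV announced an annual exercise": {"weight": 1, "ratings": ["I", "C", "C"]},
    "Key commanders' families left the capital": {"weight": 2, "ratings": ["CC", "II", "I"]},
}

INCONSISTENCY = {"I": 1, "II": 2}  # assumed scoring convention

def scores(weighted: bool = False) -> dict[str, int]:
    """Sum the inconsistency penalties for each hypothesis across all evidence."""
    totals = dict.fromkeys(hypotheses, 0)
    for item in evidence.values():
        for hyp, rating in zip(hypotheses, item["ratings"]):
            penalty = INCONSISTENCY.get(rating, 0)
            totals[hyp] += penalty * (item["weight"] if weighted else 1)
    return totals

# The hypothesis with the lowest (weighted) inconsistency score is tentatively
# the most likely; the one with the most inconsistencies is the least likely.
for hyp, score in sorted(scores(weighted=True).items(), key=lambda kv: kv[1]):
    print(score, hyp)
```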

Potential Pitfalls

The inconsistency or weighted inconsistency scores generated by the ACH software for each hypothesis are not the product of a magic formula that tells you which hypothesis to believe in! The ACH software takes you through a systematic analytic process, and the computer makes the calculation, but the judgment that emerges is only as accurate as your selection and evaluation of the evidence to be considered.

Because it is more difficult to refute hypotheses than to find information that confirms a favored hypothesis, the generation and testing of alternative hypotheses will often increase rather than reduce the analyst’s level of uncertainty. Such uncertainty is frustrating, but it is usually an accurate reflection of the true situation. The ACH procedure has the offsetting advantage of focusing your attention on the few items of critical evidence that cause the uncertainty or which, if they were available, would alleviate it.

Assumptions or logical deductions omitted: If the scores in the matrix do not support what you believe is the most likely hypothesis, the matrix may be incomplete. Your thinking may be influenced by assumptions or logical deductions that have not been included in the list of evidence/arguments. If so, these should be included so that the matrix fully reflects everything that influences your judgment on this issue. It is important for all analysts to recognize the role that unstated or unquestioned (and sometimes unrecognized) assumptions play in their analysis. In political or military analysis, for example, conclusions may be driven by assumptions about another country’s capabilities or intentions.

Insufficient attention to less likely hypotheses: If you think the scoring gives undue credibility to one or more of the less likely hypotheses, it may be because you have not assembled the evidence needed to refute them. You may have devoted insufficient attention to obtaining such evidence, or the evidence may simply not be there.

Definitive evidence: There are occasions when intelligence collectors obtain information from a trusted and well-placed inside source. The ACH analysis can assign a “High” weight for Credibility, but this is probably not enough to reflect the conclusiveness of such evidence and the impact it should have on an analyst’s thinking about the hypotheses. In other words, in some circumstances one or two highly authoritative reports from a trusted source in a position to know may support one hypothesis so strongly that they refute all other hypotheses regardless of what other less reliable or less definitive evidence may show.

Unbalanced set of evidence: Evidence and arguments must be representative of the problem as a whole. If there is considerable evidence on a related but peripheral issue and comparatively few items of evidence on the core issue, the inconsistency or weighted inconsistency scores may be misleading.

Diminishing returns: As evidence accumulates, each new item of inconsistent evidence or argument has less impact on the inconsistency scores than does the earlier evidence. When you are evaluating change over time, it is desirable to delete the older evidence periodically or to partition the evidence and analyze the older and newer evidence separately.

Origins of This Technique

Richards Heuer originally developed the ACH technique as a method for dealing with a particularly difficult type of analytic problem at the CIA in the 1980s. It was first described publicly in his book The Psychology of Intelligence Analysis.

7.4 ARGUMENT MAPPING

Argument Mapping is a technique that can be used to test a single hypothesis through logical reasoning. The process starts with a single hypothesis or tentative analytic judgment and then uses a box-and-arrow diagram to lay out visually the argumentation and evidence both for and against the hypothesis or analytic judgment.

When to Use It

When making an intuitive judgment, use Argument Mapping to test your own reasoning. Creating a visual map of your reasoning and the evidence that supports this reasoning helps you better understand the strengths, weaknesses, and gaps in your argument.

Argument Mapping and Analysis of Competing Hypotheses (ACH) are complementary techniques that work well either separately or together. Argument Mapping is a detailed presentation of the argument for a single hypothesis, while ACH is a more general analysis of multiple hypotheses. The ideal is to use both.

Value Added

An Argument Map makes it easier for both analysts and recipients of the analysis to evaluate the soundness of any conclusion. It helps clarify and organize one’s thinking by showing the logical relationships between the various thoughts, both pro and con. An Argument Map also helps the analyst recognize assumptions and identify gaps in the available knowledge.

The Method

An Argument Map starts with a hypothesis—a single-sentence statement, judgment, or claim about which the analyst can, in subsequent statements, present general arguments and detailed evidence, both pro and con. Boxes with arguments are arrayed hierarchically below this statement, and these boxes are connected with arrows. The arrows signify that a statement in one box is a reason to believe, or not to believe, the statement in the box to which the arrow is pointing. Different types of boxes serve different functions in the reasoning process, and boxes use some combination of color-coding, icons, shapes, and labels so that one can quickly distinguish arguments supporting a hypothesis from arguments opposing it.
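
A tiny sketch of the box-and-arrow structure follows: each node is a claim, a reason to believe it, or a reason to doubt it, and the tree can be printed as an indented outline. The claim, reasons, and rebuttal are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    text: str
    role: str                 # "claim", "pro" (reason to believe), or "con" (reason to doubt)
    children: list["Node"] = field(default_factory=list)

def show(node: Node, depth: int = 0) -> None:
    """Print the argument map as an indented outline."""
    marker = {"claim": "CLAIM", "pro": "+", "con": "-"}[node.role]
    print("  " * depth + f"[{marker}] {node.text}")
    for child in node.children:
        show(child, depth + 1)

# Invented example: a single hypothesis with arguments and evidence for and against.
argument_map = Node("Group X carried out the attack", "claim", [
    Node("Claim of responsibility posted on the group's channel", "pro", [
        Node("The channel has previously posted false claims", "con"),
    ]),
    Node("Attack method matches the group's known tradecraft", "pro"),
    Node("Group's leadership was under heavy pressure elsewhere that week", "con"),
])
show(argument_map)
```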

7.5 DECEPTION DETECTION

Deception is an action intended by an adversary to influence the perceptions, decisions, or actions of another to the advantage of the deceiver. Deception Detection is a set of checklists that analysts can use to help them determine when to look for deception, discover whether deception actually is present, and figure out what to do to avoid being deceived. “The accurate perception of deception in counterintelligence analysis is extraordinarily difficult. If deception is done well, the analyst should not expect to see any evidence of it. If, on the other hand, it is expected, the analyst often will find evidence of deception even when it is not there.”4

When to Use It

Analysts should be concerned about the possibility of deception when:

  • The potential deceiver has a history of conducting deception.
  • Key information is received at a critical time, that is, when either the recipient or the potential deceiver has a great deal to gain or to lose.
  • Information is received from a source whose bona fides are questionable.
  • Analysis hinges on a single critical piece of information or reporting.
  • Accepting new information would require the analyst to alter a key assumption or key judgment.
  • Accepting the new information would cause the Intelligence Community, the U.S. government, or the client to expend or divert significant resources.
  • The potential deceiver may have a feedback channel that illuminates whether and how the deception information is being processed and to what effect.

Value Added

Most analysts know they cannot assume that everything that arrives in their inbox is valid, but few know how to factor such concerns effectively into their daily work practices. If an analyst accepts the possibility that some of the information received may be deliberately deceptive, this puts a significant cognitive burden on the analyst. All the evidence is open then to some question, and it becomes difficult to draw any valid inferences from the reporting. This fundamental dilemma can paralyze analysis unless practical tools are available to guide the analyst in determining when it is appropriate to worry about deception, how best to detect deception in the reporting, and what to do in the future to guard against being deceived.

The Method

Analysts should routinely consider the possibility that opponents are attempting to mislead them or to hide important information. The possibility of deception cannot be rejected simply because there is no evidence of it; if it is well done, one should not expect to see evidence of it.

Analysts have also found the following “rules of the road” helpful in dealing with deception.

  • Avoid over-reliance on a single source of information.
  • Seek and heed the opinions of those closest to the reporting.
  • Be suspicious of human sources or sub-sources who have not been met with personally or for whom it is unclear how or from whom they obtained the information.
  • Do not rely exclusively on what someone says (verbal intelligence); always look for material evidence (documents, pictures, an address or phone number that can be confirmed, or some other form of concrete, verifiable information).
  • Look for a pattern where on several occasions a source’s reporting initially appears correct but later turns out to be wrong and the source can offer a seemingly plausible, albeit weak, explanation for the discrepancy.
  • Generate and evaluate a full set of plausible hypotheses—including a deception hypothesis, if appropriate—at the outset of a project.
  • Know the limitations as well as the capabilities of the potential deceiver.

DECEPTION DETECTION CHECKLISTS

 

Motive, Opportunity, and Means (MOM):

Motive: What are the goals and motives of the potential deceiver?
Channels: What means are available to the potential deceiver to feed information to us?
Risks: What consequences would the adversary suffer if such a deception were revealed?
Costs: Would the potential deceiver need to sacrifice sensitive information to establish the credibility of the deception channel?
Feedback: Does the potential deceiver have a feedback mechanism to monitor the impact of the deception operation?

 

Past Opposition Practices (POP):

Does the adversary have a history of engaging in deception?
Does the current circumstance fit the pattern of past deceptions?
If not, are there other historical precedents?
If not, are there changed circumstances that would explain using this form of deception at this time?

 

Manipulability of Sources (MOSES):

Is the source vulnerable to control or manipulation by the potential deceiver?

What is the basis for judging the source to be reliable?
Does the source have direct access or only indirect access to the information? How good is the source’s track record of reporting?

 

Evaluation of Evidence (EVE):

How accurate is the source’s reporting? Has the whole chain of evidence including translations been checked?
Does the critical evidence check out? Remember, the sub-source can be more critical than the source.

Does evidence from one source of reporting (e.g., human intelligence) conflict with that coming from another source (e.g., signals intelligence or open source reporting)?
Do other sources of information provide corroborating evidence?
Is any evidence one would expect to see noteworthy by its absence?
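
The four checklists lend themselves to a simple tracking structure for a specific case. The sketch below merely organizes a few of the questions quoted above so that answers and open items can be recorded; nothing about it is prescribed by the text, and the recorded answer is a placeholder.

```python
# A minimal container for working through the checklist questions on a specific
# report; the questions are abbreviated from the lists above.
checklists = {
    "MOM": [
        "What are the goals and motives of the potential deceiver?",
        "What consequences would the adversary suffer if the deception were revealed?",
    ],
    "POP": [
        "Does the adversary have a history of engaging in deception?",
        "Does the current circumstance fit the pattern of past deceptions?",
    ],
    "MOSES": [
        "Is the source vulnerable to control or manipulation by the potential deceiver?",
        "Does the source have direct or only indirect access to the information?",
    ],
    "EVE": [
        "Does the critical evidence check out?",
        "Is any evidence one would expect to see noteworthy by its absence?",
    ],
}

answers: dict[str, dict[str, str]] = {name: {} for name in checklists}
answers["MOSES"]["Is the source vulnerable to control or manipulation by the potential deceiver?"] = (
    "Unknown; the source has never been met in person."
)

for name, questions in checklists.items():
    open_items = [q for q in questions if q not in answers[name]]
    print(f"{name}: {len(questions) - len(open_items)} answered, {len(open_items)} still open")
```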

Relationship to Other Techniques

Analysts can combine Deception Detection with Analysis of Competing Hypotheses to assess the possibility of deception. The analyst explicitly includes deception as one of the hypotheses to be analyzed, and information identified through the MOM, POP, MOSES, and EVE checklists is then included as evidence in the ACH analysis.

 

8 Assessment of Cause and Effect

Attempts to explain the past and forecast the future are based on an understanding of cause and effect. Such understanding is difficult, because the kinds of variables and relationships studied by the intelligence analyst are, in most cases, not amenable to the kinds of empirical analysis and theory development that are common in academic research. The best the analyst can do is to make an informed judgment, but such judgments depend upon the analyst’s subject matter expertise and reasoning ability and are vulnerable to various cognitive pitfalls and fallacies of reasoning.

 

One of the most common causes of intelligence failures is mirror imaging, the unconscious assumption that other countries and their leaders will act as we would in similar circumstances. Another is the tendency to attribute the behavior of people, organizations, or governments to the nature of the actor and underestimate the influence of situational factors. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. This is known as the “fundamental attribution error.”

There is also a tendency to assume that the results of an opponent’s actions are what the opponent intended, and we are slow to accept the reality of simple mistakes, accidents, unintended consequences, coincidences, or small causes leading to large effects. Analysts often assume that there is a single cause and stop their search for an explanation when the first seemingly sufficient cause is found. Perceptions of causality are partly determined by where one’s attention is directed; as a result, information that is readily available, salient, or vivid is more likely to be perceived as causal than information that is not. Cognitive limitations and common errors in the perception of cause and effect are discussed in greater detail in Richards Heuer’s Psychology of Intelligence Analysis.

 

The Psychology of Intelligence Analysis describes three principal strategies that intelligence analysts use to make judgments to explain the cause of current events or forecast what might happen in the future:

* Situational logic: Making expert judgments based on the known facts and an understanding of the underlying forces at work at that particular time and place. When an analyst is working with incomplete, ambiguous, and possibly deceptive information, these expert judgments usually depend upon assumptions about capabilities, intent, or the normal workings of things in the country of concern. Key Assumptions Check, which is one of the most commonly used structured techniques, is described in this chapter.

* Comparison with historical situations: Combining an understanding of the facts of a specific situation with knowledge of what happened in similar situations in the past, either in one’s personal experience or historical events. This strategy involves the use of analogies. The Structured Analogies technique described in this chapter adds rigor and increased accuracy to this process.

* Applying theory: Basing judgments on the systematic study of many examples of the same phenomenon. Theories or models often based on empirical academic research are used to explain how and when certain types of events normally happen. Many academic models are too generalized to be applicable to the unique characteristics of most intelligence problems.

Overview of Techniques

Key Assumptions Check is one of the most important and frequently used techniques. Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, that influence how the evidence is interpreted.

Structured Analogies applies analytic rigor to reasoning by analogy. This technique requires that the analyst systematically compares the issue of concern with multiple potential analogies before selecting the one for which the circumstances are most similar to the issue of concern. It seems natural to use analogies when making decisions or forecasts as, by definition, they contain information about what has happened in similar situations in the past. People often recognize patterns and then consciously take actions that were successful in a previous experience or avoid actions that previously were unsuccessful. However, analysts need to avoid the strong tendency to fasten onto the first analogy that comes to mind and supports their prior view about an issue.

Role Playing, as described here, starts with the current situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them. Intelligence analysts frequently endeavor to forecast the behavior of a foreign leader, group, organization, or country. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is of limited value without significant cultural understanding of the country and people involved.

Outside-In Thinking broadens an analyst’s thinking about the forces that can influence a particular issue of concern. This technique requires the analyst to reach beyond his or her specialty area to consider broader social, organizational, economic, environmental, technological, political, and global forces or trends that can affect the topic being analyzed.

Policy Outcomes Forecasting Model is a theory-based procedure for estimating the potential for political change. Formal models play a limited role in political/strategic analysis, since analysts generally are concerned with what they perceive to be unique events, rather than with any need to search for general patterns in events. Conceptual models that tell an analyst how to think about a problem and help the analyst through that thought process can be useful for frequently recurring issues, such as forecasting policy outcomes or analysis of political instability. Models or simulations that use a mathematical algorithm to calculate a conclusion are outside the domain of structured analytic techniques that are the topic of this book.

Prediction Markets are speculative markets created for the purpose of making predictions about future events. Just as betting on a horse race sets the odds on which horse will win, betting that some future occurrence will or will not happen sets the estimated probability of that future occurrence. Although the use of this technique has been successful in the private sector, it may not be a workable method for the Intelligence Community.

8.1 KEY ASSUMPTIONS CHECK

Analytic judgment is always based on a combination of evidence and assumptions, or preconceptions, which influence how the evidence is interpreted.2 The Key Assumptions Check is a systematic effort to make explicit and question the assumptions (the mental model) that guide an analyst’s interpretation of evidence and reasoning about any particular problem. Such assumptions are usually necessary and unavoidable as a means of filling gaps in the incomplete, ambiguous, and sometimes deceptive information with which the analyst must work. They are driven by the analyst’s education, training, and experience, plus the organizational context in which the analyst works.

An organization really begins to learn when its most cherished assumptions are challenged by counterassumptions. Assumptions underpinning existing policies and procedures should therefore be unearthed, and alternative policies and procedures put forward based upon counterassumptions.

—Ian I. Mitroff and Richard O. Mason,

Creating a Dialectical Social Science: Concepts, Methods, and Models

 

When to Use It

Any explanation of current events or estimate of future developments requires the interpretation of evidence. If the available evidence is incomplete or ambiguous, this interpretation is influenced by assumptions about how things normally work in the country of interest. These assumptions should be made explicit early in the analytic process.

If a Key Assumptions Check is not done at the outset of a project, it can still prove extremely valuable if done during the coordination process or before conclusions are presented or delivered.

Value Added

Preparing a written list of one’s working assumptions at the beginning of any project helps the analyst:

  • Identify the specific assumptions that underpin the basic analytic line.
  • Achieve a better understanding of the fundamental dynamics at play.
  • Gain a better perspective and stimulate new thinking about the issue.
  • Discover hidden relationships and links between key factors.
  • Identify any developments that would cause an assumption to be abandoned.
  • Avoid surprise should new information render old assumptions invalid.

A sound understanding of the assumptions underlying an analytic judgment sets the limits for the confidence the analyst ought to have in making a judgment.

The Method

The process of conducting a Key Assumptions Check is relatively straightforward in concept but often challenging to put into practice. One challenge is that participating analysts must be open to the possibility that they could be wrong. It helps to involve in this process several well-regarded analysts who are generally familiar with the topic but have no prior commitment to any set of assumptions about the issue at hand. Keep in mind that many “key assumptions” turn out to be “key uncertainties.”

Here are the steps in conducting a Key Assumptions Check:

* Gather a small group of individuals who are working the issue along with a few “outsiders.” The primary analytic unit already is working from an established mental model, so the “outsiders” are needed to bring other perspectives.

* Ideally, participants should be asked to bring their list of assumptions when they come to the meeting. If this was not done, start the meeting with a silent brainstorming session. Ask each participant to write down several assumptions on 3 × 5 cards.

*  Collect the cards and list the assumptions on a whiteboard for all to see.

*  Elicit additional assumptions. Work from the prevailing analytic line back to the key arguments that support it. Use various devices to help prod participants’ thinking:

  • Ask the standard journalist questions. Who: Are we assuming that we know who all the key players are? What: Are we assuming that we know the goals of the key players? When: Are we assuming that conditions have not changed since our last report or that they will not change in the foreseeable future? Where: Are we assuming that we know where the real action is going to be? Why: Are we assuming that we understand the motives of the key players? How: Are we assuming that we know how they are going to do it?
  • After identifying a full set of assumptions, go back and critically examine each assumption. Ask:
    • Why am I confident that this assumption is correct?
    • In what circumstances might this assumption be untrue?
    • Could it have been true in the past but no longer be true today?
    • How much confidence do I have that this assumption is valid?
    • If it turns out to be invalid, how much impact would this have on the analysis?
  • Place each assumption in one of three categories:
    • Basically solid.
    • Correct with some caveats.
    • Unsupported or questionable—the “key uncertainties.”

Refine the list, deleting those that do not hold up to scrutiny and adding new assumptions that emerge from the discussion. Above all, emphasize those assumptions that would, if wrong, lead to changing the analytic conclusions.
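
One way to keep the refined list actionable is to record each assumption with its category and the impact on the analysis if it proves wrong, as in this illustrative sketch; the categories follow the three listed above, and everything else is invented.

```python
from dataclasses import dataclass

CATEGORIES = ("basically solid", "correct with some caveats", "unsupported or questionable")

@dataclass
class Assumption:
    statement: str
    category: str          # one of CATEGORIES
    impact_if_wrong: str   # what would change in the analytic conclusions

register = [
    Assumption("The regime retains the loyalty of senior commanders",
               "correct with some caveats",
               "A coup scenario would move from remote to plausible."),
    Assumption("Sanctions will remain in place through next year",
               "unsupported or questionable",
               "Economic-collapse judgments would need to be revisited."),
]

# Emphasize the key uncertainties: assumptions that would, if wrong,
# change the analytic conclusions.
for a in register:
    if a.category == "unsupported or questionable":
        print("KEY UNCERTAINTY:", a.statement, "->", a.impact_if_wrong)
```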

There is a particularly noteworthy interaction between Key Assumptions Check and Analysis of Competing Hypotheses (ACH). Key assumptions need to be included as “evidence” in an ACH matrix to ensure that the matrix is an accurate reflection of the analyst’s thinking. And analysts frequently identify assumptions during the course of filling out an ACH matrix. This happens when an analyst assesses the consistency or inconsistency of an item of evidence with a hypothesis and concludes that this judgment is dependent upon something else—usually an assumption. Users of ACH should write down and keep track of the assumptions they make when evaluating evidence against the hypotheses.

8.2 STRUCTURED ANALOGIES

The Structured Analogies technique applies increased rigor to analogical reasoning by requiring that the issue of concern be compared systematically with multiple analogies rather than with a single analogy.

When to Use It

When one is making any analogy, it is important to think about more than just the similarities. It is also necessary to consider those conditions, qualities, or circumstances that are dissimilar between the two phenomena. This should be standard practice in all reasoning by analogy and especially in those cases when one cannot afford to be wrong.

We recommend that analysts considering the use of this technique read Richard E. Neustadt and Ernest R. May, “Unreasoning from Analogies,” chapter 4, in Thinking in Time: The Uses of History for Decision Makers (New York: Free Press, 1986). Also recommended is Giovanni Gavetti and Jan W. Rivkin, “How Strategists Really Think: Tapping the Power of Analogy,” Harvard Business Review (April 2005).

Value Added

Reasoning by analogy helps achieve understanding by reducing the unfamiliar to the familiar. In the absence of data required for a full understanding of the current situation, reasoning by analogy may be the only alternative.

The benefit of the Structured Analogies technique is that it avoids the tendency to fasten quickly on a single analogy and then focus only on evidence that supports the similarity of that analogy. Analysts should take into account the time required for this structured approach and may choose to use it only when the cost of being wrong is high.

The following is a step-by-step description of this technique.

*  Describe the issue and the judgment or decision that needs to be made.

* Identify a group of experts who are familiar with the problem.

* Ask the group of experts to identify as many analogies as possible without focusing too strongly on how similar they are to the current situation. Various universities and international organizations maintain databases to facilitate this type of research. For example, the Massachusetts Institute of Technology (MIT) maintains its Cascon System for Analyzing International Conflict, a database of 85 post–World War II conflicts that are categorized and coded to facilitate their comparison with current conflicts of interest.

* Review the list of potential analogies and agree on which ones should be examined further.

* Develop a tentative list of categories for comparing the analogies to determine which analogy is closest to the issue in question. For example, the MIT conflict database codes each case according to the following broad categories as well as finer subcategories: previous or general relations between sides, great power and allied involvement, external relations generally, military-strategic, international organization (UN, legal, public opinion), ethnic (refugees, minorities), economic/resources, internal politics of the sides, communication and information, actions in disputed area.

* Write up an account of each selected analogy, with equal focus on those aspects of the analogy that are similar and those that are different. The task of writing accounts of all the analogies should be divided up among the experts. Each account can be posted on a wiki where all members of the group can read and comment on it.

* Review the tentative list of categories for comparing the analogous situations to make sure they are still appropriate. Then ask each expert to rate the similarity of each analogy to the issue of concern. The experts should do the rating in private using a scale from 0 to 10, where 0 = not at all similar, 5 = somewhat similar, and 10 = very similar.

* After combining the ratings to calculate an average rating for each analogy, discuss the results and make a forecast for the current issue of concern. This will usually be the same as the outcome of the most similar analogy. Alternatively, identify several possible outcomes, or scenarios, based on the diverse outcomes of analogous situations. Then use the analogous cases to identify drivers or policy actions that might influence the outcome of the current situation. (A minimal sketch of this rating-and-averaging step follows this list.)
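As a rough sketch of the rating-and-averaging step, assume each expert has privately scored each analogy on the 0-to-10 similarity scale described above; the analogies and scores below are invented placeholders, not drawn from any real case.

```python
# Illustrative sketch: combine private 0-10 similarity ratings from several
# experts and rank the candidate analogies by average similarity.
# The analogies and scores are hypothetical placeholders.
from statistics import mean

ratings = {
    # analogy -> one rating per expert (0 = not at all similar, 10 = very similar)
    "Analogy A (1990s border conflict)": [7, 8, 6, 7],
    "Analogy B (1960s succession crisis)": [4, 5, 5, 3],
    "Analogy C (2000s trade dispute)": [8, 9, 7, 8],
}

averages = {analogy: mean(scores) for analogy, scores in ratings.items()}
for analogy, avg in sorted(averages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{analogy}: average similarity {avg:.1f}")

closest = max(averages, key=averages.get)
print("Closest analogy (starting point for the forecast):", closest)
```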

8.3 ROLE PLAYING

In Role Playing, analysts assume the roles of the leaders who are the subject of their analysis and act out their responses to developments. This technique is also known as gaming, but we use the name Role Playing here to distinguish it from the more complex forms of military gaming. This technique is about simple Role Playing, in which the starting scenario is the current situation, perhaps with a real or hypothetical new development that has just happened and to which the players must react.

When to Use It

Role Playing is often used to improve understanding of what might happen when two or more people, organizations, or countries interact, especially in conflict situations or negotiations. It shows how each side might react to statements or actions from the other side. Many years ago Richards Heuer participated in several Role Playing exercises, including one with analysts of the Soviet Union from throughout the Intelligence Community playing the role of Politburo members deciding on the successor to Soviet leader Leonid Brezhnev.

Role Playing has a desirable byproduct that might be part of the rationale for using this technique. It is a useful mechanism for bringing together people who, although they work on a common problem, may have little opportunity to meet and discuss their perspectives on this problem. A role-playing game may lead to the long-term benefits that come with mutual understanding and ongoing collaboration. To maximize this benefit, the organizer of the game should allow for participants to have informal time together.

Value Added

Role Playing is a good way to see a problem from another person’s perspective, to gain insight into how others think, or to gain insight into how other people might react to U.S. actions.

Role Playing is particularly useful for understanding the potential outcomes of a conflict situation. Parties to a conflict often act and react many times, and they can change as a result of their interactions. There is a body of research showing that experts using unaided judgment perform little better than chance in predicting the outcome of such conflict. Performance is improved significantly by the use of “simulated interaction” (Role Playing) to act out the conflicts.

Role Playing does not necessarily give a “right” answer, but it typically enables the players to see some things in a new light. Players become more conscious that “where you stand depends on where you sit.”

Potential Pitfalls

One limitation of Role Playing is the difficulty of generalizing from the game to the real world. Just because something happens in a role-playing game does not necessarily mean the future will turn out that way. This observation seems obvious, but it can actually be a problem. Because of the immediacy of the experience and the personal impression made by the simulation, the outcome may have a stronger impact on the participants’ thinking than is warranted by the known facts of the case. As we shall discuss, this response needs to be addressed in the after-action review.

When the technique is used for intelligence analysis, the goal is not an explicit prediction but better understanding of the situation and the possible outcomes. The method does not end with the conclusion of the Role Playing. There must be an after-action review of the key turning points and how the outcome might have been different if different choices had been made at key points in the game.

The Method

Most of the gaming done in the Department of Defense and in the academic world is rather elaborate, so it requires substantial preparatory work.

Whenever possible, a Role Playing game should be conducted off site with cell phones turned off. Being away from the office precludes interruptions and makes it easier for participants to imagine themselves in a different environment with a different set of obligations, interests, ambitions, fears, and historical memories.

The analyst who plans and organizes the game leads a control team. This team monitors time to keep the game on track, serves as the communication channel to pass messages between teams, leads the after-action review, and helps write the after-action report to summarize what happened and lessons learned. The control team also plays any role that becomes necessary but was not foreseen, for example, a United Nations mediator. If necessary to keep the game on track or lead it in a desired direction, the control team may introduce new events, such as a terrorist attack that inflames emotions or a new policy statement on the issue by the U.S. president.

After the game ends or on the following day, it is necessary to conduct an after-action review. If there is agreement that all participants played their roles well, there may be a natural tendency to assume that the outcome of the game is a reasonable forecast of what will eventually happen in real life.

8.4 RED HAT ANALYSIS

Intelligence analysts frequently endeavor to forecast the actions of an adversary or a competitor. In doing so, they need to avoid the common error of mirror imaging, the natural tendency to assume that others think and perceive the world in the same way we do. Red Hat Analysis is a useful technique for trying to perceive threats and opportunities as others see them, but this technique alone is of limited value without significant cultural understanding of the other country and people involved.

 

To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often little more than partially informed speculation. Too frequently, behavior of foreign leaders appears ‘irrational’ or ‘not in their own best interest.’ Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

—Richards J. Heuer Jr., Psychology of Intelligence Analysis (1999).

When to Use It

The chances of a Red Hat Analysis being accurate are better when one is trying to foresee the behavior of a specific person who has the authority to make decisions. Authoritarian leaders as well as small, cohesive groups, such as terrorist cells, are obvious candidates for this type of analysis. The chances of making an accurate forecast about an adversary’s or a competitor’s decision are significantly lower when the decision is constrained by a legislature or influenced by conflicting interest groups.

Value Added

There is a great deal of truth to the maxim that “where you stand depends on where you sit.” Red Hat Analysis is a reframing technique that requires the analyst to adopt—and make decisions consonant with—the culture of a foreign leader, cohesive group, criminal, or competitor. This conscious effort to imagine the situation as the target perceives it helps the analyst gain a different and usually more accurate perspective on a problem or issue. Reframing the problem typically changes the analyst’s perspective from that of an analyst observing and forecasting an adversary’s behavior to that of a leader who must make a difficult decision within that operational culture.

This reframing process often introduces new and different stimuli that might not have been factored into a traditional analysis. For example, in a Red Hat exercise, participants might ask themselves these questions: “What are my supporters expecting from me?” “Do I really need to make this decision now?” “What are the consequences of making a wrong decision?” “How will the United States respond?”

Potential Pitfalls

Forecasting human decisions or the outcome of a complex organizational process is difficult in the best of circumstances.

It is even more difficult when dealing with a foreign culture and significant gaps in the available information. Mirror imaging is hard to avoid because, in the absence of a thorough understanding of the foreign situation and culture, your own perceptions appear to be the only reasonable way to look at the problem.

A common error in our perceptions of the behavior of other people, organizations, or governments is likely to be even more common when assessing the behavior of foreign leaders or groups: the tendency to attribute behavior to the nature of the actor and to underestimate the influence of situational factors. This error is especially easy to make when we assume that the actor has malevolent intentions but our understanding of the pressures on that actor is limited. Conversely, people tend to see their own behavior as conditioned almost entirely by the situation in which they find themselves. We seldom see ourselves as bad people, but we often see malevolent intent in others. Cognitive psychologists call this the fundamental attribution error.

The Method

* Gather a group of experts with in-depth knowledge of the target, operating environment, and senior decision maker’s personality, motives, and style of thinking. If at all possible, try to include people who are well grounded in the adversary’s culture, who speak the same language, share the same ethnic background, or have lived extensively in the region.

* Present the experts with a situation or a stimulus and ask them to put themselves in the adversary’s or competitor’s shoes and simulate how they would respond.

* Emphasize the need to avoid mirror imaging. The question is not “What would you do if you were in their shoes?” but “How would this person or group in that particular culture and circumstance most likely think, behave, and respond to the stimulus?”

* If trying to foresee the actions of a group or an organization, consider using the Role Playing technique.

* In presenting the results, describe the alternatives that were considered and the rationale for selecting the path the person or group is most likely to take. Consider other less conventional means of presenting the results of your analysis, such as the following:

  • Describing a hypothetical conversation in which the leader and other players discuss the issue in the first person.
  • Drafting a document (set of instructions, military orders, policy paper, or directives) that the adversary or competitor would likely generate.

Relationship to Other Techniques

Red Hat Analysis differs from a Red Team Analysis in that it can be done or organized by any analyst who needs to understand or forecast foreign behavior and who has, or can gain access to, the required cultural expertise.

8.5 OUTSIDE-IN THINKING

Outside-In Thinking identifies the broad range of global, political, environmental, technological, economic, or social forces and trends that are outside the analyst’s area of expertise but that may profoundly affect the issue of concern. Many analysts tend to think from the inside out, focusing on the factors in their specific area of responsibility that they know best.

When to Use It

This technique is most useful in the early stages of an analytic process when analysts need to identify all the critical factors that might explain an event or could influence how a particular situation will develop. It should be part of the standard process for any project that analyzes potential future outcomes, for this approach covers the broader environmental context from which surprises and unintended consequences often come.

Outside-In Thinking also is useful if a large database is being assembled and needs to be checked to ensure that no important field in the database architecture has been overlooked. In most cases, important categories of information (or database fields) are easily identifiable early on in a research effort, but invariably one or two additional fields emerge after an analyst or group of analysts is well into a project, forcing them to go back and review all previous files, recoding for that new entry. Typically, the overlooked fields are in the broader environment over which the analysts have little control. By applying Outside-In Thinking, analysts can better visualize the entire set of data fields early on in the research effort.

Value Added

Most analysts focus on familiar factors within their field of specialty, but we live in a complex, interrelated world where events in our little niche of that world are often affected by forces in the broader environment over which we have no control. The goal of Outside-In Thinking is to help analysts get an entire picture, not just the part of the picture with which they are already familiar.

Outside-In Thinking reduces the risk of missing important variables early in the analytic process. It encourages analysts to rethink a problem or an issue while employing a broader conceptual framework.

The Method

  • Generate a generic description of the problem or phenomenon to be studied.
  • Form a group to brainstorm the key forces and factors that could have an impact on the topic but over which the subject can exert little or no influence, such as globalization, the emergence of new technologies, historical precedent, and the growth of the Internet.
  • Employ the mnemonic STEEP +2 to trigger new ideas (Social, Technical, Economic, Environmental, Political plus Military and Psychological); a minimal sketch using these categories appears after this list.
  • Move down a level of analysis and list the key factors about which some expertise is available.
  • Assess specifically how each of these forces and factors could have an impact on the problem.
  • Ascertain whether these forces and factors actually do have an impact on the issue at hand, basing your conclusion on the available evidence.
  • Generate new intelligence collection tasking to fill in information gaps.
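As a rough sketch of how the STEEP +2 checklist and the impact assessment might be captured, the snippet below tags each brainstormed external force with a category from the mnemonic and a coarse impact judgment; the forces and impact labels are hypothetical examples, not part of the technique as described.

```python
# Illustrative sketch: organize brainstormed external forces by STEEP + 2
# category and note a coarse impact judgment for each. The forces listed
# here are hypothetical examples.

STEEP_PLUS_2 = ["Social", "Technical", "Economic", "Environmental",
                "Political", "Military", "Psychological"]

# (category, force, assessed impact on the issue)
forces = [
    ("Technical", "Spread of commercial satellite imagery", "high"),
    ("Economic", "Sustained fall in commodity prices", "high"),
    ("Social", "Urbanization and youth unemployment", "unknown"),
    ("Political", "Upcoming regional elections", "low"),
]

for category in STEEP_PLUS_2:
    entries = [(force, impact) for cat, force, impact in forces if cat == category]
    if not entries:
        print(f"{category}: (no forces identified yet -- possible gap)")
    for force, impact in entries:
        print(f"{category}: {force} [impact: {impact}]")
```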

Relationship to Other Techniques

Outside-In Thinking is essentially the same as a business analysis technique that goes by different acronyms, such as STEEP, STEEPLED, PEST, or PESTLE. For example, PEST is an acronym for Political, Economic, Social, and Technological, while STEEPLED also includes Legal, Ethical, and Demographic. All require the analysis of external factors that may have either a favorable or unfavorable influence on an organization.

8.6 POLICY OUTCOMES FORECASTING MODEL

The Policy Outcomes Forecasting Model structures the analysis of competing political forces in order to forecast the most likely political outcome and the potential for significant political change. The model was originally designed as a quantitative method using expert-generated data, not as a structured analytic technique. However, like many quantitative models, it can also be used simply as a conceptual model to guide how an expert analyst thinks about a complex issue.

When to Use It

The Policy Outcomes Forecasting Model has been used to analyze the following types of questions:

What policy is Country W likely to adopt toward its neighbor?
Is the U.S. military likely to lose its base in Country X?
How willing is Country Y to compromise in its dispute with Country X?
In what circumstances can the government of Country Z be brought down?

Use this model when you have substantial information available on the relevant actors (individual leaders or organizations), their positions on the issues, the importance of the issues to each actor, and the relative strength of each actor’s ability to support or oppose any specific policy. Judgments about the positions and the strengths and weaknesses of the various political forces can then be used to forecast what policies might be adopted and to assess the potential for political change.

Use of this model is limited to situations in which there is a single issue that will be decided by political bargaining and maneuvering, and in which the potential outcomes can be visualized on a continuous line.

Value Added

Like any model, Policy Outcomes Forecasting provides a systematic framework for generating and organizing information about an issue of concern. Once the basic analysis is done, it can be used to analyze the significance of changes in the position of any of the stakeholders. An analyst may also use the data to answer What If? questions such as the following:

Would a leader strengthen her position if she modified her stand on a contentious issue?
Would the military gain the upper hand if the current civilian leader were to die?
What would be the political consequences if a traditionally apolitical institution—such as the church or the military—became politicized?

An analyst or group of analysts can make an informed judgment about an outcome by explicitly identifying all the stakeholders in the outcome of an issue and then determining how close or far apart they are on the issue, how influential each one is, and how strongly each one feels about it. Assembling all this data in a graphic such as Figure 8.6 helps the analyst manage the complexity, share and discuss the information with other analysts, and present conclusions in an efficient and effective manner.

The Method

Define the problem in terms of a policy or leadership choice issue. The issue must vary along a single dimension so that options can be arrayed from one extreme to another in a way that makes sense within the country in which the decision will be made.

These alternative policies are rated on a scale from 0 to 100, with the position on the scale reflecting the distance or difference between the policies.

In the example used here, the options range between two extremes—full nationalization of energy investment at the left end of the scale and private investment only at the right end. Note that the position of these policies on the horizontal scale captures the full range of the policy debate and reflects the estimated political distance or difference between each of the policies.

The next step is to identify all the actors, no matter how strong or weak, that will try to influence the policy outcome.

Each actor is then represented in the graphic by two values. First, the actor’s position on the horizontal scale shows where the actor stands on the issue; second, the actor’s height above the scale is a measure of the relative amount of clout the actor has and is prepared to use to influence the outcome of the policy decision. To judge the relative height of each actor, identify the strongest actor and arbitrarily assign that actor a strength of 100. Assign proportionately lower values to other actors based on your judgment or gut feeling about how their strength and political clout compare with those of the actor assigned a strength of 100.

This graphic representation of the relevant variables is used as an aid in assessing and communicating to others the current status of the most influential forces on this issue and the potential impact of various changes in this status.

Origins of This Technique

The Policy Outcomes Forecasting Model described here is a simplified, nonquantitative version of a policy forecasting model developed by Bruce Bueno de Mesquita and described in his book The War Trap (New Haven: Yale University Press, 1981). It was further refined by Bueno de Mesquita et al., in Forecasting Political Events: The Future of Hong Kong (New Haven: Yale University Press, 1988).

In the 1980s, CIA analysts used this method with the implementing software to analyze scores of policy and political instability issues in more than thirty countries. Analysts used their subject expertise to assign numeric values to the variables. The simplest version of this methodology uses the positions of each actor, the relative strength of each actor, and the relative importance of the issue to each actor to calculate which actor’s or group’s position would get the most support if each policy position had to compete with every other policy position in a series of “pairwise” contests. In other words, the model finds the policy option around which a coalition will form that can defeat every other possible coalition in every possible contest between any two policy options (the “median voter” model). The model can also test how sensitive the policy forecast is to various changes in the relative strength of the actors or in their positions or in the importance each attaches to the issue.
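The pairwise-contest logic can be sketched in a few lines. The snippet below assumes each actor is reduced to a position on the 0-to-100 policy scale, a relative strength, and a salience weight for the issue, and that in any head-to-head contest an actor backs whichever option lies closer to its own position; it then reports the option that no other option can defeat (the “median voter” outcome). The actors, numbers, and candidate options are invented, and the real model described by Bueno de Mesquita is considerably richer.

```python
# Illustrative sketch of the "median voter" pairwise-contest logic. Each actor
# has a position on the 0-100 policy scale, a relative strength (strongest = 100),
# and a salience weight for the issue; in each head-to-head contest an actor is
# assumed to back whichever option is closer to its own position. All actors,
# weights, and candidate options below are hypothetical.

actors = [
    # (name, position, strength, salience)
    ("Ruling party",      80, 100, 1.0),
    ("State oil company", 20,  70, 1.0),
    ("Opposition bloc",   40,  60, 0.8),
    ("Foreign investors", 90,  40, 0.7),
]

options = [0, 25, 50, 75, 100]   # candidate policy positions along the scale

def contest(a, b):
    """Return (weight behind option a, weight behind option b)."""
    support_a = support_b = 0.0
    for _, pos, strength, salience in actors:
        weight = strength * salience
        if abs(pos - a) < abs(pos - b):
            support_a += weight
        elif abs(pos - b) < abs(pos - a):
            support_b += weight
        # actors equidistant from both options abstain in this sketch
    return support_a, support_b

def unbeaten(option):
    """True if no other option defeats this one in a pairwise contest."""
    for other in options:
        if other == option:
            continue
        sup_opt, sup_other = contest(option, other)
        if sup_opt < sup_other:
            return False
    return True

winners = [opt for opt in options if unbeaten(opt)]
print("Unbeaten policy position(s) -- the forecast outcome:", winners)
```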

A testing program at that time found that traditional analysis and analyses using the policy forces analysis software were both accurate in hitting the target about 90 percent of the time, but the software hit the bull’s-eye twice as often. Also, reports based on the policy forces software gave greater detail on the political dynamics leading to the policy outcome and were less vague in their forecasts than were traditional analyses.

8.7 PREDICTION MARKETS

Prediction Markets are speculative markets created solely for the purpose of allowing participants to make predictions in a particular area. Just as betting on a horse race sets the odds on which horse will win, supply and demand in the prediction market sets the estimated probability of some future occurrence. Two books, The Wisdom of Crowds by James Surowiecki and Infotopia by Cass Sunstein, have popularized the concept of Prediction Markets.
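To make the price-as-probability idea concrete, here is a minimal sketch assuming a binary contract that pays 1 if the event occurs and 0 otherwise, so the trading price is read directly as the crowd’s implied probability; the events and prices are invented, and real markets add fees, spreads, and liquidity effects.

```python
# Illustrative sketch: a binary prediction-market contract pays 1 if the event
# occurs and 0 otherwise, so its trading price is commonly read as the crowd's
# implied probability. The events and prices below are hypothetical.

contracts = {
    "Candidate A wins the election": 0.62,   # trading at 62 cents on the dollar
    "Agreement signed by year-end":  0.18,
}

for event, price in contracts.items():
    # A buyer at this price gains (1 - price) if the event occurs, loses price if not.
    print(f"{event}: implied probability ~{price:.0%}")
```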

We do not support the use of Prediction Markets for intelligence analysis for reasons that are discussed below. We have included Prediction Markets in this book because it is an established analytic technique and it has been suggested for use in the Intelligence Community.

The following arguments have been made against the use of Prediction Markets for intelligence analysis:

* Prediction Markets can be used only in situations that will have an unambiguous outcome, usually within a predictable time period. Such situations are commonplace in business and industry, though much less so in intelligence analysis.

* Prediction Markets do have a strong record of near-term forecasts, but intelligence analysts and their customers are likely to be uncomfortable with their predictions. No matter what the statistical record of accuracy with this technique might be, consumers of intelligence are unlikely to accept any forecast without understanding the rationale for the forecast and the qualifications of those who voted on it.

* If people in the crowd are offering their unsupported opinions, and not informed judgments, the utility of the prediction is questionable. Prediction Markets are more likely to be useful in dealing with commercial preferences or voting behavior and less accurate, for example, in predicting the next terrorist attack in the United States, a forecast that would require special expertise and knowledge.

* Like other financial markets, such as commodities futures markets, Prediction Markets are subject to liquidity problems and speculative attacks mounted in order to manipulate the results. Financially and politically interested parties may seek to manipulate the vote. The fewer the participants, the more vulnerable a market is.

* Ethical objections have been raised to the use of a Prediction Market for national security issues. The Defense Advanced Research Projects Agency (DARPA) proposed a Policy Analysis Market in 2003. It would have worked in a manner similar to the commodities market, and it would have allowed investors to earn profits by betting on the likelihood of such events as regime changes in the Middle East and the likelihood of terrorist attacks. The DARPA plan was attacked on grounds that “it was unethical and in bad taste to accept wagers on the fate of foreign leaders and a terrorist attack. The project was canceled a day after it was announced.” Although attacks on the DARPA plan in the media may have been overdone, there is a legitimate concern about government-sponsored betting on international events.

Relationship to Other Techniques

The Delphi Method is a more appropriate method for intelligence agencies to use to aggregate outside expert opinion; Delphi also has a broader applicability for other types of intelligence analysis.

9 Challenge Analysis

Challenge analysis encompasses a set of analytic techniques that have also been called contrarian analysis, alternative analysis, competitive analysis, red team analysis, and devil’s advocacy. What all of these have in common is the goal of challenging an established mental model or analytic consensus in order to broaden the range of possible explanations or estimates that are seriously considered. The fact that this same activity has been called by so many different names suggests there has been some conceptual diversity about how and why these techniques are being used and what might be accomplished by their use.

There is a broad recognition in the Intelligence Community that failure to question a consensus judgment, or a long-established mental model, has been a consistent feature of most significant intelligence failures. The postmortem analysis of virtually every major U.S. intelligence failure since Pearl Harbor has identified an analytic mental model (mindset) as a key factor contributing to the failure. The situation changed, but the analyst’s mental model did not keep pace with that change or did not recognize all the ramifications of the change.

This record of analytic failures has generated discussion about the “paradox of expertise.” The experts can be the last to recognize the reality and significance of change. For example, few experts on the Soviet Union foresaw its collapse, and the experts on Germany were the last to accept that Germany was going to be reunified. Going all the way back to the Korean War, experts on China were saying that China would not enter the war—until it did.

A mental model formed through education and experience serves an essential function; it is what enables the analyst to provide on a daily basis reasonably good intuitive assessments or estimates about what is happening or likely to happen.

The problem is that a mental model that has previously provided accurate assessments and estimates for many years can be slow to change. New information received incrementally over time is easily assimilated into one’s existing mental model, so the significance of gradual change over time is easily missed. It is human nature to see the future as a continuation of the past.

There is also another logical rationale for consistently challenging conventional wisdom. Former CIA Director Michael Hayden has stated that “our profession deals with subjects that are inherently ambiguous, and often deliberately hidden. Even when we’re at the top of our game, we can offer policymakers insight, we can provide context, and we can give them a clearer picture of the issue at hand, but we cannot claim certainty for our judgments.” The director went on to suggest that getting it right seven times out of ten might be a realistic expectation.

This chapter describes three types of challenge analysis techniques: self-critique, critique of others, and solicitation of critique by others.

Self-critique: Two techniques that help analysts challenge their own thinking are Premortem Analysis and Structured Self-Critique. These techniques can counteract the pressures for conformity or consensus that often suppress the expression of dissenting opinions in an analytic team or group. We adapted Premortem Analysis from business and applied it to intelligence analysis.

Critique of others: Analysts can use What If? Analysis or High Impact/Low Probability Analysis to tactfully question the conventional wisdom by making the best case for an alternative explanation or outcome.

Critique by others: Several techniques are available for seeking out critique by others. Devil’s Advocacy is a well-known example of that. The term “Red Team” is used to describe a group that is assigned to take an adversarial perspective. The Delphi Method is a structured process for eliciting opinions from a panel of outside experts.

Reframing Techniques

Three of the techniques in this chapter work by a process called reframing. A frame is any cognitive structure that guides the perception and interpretation of what one sees. A mental model of how things normally work can be thought of as a frame through which an analyst sees and interprets evidence. An individual or a group of people can change their frame of reference, and thus challenge their own thinking about a problem, simply by changing the questions they ask or changing the perspective from which they ask the questions. Analysts can use this reframing technique when they need to generate new ideas, when they want to see old ideas from a new perspective, or any other time when they sense a need for fresh thinking.

It is fairly easy to open the mind to think in different ways. The trick is to restate the question, task, or problem from a different perspective that activates a different set of synapses in the brain. Each of the three applications of reframing described in this chapter does this in a different way. Premortem Analysis asks analysts to imagine themselves at some future point in time, after having just learned that a previous analysis turned out to be completely wrong. The task then is to figure out how and why it might have gone wrong. What If? Analysis asks the analyst to imagine that some unlikely event has occurred, and then to explain how it could happen and the implications of the event. Structured Self-Critique asks a team of analysts to reverse its role from advocate to critic in order to explore potential weaknesses in the previous analysis. This change in role can empower analysts to express concerns about the consensus view that might previously have been suppressed. These techniques are generally more effective in a small group than with a single analyst. Their effectiveness depends in large measure on how fully and enthusiastically participants in the group embrace the imaginative or alternative role they are playing. Just going through the motions is of limited value.

Overview of Techniques

Premortem Analysis reduces the risk of analytic failure by identifying and analyzing a potential failure before it occurs. Imagine yourself several years in the future. You suddenly learn from an unimpeachable source that your estimate was wrong. Then imagine what could have happened to cause the estimate to be wrong. Looking back from the future to explain something that has happened is much easier than looking into the future to forecast what will happen, and this exercise helps identify problems one has not foreseen.

Structured Self-Critique is a procedure that a small team or group uses to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. Looking at the responses to these questions, the team reassesses its overall confidence in its own judgment.

What If? Analysis is an important technique for alerting decision makers to an event that could happen, or is already happening, even if it may seem unlikely at the time. It is a tactful way of suggesting to decision makers the possibility that they may be wrong. What If? Analysis serves a function similar to that of Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable a decision maker to plan ahead for that contingency. The analyst imagines that an event has occurred and then considers how the event could have unfolded.

High Impact/Low Probability Analysis is used to sensitize analysts and decision makers to the possibility that a low-probability event might actually happen and stimulate them to think about measures that could be taken to deal with the danger or to exploit the opportunity if it does occur. The analyst assumes the event has occurred, and then figures out how it could have happened and what the consequences might be.

Devil’s Advocacy is a technique in which a person who has been designated the Devil’s Advocate, usually by a responsible authority, makes the best possible case against a proposed analytic judgment, plan, or decision.

Red Team Analysis as described here is any project initiated by management to marshal the specialized substantive, cultural, or analytic skills required to challenge conventional wisdom about how an adversary or competitor thinks about an issue.

Delphi Method is a procedure for obtaining ideas, judgments, or forecasts electronically from a geographically dispersed panel of experts. It is a time-tested, extremely flexible procedure that can be used on any topic or issue for which expert judgment can contribute.

9.1 PREMORTEM ANALYSIS

The goal of a Premortem Analysis is to reduce the risk of surprise and the subsequent need for a postmortem investigation of what went wrong. It is an easy-to-use technique that enables a group of analysts who have been working together on any type of future-oriented analysis to challenge effectively the accuracy of their own conclusions.

When to Use It

Premortem Analysis should be used by analysts who can devote a few hours to challenging their own analytic conclusions about the future to see where they might be wrong. It may be used by a single analyst but, like all structured analytic techniques, it is most effective when used in a small group.

In Gary Klein’s training exercises, after the trainees formulated a plan of action, they were asked to imagine that it was several months or years in the future and that their plan had been implemented but had failed. They were then asked to describe how it might have failed; despite their original confidence in the plan, they could easily come up with multiple explanations for failure—reasons that were not identified when the plan was first proposed and developed.

Klein reported his trainees showed a “much higher level of candor” when evaluating their own plans after being exposed to the premortem exercise, as compared with other more passive attempts at getting them to self-critique their own plans.

Value Added

Briefly, there are two creative processes at work here. First, the questions are reframed, an exercise that typically elicits responses that are different from the original ones. Asking questions about the same topic, but from a different perspective, opens new pathways in the brain, as we noted in the introduction to this chapter. Second, the Premortem approach legitimizes dissent. For various reasons, many members of small groups suppress dissenting opinions, leading to premature consensus. In a Premortem Analysis, all analysts are asked to make a positive contribution to group goals by identifying weaknesses in the previous analysis.

Research has documented that an important cause of poor group decisions is the desire for consensus. This desire can lead to premature closure and agreement with majority views regardless of whether they are perceived as right or wrong. Attempts to improve group creativity and decision making often focus on ensuring that a wider range of information and opinions are presented to the group and given consideration.

In a candid newspaper column written long before he became CIA Director, Leon Panetta wrote that “an unofficial rule in the bureaucracy says that to ‘get along, go along.’ In other words, even when it is obvious that mistakes are being made, there is a hesitancy to report the failings for fear of retribution or embarrassment. That is true at every level, including advisers to the president. The result is a ‘don’t make waves’ mentality … that is just another fact of life you tolerate in a big bureaucracy.”

The Method

The best time to conduct a Premortem Analysis is shortly after a group has reached a conclusion on an action plan, but before any serious drafting of the report has been done. If the group members are not already familiar with the Premortem technique, the group leader, another group member, or a facilitator steps up and makes a statement along the lines of the following: “Okay, we now think we know the right answer, but we need to double-check this. To free up our minds to consider other possibilities, let’s imagine that we have made this judgment, our report has gone forward and been accepted, and now, x months or years later, we gain access to a crystal ball. Peering into this ball, we learn that our analysis was wrong, and things turned out very differently from the way we had expected. Now, working from that perspective in the future, let’s put our imaginations to work and brainstorm what could have possibly happened to cause our analysis to be so wrong.”

After all ideas are posted on the board and visible to all, the group discusses what it has learned by this exercise, and what action, if any, the group should take. This generation and initial discussion of ideas can often be accomplished in a single two-hour meeting, which is a small investment of time to undertake a systematic challenge to the group’s thinking.

 

9.2 STRUCTURED SELF-CRITIQUE

Structured Self-Critique is a systematic procedure that a small team or group can use to identify weaknesses in its own analysis. All team or group members don a hypothetical black hat and become critics rather than supporters of their own analysis. From this opposite perspective, they respond to a list of questions about sources of uncertainty, the analytic processes that were used, critical assumptions, diagnosticity of evidence, anomalous evidence, information gaps, changes in the broad environment in which events are happening, alternative decision models, availability of cultural expertise, and indicators of possible deception. As it reviews responses to these questions, the team reassesses its overall confidence in its own judgment.

When to Use It

You can use Structured Self-Critique productively to look for weaknesses in any analytic explanation of events or estimate of the future. It is specifically recommended for use in the following ways:

  • As the next step after a Premortem Analysis raises unresolved questions about any estimated future outcome or event.
  • As a double check prior to the publication of any major product such as a National Intelligence Estimate.
  • As one approach to resolving conflicting opinions.

The Method

Start by re-emphasizing that all analysts in the group are now wearing a black hat. They are now critics, not advocates, and they will now be judged by their ability to find weaknesses in the previous analysis, not on the basis of their support for the previous analysis. Then work through the following topics or questions:

Sources of uncertainty: Identify the sources and types of uncertainty in order to set reasonable expectations for what the team might expect to achieve. Should one expect to find: (a) a single correct or most likely answer, (b) a most likely answer together with one or more alternatives that must also be considered, or (c) a number of possible explanations or scenarios for future development? To judge the uncertainty, answer these questions:

  • Is the question being analyzed a puzzle or a mystery? Puzzles have answers, and correct answers can be identified if enough pieces of the puzzle are found. A mystery has no single definitive answer; it depends upon the future interaction of many factors, some known and others unknown. Analysts can frame the boundaries of a mystery only “by identifying the critical factors and making an intuitive judgment about how they have interacted in the past and might interact in the future.”
  • How does the team rate the quality and timeliness of its evidence?
  • Are there a greater than usual number of assumptions because of insufficient evidence or the complexity of the situation?
  • Is the team dealing with a relatively stable situation or with a situation that is undergoing, or potentially about to undergo, significant change?

Analytic process: In the initial analysis, did the team identify alternative hypotheses and seek out information on these hypotheses? Did it identify key assumptions? Did it seek a broad range of diverse opinions by including analysts from other offices, agencies, academia, or the private sector in the deliberations? If these steps were not taken, the odds of the team having a faulty or incomplete analysis are increased. Either consider doing some of these things now or lower the team’s level of confidence in its judgment.

Critical assumptions: Assuming that the team has already identified key assumptions, the next step is to identify the one or two assumptions that would have the greatest impact on the analytic judgment if they turned out to be wrong. In other words, if the assumption is wrong, the judgment will be wrong. How recent and well-documented is the evidence that supports each such assumption? Brainstorm circumstances that could cause each of these assumptions to be wrong, and assess the impact on the team’s analytic judgment if the assumption is wrong. Would the reversal of any of these assumptions support any alternative hypothesis? If the team has not previously identified key assumptions, it should do a Key Assumptions Check now.

Diagnostic evidence: Identify alternative hypotheses and the several most diagnostic items of evidence that enable the team to reject alternative hypotheses. For each item, brainstorm for any reasonable alternative interpretation of this evidence that could make it consistent with an alternative hypothesis. See Diagnostic Reasoning in chapter 7.

Information gaps: Are there gaps in the available information, or is some of the information so dated that it may no longer be valid? Is the absence of information readily explainable? How should absence of information affect the team’s confidence in its conclusions?

Missing evidence: Is there any evidence that one would expect to see in the regular flow of intelligence or open source reporting if the analytic judgment is correct, but that turns out not to be there?

Anomalous evidence: Is there any anomalous item of evidence that would have been important if it had been believed or if it could have been related to the issue of concern, but that was rejected as unimportant because it was not believed or its significance was not known? If so, try to imagine how this item might be a key clue to an emerging alternative hypothesis.

Changes in the broad environment: Driven by technology and globalization, the world as a whole seems to be experiencing social, technical, economic, environmental, and political changes at a faster rate than ever before in history. Might any of these changes play a role in what is happening or will happen? More broadly, what key forces, factors, or events could occur independently of the issue that is the subject of analysis that could have a significant impact on whether the analysis proves to be right or wrong?

Alternative decision models: If the analysis deals with decision making by a foreign government or nongovernmental organization (NGO), was the group’s judgment about foreign behavior based on a rational actor assumption? If so, consider the potential applicability of other decision models, specifically that the action was or will be the result of bargaining between political or bureaucratic forces, the result of standard organizational processes, or the whim of an authoritarian leader. If information for a more thorough analysis is lacking, consider the implications of that for confidence in the team’s judgment.

Cultural expertise: If the topic being analyzed involves a foreign or otherwise unfamiliar culture or subculture, does the team have or has it obtained cultural expertise on thought processes in that culture?

Deception: Does another country, NGO, or commercial competitor about which the team is making judgments have a motive, opportunity, or means for engaging in deception to influence U.S. policy or to change your behavior? Does this country, NGO, or competitor have a past history of engaging in denial, deception, or influence operations?

9.3 WHAT IF? ANALYSIS

What If? Analysis imagines that an unexpected event has occurred with potential major impact. Then, with the benefit of “hindsight,” the analyst figures out how this event could have come about and what the consequences might be.

When to Use It

This technique should be in every analyst’s toolkit. It is an important technique for alerting decision makers to an event that could happen, even if it may seem unlikely at the present time. What If? Analysis serves a function similar to Scenario Analysis—it creates an awareness that prepares the mind to recognize early signs of a significant change, and it may enable the decision maker to plan ahead for that contingency. It is most appropriate when two conditions are present:

A mental model is well ingrained within the analytic or the customer community that a certain event will not happen.

There is a perceived need for others to focus on the possibility that this event could actually happen and to consider the consequences if it does occur.

Value Added

Shifting the focus from asking whether an event will occur to imagining that it has occurred and then explaining how it might have happened opens the mind to think in different ways. What If? Analysis shifts the discussion from, “How likely is it?” to these questions:

  • How could it possibly come about?
  • What would be the impact?
  • Has the possibility of the event happening increased?

The technique also gives decision makers the following additional benefits:

  • A better sense of what they might be able to do today to prevent an untoward development from occurring, or what they might do today to leverage an opportunity for advancing their interests.
  • A list of specific indicators to monitor to help determine if the chances of a development actually occurring are increasing.

The What If? technique is a useful tool for exploring unanticipated or unlikely scenarios that are within the realm of possibility and that would have significant consequences should they come to pass.

9.4 HIGH IMPACT/LOW PROBABILITY ANALYSIS

High Impact/Low Probability Analysis provides decision makers with early warning that a seemingly unlikely event with major policy and resource repercussions might actually occur.

When to Use It

High Impact/Low Probability Analysis should be used when one wants to alert decision makers to the possibility that a seemingly long-shot development that would have a major policy or resource impact may be more likely than previously anticipated. Events that would have merited such treatment before they occurred include the reunification of Germany in 1990 and the collapse of the Soviet Union in 1991.

The more nuanced and concrete the analyst’s depiction of the plausible paths to danger, the easier it is for a decision maker to develop a package of policies to protect or advance vital U.S. interests.

Potential Pitfalls

Analysts need to be careful when communicating the likelihood of unlikely events. The word “unlikely” can be interpreted as meaning anywhere from 1 percent to 25 percent probability, while “highly unlikely” may mean anywhere from 1 percent to 10 percent.
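One way to reduce this ambiguity is to tie each estimative word to an explicit numeric range. The sketch below does that in a small lookup table; only the spreads for “unlikely” and “highly unlikely” come from the interpretation ranges quoted above, and the remaining entries are illustrative placeholders rather than any official standard.

```python
# Illustrative sketch: attach explicit numeric ranges to estimative words so
# "unlikely" is not read anywhere from 1 percent to 25 percent. Only the two
# spreads quoted in the text come from it; the other entries are placeholders,
# not an official standard.

likelihood_ranges = {
    "highly unlikely": (0.01, 0.10),
    "unlikely":        (0.01, 0.25),
    "even chance":     (0.45, 0.55),   # placeholder
    "likely":          (0.60, 0.80),   # placeholder
    "highly likely":   (0.80, 0.95),   # placeholder
}

def phrase(term):
    low, high = likelihood_ranges[term]
    return f"{term} ({low:.0%}-{high:.0%})"

print("We judge this outcome to be", phrase("unlikely"))
```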

The Method

An effective High Impact/Low Probability Analysis involves these steps:

  • Clearly describe the unlikely event.
  • Define the high-impact consequences if this event occurs. Consider both the actual event and the secondary impacts of the event.
  • Identify any recent information or reporting suggesting that the likelihood of the unlikely event occurring may be increasing.
  • Postulate additional triggers that would propel events in this unlikely direction or factors that would greatly accelerate timetables, such as a botched government response, the rise of an energetic challenger, a major terrorist attack, or a surprise electoral outcome that benefits U.S. interests.
  • Develop one or more plausible pathways that would explain how this seemingly unlikely event could unfold. Focus on the specifics of what must happen at each stage of the process for the train of events to play out.
  • Generate a list of indicators that would help analysts and decision makers recognize that events were beginning to unfold in this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

Once the list of indicators has been developed, the analyst must periodically review the list. Such periodic reviews help analysts overcome prevailing mental models that the events being considered are too unlikely to merit serious attention.

Relationship to Other Techniques

High Impact/Low Probability Analysis is sometimes confused with What If? Analysis. Both deal with low-probability or unlikely events. High Impact/Low Probability Analysis is primarily a vehicle for warning decision makers that recent, unanticipated developments suggest that an event previously deemed highly unlikely might actually occur. Based on recent evidence or information, it projects forward to discuss what could occur and the consequences if the event does occur. It challenges the conventional wisdom. What If? Analysis does not require new or anomalous information to serve as a trigger. It reframes the question by assuming that a surprise event has happened.

9.5 DEVIL’S ADVOCACY

Devil’s Advocacy is a process for critiquing a proposed analytic judgment, plan, or decision, usually by a single analyst not previously involved in the deliberations that led to the proposed judgment, plan, or decision.

The origins of devil’s advocacy “lie in a practice of the Roman Catholic Church in the early 16th century. When a person was proposed for beatification or canonization to sainthood, someone was assigned the role of critically examining the life and miracles attributed to that individual; his duty was to especially bring forward facts that were unfavorable to the candidate.”

When to Use It

Devil’s Advocacy is most effective when initiated by a manager as part of a strategy to ensure that alternative solutions are thoroughly considered. The following are examples of well-established uses of Devil’s Advocacy that are widely regarded as good management practices:

* Before making a decision, a policymaker or military commander asks for a Devil’s Advocate analysis of what could go wrong.

* An intelligence organization designates a senior manager as a Devil’s Advocate to oversee the process of reviewing and challenging selected assessments.

* A manager commissions a Devil’s Advocacy analysis when he or she is concerned about seemingly widespread unanimity on a critical issue throughout the Intelligence Community, or when the manager suspects that the mental model of analysts working an issue for a long time has become so deeply ingrained that they are unable to see the significance of recent changes.

Within the Intelligence Community, Devil’s Advocacy is sometimes defined as a form of self-critique… We do not support this approach for the following reasons:

* Calling such a technique Devil’s Advocacy is inconsistent with the historic concept of Devil’s Advocacy that calls for investigation by an independent outsider.

* Research shows that a person playing the role of a Devil’s Advocate, without actually believing it, is significantly less effective than a true believer and may even be counterproductive. Apparently, more attention and respect is accorded to someone with the courage to advance their own minority view than to someone who is known to be only playing a role. If group members see the Devil’s Advocacy as an analytic exercise they have to put up with, rather than the true belief of one of their members who is courageous enough to speak out, this exercise may actually enhance the majority’s original belief—“a smugness that may occur because one assumes one has considered alternatives though, in fact, there has been little serious reflection on other possibilities.” What the team learns from the Devil’s Advocate presentation may be only how to better defend the team’s own entrenched position.

* There are other forms of self-critique, especially Premortem Analysis and Structured Self-Critique as described in this chapter, which may be more effective in prompting even a cohesive, heterogeneous team to question their mental model and to analyze alternative perspectives.

9.6 RED TEAM ANALYSIS

The term “red team” or “red teaming” has several meanings. One definition is that red teaming is “the practice of viewing a problem from an adversary or competitor’s perspective.” This is how red teaming is commonly viewed by intelligence analysts.

When to Use It

Management should initiate a Red Team Analysis whenever there is a perceived need to challenge the conventional wisdom on an important issue or whenever the responsible line office is perceived as lacking the level of cultural expertise required to fully understand an adversary’s or competitor’s point of view.

Value Added

Red Team Analysis can help free analysts from their own well-developed mental model—their own sense of rationality, cultural norms, and personal values. When analyzing an adversary, the Red Team approach requires that an analyst change his or her frame of reference from that of an “observer” of the adversary or competitor, to that of an “actor” operating within the adversary’s cultural and political milieu. This reframing or role playing is particularly helpful when an analyst is trying to replicate the mental model of authoritarian leaders, terrorist cells, or non-Western groups that operate under very different codes of behavior or motivations than those to which most Americans are accustomed.

9.7 DELPHI METHOD

Delphi is a method for eliciting ideas, judgments, or forecasts from a group of experts who may be geographically dispersed. It is different from a survey in that there are two or more rounds of questioning.

After the first round of questions, a moderator distributes all the answers and explanations of the answers to all participants, often anonymously. The expert participants are then given an opportunity to modify or clarify their previous responses, if so desired, on the basis of what they have seen in the responses of the other participants. A second round of questions builds on the results of the first round, drills down into greater detail, or moves to a related topic. There is great flexibility in the nature and number of rounds of questions that might be asked.

Over the years, Delphi has been used in a wide variety of ways, and for an equally wide variety of purposes. Although many Delphi projects have focused on developing a consensus of expert judgment, a variant called Policy Delphi is based on the premise that the decision maker is not interested in having a group make a consensus decision, but rather in having the experts identify alternative policy options and present all the supporting evidence for and against each option. That is the rationale for including Delphi in this chapter on challenge analysis. It can be used to identify divergent opinions that may be worth exploring.

One group of Delphi scholars advises that the Delphi technique “can be used for nearly any problem involving forecasting, estimation, or decision making”—as long as the problem is not so complex or so new as to preclude the use of expert judgment. These Delphi advocates report using it for diverse purposes that range from “choosing between options for regional development, to predicting election outcomes, to deciding which applicants should be hired for academic positions, to predicting how many meals to order for a conference luncheon.”

Value Added

One of Sherman Kent’s “Principles of Intelligence Analysis,” which are taught at the CIA’s Sherman Kent School for Intelligence Analysis, is “Systematic Use of Outside Experts as a Check on In-House Blinders.” Consultation with relevant experts in academia, business, and nongovernmental organizations is also encouraged by Intelligence Community Directive No. 205, on Analytic Outreach, dated July 2008.

The Method

In a Delphi project, a moderator (analyst) sends a questionnaire to a panel of experts who may be in different locations. The experts respond to these questions and usually are asked to provide short explanations for their responses. The moderator collates the results from this first questionnaire and sends the collated responses back to all panel members, requesting them to reconsider their responses based on what they see and learn from the other experts’ responses and explanations. Panel members may also be asked to answer another set of questions.

Examples

To show how Delphi can be used for intelligence analysis, we have developed three illustrative applications:

* Evaluation of another country’s policy options: The Delphi project manager or moderator identifies several policy options that a foreign country might choose. The moderator then asks a panel of experts on the country to rate the desirability and feasibility of each option, from the other country’s point of view, on a five-point scale ranging from “Very Desirable” or “Feasible” to “Very Undesirable” or “Definitely Infeasible.” Panel members are also asked to identify and assess any other policy options that ought to be considered and to identify the top two or three arguments or items of evidence that guided their judgments. A collation of all responses is sent back to the panel with a request for members to do one of the following: reconsider their position in view of others’ responses, provide further explanation of their judgments, or reaffirm their previous response. In a second round of questioning, it may be desirable to list key arguments and items of evidence and ask the panel to rate them on their validity and their importance, again from the other country’s perspective.

* Analysis of Alternative Hypotheses: A panel of outside experts is asked to estimate the probability of each hypothesis in a set of mutually exclusive hypotheses where the probabilities must add up to 100 percent. This could be done as a stand-alone project or to double-check an already completed Analysis of Competing Hypotheses (ACH) analysis (chapter 7). If two analyses using different analysts and different methods arrive at the same conclusion, this is grounds for a significant increase in confidence in the conclusion. If the analyses disagree, that may also be useful to know as one can then seek to understand the rationale for the different judgments. (A minimal sketch of the panel aggregation arithmetic follows this list.)

* Warning analysis or monitoring a situation over time: An analyst asks a panel of experts to estimate the probability of a future event. This might be either a single event for which the analyst is monitoring early warning indicators or a set of scenarios for which the analyst is monitoring milestones to determine the direction in which events seem to be moving.
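
The aggregation arithmetic behind the second illustration is simple enough to sketch in code. The following Python fragment is illustrative only (the expert names, hypothesis labels, and numbers are invented; the Delphi literature prescribes no particular software): it averages a panel's probability estimates across a set of mutually exclusive hypotheses, renormalizes the result to sum to 100 percent, and reports the spread across experts as a cue for where a second round of questioning might focus.

```python
# Minimal sketch (not from the source text): aggregating a Delphi panel's
# probability estimates for mutually exclusive hypotheses. All names and
# numbers below are invented for illustration.

from statistics import mean, pstdev

# Each expert assigns probabilities (in percent) to every hypothesis; each
# expert's estimates should sum to 100 because the hypotheses are mutually
# exclusive and exhaustive.
panel_estimates = {
    "Expert A": {"H1": 60, "H2": 30, "H3": 10},
    "Expert B": {"H1": 40, "H2": 40, "H3": 20},
    "Expert C": {"H1": 70, "H2": 20, "H3": 10},
}

hypotheses = ["H1", "H2", "H3"]

# Average the panel's estimates for each hypothesis, then renormalize so the
# aggregate still sums to 100 percent.
averages = {h: mean(e[h] for e in panel_estimates.values()) for h in hypotheses}
total = sum(averages.values())
aggregate = {h: round(100 * averages[h] / total, 1) for h in hypotheses}

# The spread across experts flags divergent judgments worth exploring in the
# next round rather than papering over.
spread = {h: pstdev(e[h] for e in panel_estimates.values()) for h in hypotheses}

for h in hypotheses:
    print(f"{h}: aggregate {aggregate[h]}%  (spread across panel: {spread[h]:.1f})")
```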

10 Conflict Management

Challenge analysis frequently leads to the identification and confrontation of opposing views. That is, after all, the purpose of challenge analysis, but two important questions are raised. First, how can confrontation be managed so that it becomes a learning experience rather than a battle between determined adversaries? Second, in an analysis of any topic with a high degree of uncertainty, how can one decide if one view is wrong or if both views have merit and need to be discussed in an analytic report?

The Intelligence Community’s procedure for dealing with differences of opinion has often been to force a consensus, water down the differences, or add a dissenting footnote to an estimate. Efforts are under way to move away from this practice, and we share the hopes of many in the community that this approach will become increasingly rare as members of the Intelligence Community embrace greater interagency collaboration early in the analytic process, rather than mandated coordination at the end of the process after all parties are locked into their positions. One of the principal benefits of using structured analytic techniques for interoffice and interagency collaboration is that these techniques identify differences of opinion early in the analytic process. This gives time for the differences to be at least understood, if not resolved, at the working level before management becomes involved.

If an analysis meets rigorous standards and conflicting views still remain, decision makers are best served by an analytic product that deals directly with the uncertainty rather than minimizing it or suppressing it. The greater the uncertainty, the more appropriate it is to go forward with a product that discusses the most likely assessment or estimate and gives one or more alternative possibilities. Factors to be considered when assessing the amount of uncertainty include the following:

* An estimate of the future generally has more uncertainty than an assessment of a past or current event. Mysteries, for which there are no knowable answers, are far more uncertain than puzzles, for which an answer does exist if one could only find it.3

* The more assumptions that are made, the greater the uncertainty. Assumptions about intent or capability, and whether or not they have changed, are especially critical.

* Analysis of human behavior or decision making is far more uncertain than analysis of technical data. The behavior of a complex dynamic system is more uncertain than that of a simple system. The more variables and stakeholders involved in a system, the more difficult it is to foresee what might happen.

If the decision is to go forward with a discussion of alternative assessments or estimates, the next step might be to produce any of the following:

* A comparative analysis of opposing views in a single report. This calls for analysts to identify the sources and reasons for the uncertainty (e.g., assumptions, ambiguities, knowledge gaps), consider the implications of alternative assessments or estimates, determine what it would take to resolve the uncertainty, and suggest indicators for future monitoring that might provide early warning of which alternative is correct.

* An analysis of alternative scenarios as described in chapter 6.

* A What If? Analysis or High Impact/Low Probability Analysis as described in chapter 9.

* A report that is clearly identified as a “second opinion.”

Overview of Techniques

Adversarial Collaboration in essence is an agreement between opposing parties on how they will work together in an effort to resolve their differences, to gain a better understanding of how and why they differ, or as often happens to collaborate on a joint paper explaining the differences. Six approaches to implementing adversarial collaboration are described.

Structured Debate is a planned debate of opposing points of view on a specific issue in front of a “jury of peers,” senior analysts, or managers. As a first step, each side writes up its best possible argument for its position and passes this summation to the opposing side. The next step is an oral debate that focuses on refuting the other side’s arguments rather than further supporting one’s own arguments. The goal is to elucidate and compare the arguments against each side’s position. If neither argument can be refuted, perhaps both merit some consideration in the analytic report.

10.1 ADVERSARIAL COLLABORATION

Adversarial Collaboration is an agreement between opposing parties about how they will work together to resolve or at least gain a better understanding of their differences. Adversarial Collaboration is a relatively new concept championed by Daniel Kahneman, the psychologist who along with Amos Tversky initiated much of the research on cognitive biases described in Richards Heuer’s Psychology of Intelligence Analysis… he commented as follows on Adversarial Collaboration:  

Adversarial collaboration involves a good-faith effort to conduct debates by carrying out joint research—in some cases there may be a need for an agreed arbiter to lead the project and collect the data. Because there is no expectation of the contestants reaching complete agreement at the end of the exercise, adversarial collaboration will usually lead to an unusual type of joint publication, in which disagreements are laid out as part of a jointly authored paper.

Kahneman’s approach to Adversarial Collaboration involves agreement on empirical tests for resolving a dispute and conducting those tests with the help of an impartial arbiter. A joint report describes the tests, states what has been learned that both sides agree on, and provides interpretations of the test results on which they disagree.

When to Use It

Adversarial Collaboration should be used only if both sides are open to discussion of an issue. If one side is fully locked into its position and has repeatedly rejected the other side’s arguments, this technique is unlikely to be successful. It is then more appropriate to use Structured Debate in which a decision is made by an independent arbiter after listening to both sides.

Value Added

Adversarial Collaboration can help opposing analysts see the merit of another group’s perspective. If successful, it will help both parties gain a better understanding of what assumptions or evidence is behind their opposing opinions on an issue and to explore the best way of dealing with these differences. Can one side be shown to be wrong, or should both positions be reflected in any report on the subject? Can there be agreement on indicators to show the direction in which events seem to be moving?

The Method

Six approaches to Adversarial Collaboration are described here. What they all have in common is the forced requirement to understand and address the other side’s position rather than simply dismiss it. Mutual understanding of the other side’s position is the bridge to productive collaboration. These six techniques are not mutually exclusive; in other words, one might use several of them for any specific project.

Key Assumptions Check:

Analysis of Competing Hypotheses:

Argument Mapping:

Mutual Understanding:

Joint Escalation:

The analysts should be required to prepare a joint statement describing the disagreement and to present it jointly to their superiors. This requires each analyst to understand and address, rather than simply dismiss, the other side’s position. It also ensures that managers have access to multiple perspectives on the conflict, its causes, and the various ways it might be resolved.

The Nosenko Approach: Yuriy Nosenko was a Soviet intelligence officer who defected to the United States in 1964. Whether he was a true defector or a Soviet plant was a subject of intense and emotional controversy within the CIA for more than a decade. In the minds of some, this historic case is still controversial.

The interesting point here is the ground rule that the team was instructed to follow. After reviewing the evidence, each officer identified those items of evidence thought to be of critical importance in making a judgment on Nosenko’s bona fides. Any item that one officer stipulated as critically important had to be addressed by the other two members.

It turned out that fourteen items were stipulated by at least one of the team members and had to be addressed by both of the others. Each officer prepared his own analysis, but they all had to address the same fourteen issues. Their report became known as the “Wise Men” report.

10.2 STRUCTURED DEBATE

A Structured Debate is a planned debate between analysts or analytic teams holding opposing points of view on a specific issue. It is conducted according to a set of rules before an audience, which may be a “jury of peers” or one or more senior analysts or managers.

When to Use It

Structured Debate is called for when there is a significant difference of opinion within or between analytic units or within the policymaking community, or when Adversarial Collaboration has been unsuccessful or is impractical, and it is necessary to make a choice between two opposing opinions or to go forward with a comparative analysis of both. A Structured Debate requires a significant commitment of analytic time and resources.

Value Added

In the method proposed here, each side presents its case in writing, and the written report is read by the other side and the audience prior to the debate. The oral debate then focuses on refuting the other side’s position. Glib and personable speakers can always make their arguments for a position sound persuasive. Effectively refuting the other side’s position is a very different ball game, however. The requirement to refute the other side’s position brings to the debate an important feature of the scientific method, that the most likely hypothesis is actually the one with the least evidence against it as well as good evidence for it.

The Method

Start by defining the conflict to be debated. If possible, frame the conflict in terms of competing and mutually exclusive hypotheses. Ensure that all sides agree with the definition. Then follow these steps:

*  Identify individuals or teams to develop the best case that can be made for each hypothesis.

*  Each side writes up the best case for its point of view. This written argument must be structured with an explicit presentation of key assumptions, key pieces of evidence, and careful articulation of the logic behind the argument.

* The written arguments are exchanged with the opposing side, and the two sides are given time to develop counterarguments to refute the opposing side’s position.

* The debate phase is conducted in the presence of a jury of peers, senior analysts, or managers who will provide guidance after listening to the debate. If desired, there might also be an audience of interested observers.

* The debate starts with each side presenting a brief (maximum five minutes) summary of its argument for its position. The jury and the audience are expected to have read each side’s full argument.

* Each side then presents to the audience its rebuttal of the other side’s written position. The purpose here is to proceed in the oral arguments by systematically refuting alternative hypotheses rather than by presenting more evidence to support one’s own argument. This is the best way to evaluate the strengths of the opposing arguments.

* After each side has presented its rebuttal argument, the other side is given an opportunity to refute the rebuttal.

* The jury asks questions to clarify the debaters’ positions or gain additional insight needed to pass judgment on the debaters’ positions.

* The jury discusses the issue and passes judgment. The winner is the side that makes the best argument refuting the other side’s position, not the side that makes the best argument supporting its own position. The jury may also recommend possible next steps for further research or intelligence collection efforts. If neither side can refute the other’s arguments, it may be that both sides have a valid argument that should be represented in any subsequent analytic report.

Origins of This Technique

The history of debate goes back to the Socratic dialogues in ancient Greece and even before, and many different forms of debate have evolved since then. Richards Heuer formulated the idea of focusing the debate between intelligence analysts on refuting the other side’s argument rather than supporting one’s own argument.

 

11 Decision Support

Managers, commanders, planners, and other decision makers all make choices or tradeoffs among competing goals, values, or preferences. Because of limitations in human short-term memory, we usually cannot keep all the pros and cons of multiple options in mind at the same time. That causes us to focus first on one set of problems or opportunities and then another, a situation that often leads to vacillation or procrastination in making a firm decision. Some decision-support techniques help overcome this cognitive limitation by laying out all the options and interrelationships in graphic form so that analysts can test the results of alternative options while still keeping the problem as a whole in view. Other techniques help decision makers untangle the complexity of a situation or define the opportunities and constraints in the environment in which the choice needs to be made.

 

It is not the analyst’s job to make the choices or decide on the tradeoffs, but intelligence analysts can and should use decision-support techniques to provide timely support to managers, commanders, planners, and decision makers who do make these choices. The Director of National Intelligence’s Vision 2015 foresees intelligence driven by customer needs and a “shifting focus from today’s product-centric model toward a more interactive model that blurs the distinction between producer and consumer.”

Caution is in order, however, whenever one thinks of predicting or even explaining another person’s decision, regardless of whether the person is of similar background or not. People do not always act rationally in their own best interests. Their decisions are influenced by emotions and habits, as well as by what others might think or values of which others may not be aware.

The same is true of organizations and governments. One of the most common analytic errors is the assumption that an organization or a government will act rationally, that is, in its own best interests. There are three major problems with this assumption:

* Even if the assumption is correct, the analysis may be wrong, because foreign organizations and governments typically see their own best interests quite differently from the way Americans see them.

* Organizations and governments do not always have a clear understanding of their own best interests. Governments in particular typically have a variety of conflicting interests.

* The assumption that organizations and governments commonly act rationally in their own best interests is not always true. All intelligence analysts seeking to understand the behavior of another country should be familiar with Graham Allison’s analysis of U.S. and Soviet decision making during the Cuban missile crisis. It describes three different models for how governments make decisions—bureaucratic bargaining processes and standard organizational procedures as well as the rational actor model.

Decision making and decision analysis are large and diverse fields of study and research. The decision-support techniques described in this chapter are only a small sample of what is available, but they do meet many of the basic requirements for intelligence analysis.

Overview of Techniques

Complexity Manager is a simplified approach to understanding complex systems—the kind of systems in which many variables are related to each other and may be changing over time. Government policy decisions are often aimed at changing a dynamically complex system. It is because of this dynamic complexity that many policies fail to meet their goals or have unforeseen and unintended consequences. Use Complexity Manager to assess the chances for success or failure of a new or proposed policy, identify opportunities for influencing the outcome of any situation, determine what would need to change in order to achieve a specified goal, or identify potential unintended consequences from the pursuit of a policy goal.

Decision Matrix is a simple but powerful device for making tradeoffs between conflicting goals or preferences. An analyst lists the decision options or possible choices, the criteria for judging the options, the weights assigned to each of these criteria, and an evaluation of the extent to which each option satisfies each of the criteria. This process will show the best choice—based on the values the analyst or a decision maker puts into the matrix. By studying the matrix, one can also analyze how the best choice would change if the values assigned to the selection criteria were changed or if the ability of an option to satisfy a specific criterion were changed. It is almost impossible for an analyst to keep track of these factors effectively without such a matrix, as one cannot keep all the pros and cons in working memory at the same time. A Decision Matrix helps the analyst see the whole picture.

Force Field Analysis is a technique that analysts can use to help a decision maker decide how to solve a problem or achieve a goal, or to determine whether it is possible to do so. The analyst identifies and assigns weights to the relative importance of all the factors or forces that are either a help or a hindrance in solving the problem or achieving the goal. After organizing all these factors in two lists, pro and con, with a weighted value for each factor, the analyst or decision maker is in a better position to recommend strategies that would be most effective in either strengthening the impact of the driving forces or reducing the impact of the restraining forces.

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of analysts and decision makers to jump to conclusions before conducting a full analysis of a problem, as often happens in group meetings. The first step is for the analyst or the project team to make lists of Pros and Cons. If the analyst or team is concerned that people are being unduly negative about an idea, he or she looks for ways to “Fix” the Cons, that is, to explain why the Cons are unimportant or even to transform them into Pros. If concerned that people are jumping on the bandwagon too quickly, the analyst tries to “Fault” the Pros by exploring how they could go wrong. The analyst can also do both: Fault the Pros and Fix the Cons. Of the various techniques described in this chapter, this one is probably the easiest and quickest to use.

SWOT Analysis is used to develop a plan or strategy for achieving a specified goal. (SWOT is an acronym for Strengths, Weaknesses, Opportunities, and Threats.) In using this technique, the analyst first lists the strengths and weaknesses in the organization’s ability to achieve a goal, and then lists opportunities and threats in the external environment that would either help or hinder the organization from reaching the goal.

11.1 COMPLEXITY MANAGER

Complexity Manager helps analysts and decision makers understand and anticipate changes in complex systems. As used here, the word complexity encompasses any distinctive set of interactions that are more complicated than even experienced intelligence analysts can think through solely in their head.3

When to Use It

As a policy support tool, Complexity Manager can be used to assess the chances for success or failure of a new or proposed program or policy, and opportunities for influencing the outcome of any situation. It also can be used to identify what would have to change in order to achieve a specified goal, as well as unintended consequences from the pursuit of a policy goal.

Value Added

Complexity Manager can often improve an analyst’s understanding of a complex situation without the time delay and cost required to build a computer model and simulation. The steps in the Complexity Manager technique are the same as the initial steps required to build a computer model and simulation. These are identification of the relevant variables or actors, analysis of all the interactions between them, and assignment of rough weights or other values to each variable or interaction.

Scientists who specialize in the modeling and simulation of complex social systems report that “the earliest—and sometimes most significant—insights occur while reducing a problem to its most fundamental players, interactions, and basic rules of behavior,” and that “the frequency and importance of additional insights diminishes exponentially as a model is made increasingly complex.”

Complexity Manager does not itself provide analysts with answers. It enables analysts to find a best possible answer by organizing in a systematic manner the jumble of information about many relevant variables. It enables analysts to get a grip on the whole problem, not just one part of the problem at a time. Analysts can then apply their expertise in making an informed judgment about the problem. This structuring of the analyst’s thought process also provides the foundation for a well-organized report that clearly presents the rationale for each conclusion. This may also lead to some form of visual presentation, such as a Concept Map or Mind Map, or a causal or influence diagram.

The Method

Complexity Manager requires the analyst to proceed through eight specific steps:

  1. Define the problem: State the problem (plan, goal, outcome) to be analyzed, including the time period to be covered by the analysis.
  2. Identify and list relevant variables: Use one of the brainstorming techniques described in chapter 4 to identify the significant variables (factors, conditions, people, etc.) that may affect the situation of interest during the designated time period. Think broadly to include organizational or environmental constraints that are beyond anyone’s ability to control. If the goal is to estimate the status of one or more variables several years in the future, those variables should be at the top of the list. Group the other variables in some logical manner with the most important variables at the top of the list.
  3. Create a Cross-Impact Matrix: Create a matrix in which the number of rows and columns are each equal to the number of variables plus one. Leaving the cell at the top left corner of the matrix blank, enter all the variables in the cells in the row across the top of the matrix and the same variables in the column down the left side. The matrix then has a cell for recording the nature of the relationship between all pairs of variables. This is called a Cross-Impact Matrix—a tool for assessing the two-way interaction between each pair of variables. Depending on the number of variables and the length of their names, it may be convenient to use the variables’ letter designations across the top of the matrix rather than the full names.
  4. Assess the interaction between each pair of variables: Use a diverse team of experts on the relevant topic to analyze the strength and direction of the interaction between each pair of variables, and enter the results in the relevant cells of the matrix. For each pair of variables, ask the question: Does this variable impact the paired variable in a manner that will increase or decrease the impact or influence of that variable?

There are two different ways one can record the nature and strength of impact that one variable has on another. Figure 11.1 uses plus and minus signs to show whether the variable being analyzed has a positive or negative impact on the paired variable. The size of the plus or minus sign signifies the strength of the impact on a three-point scale. The small plus or minus shows a weak impact, the medium size a medium impact, and the large size a strong impact. If the variable being analyzed has no impact on the paired variable, the cell is left empty. If a variable might change in a way that could reverse the direction of its impact, from positive to negative or vice versa, this is shown by using both a plus and a minus sign.

After rating each pair of variables, and before doing further analysis, consider pruning the matrix to eliminate variables that are unlikely to have a significant effect on the outcome. It is possible to measure the relative significance of each variable by adding up the weighted values in each row and column.
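
A minimal Python sketch may make the bookkeeping in Steps 3 and 4 concrete. The numeric encoding from -3 to +3 is an assumption standing in for the small, medium, and large plus and minus signs described above; the variables and ratings are invented for illustration.

```python
# Minimal sketch (assumptions noted above): a Cross-Impact Matrix encoded
# numerically, with -3..+3 standing in for the small/medium/large plus and
# minus signs. Variable names and ratings are invented.

variables = ["A", "B", "C", "D"]

# impact[row][col] = impact of the row variable ON the column variable.
# 0 means no impact; the sign gives direction; magnitude 1-3 gives strength.
impact = {
    "A": {"A": 0, "B": +3, "C": -1, "D": 0},
    "B": {"A": +2, "B": 0, "C": 0, "D": +1},
    "C": {"A": 0, "B": -2, "C": 0, "D": +3},
    "D": {"A": 0, "B": 0, "C": +1, "D": 0},
}

# Relative significance: sum of absolute weights across each row (influence the
# variable exerts) and each column (influence it receives).
for v in variables:
    exerted = sum(abs(impact[v][other]) for other in variables)
    received = sum(abs(impact[other][v]) for other in variables)
    print(f"{v}: exerts {exerted}, receives {received}, total {exerted + received}")

# Variables with low totals are candidates for pruning before the direct-impact
# analysis in Step 5; a simple pairwise check flags the positive feedback loops
# that Step 6 asks the analyst to look for.
for i, v in enumerate(variables):
    for w in variables[i + 1:]:
        if impact[v][w] > 0 and impact[w][v] > 0:
            print(f"Positive feedback loop candidate: {v} <-> {w}")
```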

  5. Analyze direct impacts: Write several paragraphs about the impact of each variable, starting with variable A. For each variable, describe the variable for further clarification if necessary. Identify all the variables that impact on that variable with a rating of 2 or 3, and briefly explain the nature, direction, and, if appropriate, the timing of this impact. How strong is it and how certain is it? When might these impacts be observed? Will the impacts be felt only in certain conditions?
  6. Analyze loops and indirect impacts: The matrix shows only the direct impact of one variable on another. When you are analyzing the direct impacts variable by variable, there are several things to look for and make note of. One is feedback loops. For example, if variable A has a positive impact on variable B, and variable B also has a positive impact on variable A, this is a positive feedback loop. Or there may be a three-variable loop, from A to B to C and back to A. The variables in a loop gain strength from each other, and this boost may enhance their ability to influence other variables. Another thing to look for is circumstances where the causal relationship between variables A and B is necessary but not sufficient for something to happen. For example, variable A has the potential to influence variable B, and may even be trying to influence variable B, but it can do so effectively only if variable C is also present. In that case, variable C is an enabling variable and takes on greater significance than it ordinarily would have.

All variables are either static or dynamic. Static variables are expected to remain more or less unchanged during the period covered by the analysis. Dynamic variables are changing or have the potential to change. The analysis should focus on the dynamic variables as these are the sources of surprise in any complex system. Determining how these dynamic variables interact with other variables and with each other is critical to any forecast of future developments. Dynamic variables can be either predictable or unpredictable. Predictable change includes established trends or established policies that are in the process of being implemented. Unpredictable change may be a change in leadership or an unexpected change in policy or available resources.

  7. Draw conclusions: Using data about the individual variables assembled in Steps 5 and 6, draw conclusions about the system as a whole. What is the most likely outcome or what changes might be anticipated during the specified time period? What are the driving forces behind that outcome? What things could happen to cause a different outcome? What desirable or undesirable side effects should be anticipated? If you need help to sort out all the relationships, it may be useful to sketch out by hand a diagram showing all the causal relationships. A Concept Map (chapter 4) may be useful for this purpose. If a diagram is helpful during the analysis, it may also be helpful to the reader or customer to include such a diagram in the report.
  8. Conduct an opportunity analysis: When appropriate, analyze what actions could be taken to influence this system in a manner favorable to the primary customer of the analysis.

Origins of This Technique

Complexity Manager was developed by Richards Heuer to fill an important gap in structured techniques that are available to the average analyst. It is a very simplified version of older quantitative modeling techniques, such as system dynamics.

11.2 DECISION MATRIX

Decision Matrix helps analysts identify the course of action that best achieves specified goals or preferences.

When to Use It

The Decision Matrix technique should be used when a decision maker has multiple options to choose from, multiple criteria for judging the desirability of each option, and/or needs to find the decision that maximizes a specific set of goals or preferences. For example, it can be used to help choose among various plans or strategies for improving intelligence analysis, to select one of several IT systems one is considering buying, to determine which of several job applicants is the right choice, or to consider any personal decision, such as what to do after retiring. A Decision Matrix is not applicable to most intelligence analysis, which typically deals with evidence and judgments rather than goals and preferences. It can be used, however, for supporting a decision maker’s consideration of alternative courses of action.
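
The arithmetic behind a Decision Matrix, as described in the overview above, is a weighted sum, sketched below in Python. The options, criteria, weights, and scores are invented for illustration; the point is only that once the values are in a matrix, the best-scoring option, and the sensitivity of that ranking to any single weight, can be read off directly.

```python
# Minimal sketch (illustrative options, criteria, weights, and scores only):
# a Decision Matrix as a weighted sum of how well each option satisfies each
# criterion. The option with the highest total is the "best" choice given the
# values entered; changing a weight or score shows how sensitive that choice is.

criteria_weights = {"cost": 0.4, "capability": 0.4, "ease_of_use": 0.2}

# Scores on a 1-5 scale for how well each option satisfies each criterion.
option_scores = {
    "System X": {"cost": 4, "capability": 3, "ease_of_use": 5},
    "System Y": {"cost": 2, "capability": 5, "ease_of_use": 3},
    "System Z": {"cost": 5, "capability": 2, "ease_of_use": 4},
}

def weighted_total(scores: dict[str, int]) -> float:
    """Sum of (criterion weight x option score) across all criteria."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

totals = {opt: weighted_total(scores) for opt, scores in option_scores.items()}
for opt, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{opt}: {total:.2f}")

# A quick sensitivity check: re-run with a different weight on "cost" to see
# whether the ranking of options changes.
```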

11.3 FORCE FIELD ANALYSIS

Force Field Analysis is a simple technique for listing and assessing all the forces for and against a change, problem, or goal. Kurt Lewin, one of the fathers of modern social psychology, believed that all organizations are systems in which the present situation is a dynamic balance between forces driving for change and restraining forces. In order for any change to occur, the driving forces must exceed the restraining forces, and the relative strength of these forces is what this technique measures. This technique is based on Lewin’s theory.

The Method

* Define the problem, goal, or change clearly and concisely.

* Brainstorm to identify the main forces that will influence the issue. Consider such topics as needs, resources, costs, benefits, organizations, relationships, attitudes, traditions, interests, social and cultural trends, rules and regulations, policies, values, popular desires, and leadership to develop the full range of forces promoting and restraining the factors involved.

* Make one list showing the forces or people “driving” the change and a second list showing the forces or people “restraining” the change.

* Assign a value (the intensity score) to each driving or restraining force to indicate its strength. Assign the weakest intensity scores a value of 1 and the strongest a value of 5. The same intensity score can be assigned to more than one force if you consider the factors equal in strength. List the intensity scores in parentheses beside each item.

* Calculate a total score for each list to determine whether the driving or the restraining forces are dominant (a minimal scoring sketch follows this list).

* Examine the two lists to determine if any of the driving forces balance out the restraining forces.

* Devise a manageable course of action to strengthen those forces that lead to the preferred outcome and weaken the forces that would hinder the desired outcome.
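
The scoring step above amounts to summing the intensity values for each list and comparing the totals, as in this illustrative Python sketch (the forces and scores are invented).

```python
# Minimal sketch (illustrative forces and intensity scores only): tallying
# driving vs. restraining forces in a Force Field Analysis. Intensity scores
# run from 1 (weakest) to 5 (strongest), as described above.

driving_forces = {
    "Senior leadership support": 5,
    "Budget already approved": 3,
    "Analyst enthusiasm": 2,
}

restraining_forces = {
    "Legacy IT systems": 4,
    "Competing priorities": 3,
    "Training burden": 2,
}

driving_total = sum(driving_forces.values())
restraining_total = sum(restraining_forces.values())

print(f"Driving total: {driving_total}")
print(f"Restraining total: {restraining_total}")

if driving_total > restraining_total:
    print("Driving forces dominate; change is feasible if restraints are managed.")
else:
    print("Restraining forces dominate; strengthen drivers or weaken restraints.")
```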

11.4 PROS-CONS-FAULTS-AND-FIXES

Pros-Cons-Faults-and-Fixes is a strategy for critiquing new policy ideas. It is intended to offset the human tendency of a group of analysts and decision makers to jump to a conclusion before full analysis of the problem has been completed.

When to Use It

Making lists of pros and cons for any action is a very common approach to decision making. The “Faults” and “Fixes” are what is new in this strategy. Use this technique to make a quick appraisal of a new idea or a more systematic analysis of a choice between two options.

Value Added

It is unusual for a new idea to meet instant approval. What often happens in meetings is that a new idea is brought up, one or two people immediately explain why they don’t like it or believe it won’t work, and the idea is then dropped. On the other hand, there are occasions when just the opposite happens. A new idea is immediately welcomed, and a commitment to support it is made before the idea is critically evaluated. The Pros-Cons-Faults-and-Fixes technique helps to offset this human tendency toward jumping to conclusions.

The Method

Start by clearly defining the proposed action or choice. Then follow these steps:

* List the Pros in favor of the decision or choice. Think broadly and creatively and list as many benefits, advantages, or other positives as possible.

* List the Cons, or arguments against what is proposed. There are usually more Cons than Pros, as most humans are naturally critical. It is easier to think of arguments against a new idea than to imagine how the new idea might work. This is why it is often difficult to get careful consideration of a new idea.

* Review and consolidate the list. If two Pros are similar or overlapping, consider merging them to eliminate any redundancy. Do the same for any overlapping Cons.

* If the choice is between two clearly defined options, go through the previous steps for the second option. If there are more than two options, a technique such as Decision Matrix may be more appropriate than Pros-Cons-Faults-and-Fixes.

* At this point you must make a choice. If the goal is to challenge an initial judgment that the idea won’t work, take the Cons, one at a time, and see if they can be “Fixed.” That means trying to figure a way to neutralize their adverse influence or even to convert them into Pros. This exercise is intended to counter any unnecessary or biased negativity about the idea. There are at least four ways an argument listed as a Con might be Fixed:

 

  • Propose a modification of the Con that would significantly lower the risk of the Con being a problem.
  • Identify a preventive measure that would significantly reduce the chances of the Con being a problem.
  • Do contingency planning that includes a change of course if certain indicators are observed.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Con is a problem.

* If the goal is to challenge an initial optimistic assumption that the idea will work and should be pursued, take the Pros, one at a time, and see if they can be “Faulted.” That means to try and figure out how the Pro might fail to materialize or have undesirable consequences. This exercise is intended to counter any wishful thinking or unjustified optimism about the idea. There are at least three ways a Pro might be Faulted:

  • Identify a reason why the Pro would not work or why the benefit would not be received.
  • Identify an undesirable side effect that might accompany the benefit.
  • Identify a need for further research or information gathering to confirm or refute the assumption that the Pro will work or be beneficial.

A third option is to combine both approaches, to Fault the Pros and Fix the Cons.
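
As a purely illustrative aid (the proposal, Pros, Cons, Faults, and Fixes below are all invented), the worksheet this method produces can be kept as a simple mapping from each Pro to the Faults raised against it and from each Con to its candidate Fixes; whatever remains unfaulted or unfixed is what should carry into the written assessment.

```python
# Minimal sketch (illustrative content only): bookkeeping for a
# Pros-Cons-Faults-and-Fixes worksheet. Each Pro carries the Faults raised
# against it; each Con carries the candidate Fixes. Entries left empty after
# discussion are the points that survive into the assessment.

proposal = "Adopt a shared wiki for interagency drafting"  # hypothetical example

pros = {
    "Earlier identification of disagreements": ["Benefit depends on wide participation"],
    "Transparent audit trail of contributions": [],
}

cons = {
    "Analysts may not trust the platform's security": [
        "Preventive measure: host on the existing accredited network",
    ],
    "Senior reviewers prefer finished drafts": [],  # no Fix proposed yet
}

print(f"Proposal: {proposal}\n")
print("Pros (with Faults raised against them):")
for pro, faults in pros.items():
    note = f"  [Faulted: {'; '.join(faults)}]" if faults else "  [unfaulted]"
    print(f"  + {pro}{note}")

print("\nCons (with candidate Fixes):")
for con, fixes in cons.items():
    note = f"  [Fix: {'; '.join(fixes)}]" if fixes else "  [unfixed]"
    print(f"  - {con}{note}")
```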

11.5 SWOT ANALYSIS

SWOT is commonly used by all types of organizations to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in any project or plan of action. The strengths and weaknesses are internal to the organization, while the opportunities and threats are characteristics of the external environment.
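
A minimal sketch (entries invented) of how the four SWOT categories line up against the two distinctions the technique rests on, internal versus external and helpful versus harmful:

```python
# Minimal sketch (illustrative entries only): grouping SWOT factors for a
# hypothetical goal. The quadrant labels mirror the internal/external and
# helpful/harmful distinctions described above.

goal = "Deliver timely all-source assessments to a new customer set"  # hypothetical

swot = {
    "Strengths (internal, helpful)": ["Deep regional expertise", "Established source network"],
    "Weaknesses (internal, harmful)": ["Limited language coverage"],
    "Opportunities (external, helpful)": ["New collection platform coming online"],
    "Threats (external, harmful)": ["Adversary denial and deception efforts"],
}

print(f"Goal: {goal}\n")
for quadrant, factors in swot.items():
    print(quadrant)
    for factor in factors:
        print(f"  - {factor}")
```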

12 Practitioner’s Guide to Collaboration

Analysis in the U.S. Intelligence Community is now in a transitional stage from being predominantly a mental activity done by a solo analyst to becoming a collaborative or group activity.

 

The increasing use of structured analytic techniques is central to this transition. Many things change when the internal thought process of analysts can be externalized in a transparent manner so that ideas can be shared, built on, and easily critiqued by others.

 

This chapter is not intended to describe collaboration as it exists today. It is a visionary attempt to foresee how collaboration might be put into practice in the future when interagency collaboration is the norm and the younger generation of analysts has had even more time to imprint its social networking practices on the Intelligence Community.

 

12.1 SOCIAL NETWORKS AND ANALYTIC TEAMS

There are several ways to categorize teams and groups. When discussing the U.S. Intelligence Community, it seems most useful to deal with three types: the traditional analytic team, the special project team, and the social network.

* Traditional analytic team: This is the typical work team assigned to perform a specific task. It has a leader appointed by a manager or chosen by the team, and all members of the team are collectively accountable for the team’s product. The team may work jointly to develop the entire product or, as is commonly done for National Intelligence Estimates, each team member may be responsible for a specific section of the work.

The core analytic team, with participants usually working at the same agency, drafts a paper and sends it to other members of the government community for comment and coordination.

* Special project team: Such a team is usually formed to provide decision makers with near–real time analytic support during a crisis or an ongoing operation. A crisis support task force or field-deployed interagency intelligence team that supports a military operation exemplifies this type of team.

* Social networks: Experienced analysts have always had their own network of experts in their field or related fields with whom they consult from time to time and whom they may recruit to work with them on a specific analytic project. Social networks are critical to the analytic business. They do the day-to-day monitoring of events, produce routine products as needed, and may recommend the formation of a more formal analytic team to handle a specific project. The social network is the form of group activity that is now changing dramatically with the growing ease of cross-agency secure communications and the availability of social networking software.

The key problem that arises with social networks is the geographic distribution of their members. Even within the Washington, D.C., metropolitan area, distance is a factor that limits the frequency of face-to-face meetings.

Research on effective collaborative practices has shown that geographically distributed teams are most likely to succeed when they satisfy six key imperatives. Participants must

  • Know and trust each other; this usually requires that they meet face-to-face at least once.
  • Feel a personal need to engage the group in order to perform a critical task.
  • Derive mutual benefits from working together.
  • Connect with each other virtually on demand and easily add new members.
  • Perceive incentives for participating in the group, such as saving time, gaining new insights from interaction with other knowledgeable analysts, or increasing the impact of their contribution.
  • Share a common understanding of the problem with agreed lists of common terms and definitions.

12.2 DIVIDING THE WORK

The geographic distribution of the social network can also be managed effectively by dividing the analytic task into two parts—first exploiting the strengths of the social network for divergent or creative analysis to identify ideas and gather information, and, second, forming a small analytic team that employs convergent analysis to meld these ideas into an analytic product.

 

Structured analytic techniques and collaborative software work very well with this two-part approach to analysis. A series of basic techniques used for divergent analysis early in the analytic process works well for a geographically distributed social network communicating via a wiki.

 

A project leader informs a social network of an impending project and provides a tentative project description, target audience, scope, and process to be followed. The leader also gives the name of the wiki to be used and invites interested analysts knowledgeable in that area to participate. Any analyst with access to the secure network also has access to the wiki and is authorized to add information and ideas to it. Any or all of the following techniques, as well as others, may come into play during the divergent analysis phase as specified by the project leader:

  • Issue Redefinition as described in chapter 4.
  • Collaboration in sharing and processing data using other techniques such as timelines, sorting, networking, mapping, and charting as described in chapter 4.
  • Some form of brainstorming, as described in chapter 5, to generate a list of driving forces, variables, players, etc.
  • Ranking or prioritizing this list, as described in chapter 4.
  • Putting this list into a Cross-Impact Matrix, as described in chapter 5, and then discussing and recording in the wiki the relationship, if any, between each pair of driving forces, variables, or players in that matrix.
  • Developing a list of alternative explanations or outcomes (hypotheses) to be considered (chapter 7).
  • Developing a list of items of evidence available to be considered when evaluating these hypotheses (chapter 7).
  • Doing a Key Assumptions Check (chapter 8). This will be less effective when done on a wiki than in a face-to-face meeting, but it would be useful to learn the network’s thinking about key assumptions.

Most of these steps involve making lists, which can be done quite effectively with a wiki. Making such input via a wiki can be even more productive than a face-to-face meeting, because analysts have more time to think about and write up their thoughts and are able to look at their contribution over several days and make additions or changes as new ideas come to them.

The process should be overseen and guided by a project leader. In addition to providing a sound foundation for further analysis, this process enables the project leader to identify the best analysts to be included in the smaller team that conducts the second phase of the project—making analytic judgments and drafting the report. Team members should be selected to maximize the following criteria: level of expertise on the subject, level of interest in the outcome of the analysis, and diversity of opinions and thinking styles among the group.

12.3 COMMON PITFALLS WITH SMALL GROUPS

The use of structured analytic techniques frequently helps analysts avoid many of the common pitfalls of the small-group process.

Much research documents that the desire for consensus is an important cause of poor group decisions. Development of a group consensus is usually perceived as success, but, in reality, it is often indicative of failure. Premature consensus is one of the more common causes of suboptimal group performance. It leads to failure to identify or seriously consider alternatives, failure to examine the negative aspects of the preferred position, and failure to consider the consequences that might follow if the preferred position is wrong.8 This phenomenon is what is commonly called groupthink.

12.4 BENEFITING FROM DIVERSITY

Improvement of group performance requires an understanding of these problems and a conscientious effort to avoid or mitigate them. The literature on small-group performance is virtually unanimous in emphasizing that groups make better decisions when their members bring to the table a diverse set of ideas, opinions, and perspectives. What premature consensus, groupthink, and polarization all have in common is a failure to recognize assumptions and a failure to adequately identify and consider alternative points of view.

Briefly, then, the route to better analysis is to create small groups of analysts who are strongly encouraged by their leader to speak up and express a wide range of ideas, opinions, and perspectives. The use of structured analytic techniques generally ensures that this happens. These techniques guide the dialogue between analysts as they share evidence and alternative perspectives on the meaning and significance of the evidence. Each step in the technique prompts relevant discussion within the team, and such discussion can generate and evaluate substantially more divergent information and new ideas than can a group that does not use such a structured process.

12.5 ADVOCACY VS. OBJECTIVE INQUIRY

The desired diversity of opinion is, of course, a double-edged sword, as it can become a source of conflict which degrades group effectiveness.

In a task-oriented team environment, advocacy of a specific position can lead to emotional conflict and reduced team effectiveness. Advocates tend to examine evidence in a biased manner, accepting at face value information that seems to confirm their own point of view and subjecting any contrary evidence to highly critical evaluation. Advocacy is appropriate in a meeting of stakeholders that one is attending for the purpose of representing a specific interest. It is also “an effective method for making decisions in a courtroom when both sides are effectively represented, or in an election when the decision is made by a vote of the people.”

…many CIA and FBI analysts report that their preferred use of ACH is to gain a better understanding of the differences of opinion between them and other analysts or between analytic offices. The process of creating an ACH matrix requires identifying the evidence and arguments being used and determining how these are interpreted as either consistent or inconsistent with the various hypotheses.

Considerable research on virtual teaming shows that leadership effectiveness is a major factor in the success or failure of a virtual team. Although leadership usually is provided by a group’s appointed leader, it can also emerge as a more distributed peer process and is greatly aided by the use of a trained facilitator (see Figure 12.6). When face-to-face contact is limited, leaders, facilitators, and team members must compensate by paying more attention than they might otherwise devote to the following tasks:

  • Articulating a clear mission, goals, specific tasks, and procedures for evaluating results.
  • Defining measurable objectives with milestones and timelines for achieving them.
  • Identifying clear and complementary roles and responsibilities.
  • Building relationships with and between team members and with stakeholders.
  • Agreeing on team norms and expected behaviors.
  • Defining conflict resolution procedures.
  • Developing specific communication protocols and practices


13 Evaluation of Structured Analytic Techniques

13.1 ESTABLISHING FACE VALIDITY

The taxonomy of structured analytic techniques presents each category of structured technique in the context of how it is intended to mitigate or avoid a specific cognitive or group process problem. In other words, each structured analytic technique has face validity because there is a rational reason for expecting it to help mitigate or avoid a recognized problem that can occur when one is doing intelligence analysis. For example, a great deal of research in human cognition during the past sixty years shows the limits of working memory and suggests that one can manage a complex problem most effectively by breaking it down into smaller pieces.

Satisficing is a common analytic shortcut that people use in making everyday decisions when there are multiple possible answers. It saves a lot of time when you are making judgments or decisions of little consequence, but it is ill-advised when making judgments or decisions with significant consequences for national security.

The ACH process does not guarantee a correct judgment, but this anecdotal evidence suggests that ACH does make a significant contribution to better analysis.

13.2 LIMITS OF EMPIRICAL TESTING

Findings from empirical experiments can be generalized to apply to intelligence analysis only if the test conditions match relevant conditions in which intelligence analysis is conducted. There are so many variables that can affect the research results that it is very difficult to control for all or even most of them. These variables include the purpose for which a technique is used, implementation procedures, context of the experiment, nature of the analytic task, differences in analytic experience and skill, and whether the analysis is done by a single analyst or as a group process. All of these variables affect the outcome of any experiment that ostensibly tests the utility of an analytic technique. In a number of readily available examples of research on structured analytic techniques, we identified serious questions about the applicability of the research findings to intelligence analysis.

Different Purpose or Goal

Many structured analytic techniques can be used for several different purposes, and research findings on the effectiveness of these techniques can be generalized and applied to the Intelligence Community only if the technique is used in the same way and for the same purpose as in the actual practice of the Intelligence Community. For example, Philip Tetlock, in his important book Expert Political Judgment, describes two experiments showing that scenario development may not be an effective technique. The experiments compared judgments on a political issue before and after the test subjects prepared scenarios in an effort to gain a better understanding of the issues. The experiments showed that the predictions by both experts and nonexperts were more accurate before generating the scenarios; in other words, the generation of scenarios actually reduced the accuracy of their predictions. Several experienced analysts have separately cited this finding as evidence that scenario development may not be a useful method for intelligence analysis.
However, Tetlock’s conclusions should not be generalized to apply to intelligence analysis, as those experiments tested scenarios as a predictive tool. The Intelligence Community does not use scenarios for prediction.

Different Implementation Procedures

There are specific procedures for implementing many structured techniques. If research on the effectiveness of a specific technique is to be applicable to intelligence analysis, the research should use the same implementing procedure(s) for that technique as those used by the Intelligence Community.

Different Environment

When evaluating the validity of a technique, it is necessary to control for the environment in which the technique is used. If this is not done, the research findings may not always apply to intelligence analysis.

This is by no means intended to suggest that techniques developed for use in other domains should not be used in intelligence analysis. On the contrary, other domains are a productive source of such techniques, but the best way to apply them to intelligence analysis needs to be carefully evaluated.

Misleading Test Scenario

Empirical testing of a structured analytic technique requires developing a realistic test scenario. The test group analyzes this scenario using the structured technique while the control group analyzes the scenario without the benefit of any such technique. The MITRE Corporation conducted an experiment to test the ability of the Analysis of Competing Hypotheses (ACH) technique to prevent confirmation bias. Confirmation bias is the tendency of people to seek information or assign greater weight to information that confirms what they already believe and to underweight or not seek information that supports an alternative belief.

Typically, intelligence analysts do not begin the process of attacking an intelligence problem by developing a full set of hypotheses. Richards Heuer, who developed the ACH methodology, has always believed that a principal benefit of ACH in mitigating confirmation bias is that it requires analysts to develop a full set of hypotheses before evaluating any of them.

Differences in Analytic Experience and Skill

Structured techniques differ in the skill level and amount of training required to implement them effectively.

When one is evaluating any technique, the level of skill and training required is an important variable. Any empirical testing needs to control for this variable, which suggests that testing of any medium- to high-skill technique should be done with current or former intelligence analysts, including analysts at different skill levels.

An analytic tool is not like a machine that works whenever it is turned on. It is a strategy for achieving a goal. Whether or not one reaches the goal depends in part upon the skill of the person executing the strategy.

Conclusion

Using empirical experiments to evaluate structured techniques is difficult because the outcome of any experiment is influenced by so many variables. Experiments conducted outside the Intelligence Community typically fail to replicate the important conditions that influence the outcome of analysis within the community.

13.3 A NEW APPROACH TO EVALUATION

There is a better way to evaluate structured analytic techniques. In this section we outline a new approach that is embedded in the reality of how analysis is actually done in the Intelligence Community. We then show how this approach might be applied to the analysis of three specific techniques.

Step 1 is to identify what we know, or think we know, about the benefits from using any particular structured technique. This is the face validity as described earlier in this chapter plus whatever analysts believe they have learned from frequent use of a technique. For example, we think we know that ACH provides several benefits that help produce a better intelligence product. A full analysis of ACH would consider each of the following potential benefits:

* It requires analysts to start by developing a full set of alternative hypotheses. This reduces the risk of satisficing.
* It enables analysts to manage and sort evidence in analytically useful ways.
* It requires analysts to try to refute hypotheses rather than to support a single hypothesis. This process reduces confirmation bias and helps to ensure that all alternatives are fully considered.
* It can help a small group of analysts identify new and divergent information as they fill out the matrix, and it depersonalizes the discussion when conflicting opinions are identified.
* It spurs analysts to present conclusions in a way that is better organized and more transparent as to how these conclusions were reached.
* It can provide a foundation for identifying indicators that can be monitored to determine the direction in which events are heading.
* It leaves a clear audit trail as to how the analysis was done.
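
To make the mechanics behind these benefits concrete, the following is a minimal sketch in Python of how an ACH-style matrix might be recorded and scored. The hypotheses, evidence items, and the simple count-the-inconsistencies rule are illustrative assumptions, not the procedure actually used in the Intelligence Community.

```python
# Minimal, illustrative ACH-style matrix: hypotheses are columns, evidence
# items are rows, and each cell records whether the evidence is consistent
# ("C"), inconsistent ("I"), or not applicable ("N") with the hypothesis.
# The scoring rule below (count inconsistencies, prefer the hypothesis with
# the fewest) is a simplification for illustration only.

hypotheses = ["H1: deliberate attack", "H2: accident", "H3: third-party provocation"]

# Each evidence item maps to one rating per hypothesis, in the same order.
evidence = {
    "Troop movements observed near border": ["C", "I", "C"],
    "No prior mobilization orders intercepted": ["I", "C", "C"],
    "Claim of responsibility by fringe group": ["N", "I", "C"],
}

def inconsistency_scores(hypotheses, evidence):
    """Count 'I' ratings per hypothesis; ACH tries to refute, so the
    least-refuted hypothesis (lowest score) survives for further scrutiny."""
    scores = {h: 0 for h in hypotheses}
    for ratings in evidence.values():
        for h, rating in zip(hypotheses, ratings):
            if rating == "I":
                scores[h] += 1
    return scores

if __name__ == "__main__":
    for h, score in sorted(inconsistency_scores(hypotheses, evidence).items(),
                           key=lambda kv: kv[1]):
        print(f"{score} inconsistencies: {h}")
```

In this sketch the matrix itself serves as the audit trail: the ratings record how each hypothesis fared against each item of evidence, which corresponds to the transparency and audit-trail benefits listed above.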

Step 2 is to obtain evidence to test whether or not a technique actually provides the expected benefits. Acquisition of evidence for or against these benefits is not limited to the results of empirical experiments. It includes structured interviews of analysts, managers, and customers; observations of meetings of analysts as they use these techniques; and surveys as well as experiments.

Step 3 is to obtain evidence of whether or not these benefits actually lead to higher quality analysis. Quality of analysis is not limited to accuracy. Other measures of quality include clarity of presentation, transparency in how the conclusion was reached, and construction of an audit trail for subsequent review, all of which are benefits that might be gained, for example, by use of ACH. Evidence of higher quality might come from independent evaluation of quality standards or interviews of customers receiving the reports. Cost effectiveness, including cost in analyst time as well as money, is another criterion of interest. As stated previously in this book, we claim that the use of a structured technique often saves analysts time in the long run. That claim should also be subjected to empirical analysis.
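
One hypothetical way to organize such an evaluation is as a simple record that links each claimed benefit (Step 1) to the evidence sources used to test it (Step 2) and the quality measures it is expected to improve (Step 3). The sketch below, in Python, is illustrative only; the field names and example entries are assumptions, not a prescribed format.

```python
# Hypothetical record structure for evaluating a structured technique,
# following the three steps described above: (1) claimed benefits,
# (2) evidence that the benefit is realized, (3) evidence that the benefit
# improves analytic quality. Entries are illustrative placeholders.

from dataclasses import dataclass, field

@dataclass
class BenefitEvaluation:
    claimed_benefit: str                                   # Step 1
    evidence_sources: list = field(default_factory=list)   # Step 2: interviews, observation, surveys, experiments
    quality_measures: list = field(default_factory=list)   # Step 3: accuracy, clarity, transparency, audit trail, cost

ach_evaluation = [
    BenefitEvaluation(
        claimed_benefit="Requires a full set of hypotheses up front (reduces satisficing)",
        evidence_sources=["structured interviews with analysts", "observation of analytic sessions"],
        quality_measures=["transparency of conclusions", "independent review against quality standards"],
    ),
    BenefitEvaluation(
        claimed_benefit="Leaves a clear audit trail",
        evidence_sources=["review of completed matrices"],
        quality_measures=["ease of subsequent review", "customer interviews"],
    ),
]

for entry in ach_evaluation:
    print(entry.claimed_benefit, "->", entry.evidence_sources, "->", entry.quality_measures)
```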

Indicators Validator

The Indicators Validator described in chapter 6 is a new technique developed by Randy Pherson to test the power of a set of indicators to provide early warning of future developments, such as which of several potential scenarios seems to be developing. It uses a matrix similar to an ACH matrix with scenarios listed across the top and indicators down the left side. For each combination of indicator and scenario, the analyst rates on a five-point scale the likelihood that this indicator will or will not be seen if that scenario is developing. This rating measures the diagnostic value of each indicator or its ability to diagnose which scenario is becoming most likely.

It is often found that indicators have little or no value because they are consistent with multiple scenarios. The explanation for this phenomenon is that when analysts are identifying indicators, they typically look for indicators that are consistent with the scenario they are concerned about identifying. They don’t think about the value of an indicator being diminished if it is also consistent with other hypotheses.
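
As a rough illustration of the diagnosticity idea, the sketch below scores a small indicator-by-scenario matrix in Python. The five-point scale follows the description above, but the example indicators and the simple spread-of-ratings measure of diagnostic value are assumptions for illustration, not Pherson's actual scoring procedure.

```python
# Illustrative indicator-by-scenario matrix. Each cell is a rating on a
# five-point scale (1 = highly unlikely to be seen if the scenario is
# developing ... 5 = highly likely). An indicator rated about the same for
# every scenario cannot help discriminate among them, so the spread between
# its highest and lowest rating is used here as a rough proxy for
# diagnostic value. This is a simplification for illustration only.

scenarios = ["Negotiated settlement", "Frozen conflict", "Renewed offensive"]

indicators = {
    "Back-channel talks reported":       [5, 2, 1],
    "Reserve units mobilized":           [1, 2, 5],
    "Public statements criticizing foe": [4, 4, 4],   # consistent with everything
}

def diagnostic_value(ratings):
    """Crude proxy: an indicator is diagnostic only if its ratings differ
    across scenarios; identical ratings mean it cannot discriminate."""
    return max(ratings) - min(ratings)

for name, ratings in sorted(indicators.items(),
                            key=lambda kv: diagnostic_value(kv[1]),
                            reverse=True):
    print(f"diagnostic value {diagnostic_value(ratings)}: {name}")
```

The third indicator in the example is rated the same for every scenario, so it has no power to discriminate among them, which is exactly the phenomenon described above.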

The Indicators Validator was developed to meet a perceived need for analysts to better understand the requirements for a good indicator. Ideally, however, the need for this technique and its effectiveness should be tested before all analysts working with indicators are encouraged to use it. Such testing might be done as follows:

* Check the need for the new technique. Select a sample of intelligence reports that include an indicators list and apply the Indicators Validator to each indicator on the list. How often does this test identify indicators that have been put forward despite their having little or no diagnostic value?

* Do a before-and-after comparison. Identify analysts who have developed a set of indicators during the course of their work. Then have them apply the Indicators Validator to their work and see how much difference it makes.

14 Vision of the Future

The Intelligence Community is pursuing several paths in its efforts to improve the quality of intelligence analysis. One of these paths is the increased use of structured analytic techniques, and this book is intended to encourage and support that effort.

 

14.4 IMAGINING THE FUTURE: 2015

Imagine it is now 2015. Our three assumptions have turned out to be accurate, and collaboration in the use of structured analytic techniques is now widespread. What has happened to make this outcome possible, and how has it transformed the way intelligence analysis is done in 2015? This is our vision of what could be happening by that date.

The use of A-Space has been growing for the past five years. Younger analysts in particular have embraced it in addition to Intellipedia as a channel for secure collaboration with their colleagues working on related topics in other offices and agencies. Analysts in different geographic locations arrange to meet as a group from time to time, but most of the ongoing interaction is accomplished via collaborative tools such as A-Space, communities of interest, and Intellipedia.

By 2015, the use of structured analytic techniques has expanded well beyond the United States. The British, Canadian, Australian, and several other foreign intelligence services increasingly incorporate structured techniques into their training programs and their processes for conducting analysis. After the global financial crisis that began in 2008, a number of international financial and business consulting firms adapted several of the core intelligence analysis techniques to their business needs, concluding that they could no longer afford multi-million dollar mistakes that could have been avoided by engaging in more rigorous analysis as part of their business processes.

Notes from Intelligence Support to Urban Operations TC 2-91.4

 

Introduction

URBAN AREAS AND MODERN OPERATIONS

With the continuing growth of the world’s urban areas and increasing population concentrations within them, it is ever more likely that Army forces will conduct operations in urban environments. As urbanization has changed the demographic landscape, potential enemies recognize the inherent danger and complexity of this environment to the attacker. Some may view it as their best chance to negate the technological and firepower advantages of modernized opponents. Given the global population trends and the likely strategies and tactics of future threats, Army forces will likely conduct operations in, around, and over urban areas—not as a matter of fate, but as a deliberate choice linked to national security objectives and strategy. Stability operations––where the primary mission is keeping the social structure, economic structure, and political support institutions intact and functioning, or having to almost simultaneously provide the services associated with those structures and institutions––may dominate urban operations. This requires specific and timely intelligence support, placing a tremendous demand on the intelligence warfighting function for operations, short-term planning, and long-term planning.

Providing intelligence support to operations in the complex urban environment can be quite challenging. It may at first seem overwhelming. The amount of detail required for operations in urban environments, along with the large amounts of varied information required to provide intelligence support to these operations, can be daunting. Intelligence professionals must be flexible and adaptive in applying doctrine (including tactics, techniques, and procedures) based on the mission variables: mission, enemy, terrain and weather, troops and support available, time available, and civil considerations (METT-TC).

As with operations in any environment, a key to providing good intelligence support in the urban environment lies in identifying and focusing on the critical information required for each specific mission. The complexity of the urban environment requires focused intelligence. A comprehensive framework must be established to support the commander’s requirements while managing the vast amount of information and intelligence required for urban operations. By addressing the issues and considerations listed in this manual, the commander, G-2 or S-2, and intelligence analyst will be able to address most of the critical aspects of the urban environment and identify both the gaps in the intelligence collection effort and those systems and procedures that may answer them. This will assist the commander in correctly identifying enemy actions so that Army forces can focus on the enemy and seize the initiative while maintaining an understanding of the overall situation.

 

 

Chapter 1
Intelligence and the Urban Environment

OVERVIEW

1-1. The special considerations that must be taken into account in any operation in an urban environment go well beyond the uniqueness of the urban terrain.

JP 3-06 identifies three distinguishing characteristics of the urban environment: physical terrain, population, and infrastructure. Also, FM 3-06 identifies three key overlapping and interdependent components of the urban environment: terrain (natural and manmade), society, and the supporting infrastructure.

CIVIL CONSIDERATIONS (ASCOPE)

1-2. Normally, the factors used in the planning and execution of tactical military missions are evaluated in terms of the mission variables: METT-TC. Due to the importance of civil considerations (the letter “C” in METT-TC) in urban operations, those factors are discussed first in this manual. Civil considerations are the influence of manmade infrastructure, civilian institutions, and attitudes and activities of the civilian leaders, populations, and organizations within an area of operations on the conduct of military operations (ADRP 5-0).

1-3. An appreciation of civil considerations and the ability to analyze their impact on operations enhances several aspects of urban operations––among them, the selection of objectives; location, movement, and control of forces; use of weapons; and force protection measures. Civil considerations comprise six characteristics, expressed in the acronym ASCOPE:

  • Areas.
  • Structures.
  • Capabilities.
  • Organizations.
  • People.
  • Events.
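
As one hypothetical illustration of how the ASCOPE categories might be used to organize collected civil information for a single neighborhood, consider the Python sketch below; the categories follow the acronym, but the entries and the simple gap check are assumptions for illustration only.

```python
# Hypothetical ASCOPE-structured record for a single urban neighborhood.
# The six keys follow the ASCOPE acronym; the entries are illustrative only.

neighborhood_civil_considerations = {
    "Areas":         ["market square", "riverfront district"],
    "Structures":    ["water treatment plant", "central mosque", "radio station"],
    "Capabilities":  ["functioning hospital", "intermittent electrical power"],
    "Organizations": ["municipal council", "local NGO clinic"],
    "People":        ["district elder", "union leadership", "displaced families"],
    "Events":        ["weekly market day", "anniversary of local uprising"],
}

# A simple completeness check: flag any ASCOPE category with no entries yet,
# which highlights gaps in civil-considerations collection.
for category, entries in neighborhood_civil_considerations.items():
    status = "OK" if entries else "GAP - no information collected"
    print(f"{category}: {status}")
```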

1-4. Civil considerations, in conjunction with the components of the urban environment, provide a useful structure for intelligence personnel to begin to focus their intelligence preparation of the battlefield and organize the huge undertaking of providing intelligence to operations in the urban environment. They should not be considered as separate entities but rather as interdependent. Understanding this interrelationship of systems provides focus for the intelligence analyst and allows the commander a greater understanding of the urban area in question.

TERRAIN

1-5. Terrain in the urban environment is complex and challenging. It possesses all the characteristics of the natural landscape, coupled with manmade construction, resulting in a complicated and fluid environment that influences the conduct of military operations in unique ways. Urban areas, the populace within them, their expectations and perceptions, and the activities performed within their boundaries form the economic, political, and cultural focus for the surrounding areas. What military planners must consider for urban areas may range from a few dozen dwellings surrounded by farmland to major metropolitan cities.

1-14. Urban areas are usually regional centers of finance, politics, transportation, industry, and culture. They have population concentrations ranging from several thousand up to millions of people. The larger the city, the greater its regional influence. Because of their psychological, political, or logistic value, control of regionally important cities has often led to pitched battles. In the last 40 years, many cities have expanded dramatically, losing their well-defined boundaries as they extend into the countryside.

URBAN AREAS

1-16. As defined in FM 3-06, urban areas are generally classified as––

  • Megalopolis (population over 10 million).
  • Metropolis (population between 1 and 10 million).
  • City (population 100,000 to 1 million).
  • Town or small city (population 3,000 to 100,000).
  • Village (population less than 3,000).
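
These population bands amount to a simple classification rule. The short Python sketch below restates the FM 3-06 thresholds quoted above; the handling of exact boundary values is an assumption, since the source does not specify it.

```python
# Classify an urban area by population, using the FM 3-06 thresholds listed
# above. The function name is illustrative; how to treat exact boundary
# values is not specified in the source, so this mapping is one reasonable
# reading (e.g., exactly 1,000,000 is treated as a metropolis).

def classify_urban_area(population: int) -> str:
    if population > 10_000_000:
        return "Megalopolis"
    elif population >= 1_000_000:
        return "Metropolis"
    elif population >= 100_000:
        return "City"
    elif population >= 3_000:
        return "Town or small city"
    else:
        return "Village"

# Example: a population of 350,000 falls in the "City" band.
print(classify_urban_area(350_000))
```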

URBAN PATTERNS

1-17. Manmade terrain in the urban environment is overlaid on the natural terrain of the area, and manmade patterns are affected by the underlying natural terrain. It can be useful to keep the underlying natural terrain in mind when analyzing the manmade patterns of the urban environment.

URBAN FUNCTIONAL ZONES

1-24. To provide an accurate depiction of an urban area, it is necessary to have a basic understanding of its numerous physical subdivisions or zones. These zones are functional in nature and reflect “where” something routinely occurs within the urban area.

SOCIETY (SOCIO-CULTURAL)

1-70. When local support is necessary for success, as is often the case in operations in the urban environment, the population is central to accomplishing the mission. The center of gravity for operations in urban environments is often human. To effectively operate among an urban population and maintain their goodwill, it is important to develop a thorough understanding of the society and its culture, to include values, needs, history, religion, customs, and social structure.

1-71. U.S. forces can avoid losing local support for the mission and anticipate local reaction to friendly courses of action by understanding, respecting, and following local customs when possible. The history of a people often explains why the urban population behaves the way it does. For example, U.S. forces might forestall a violent demonstration by understanding the significance of the anniversary of a local hero’s death.

1-72. Accommodating the social norms of a population is potentially the most influential factor in the conduct of urban operations. Unfortunately, this is often neglected. Social factors have greater impact in urban operations than in any other environment. The density of the local populations and the constant interaction between them and U.S. forces greatly increase the importance of social considerations. The fastest way to damage the legitimacy of an operation is to ignore or violate social mores or precepts of a particular population. Groups develop norms and adamantly believe in them all of their lives. The step most often neglected is understanding and respecting these differences.

1-73. The interaction of different cultures during operations in the urban environment may demand greater recognition than in other environments. This greater need for understanding comes from the increased interaction with the civilian populace. Norms and values could involve such diverse areas as food, sleep patterns, casual and close relationships, manners, and cleanliness. Understanding these differences is only a start in developing cultural awareness.

1-74. Religious beliefs and practices are among the most important, yet least understood, aspects of the cultures of other peoples. In many parts of the world, religious norms are a matter of life and death. In religious wars, it is not uncommon for combatants to commit suicidal acts in the name of their god. In those situations, religious beliefs are considered more important than life itself.

1-75. Failure to recognize, respect, and incorporate an understanding of the cultural and religious aspects of the society with which U.S. forces are interacting could rapidly lead to an erosion of the legitimacy of the U.S. or multinational force mission. When assessing events, intelligence professionals must consider the norms of the local culture or society. For example, while bribery is not an accepted norm in our society, it may be a totally acceptable practice in another society. If U.S. intelligence professionals assess an incident of this nature using our own societal norms and values as a reference, it is highly likely that the significance of the event will be misinterpreted.

1-77. Many developing country governments are characterized by nepotism, favor trading, sabotage, and indifference. Corruption is pervasive and institutionalized as a practical way to manage excess demand for city services. The power of officials is often based primarily on family and personal connections; economic, political, or military power bases; and age, and only after that on education, training, and competence.

1-78. A local government’s breakdown from its previous level of effectiveness will quickly exacerbate problems of public health and mobility. Attempts to get the local-level bureaucracy to function along U.S. lines will produce further breakdown or passive indifference. Any unintentional or intentional threat to the privileges of ranking local officials or to members of their families will be stubbornly resisted. Avoiding such threats and assessing the importance of particular officials requires knowledge of family ties.

1-79. U.S. military planners must also recognize that the urban populace will behave according to their own self-interest. The urban populace will focus on the different interests at work: those of U.S. or multinational forces; those of elements hostile to U.S. or multinational forces; those of international or nongovernmental organizations (NGOs) that may be present; those of local national opportunists; and those of the general population. Friendly forces must be constantly aware of these interests and how the local national population perceives them.

1-80. Another significant cultural problem is the presence of displaced persons within an urban area. Rural immigrants, who may have different cultural norms, when combined with city residents displaced by urban conflict, can create a significant strategic problem. Noncombatants and refugees without hostile intent can stop an advancing unit or inadvertently complicate an operation. Additionally, there may be enemy troops, criminal gangs, vigilantes, paramilitary factions, and factions within those groups hiding in the waves of the displaced.

1-81. The enemy knows that it will be hard to identify the threat among neutral or disinterested parties.

Chechen rebels and Hezbollah effectively used the cover of refugees to attack occupying forces and counted on heavy civilian casualties in the counterattack to gain support from the local population. The goal is to place incalculable stresses on the Soldiers in order to break down discipline and operational integrity.

1-82. Defining the structure of the social hierarchy is often critical to understanding the population. Identifying those in positions of authority is important as well. These city officials, village elders, or tribal chieftains are often the critical nodes of the society and influence the actions of the population at large. In many societies, nominal titles do not equal power––influence does. Many apparent leaders are figureheads, and the true authority lies elsewhere.

1-83. Some areas around the world are not governed by the rule of law, but instead rely upon tradition. Often, ethnic loyalty, religious affiliation, and tribal membership provide societal cohesion and the sense of proper behavior and ethics in dealing with outsiders, such as the U.S. or multinational partners. It is important to understand the complicated inner workings of a society rife with internal conflict, although to do so is difficult and requires a thorough examination of a society’s culture and history.

1-85. While certain patterns do exist, most urban centers are normally composed of a multitude of different peoples, each with their own standards of conduct. Individuals act independently and in their own best interest, which will not always coincide with friendly objectives.

Treating the urban population as a homogenous entity can lead to false assumptions, cultural misunderstandings, and poor situational understanding.

POPULATION

1-86. A population of significant size and density inhabits, works in, and uses the manmade and natural terrain in the urban environment. Civilians remaining in an urban environment may be significant as a threat, an obstacle, a logistics support problem (to include medical support), or a source of support and information.

1-89. Another issue is the local population’s requirement for logistic or medical support. U.S. troops deployed to Somalia and the Balkans immediately had to deal with providing logistic support to starving populations until local and international organizations could take over those functions.

1-90. From an intelligence standpoint, the local population can be a valuable information source.

1-92. Although the population is not a part of the terrain, the populace can impact the mission in both positive and negative ways. Individuals or groups in the population can be coopted by one side or another to perform a surveillance and reconnaissance function, moving about the city to collect information. City residents have intimate knowledge of the city. Their observations can provide information and insights that fill intelligence gaps, illuminate other activities, and help build an understanding of the environment. For instance, residents often know about shortcuts through town. They might also be able to observe and report on a demonstration or meeting that occurs in their area.

1-93. Unarmed combatants operating within the populace or noncombatants might provide intelligence to armed combatants engaged in a confrontation.

1-94. The presence of noncombatants in a combat zone can lead to restrictive rules of engagement, which may impact the way in which a unit accomplishes its mission. The population as a whole, or groups, individuals, and sectors within an urban area, can be the target audience of influence activities (such as MISO or threat psychological operations).

1-95. Populations present during urban operations can physically restrict movement and maneuver by limiting or changing the width of routes. People may assist movement if a group can be used as a human barrier between one combatant group and another. Refugee flows, for example, can provide covert infiltration or exfiltration routes for members of a force. There may also be unintended restrictions to routes due to normal urban activities, which can impact military operations.

1-96. One of the largest challenges to friendly operations is the portion of the population that supports the adversary. Even people conducting their daily activities may inadvertently “get in the way” of any type of operation. For example, curiosity-driven crowds in Haiti often affected patrols by inadvertently forcing units into the middle of the street or pushing them into a single file.

INFRASTRUCTURE

1-101. The infrastructure of an urban environment consists of the basic resources, support systems, communications, and industries upon which the population depends. The key elements that allow an urban area to function are also significant to operations, especially stability operations. The force that controls the water, electricity, telecommunications, natural gas, food production and distribution, and medical facilities will virtually control the urban area. These facilities may not be located within the city’s boundaries. The infrastructure upon which an urban area depends may also provide human services and cultural and political structures that are critical beyond that urban area, perhaps for the entire nation.

1-102. A city’s infrastructure is its foundation. It includes buildings, bridges, roads, airfields, ports, subways, sewers, power plants, industrial sectors, communications, and similar physical structures. Infrastructure varies from city to city. In developed countries, the infrastructure and service sectors are highly sophisticated and well integrated. In developing cities, even basic infrastructure may be lacking. To understand how the infrastructure of a city supports the population, it needs to be viewed as a system of systems. Each component affects the population, the normal operation of the city, and the potential long-term success of military operations conducted there.

1-103. Military planners must understand the functions and interrelationships of these components to assess how disruption or restoration of the infrastructure affects the population and ultimately the mission. By determining the critical nodes and vulnerabilities of a city, allied forces can delineate specific locations within the urban area that are vital to overall operations. Additionally, military planners must initially regard these structures as civilian places or objects, and plan accordingly, until reliable information indicates they are being used for a military purpose.

1-104. Much of the analysis conducted for terrain and society can apply when assessing the urban infrastructure. For example, commanders, staffs, and analysts could not effectively assess the urban economic and commercial infrastructure without simultaneously considering labor. All aspects of the society relate to the urban workforce and can be used to further analyze it, since the workforce is a subelement of the urban society.

TRANSPORTATION

1-106. The transportation network is a critical component of a city’s day-to-day activity. It facilitates the movement of material and personnel around the city. This network includes roads, railways, subways, bus systems, airports, and harbors.

COMMUNICATIONS

1-108. Communication facilities in modern cities are expansive and highly developed. Complicated networks of landlines, radio relay stations, fiber optics, cellular service, and the Internet provide a vast web of communication capabilities. This communication redundancy allows for the constant flow of information.

1-109. National and local engineers and architects may have developed a communication infrastructure more effective and robust than it might first appear.

1-110. Developing countries may have little in the way of communication infrastructure. Information flow can depend on less sophisticated means—couriers, graffiti, rumor and gossip, and local printed media. Even in countries with little communication infrastructure, radios, cell phones, and satellite communications may be readily available to pass information. Understanding the communication infrastructure of a city is important because it ultimately controls the flow of information to the population and the enemy.

ENERGY

1-111. All societies require energy (such as wood, coal, oil, natural gas, nuclear, and solar) for basic heating, cooking, and electricity. Energy is needed for industrial production and is therefore vital to the economy. In fact, every sector of a city’s infrastructure relies on energy to some degree. Violence may result from energy scarcity. From a tactical and operational perspective, protecting an urban area’s energy supplies prevents unnecessary hardship to the civilian population and, therefore, facilitates mission accomplishment. Power plants, refineries, and pipelines that provide energy resources for the urban area may not be located within the urban area. Energy facilities are potential targets in an urban conflict. Combatant forces may target these facilities to erode support for the local authorities or to deny these facilities to their enemies.

1-112. Electricity is vital to city populations. Electric companies provide a basic service that provides heat, power, and lighting. Because electricity cannot be stored in any sizable amount, damage to any portion of this utility will immediately affect the population. Electrical services are not always available or reliable in the developing world.

1-113. Interruptions in service are common occurrences in many cities due to a variety of factors. Decayed infrastructure, sabotage, riots, military operations, and other forms of conflict can disrupt electrical service. As a critical node of the overall city service sector, the electrical facilities are potential targets in an urban conflict. Enemy forces may target these facilities to erode support for the local authorities or friendly forces.

WATER AND WASTE DISPOSAL

1-115. Deliberate acts of poisoning cannot be overlooked where access to the water supply is not controlled. U.S. forces may gain no marked tactical advantage by controlling this system, but its protection minimizes the population’s hardship and thus contributes to overall mission success. A buildup of garbage on city streets poses many hazards, including health threats and obstacles. Maintenance or restoration of urban garbage removal to landfills can minimize this threat and improve the confidence of the civilian population in the friendly mission.

RESOURCES AND MATERIAL PRODUCTION

1-116. Understanding the origination and storage sites of resources that maintain an urban population can be especially critical in stability operations. These sites may need to be secured against looting or attack by threat forces in order to maintain urban services and thereby retain or regain the confidence of the local population in the U.S. mission. Additionally, military production sites may need to be secured to prevent the population from gaining uncontrolled access to quantities of military equipment.

FOOD DISTRIBUTION

1-117. A basic humanitarian need of the local populace is food. During periods of conflict, food supplies in urban areas often become scarce. Maintaining and restoring normal food distribution channels in urban areas will help prevent a humanitarian disaster and greatly assist in maintaining or regaining the good will of the local population for U.S. forces. It may be impossible to immediately restore food distribution channels following a conflict, and U.S. forces may have to work with NGOs that specialize in providing these types of services. This may require friendly forces to provide protection for NGO convoys and personnel in areas where conflict may occur.

MEDICAL FACILITIES

1-118. While the health services infrastructure of most developed cities is advanced, medical facilities are deficient in many countries. International humanitarian organizations may represent the only viable medical care available.

LOCAL POLICE, MILITARY UNITS WITH POLICE AUTHORITY OR MISSIONS, AND FIREFIGHTING UNITS

1-119. These elements can be critical in maintaining public order. Their operations must be integrated with friendly forces in areas under friendly control to ensure that stability and security are restored or maintained. As discussed in chapter 3, the precinct structure of these organizations can also provide a good model for the delineation of unit boundaries within the urban area. It may be necessary for friendly forces to provide training for these elements.

CRISIS MANAGEMENT AND CIVIL DEFENSE

1-120. Local crisis management procedures and civil defense structures can aid U.S. forces in helping to care for noncombatants in areas of ongoing or recent military operations. Additionally, the crisis management and civil defense leadership will often be local officials who may be able to provide structure to help restore or maintain security and local services in urban areas under friendly control. Many larger urban areas have significant response teams and assets to deal with crises. The loss of these key urban “maintainers” may not only severely impact military operations within the urban environment but also threaten the health or mobility of those living there. During periods of combat, this may also affect the ability of Soldiers to fight as fires or chemical spills remain unchecked or sewer systems back up. This is especially true when automatic pumping stations that normally handle rising water levels are deprived of power. It may be necessary for friendly forces to provide training for these elements.

SUBTERRANEAN FEATURES

1-121. Subterranean features can be extremely important in identifying underground military structures and concealed avenues of approach and in maintaining public services.

Chapter 2
The Threat in the Urban Environment

OVERVIEW

2-1. The obligation of intelligence professionals includes providing adequate information to enable leaders to distinguish threats from nonthreats and combatants from noncombatants. This legal requirement of distinction is the initial obligation of decision makers who rely primarily on the intelligence they are provided.

2-2. Threats in the urban environment can be difficult to identify due to the often complex nature of the forces and the environment. In urban terrain, friendly forces will encounter a variety of potential threats, such as conventional military forces, paramilitary forces, insurgents or guerillas, terrorists, common criminals, drug traffickers, warlords, and street gangs. These threats may operate independently or some may operate together. Individuals may be active members of one or more groups. Many urban threats lack uniforms or obvious logistic trains and use networks rather than hierarchical structures.

2-3. Little information may be available concerning threat tactics, techniques, and procedures (TTP) so intelligence staffs must collect against these TTP and build threat models. The enemy situation is often extremely fluid––locals friendly to us today may be tomorrow’s belligerents. Adversaries seek to blend in with the local population to avoid being captured or killed. Enemy forces who are familiar with the city layout have an inherently superior awareness of the current situation. Finally, U.S. forces often fail to understand the motives of the urban threat due to difficulties of building cultural awareness and situational understanding for a complex environment and operation. Intelligence personnel must assist the commander in correctly identifying enemy actions so that U.S. forces can focus on the enemy and seize the initiative while maintaining an understanding of the overall situation.

2-4. Potential urban enemies share some characteristics. The broken and compartmented terrain is best suited to the use of small unit operations. Typical urban fighters are organized in squad size elements and employ guerrilla tactics, terrorist tactics, or a combination of the two. They normally choose to attack (often using ambushes) on terrain which canalizes U.S. forces and limits our ability to maneuver or mass while allowing the threat forces to inflict casualties on U.S. forces and then withdraw. Small arms, sniper rifles, rocket-propelled grenades, mines, improvised explosive devices, Molotov cocktails, and booby traps are often the preferred weapons. These weapons range from high tech to low tech and may be 30 to 40 years old or built from hardware supplies, but at close range in the urban environment many of their limitations can be negated.

CONVENTIONAL MILITARY AND PARAMILITARY FORCES

2-6. Conventional military and paramilitary forces are the most overt threats to U.S. and multinational forces. Identifying the capabilities and intent of these threat forces is standard for intelligence professionals for any type of operation in any type of environment. In the urban environment, however, more attention must be paid to threat capabilities that support operations in the urban environment and to understanding what, if any, specialized training these forces have received in conducting urban warfare.

INSURGENTS OR GUERRILLAS

2-7. Several factors are important in analyzing any particular insurgency. Commanders and staffs must perform this analysis within an insurgency’s operational environment. (See FM 3-24/MCWP 3-33.5 for doctrine on analyzing insurgencies. See table 2-2 for examples of information requirements associated with analyzing insurgencies.) Under the conditions of insurgency within the urban environment, the analyst must place more emphasis on—

  • Developing population status overlays showing potential hostile neighborhoods.
  • Developing an understanding of “how” the insurgent or guerrilla organization operates and is organized with a focus toward potential strengths and weaknesses.
  • Determining primary operating or staging areas.
  • Determining mobility corridors and escape routes.
  • Determining most likely targets.
  • Determining where the threat’s logistic facilities are located and how their support organizations operate.
  • Determining the level of popular support (active and passive).
  • Determining the recruiting, command and control, reconnaissance and surveillance, logistics (to include money), and operations techniques and methods of the insurgent or guerrilla organization.
  • Locating neutrals and those actively opposing these organizations.
  • Using pattern analysis and other tools to establish links between the insurgent or guerilla organization and other organizations (to include family links).
  • Determining the underlying social, political, and economic issues that caused the insurgency in the first place and which are continuing to cause the members of the organization as well as elements of the population to support it.

TERRORISTS

2-8. The threat of terrorism is a growing concern for the U.S. military. The opportunities for terrorism are greater in cities due to the presence of large numbers of potential victims, the likelihood of media attention, and the presence of vulnerable infrastructure. Many terrorist cells operate in cities because they can blend with the surrounding population, find recruits, and obtain logistic support. Terrorist cells are not confined to the slum areas of the developing world. In fact, many of the intelligence collection, logistic support, and planning cells for terrorist groups exist in the cities of Western Europe and even the United States.

CRIME AND CRIMINAL ORGANIZATIONS

2-10. These organizations can threaten the successful completion of U.S. operations both directly and indirectly. Criminals and criminal organizations may directly target U.S. forces, stealing supplies or extorting money or contracts. Likewise, increased criminal activity can undermine the U.S. efforts to establish a sense of security among the local populace. Additionally, guerillas, insurgents, and terrorists may take advantage of criminal organizations in many ways, ranging from using them to collect information on U.S. and multinational forces to obtaining supplies, munitions, or services or using their LOCs as logistic support channels. Terrorist organizations may even have their own separate criminal element or be inseparable from a criminal group. An enterprise like narcoterrorism is an example of this.

2-11. Criminal activities will usually continue and may even increase during operations in the urban environment. Criminal organizations often run black markets and illegal smuggling operations in and around urban areas. These types of activities are often established prior to the arrival of U.S. and multinational forces and may proliferate prior to or once U.S. and multinational forces arrive, especially if normal urban services are disrupted by the events that resulted in the U.S. force deployment. For the local population, these activities may be the only reliable source of jobs which allow workers to provide for their families.

INFORMATION OPERATIONS

2-12. Adversary information operations pose a threat to friendly forces. These threats can consist of propaganda, denial and deception, electronic warfare, computer network attack, and, although not a direct threat, the use of the media to achieve an objective. In general, the purposes of these attacks are to––

  • Erode domestic and international support for the mission.
  • Deny friendly forces information on enemy disposition and strength.
  • Disrupt or eavesdrop on friendly communications.
  • Disrupt the U.S. and multinational information flow.

2-13. Through the use of propaganda, adversaries try to undermine the U.S. and multinational mission by eroding popular support among the local population, the American people, and the international community. This is accomplished through savvy public relations campaigns, dissemination of falsehoods or half-truths, staging attacks on civilian sites and then passing the blame onto allied forces, and conducting other operations that make public statements by U.S. leaders appear to be lies and half-truths.

2-14. Urban terrain facilitates adversary denial and deception. The urban population provides a natural screen in which enemy forces can hide their identities, numbers, and equipment. There are other opportunities for denial and deception in cities. Threat forces can hide military equipment in culturally sensitive places—caching weapons in houses of worship or medical facilities. Threat forces can use decoys in urban terrain to cause erroneous assessments of their combat capability, strength, and disposition of assets. Decoys can be employed to absorb expensive and limited precision-guided munitions as well as cause misallocation of limited resources.

2-15. The enemy electronic warfare threat focuses on denying friendly use of the electromagnetic spectrum to disrupt communications and radar emissions. Commercially available tactical jamming equipment is proliferating throughout the world and threatens allied communication and receiving equipment. Ensuring rapid and secure communications is one of the greatest challenges of urban operations.

2-16. The media can alter the course of urban operations and military operations in general. While not a direct threat, the increasing presence of media personnel during military operations can create special challenges. Media products seen in real time without perspective can erode U.S. military support both internationally and domestically. Enemy forces will attempt to shape media coverage to suit their own needs. For example, by escorting media personnel to “civilian casualty sites,” they attempt to sway international opinion against friendly operations. The media may also highlight errors committed by U.S. and multinational forces. In this age of 24-hour media coverage, the death of even a single noncombatant can negatively affect a military campaign.

HEALTH ISSUES

2-17. Urban centers provide favorable conditions for the spread of debilitating or deadly diseases. Sanitation is often poor in urban areas. Local water and food may contain dangerous contaminants. During military operations in the urban environment, sewage systems, power generating plants, water treatment plants, city sanitation, and other services and utilities are vulnerable. When disabled or destroyed, the risk of disease and epidemics increases, which could lead to unrest, further disease, riots, and casualties.

2-22. The typical urban environment includes potential biological or chemical hazards that fall outside the realm of weapons of mass destruction. Operations within confined urban spaces may see fighting in sewers and medical facilities and the subsequent health problems that exposure to contaminants may cause. There may also be deliberate actions to contaminate an enemy’s food or water or to infect an enemy. Today’s biological threats include Ebola, smallpox, and anthrax.

OTHER URBAN CONCERNS

2-23. There are additional concerns regarding the conduct of military operations within the urban environment. The analyst should, to some extent, also focus on the aviation and fire hazards discussed below.

AVIATION HAZARDS

FIRE HAZARDS

 

Chapter 3
Information Sources in the Urban Environment

OVERVIEW

3-1. In the urban environment, every Soldier is an information collector. Soldiers conducting patrols, manning observation posts, manning checkpoints, or even convoying supplies along a main supply route serve as the commander’s eyes and ears.

3-2. This chapter briefly discusses some of the types of information that Soldiers on the battlefield with different specialties can provide to the intelligence staff. It is essential to properly brief these assets so that they are aware of the intelligence requirements prior to their missions and to debrief them immediately upon completion of their missions; this is to ensure the information is still current in their minds and any timely intelligence they may provide is available for further action.

SCOUTS, SNIPERS, AND RECONNAISSANCE

3-3. Scouts, snipers, and other surveillance and reconnaissance assets can provide valuable information on people and places in the urban environment. Traditionally, scouts, snipers, and reconnaissance assets are often used in surveillance roles (passive collection) from a standoff position. Operations in the urban environment, especially stability operations, may require a more active role (reconnaissance) such as patrolling for some of these assets. When employed in a reconnaissance role (active collection), these assets tend to be most useful when accompanied by an interpreter who allows them to interact with people that they encounter, which allows them to better assess the situation.

ENGINEERS

3-9. Engineers can provide significant amounts of information to the intelligence staff. They support mobility, countermobility, and survivability by providing maneuver and engineer commanders with information about the terrain, threat engineer activity, obstacles, and weather effects within the AO. During the planning process, engineers can provide specific information on the urban environment, such as the effects that structures within the urban area may have on the operation, bridge weight class and conditions, and the most likely obstacle locations and composition. Engineers can assist in assessing potential collateral damage by analyzing risks of damage caused by the release of dangerous forces, power grid and water source stability, and the viability of sewage networks. Engineers provide a range of capabilities that enhance collection efforts. Each of the engineer functions may provide varying degrees of technical expertise in support of any given assigned mission and task. These capabilities are generated from and organized by both combat and general engineer units with overarching support from geospatial means.

CIVIL AFFAIRS

3-23. Civil affairs personnel are a key asset in any operation undertaken in the urban environment. The missions of civil affairs personnel keep them constantly interacting with the indigenous populations and institutions (also called IPI). Civil affairs personnel develop area studies, conduct a variety of assessments, and maintain running estimates. These studies, assessments, and running estimates focus on the civil component of an area or operation.

3-24. The basic evaluation of an area is the civil affairs area study. An area study is produced in advance of the need. It establishes baseline information relating to the civil components of the area in question in a format corresponding to the civil affairs functional areas and functional specialties. Civil affairs assessments provide a precise means to fill identified information gaps in order to inform decisionmaking. Civil affairs Soldiers perform three types of assessments: the initial assessment, the deliberate assessment, and the survey. (See FM 3-57 and ATP 3-57.60 for doctrine on civil affairs area studies and assessments.)

3-25. The civil affairs operations running estimate feeds directly into the military decisionmaking process, whether conducted during civil-affairs-only operations or integrated into the supported unit’s planning and development of the common operational picture. During course of action development and wargaming, the civil affairs operations staff ensures each course of action effectively integrates civil considerations.

3-26. Civil affairs units conduct civil information management as a core competency. Civil information management is the process whereby data relating to the civil component of the operational environment is gathered, collated, processed, analyzed, produced into information products, and disseminated (JP 3-57). Effectively executing this process results in civil information being shared with the supported organization, higher headquarters, and other U.S. Government and Department of Defense agencies, intergovernmental organizations, and NGOs.

3-27. While civil affairs forces should never be used as information collection assets, the fact that civil affairs teams constantly travel throughout the AO to conduct their missions makes them good providers of combat information, if they are properly debriefed by intelligence staffs. Intelligence personnel should ask their local civil affairs team for their area studies and assessments.

MILITARY INFORMATION SUPPORT OPERATIONS

3-28. MISO units are made up primarily of Soldiers holding the psychological operations military occupational specialty. These Soldiers must have a thorough understanding of the local populace, including the effects of the information environment, and must fully understand the effects that U.S. operations are having on the populace.

Psychological operations Soldiers routinely interact with local populations in their native languages, directly influence specified targets, collect information, and deliver persuasive, informative, and directive messages. Intelligence personnel can leverage attached MISO units’ capabilities and the information they provide to gain key insights into the current sentiments and behavior of local nationals and other important groups. MISO units can be a tremendous resource to the intelligence staff; however, they rely heavily on the intelligence warfighting function.

MILITARY POLICE

3-32. Whether they are conducting area security operations, maneuver and support operations, internment and resettlement, or law and order operations, military police personnel normally have a presence across large parts of the battlefield.

In some cases, they may temporarily assume Customs duties, as they did at the main airport outside Panama City during Operation Just Cause. Generally, military police are better trained in the art of observation than regular Soldiers; with their presence at critical locations on the battlefield, they can provide a wealth of battlefield information provided that they are properly briefed on current intelligence requirements.

3-34. Military police also maintain a detainee information database, which can also track detainees in stability operations. Information from this database can be useful to intelligence personnel, especially when constructing link diagrams and association matrices.
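
As a minimal illustration of how an association matrix might be derived from such a database, the Python sketch below counts how often individuals appear together in reports; the report format, names, and counting rule are hypothetical assumptions, not the actual database schema.

```python
# Build a simple symmetric association matrix from reports of which
# individuals were detained, met, or were observed together. The input
# format and names are hypothetical; real detainee data and schemas differ.

from itertools import combinations
from collections import defaultdict

reports = [
    {"PERSON A", "PERSON B"},
    {"PERSON B", "PERSON C", "PERSON D"},
    {"PERSON A", "PERSON C"},
]

# associations[x][y] counts how many reports place x and y together;
# repeated co-occurrence suggests a link worth examining on a link diagram.
associations = defaultdict(lambda: defaultdict(int))
for report in reports:
    for x, y in combinations(sorted(report), 2):
        associations[x][y] += 1
        associations[y][x] += 1

for person in sorted(associations):
    links = ", ".join(f"{other} ({count})"
                      for other, count in sorted(associations[person].items()))
    print(f"{person}: {links}")
```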

JOINT AND DEPARTMENT OF DEFENSE

3-39. Most Army operations in urban environments are likely to be joint operations. This requires Army intelligence staffs at all levels to make sure that they are familiar with the intelligence collection capabilities and methods of Navy, Air Force, and Marine Corps units operating in and around their AO. Joint operations generally bring more robust intelligence capabilities to the AO; however, joint operations also require significantly more coordination to ensure resources are being used to their fullest extent.

INTELLIGENCE SUPPORT PACKAGES

3-40. The Defense Intelligence Agency produces intelligence support packages in response to the theater or joint task force target list or a request for information. A target summary provides data on target significance, description, imagery annotations, node functions, air defenses, and critical nodal analysis. These packages support targeting of specific military and civilian installations. Intelligence support packages include—

  • Land satellite (also called LANDSAT) imagery.
  • Land satellite digital terrain elevation data-merge (also called DTED-merge) imagery.
  • Target line drawings.
  • Photography (when available).
  • Multiscale electro-optical (also called EO) imagery.

NATIONAL GEOSPATIAL-INTELLIGENCE AGENCY PRODUCTS

3-44. NGA produces a range of products that can be useful in the urban environment. These products include city graphics, urban features databases, gridded installation imagery (Secret-level products), the geographic names database, terrain analysis products, imagery intelligence briefs, and annotated graphics.

MULTINATIONAL

3-47. Due to classification issues, sharing intelligence during multinational operations can be challenging. It may be the case that U.S. forces are working in a multinational force that contains both member countries with whom the United States has close intelligence ties and others with whom the United States has few or no intelligence ties. In many cases intelligence personnel from other countries have unique skills that can significantly contribute to the friendly intelligence effort.

3-48. Establishing methods of exchanging battlefield information and critical intelligence as well as coordinating intelligence collection efforts can be crucial to the overall success of the mission. Reports from multinational force members can fill intelligence gaps for the U.S. forces and the multinational force as a whole.

3-49. The unique perspective of some of the multinational partners may provide U.S. intelligence analysts with key insights. (For example, during the Vietnam War, Korean forces, used to living in environments similar to Vietnamese villages, often noticed anomalies that Americans missed, such as too much rice cooking in the pots for the number of people visible in the village.) Likewise, few countries have the sophisticated intelligence collection assets available to U.S. forces, and information that the U.S. may provide could be critical both to their mission success and to their force protection.

INTERNATIONAL AND INTERGOVERNMENTAL ORGANIZATIONS

3-50. International organizations (not NGOs) and intergovernmental organizations will often have a presence in areas in which U.S. forces may conduct operations, especially if those areas experience some type of unrest or upheaval prior to U.S. operations. International organizations and intergovernmental organizations include such agencies as the International Criminal Police Organization (also called Interpol), the United Nations, and the North Atlantic Treaty Organization. When providing support or considering offering support to the local populace, international organizations and intergovernmental organizations usually conduct assessments of the local areas that focus on understanding the needs of the local populace, the ability of the infrastructure to enable their support or aid to be effectively provided, and the general security situation and stability of the area.

NONGOVERNMENTAL ORGANIZATIONS

3-53. As with international organizations and intergovernmental organizations, NGOs will often have a presence in areas in which U.S. forces may conduct operations. Since most of these organizations are concerned with providing support to the local populace, their presence tends to be especially prominent in areas that are experiencing, or have recently experienced, some type of unrest or upheaval, whether before, during, or after U.S. operations.

3-54. NGOs strive to protect their shield of neutrality in all situations and do not generally offer copies of their assessments to government organizations. Nonetheless, it is often in their interest to make U.S. forces aware of their operations in areas under U.S. control. Representatives of individual NGOs operating in areas under U.S. control may provide U.S. forces with their detailed assessments of those areas in order to gain U.S. support either in the form of additional material aid for the local populace or for security considerations. (See JP 3-08 and FM 3-07.)

3-55. Individual NGO members are often highly willing to discuss what they have seen during their operations with U.S. forces personnel. Some NGOs have been used in the past as fronts for threat organizations seeking to operate against U.S. forces. Intelligence analysts must therefore carefully evaluate information provided by NGO personnel.

LOCAL NATIONAL AUTHORITIES

3-56. Local national authorities and former local national authorities know their populations and local infrastructure best. Key information can be gained from cooperative local national authorities or former authorities. Analysts must always be careful to consider that these authorities may be biased for any number of reasons.

3-57. Politicians usually know their populations very well or they would not be able to remain in office. They can provide detailed socio-cultural information on the populace within their region of control (for example, economic strengths and weaknesses or religious, ethnic, and tribal breakdowns). They are also usually aware of the infrastructure. Obviously, intelligence analysts must be aware that information provided by these personnel generally will be biased and almost certainly slanted in the long-term favor of that individual.

Chapter 4
Operations in the Urban Environment

OVERVIEW

4-1. In the urban environment, different types of operations (offense, defense, and stability) often occur simultaneously in adjacent portions of a unit’s AO. Intelligence support to operations in this extremely complex environment often requires a higher degree of specificity and fidelity in intelligence products than required in operations conducted in other environments. Intelligence staffs have finite resources and time available to accomplish their tasks. Realistically, intelligence staffs cannot expect to always be able to initially provide the level of specificity and number of products needed to support commanders.

4-2. Using the mission variables (METT-TC), intelligence staffs start prioritizing by focusing on the commander's and operational requirements to create critical initial products. Requests for information to higher echelons can assist lower level intelligence sections in providing critical detail for these products. As lower level intelligence staffs create products or update products from higher echelons, they must provide those products to the higher echelons so that those echelons can maintain awareness of the current situation.

Once initial critical products have been built, intelligence staffs must continue building any additional support products required. Just as Soldiers continue to improve their foxholes and battle positions the longer they remain in place, intelligence staffs continue to improve and refine products that have already been built.

4-3. When preparing for operations in the urban environment, intelligence analysts consider the three primary characteristics of the urban environment as well as the threat.

Commanders and staffs require a good understanding of the civil considerations for the urban area as well as the situation in the surrounding region. This includes the governmental leaders and political organizations and structures, military and paramilitary forces, economic situation, sociological background, demographics, history, criminal organizations and activity, and any nongovernmental ruling elite (for example, factions, families, tribes). All are key factors although some are more important than others, depending on the situation in the target country. Intelligence personnel must assist the commander in correctly identifying enemy actions so U.S. forces can focus on the enemy and seize the initiative while maintaining an understanding of the overall situation.

4-7. Information collection is an activity that synchronizes and integrates the planning and employment of sensors and assets as well as the processing, exploitation, and dissemination systems in direct support of current and future operations (FM 3-55). This activity integrates the intelligence and operations staff functions focused on answering the commander’s critical information requirements. At the tactical level, intelligence operations, reconnaissance, security operations, and surveillance are the four primary tasks conducted as part of information collection. (See FM 3-55.) The intelligence warfighting function contributes to information collection through intelligence operations and the plan requirements and assess collection task.

4-8. Plan requirements and assess collection is the task of analyzing requirements, evaluating available assets (internal and external), recommending to the operations staff taskings for information collection assets, submitting requests for information for adjacent and higher collection support, and assessing the effectiveness of the information collection plan (ATP 2-01). It is a commander-driven, coordinated staff effort led by the G-2 or S-2. The continuous functions of planning requirements and assessing collection identify the best way to satisfy the requirements of the supported commander and staff. These functions are not necessarily sequential.

4-9. Intelligence operations are the tasks undertaken by military intelligence units and Soldiers to obtain information to satisfy validated requirements (ADRP 2-0). Intelligence operations collect information about the activities and resources of the threat or information concerning the characteristics of the operational environment. (See FM 2-0 for doctrine on intelligence operations.)

PLANNING CONSIDERATIONS

4-15. When planning for intelligence support to operations in the urban environment, the following must be accomplished:

  • Define priorities for information collection.
  • Coordinate for movement of information collection assets.
  • Coordinate for information and intelligence flow with all military intelligence units, non-military-intelligence units, other Service components, and multinational organizations.
  • Establish liaison with all elements, organizations, and local nationals necessary for mission accomplishment and force protection.

4-16. One of the major factors when planning for most operations in urban environments is the local population and its potential effect on U.S. operations. Intelligence personnel must be cognizant of local national perceptions of U.S. forces, their environment, and the nature of the conflict. To engage successfully in this dynamic, U.S. forces must avoid mirror imaging, that is, imposing their own values on the threat's courses of action. Careful study of the threat country, collaboration with country experts, and the use of people with pertinent ethnic backgrounds in the wargaming process all contribute to avoiding mirror imaging.

4-18. The information collection plan must be as detailed as possible and must be regularly reviewed for changes during operations in constantly changing urban environments. The finite information collection resources available to any command must be feasibly allocated and reallocated as often as necessary in order to keep up with the fluid urban environment. Employing these assets within their capabilities, taking into consideration their limitations within the urban environment, is critical to ensuring that a focused intelligence effort is successful.

PREPARE

4-19. During the preparation for operations, intelligence staffs and collection assets must refine their products, collection plans, and reporting procedures. Establishing and testing the intelligence architecture (to include joint and multinational elements) is a critical activity during this phase. Intelligence staffs must ensure that all intelligence personnel are aware of the current situation and intelligence priorities, are fully trained on both individual and collective tasks, and are aware of any limitations within the intelligence architecture that are relevant to them.

4-20. Additionally, intelligence staffs must ensure that targeting procedures are well-defined and executed. In urban environments, nonlethal targeting may be more prevalent than lethal targeting and must be fully integrated into the process.

EXECUTE

4-21. Execution of operations in urban environments requires continuous updating and refining of intelligence priorities and the information collection plan as the situation changes in order to provide the necessary intelligence to the commander in a timely manner. (See ATP 2-01.) Timely reporting, processing, fusion, analysis, production, and dissemination of critical intelligence often must be done within a more compressed timeline in the fluid and complex urban environment than in other environments.

4-22. Large amounts of information are generally available for collection within the urban environment. Procedures must be set in place to sort the information to determine which information is relevant and which is not.

4-23. Reported information must always be carefully assessed and verified with other sources of intelligence and information to avoid acting on single-source reporting. In stability operations, where human intelligence is the primary source of intelligence, acting on single-source reporting is a constant pitfall. Situations may occur, however, where the consequences of not acting on unverified, single-source intelligence may be worse than any potential negative consequences resulting from acting on that unverified information.

ASSESS

4-24. As previously stated, operations in the urban environment, especially stability operations, can be extremely fluid. The intelligence staff must constantly reevaluate the TTP of U.S. forces due to the rapid changes in the situation and the threat’s adaptation to our TTP. New threat TTP or potential changes to threat TTP identified by intelligence analysts must be quickly provided to the commander and operations staff so that U.S. forces TTP can be adjusted accordingly.

4-29. Debriefing must occur as soon as possible after the completion of a mission to ensure that the information is obtained while it is still fresh in the Soldiers’ minds and to ensure that time-sensitive information is reported to intelligence channels immediately.

Appendix A
Urban Intelligence Tools and Products

OVERVIEW

A-1. The urban environment offers the analyst many challenges normally not found in other environments. The concentration of multiple environmental factors (high rises, demographic concerns, tunnels, waterways, and others) requires the intelligence analyst to prepare a detailed plan for collecting information within the urban environment.

A-2. There are numerous products and tools that may be employed in assessing the urban environment. Due to the complex nature of the urban environment, these tools and products normally will be used to assist in providing an awareness of the current situation and situational understanding.

A-3. The tools and products listed in this appendix are only some of the tools and products that may be used during operations in an urban environment. For purposes of this appendix items listed as tools are ones generally assumed to be used primarily within intelligence sections for analytical purposes. Products are generally assumed to be items developed at least in part by intelligence sections that are used primarily by personnel outside intelligence sections.

TOOLS

A-4. Intelligence analysis is the process by which collected information is evaluated and integrated with existing information to facilitate intelligence production (ADRP 2-0). There are numerous software applications available to the Army that can be used as tools to do analysis as well as to create relevant intelligence products for the urban environment. These software applications range from programs such as Analyst Notebook and Crimelink, which provide link analysis, association matrix, and pattern analysis tools, to the Urban Tactical Planner, which was developed by the Topographic Engineering Center as an operational planning tool and is available on the Digital Topographic Support System. The focus of this section, however, is on the types of tools that could be used in the urban environment rather than on the software or hardware that may be used to create or manipulate them. (See ATP 2-33.4 for doctrine on intelligence analysis.)

LINK ANALYSIS TOOLS

A-6. Link analysis is used to depict contacts, associations, and relationships between persons, events, activities, and organizations. Five types of link analysis tools are––

  • Link diagrams.
  • Association matrices.
  • Relationship matrices.
  • Activities matrices.
  • Time event charts.

Link Diagrams

A-7. This tool seeks to graphically depict relationships between people, events, locations, or other factors deemed significant in any given situation. Link diagrams help analysts better understand how people and factors are interrelated in order to determine key links. (See figure A-2.)
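
Conceptually, a link diagram is simply a graph whose nodes are people, events, locations, or organizations and whose edges are observed associations. The minimal Python sketch below is an illustration of that idea rather than any fielded tool; all entities are invented placeholders.

    from collections import defaultdict

    # A link diagram as an undirected graph: node -> set of linked nodes.
    # All names are hypothetical, for illustration only.
    links = defaultdict(set)

    def add_link(a, b):
        """Record an observed association between two entities."""
        links[a].add(b)
        links[b].add(a)

    add_link("Person A", "Meeting at Mosque X")
    add_link("Person B", "Meeting at Mosque X")
    add_link("Person A", "Warehouse 12")
    add_link("Person C", "Warehouse 12")
    add_link("Person A", "Person D")

    # "Key" nodes here are simply the most connected ones (highest degree).
    key_nodes = sorted(links, key=lambda n: len(links[n]), reverse=True)
    for node in key_nodes[:3]:
        print(node, "->", sorted(links[node]))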

Relationship Matrices

A-9. Relationship matrices are intended to depict the nature of relationships between elements of the operational area. The elements can include members from the noncombatant population, the friendly force, international organizations, and an adversary group. Utility infrastructure, significant buildings, media, and activities might also be included. The nature of the relationship between two or more components includes measures of contention, collusion, or dependency. The purpose of this tool is to demonstrate graphically how each component of the city interacts with the others and whether these interactions promote or degrade the likelihood of mission success. The relationships represented in the matrix can also begin to help the analysts in deciphering how to best use the relationship to help shape the environment.

A-10. The example relationship matrix shown in figure A-4, while not complete, is intended to show how the relationships among a representative compilation of population groups can be depicted. This example is an extremely simple version of what might be used during an operation in which many actors and other population elements are present.

A-12. In figure A-4, a relationship of possible collusion exists between the government and political group 3, and a friendly relationship exists between the government and the media. Some questions the intelligence analyst might ask when reviewing this information include—

  • How can the government use the media to its advantage?
  • Will the government seek to discredit political group 3 using the media?
  • Will the population view the media’s reporting as credible?
  • Does the population see the government as willfully using the media to suit its own ends?
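
In software terms, a relationship matrix of this kind reduces to a lookup from an unordered pair of actors to the assessed nature of their relationship. The short Python sketch below reuses the government, media, and political group 3 actors from the example above purely as placeholders; the assessments themselves are invented.

    # Relationship matrix: unordered pair of actors -> assessed relationship.
    # Actors and assessments are hypothetical placeholders.
    relationships = {
        frozenset(("Government", "Media")): "friendly",
        frozenset(("Government", "Political group 3")): "possible collusion",
        frozenset(("Media", "Political group 3")): "unknown",
    }

    def relationship(a, b):
        """Look up the assessed relationship between two actors."""
        return relationships.get(frozenset((a, b)), "not assessed")

    actors = ["Government", "Media", "Political group 3"]
    for i, a in enumerate(actors):
        for b in actors[i + 1:]:
            print(f"{a} <-> {b}: {relationship(a, b)}")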

Activities Matrices

A-13. Activities matrices help analysts connect individuals (such as those in association matrices) to organizations, events, entities, addresses, and activities—anything other than people. Information from this matrix, combined with information from association matrices, assists analysts in linking personalities as well.
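
Viewed as data, an activities matrix is a two-mode table linking people to non-person entities; cross-linking people who share an entity is the same inference an analyst makes when combining it with an association matrix. A minimal Python sketch, with invented entries:

    from itertools import combinations

    # Activities matrix: person -> set of non-person entities
    # (organizations, events, addresses). All entries are invented.
    activities = {
        "Person A": {"Organization Q", "Market incident (12 May)"},
        "Person B": {"Organization Q", "Safe house, Block 7"},
        "Person C": {"Safe house, Block 7"},
    }

    # Derive person-to-person associations from shared entities,
    # the linkage normally confirmed against an association matrix.
    for (p1, acts1), (p2, acts2) in combinations(activities.items(), 2):
        shared = acts1 & acts2
        if shared:
            print(f"{p1} -- {p2} via {sorted(shared)}")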

LISTS AND TIMELINES OF KEY DATES

A-15. In many operations, including most stability operations, key local national holidays, historic events, and significant cultural and political events can be extremely important. Soldiers are often provided with a list of these key dates in order to identify potential dates of increased or unusual activity. These lists, however, rarely include a description of why these dates are significant and what can be expected to happen on the holiday. In some cases, days of the week themselves are significant. For example, in Bosnia weddings were often held on Fridays and celebratory fire was a common occurrence on Friday afternoons and late into the night.

As analytic tools, timelines might help the intelligence analyst predict how key sectors of the population might react to given circumstances.
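
Because a bare list of dates rarely explains why a date matters, a more useful structure pairs each date (or recurring weekday) with its significance and the activity to expect. A small Python sketch along those lines, with invented entries:

    import datetime

    # Key dates and recurring weekdays, each with a note on significance
    # and expected activity. All entries are invented examples.
    key_dates = [
        {"when": datetime.date(2024, 3, 21), "name": "Regional new year",
         "expect": "large public gatherings and increased road traffic"},
    ]
    key_weekdays = {
        4: "Weddings common on Fridays; celebratory fire likely in the afternoon and evening",
    }

    def notes_for(day):
        """Return every note that applies to a given calendar day."""
        notes = [f"{d['name']}: {d['expect']}" for d in key_dates if d["when"] == day]
        if day.weekday() in key_weekdays:   # Monday = 0 ... Friday = 4
            notes.append(key_weekdays[day.weekday()])
        return notes

    print(notes_for(datetime.date(2024, 3, 22)))  # 22 March 2024 is a Friday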

CULTURE DESCRIPTION OR CULTURE COMPARISON CHART OR MATRIX

A-16. In order for the intelligence analyst to avoid the common mistake of assuming that only one perspective exists, it may be helpful to clearly point out the differences between local ideology, politics, predominant religion, acceptable standards of living, norms and mores, and U.S. norms. A culture comparison chart can be a stand-alone tool, listing just the different characteristics of the culture in question, or it can be comparative—assessing the host-nation population relative to known and familiar conditions.

PERCEPTION ASSESSMENT MATRIX

A-17. Perception assessment matrices are often used by psychological operations personnel and can be a valuable tool for intelligence analysts. Friendly force activities intended to be benign or benevolent might have negative results if a population's perceptions are not considered and then assessed or measured. This is true because perceptions––more than reality––drive decision making and in turn could influence the reactions of entire populations. The perception assessment matrix seeks to provide some measure of effectiveness for the unit's ability to achieve an effect (for example, maintaining legitimacy) during an operation. In this sense, the matrix can also be used to directly measure the effectiveness of the unit's civil affairs, public affairs, and MISO efforts.

A-20. Perception can work counter to operational objectives. Perceptions should therefore be assessed both before and throughout an operation. Although it is not possible to read the minds of the local national population, there are several means to measure its perceptions:

  • Demographic analysis and cultural intelligence are key components of perception analysis.
  • Understanding a population’s history can help predict expectations and reactions.
  • Human intelligence can provide information on population perceptions.
  • Reactions and key activities can be observed in order to decipher whether people act based on real conditions or perceived conditions.
  • Editorial and opinion pieces of relevant newspapers can be monitored for changes in tone or shifts in opinion that may steer, or may be reacting to, the opinions of a population group.

A-21. Perception assessment matrices aim to measure the disparities between friendly force actions and what population groups perceive.
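
One way to make that disparity concrete is to score, for each friendly action, the intended effect against the effect each population group is assessed to perceive. The Python sketch below is a hedged illustration only; the actions, groups, and scores are invented, and the -2 to +2 scale is an assumption rather than a doctrinal standard.

    # Perception assessment matrix: friendly-force action vs. how each
    # population group is assessed to perceive it. Scores run from -2
    # (very negative) to +2 (very positive); all values are hypothetical.
    intended_effect = {"Checkpoint on Route Gold": +1, "School renovation": +2}
    perceived_effect = {
        "Checkpoint on Route Gold": {"District merchants": -1, "Commuters": -2},
        "School renovation": {"District merchants": +2, "Commuters": +1},
    }

    for action, intent in intended_effect.items():
        for group, perception in perceived_effect[action].items():
            gap = perception - intent   # a large negative gap flags a perception problem
            print(f"{action:25s} {group:18s} intent {intent:+d} "
                  f"perceived {perception:+d} gap {gap:+d}")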

PRODUCTS

A-23. When conducting operations in the urban environment, many products may be required. These products may be used individually or combined, as the mission requires. Many of the products listed in this appendix will be created in conjunction with multiple staff elements.

Notes on Fourth Edition of the Commander’s Critical Information Requirements

Fourth Edition of the Commander’s Critical Information Requirements (CCIRs) Insights and Best Practices Focus Paper, written by the Deployable Training Division (DTD) of the Joint Staff J7 and published by the Joint Staff J7.

http://www.jcs.mil/Doctrine/focus_papers.aspx.

Four overarching considerations:

  • CCIRs directly support mission command and commander-centric operations. Incorporate a philosophy of command and feedback in which CCIR reporting generates opportunities and decision space rather than simply answers to discrete questions.
  • CCIRs provide the necessary focus for a broad range of collection, analysis, and information flow management to better support decision making.
  • CCIR answers provide understanding and knowledge, not simply data or isolated bits of information. Providing context is important.
  • CCIRs change as the mission, priorities, and operating environment change. Have a process to periodically review and update CCIRs.

1.0 EXECUTIVE SUMMARY. CCIRs directly support mission command and commander-centric operations (see the quote below). CCIRs, as a related derivative of guidance and intent, assist joint commanders in focusing support on their decision making requirements. CCIRs support two activities:

  • Understanding the increasingly complex environment (e.g., supporting assessments that increase this understanding of the environment, defining and redefining the problem, and informing planning guidance).
  • Commander decision making, by linking CCIRs to the execution of branch and sequel plans.

This is a necessary and broader view than the legacy role of CCIRs only supporting well-defined decision points. Commanders' use of CCIRs provides the necessary focus for a broad range of collection, analysis, and information flow management to better support decision making.

Insights:

  • CCIRs support commanders’ situational understanding and decision making at every echelon of command (tactical, operational, and theater-strategic). They support different decision sets, focus, and event horizons at each echelon.
  • Commanders at higher echelons have found that a traditional, tactical view of CCIRs supporting time-sensitive, prearranged decision requirements is often too narrow to be effective. This tactical view captures neither the necessity of better understanding the environment nor the key role of assessment at the operational level. Further, operational CCIRs, if focused on specific “tactical-level” events, have the potential to impede subordinates' decision making and agility.
  • Develop CCIRs during design and planning, not “on the Joint Operations Center (JOC) floor” during execution.
  • Consider the role of CCIRs on directing collection, analysis, and dissemination of information supporting assessment activities – a key role of operational headquarters in setting conditions.
  • CCIRs help prioritize allocation of limited resources. CCIRs, coupled with operational priorities, guide and prioritize employment of collection assets and analysis resources, and assist in channeling the flow of information within, to, and from the headquarters.
  • Information flow is essential to the success of the decision making process. Clear reporting procedures assist in timely answering of CCIRs.
  • CCIR answers should provide understanding and knowledge, not simply data or isolated bits of information. Providing context is important.
  • Differentiate between CCIRs and other important information requirements like “wake-up criteria.” Much of this other type of information is often of a tactical nature, not essential for key operational-level decisions, and can pull the commander’s focus away from an operational role and associated decisions down to tactical issues.
  • CCIRs change as the mission, priorities, and operating environment change. Have a process to periodically review and update CCIRs.

“CCIR are not a hard set of reporting requirements limited to specific actions or events, but more a philosophy of command and feedback that can generate opportunities and decision space.” – Former CCDR

 

2.0 UNDERSTANDING TODAY’S COMPLEX ENVIRONMENT. Today’s complex operational environment has changed how we view CCIRs. Operational commanders spend much of their time working to better understand the environment, the decision calculus of potential adversaries, and progress in achieving campaign objectives. We find that this understanding, deepened by assessment, informs design, planning, and decisions.

The strategic landscape directly affects the type and scope of our decisions and also dictates what kind of information is required to make those decisions. Today's great power competition, interdependent global markets, readily accessible communications, and increased use of the cyber and space domains have broadened security responsibilities beyond a solely military concern. The environment is more than a military battlefield; it's a network of interrelated political, military, economic, social, informational, and infrastructure systems that impact our decisions and are impacted by them. We regularly hear from the warfighters about the requirement to maintain a broader perspective of this environment.

The information revolution has clearly changed the way we operate and make decisions. We and our adversaries have unprecedented ability to transmit, receive, and disrupt data, and that ability is growing exponentially in both speed and volume. This has affected our information requirements in many ways. The sheer volume of information can camouflage the critical information we need. We are still working on our ability to sift through this information and find the relevant nuggets that will inform decision making. At the same time, we are recognizing the need for higher level headquarters to assist in answering subordinates' CCIRs, either directly or through tailored decentralization, federation, and common database design of our collection and analysis assets.

The lack of predictability of our potential adversaries complicates our decision requirements and supporting information requirements. Our adversaries are both nation states and non-state entities consisting of loosely organized networks with no discernible hierarchical structure. The Joint Force and our Intelligence Community are focused on better understanding the decision calculus of our adversaries and what influences their decisions. Lastly, our adversaries can no longer be defined solely in terms of their military capabilities; likewise, our CCIRs cannot be focused simply on the military aspects of the mission and environment.

Many of our decisions and information requirements are tied to our partners. We fight as one interdependent team with our joint, interagency, and multinational partners. We depend on each other to succeed in today’s complex security environment. Likewise our decisions and information requirements are interdependent. We have seen the need for an inclusive versus exclusive mindset with our joint, interagency, and multinational partners in how we assess, plan, and make decisions.

3.0 ROLE OF COMMANDER’S CRITICAL INFORMATION REQUIREMENTS (CCIRs). Many joint commanders are fully immersed in the unified action, whole-of-government(s) approach and have broadened their CCIRs to support the decision requirements of their operational level HQ role. These decision requirements include both traditional, time-sensitive execution requirements as well as the longer term assessment, situational understanding, and design and planning requirements. This broadening of their CCIRs has provided a deeper focus for the collection and analysis efforts supporting all three event horizons.

CCIRs doctrinally contain two components: priority intelligence requirements (PIR), which are focused on the adversary and environment; and friendly force information requirements (FFIR), which are focused on friendly forces and supporting capabilities. We observed ISAF Joint Command (2010) add a third component, Host Nation Information Requirements (HNIR), to better focus on information about the host nation to effectively partner, develop plans, make decisions, and integrate with the host nation and civilian activities.
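
This doctrinal decomposition maps naturally onto a simple record: each CCIR tagged by component (PIR, FFIR, or, where a command adds it, HNIR) and tied to the decision or assessment its answer informs. The Python sketch below is a hedged illustration with invented content, not a representation of any command's actual CCIRs.

    from dataclasses import dataclass

    @dataclass
    class CCIR:
        component: str   # "PIR" (adversary/environment), "FFIR" (friendly), or "HNIR" (host nation)
        question: str    # the information requirement itself
        supports: str    # the decision or assessment the answer informs

    # Invented examples for illustration only.
    ccirs = [
        CCIR("PIR", "Has Group X moved heavy weapons into the eastern district?",
             "Decision to execute the eastern-district branch plan"),
        CCIR("FFIR", "Is the aviation task force below 70 percent mission capable?",
             "Decision to shift the main effort"),
        CCIR("HNIR", "Can host-nation police hold District 4 without partnered patrols?",
             "Transition of District 4 to host-nation control"),
    ]

    for c in ccirs:
        print(f"[{c.component}] {c.question}  ->  {c.supports}")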

Operational-level commanders focus on attempting to understand the broader environment and how to develop and implement, in conjunction with their partners, the full complement of military and non-military actions to achieve operational and strategic objectives. They recognize that their decisions within this environment are interdependent with the decisions of other mission partners. These commanders have found it necessary to account for the many potential kinetic, nonkinetic, and informational activities of all the stakeholders as they pursue mission accomplishment and influence behavior.

The CCIRs associated with this broader comprehensive approach are different from those that support only traditional, time-sensitive, current operations-focused decisions. Commanders include information required for assessments in CCIRs to better inform the far-reaching planning decisions at the operational level.

Prioritization. We also see the important role of CCIRs in prioritizing resources. This prioritization of both collection and analysis resources enhances the quality of understanding and assessments, and ultimately results in the commander gaining better situational understanding, leading to better guidance and intent, and resulting in a greater likelihood of mission success.

We have seen challenges faced by operational-level commanders and staff that have singularly followed a more traditional “decision point-centric” approach in the use of CCIRs. Their CCIRs are focused on supporting decisions for predictable events or activities, and may often be time-sensitive. This current operations focus of their CCIRs may not correctly inform prioritization of collection and analysis efforts supporting assessment and planning in the future operations and future plans event horizon. Absent focus, these collection and analysis efforts supporting assessment and planning may be ad hoc and under-resourced.

Assessment is central to deepening understanding of the environment. We are finding that many commanders identify their critical measures of effectiveness as CCIRs to ensure appropriate prioritization of resources. This prioritization of both collection and analysis resources enhances the quality of assessments, leading to better situational understanding and better guidance and intent.

Supporting Subordinates' Agility. CCIRs can support (or hinder) agility of action. CCIRs should address the appropriate commander-level information requirements given the associated decentralized/delegated authorities and approvals. Alignment of CCIRs supporting decentralized execution and authorities directly supports empowerment of subordinates, while retention of CCIRs at the operational level for information supporting decentralized activities slows subordinates' agility, adds unnecessary reporting requirements, and shifts the operational level HQ's focus away from its roles and responsibilities in setting conditions.

The decentralization of both the decisions and alignment of associated CCIRs is key to agility and flexibility. Operational-level commanders help set conditions for subordinates’ success through mission-type orders, guidance and intent, and thought-out decentralization of decision/mission approval levels together with the appropriate decentralization of supporting assets. They recognize the value of decentralizing to the lowest level capable of integrating these assets. Operational commanders enable increased agility and flexibility by delegating the requisite tactical-level decision authorities to their subordinates commensurate with their responsibilities. Decentralizing approval levels (and associated CCIRs) allows us to more rapidly take advantage of opportunities in today’s operational environment as noted in the above figure. We see this as a best practice. It allows for more agility of the force while freeing the operational commander to focus on planning and decisions at the operational level.

Together with decentralization of authorities, operational commanders also assist their subordinates by helping answer the subordinates’ CCIRs either directly or through tailored decentralization, federation, and common database design of collection and analysis assets.

Insights:

  • Broaden CCIRs at the operational level to support traditional, time-sensitive execution requirements and longer term assessment, situational understanding, and design and planning requirements. Seek knowledge and understanding, versus a sole focus on data or information.
  • Use CCIRs in conjunction with operational priorities to focus and prioritize collection and analysis efforts supporting all three event horizons.
  • Many of the operational level decisions are not ‘snap’ decisions made in the JOC and focused at the tactical level, but rather require detailed analysis and assessment of the broader environment tied to desired effects and stated objectives.
  • Delegating tactical-level decisions to subordinates allows commanders to focus their efforts on the higher level, broader operational decisions.
  • Support decentralized decision authorities by helping to answer their related CCIRs. Retaining CCIRs at a higher level for decisions that have already been delegated to a subordinate adds unnecessary reporting requirements on those subordinates, slows their agility, and shifts the higher HQ's focus away from its more appropriate role of setting conditions.

4.0 CCIR DEVELOPMENT, APPROVAL, AND DISSEMINATION. Commanders drive development of CCIRs. We have seen successful use of the CCIRs process (see figure). This process lays out specific responsibilities for development, validation, dissemination, monitoring, reporting, and maintenance (i.e., modifying CCIRs). While not in current doctrine, it still captures an effective process…

Operational-level commands develop many of their CCIRs during design and the planning process. We normally see decision requirements transcending all three event horizons. Some decisions in the current operations event horizon may have very specific and time sensitive information requirements, while others are broader, assessment focused, and may be much more subjective. They may also include information requirements on DIME (Diplomatic, Informational, Military, Economic) partner actions and capabilities and environmental conditions.

Branch and Sequel Execution: While many CCIRs support branch and sequel plan decision requirements at all levels, the complexity of today's environment makes it difficult to predictively develop all the potential specific decisions (and supporting CCIRs) that an operational commander may face. However, this difficulty doesn't mean that we should stop conducting branch and sequel planning at the operational level – just the opposite. We must continue to focus on the “why,” “so what,” “what if,” and “what's next” at the operational level to drive collection and analysis and set conditions for the success of our subordinates. The complexity does suggest, though, that some of our branch and sequel planning at the operational level may not result in the precise, predictive decision points with associated CCIRs that we may be accustomed to at the tactical level. Additionally, unlike at the tactical level, much of the information precipitating operational commanders' major decisions will likely not come off the JOC floor, but rather through interaction with others and from the results of “thought-out” operational-level assessments. Much of this information may not be in the precise form of answering a specifically worded and time-sensitive PIR or FFIR, but rather will come as the result of a broader assessment answering whether we are accomplishing the campaign or operational objectives or attaining desired conditions for continued actions, together with recommendations on the “so what.”

Most CCIRs are developed during course of action (COA) development and analysis together with branch and sequel planning. We normally see decision points transcending all three event horizons with associated PIRs and FFIRs (and in some cases, unique IRs such as HNIRs) as depicted on the above figure. These PIRs and FFIRs may be directly associated with developed measures of effectiveness (MOE). Analysis of these MOEs helps depict how well friendly operations are achieving objectives, and may result in the decision to execute a branch or sequel plan.

Some decision points in the current operations event horizon may have very specific and time sensitive information requirements, while those supporting branch and sequel execution are normally broader and may be much more subjective. They will also probably include information requirements on “DIME” partner actions/capabilities and adversary “PMESII” conditions. Some examples:

  • Current operations decisions: These decisions will likely require time sensitive information on friendly, neutral, and adversary’s actions and disposition. Examples of decisions include: personnel recovery actions; shifting of ISR assets; targeting of high value targets; and employment of the reserve.
  • Branch plan decisions: These decisions will likely require information from assessment on areas like the adversary's intent and changing ‘PMESII’ conditions; DIME partner, coalition, and host nation capabilities and requests; and target audience perceptions (using non-traditional collection means such as polls). Examples of decisions include: shift of main effort; change in priority; refocusing information operations and public affairs messages; redistribution of forces; command relationship and task organization changes.
  • Sequel plan decisions: These types of decisions will be based on broader campaign assessments providing geopolitical, social, and informational analysis and capabilities of partner stakeholders. Examples of decisions include: transitions in overall phasing such as moving to a support to civil authority phase; force rotations; or withdrawal.

Planners normally develop decision support templates (DST) to lay out these kinds of decisions and the associated CCIRs in more detail (see figure). They help link CCIRs to the decisions they support. The adjacent figure depicts some of the information provided to the commander to gain his guidance and approval. These DSTs also help provide the clarity to collection and analysis resources to focus effort and information flow.
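
In data terms, a decision support template is a table that links each anticipated decision to the CCIRs that inform it, the indicators being watched, and the authority who will make the call. The minimal Python sketch below uses invented rows, and the field names are assumptions for illustration rather than a doctrinal format.

    # Decision support template (DST) rows: each anticipated decision linked
    # to its informing CCIRs, observable indicators, and decision authority.
    # All content is hypothetical.
    dst = [
        {
            "decision": "Execute branch plan BRAVO (shift main effort east)",
            "ccirs": ["PIR 2: heavy weapons in the eastern district",
                      "FFIR 1: aviation task force readiness"],
            "indicators": ["convoy activity on Route Gold",
                           "readiness reports below 70 percent"],
            "authority": "Joint force commander",
        },
    ]

    for row in dst:
        print(row["decision"])
        print("  informed by:", "; ".join(row["ccirs"]))
        print("  watching:   ", "; ".join(row["indicators"]))
        print("  authority:  ", row["authority"])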

Insights:

  • Commanders drive development of CCIRs.
  • Planners help develop CCIR during the design and planning process across all three event horizons.
  • CCIRs at the operational level will likely include information requirements on “DIME” partner actions and capabilities and environmental conditions.
  • CCIRs change as the mission, priorities, and operating environment change. Have a process to periodically review and update CCIRs to ensure relevance.

 

5.0 CCIR MONITORING AND REPORTING. Proactive attention to CCIRs is essential for JOC (and other staff) personnel to focus limited resources in support of commander’s decision making. To promote awareness and attention to the commander’s information requirements, we recommend prominent display of CCIRs within the JOC and other assessment areas.

Many of the CCIRs precipitating operational commanders' major decisions will likely not come off the JOC floor but rather through interaction with others and from the results of operational-level assessment. Much of this information may not be in the precise form of answering a specifically worded branch or sequel oriented CCIR, but rather as the result of a broader assessment answering whether we are accomplishing the campaign objectives together with recommendations on the “so what.”

The senior leadership is provided answers to CCIRs in many venues to include operational update assessments, battlefield circulation, and interaction with stakeholders. This information may be provided in some form of presentation media that addresses the decision requirement, associated CCIRs, and status of those CCIRs as depicted in the figure above. We often see a JOC chart such as that portrayed in the adjacent figure for selected decision requirements. This “status” of CCIRs enables the commander to maintain situational awareness of the various criteria that the staff and stakeholders are monitoring and get a feel for the proximity and likelihood of the potential decision.


Insights:
  • Prominently display CCIRs within the JOC, other assessment areas, and on the HQ portal to facilitate component and stakeholder awareness of CCIRs.
  • Clearly specify what constitutes notification, to whom, how soon it has to be done, and how to provide status of notification efforts and results.

6.0 RELATED INFORMATION REQUIREMENTS. We see JOCs struggle to determine what constitutes a reportable event other than CCIR triggers. Many commands use “notification criteria” matrices (see figure) to clearly depict notification criteria for both CCIRs and other events, spelling out who needs to be notified of various events outside the rhythm of a scheduled update brief. Notification criteria and the reporting chain should be clearly understood to prevent stovepiping information or inadvertent failures in notification.
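
A notification criteria matrix reduces to a table keyed by event type that answers who must be told, how quickly, and by what means, with anything not listed falling back to routine reporting in the battle rhythm. The Python sketch below uses invented rows, recipients, and timelines purely for illustration.

    # Notification criteria matrix: event type -> who is notified, how fast, and how.
    # Rows, recipients, and timelines are invented examples.
    notification_matrix = {
        "CCIR answered":     {"notify": ["Commander", "Chief of staff"], "within_min": 15, "via": "voice and JOC log"},
        "Friendly casualty": {"notify": ["Commander", "Surgeon"],        "within_min": 30, "via": "voice"},
        "Media query":       {"notify": ["Public affairs officer"],      "within_min": 60, "via": "email"},
    }

    def notification_for(event_type):
        """Return the notification rule for an event, or the routine-reporting fallback."""
        rule = notification_matrix.get(event_type)
        if rule is None:
            return f"{event_type}: no special criteria; report at the next scheduled update"
        who = ", ".join(rule["notify"])
        return f"{event_type}: notify {who} within {rule['within_min']} minutes via {rule['via']}"

    print(notification_for("CCIR answered"))
    print(notification_for("Road closure"))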

Significant events (SIGEVENTs) should be defined, tracked, reported, and monitored until all required staff action has been completed. We have seen some JOCs preemptively remove some SIGEVENTs from their “radar” before required follow-on actions have been accomplished. Once a SIGEVENT has been closed, it should be archived for record purposes and to assist the intelligence and assessment functions.
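
Keeping a SIGEVENT visible until every required action is complete is essentially a small state-tracking problem. The Python sketch below, with hypothetical events and actions, only lets an event off the active “radar” and into the archive once all of its follow-on actions are done.

    # Track significant events until all required follow-on actions are
    # complete, then archive them for the record. Content is hypothetical.
    sigevents = {
        "SIGEVENT 041: indirect fire on forward base": {
            "actions": {"Battle damage assessment reported": True,
                        "Counterfire assessment completed": False},
            "archived": False,
        },
    }

    def try_close(name):
        """Close and archive a SIGEVENT only when every required action is done."""
        event = sigevents[name]
        if all(event["actions"].values()):
            event["archived"] = True      # off the active radar, into the archive
            print(f"{name}: closed and archived")
        else:
            pending = [a for a, done in event["actions"].items() if not done]
            print(f"{name}: still open, pending -> {', '.join(pending)}")

    try_close("SIGEVENT 041: indirect fire on forward base")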